Article

Predicting X-ray Diffraction Quality of Protein Crystals Using a Deep-Learning Method

1 School of Information and Electronic Engineering, Zhejiang Gongshang University, Hangzhou 310018, China
2 Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai 201210, China
* Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Crystals 2024, 14(9), 771; https://doi.org/10.3390/cryst14090771
Submission received: 19 July 2024 / Revised: 23 August 2024 / Accepted: 26 August 2024 / Published: 29 August 2024
(This article belongs to the Special Issue Protein Crystallography: The State of the Art)

Abstract
Over the past few decades, significant advancements in protein crystallography have led to a steady increase in the number of determined protein structures. The X-ray diffraction experiment remains one of the primary methods for investigating protein crystal structures, and obtaining structural information typically requires a sufficient number of high-quality crystals. At present, selecting protein crystals for X-ray diffraction experiments relies primarily on manual judgment by experimenters, yet each experiment is both costly and time-consuming. To address the urgent need for automatic selection of suitable protein crystal candidates for X-ray diffraction experiments, a protein-crystal-quality classification network leveraging the ConvNeXt network architecture is proposed. A new database is created that includes protein crystal images and their corresponding X-ray diffraction images, and a novel method for categorizing protein crystal quality based on the number of diffraction spots and their resolution is introduced. To further strengthen the network's focus on essential features of protein crystal images, a CBAM (Convolutional Block Attention Module) attention mechanism is incorporated between convolution stages. The experimental results demonstrate that the network performs the prediction task with significantly improved accuracy, effectively raising the probability that experimenters select high-quality crystals.

1. Introduction

Proteins are key molecules that perform various life activities in living organisms, and analyzing their structure and function is of great significance in the fields of drug design and bioengineering [1]. Crystallographic methods, as one of the most important methods in protein structure analysis, provide key information for the development and design of drugs for human diseases [2].
At present, the X-ray diffraction (XRD) experiment is one of the main methods used to study protein structure; it infers the structure by analyzing the Bragg diffraction patterns of crystals. However, obtaining reliable results for protein structure analysis requires an adequate number of high-quality crystals. Thanks to the rapid development of high-throughput protein-crystallization techniques, successfully crystallizing proteins has become less difficult [3]. However, not every protein crystal contributes positively to structure analysis, and whether a given crystal will diffract well enough for X-ray diffraction experiments remains unpredictable. In practice, an experienced crystallographer can roughly judge the diffraction quality of a protein crystal from its visual features and screen out crystals that appear likely to diffract poorly before harvesting. Qin et al. [4] used an instance segmentation algorithm to locate protein crystals in droplets. Building on successful crystal localization, exploring finer crystal categorization will make automatic protein crystal selection systems more robust. Therefore, a deep-learning-based image-classification method that predicts the diffraction quality of protein crystals would enable fine-grained screening, helping the experimenter skip crystals likely to diffract poorly and select only the good ones.
Deep-learning methods are increasingly being used in the field of protein crystallography [5]. One example is the use of deep-learning networks for predicting protein crystallization outcomes. Wang et al. [6] proposed a deep-learning framework using self-attention and auto-encoder networks to predict protein crystallization propensity. Bruno et al. [7] proposed a convolutional-neural-network-based classification algorithm that achieved good results in detecting the presence of protein crystals in droplets. Elbasir et al. [8] proposed a deep-learning framework that identifies proteins capable of producing diffraction-quality crystals from their sequence information. Currently, the determination of crystallization conditions often relies on trial and error, making it challenging to predict the likelihood of obtaining high-quality protein crystals for X-ray diffraction [9]. These teams applied deep learning to classify images taken during protein crystallization or to classify crystal X-ray diffraction images; few have attempted to establish a direct relationship between visual images of protein crystals and their X-ray diffraction results through deep learning. It should be noted that this relationship is between the image taken under a visible-light microscope and the X-ray diffraction result, not between the diffraction image and the diffraction result. Many mature X-ray diffraction data-processing software packages exist, such as MosFlm (www.ccp4.ac.uk/html/mosflm.html, accessed on 25 August 2024) [10], XDS (xds.mr.mpg.de/html_doc/XDS.html, accessed on 25 August 2024) [11], Dials (dials.github.io/index.html, accessed on 25 August 2024) [12], and CrystFEL (www.desy.de/~twhite/crystfel/index.html, accessed on 25 August 2024) [13], all of which handle diffraction data well. There is also a pipeline called MeshBest [14] that can rank crystals by quality in seconds without using neural networks. However, these tools operate on already-collected X-ray diffraction data; they cannot predict a crystal's diffraction quality from microscope images taken before any diffraction data exist.
The purpose of this study is to propose a deep-learning model for predicting the diffraction quality of protein crystals. A new database including protein crystal images and the corresponding X-ray diffraction images is created, and then a naive protein-quality-categorization method according to the number of diffraction spots and the diffraction resolution is proposed. Meanwhile, a protein-crystal-quality-classification network based on the ConvNeXt network is proposed. Furthermore, to enhance the network’s attention to important features of protein crystal images, the CBAM (Convolutional Block Attention Module) is added between layers.

2. Image Preprocessing

2.1. Data Collection

This study focuses on the classification of protein crystal images based on their X-ray diffraction quality. To construct the database, it is crucial to obtain both protein crystal images and corresponding X-ray diffraction experimental data for each crystal. However, existing public databases lack these integrated data, making it essential to create a database that includes both protein crystal image data and their corresponding X-ray diffraction results in order to explore the relationship between the two.
Due to various factors affecting the process of protein crystallization, such as the experimental environment and the culture method [15], there are significant differences in the crystal shapes of proteins with distinct structures after crystallization. In this paper, lysozyme serves as the target for protein crystal cultivation. Figure 1a displays an example image of one lysozyme crystal, while its X-ray diffraction results are presented in Figure 1b. The X-ray diffraction results are obtained at beamline BL18U1 of the Shanghai Synchrotron Radiation Facility (SSRF). The obtained images and diffraction data are then analyzed using Albula 3.3.3 software, as illustrated in Figure 1b.

2.2. Quantitative Analysis of Diffraction Results

The presentation of the protein crystal X-ray diffraction experiment results consists of two-dimensional images accompanied by their corresponding diffraction information. To establish a scoring mechanism for protein crystals, a connected components analysis (CCA) algorithm [16], which is commonly used in the field of image analysis and machine vision, is applied in this paper to count the number of diffraction spots in diffraction images. The score is then calculated by combining this information with the resolution of the diffracted spots.

2.2.1. Statistics of Diffraction Spots

Before the CCA algorithm can be used to count the diffraction spots, the diffraction images undergo several preprocessing steps. First, shadow areas cast by the experimental instruments are masked. Second, a grayscale transformation converts the RGB images into easily processable grayscale images. Third, to highlight potential diffraction spots, each image is binarized; the grayscale threshold was tuned through repeated comparison against the number of intensity peaks reported by Albula and was eventually set to 80. Finally, the processed diffraction images are analyzed with the connected components algorithm. The diffraction-image-processing procedure is visualized in Figure 2.
To facilitate the analysis of diffracted images, the diffraction pattern is processed into a binary image in which possible noise and artifacts are treated as background pixels. The CCA algorithm is then used to identify and label connected regions of foreground pixels, which correspond to individual diffraction spots. Furthermore, the algorithm allows for the extraction of valuable information, such as the total number of diffraction spots, their spatial distribution, and various statistical properties. Leveraging this information, it is possible to identify and reject grids or large areas of noise in the diffraction image.
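For concreteness, the counting step can be sketched with OpenCV's connected components analysis as follows. The threshold of 80 comes from the procedure above; the shadow-mask region and the component-size limits used to reject grids and large noise areas are hypothetical placeholders, since the actual values depend on the detector geometry.

```python
# A minimal sketch of the spot-counting pipeline (assumed parameters noted below).
import cv2

def count_diffraction_spots(image_path, threshold=80, min_area=2, max_area=200):
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Mask instrument shadows (placeholder region; the real mask depends on
    # the beamstop/detector geometry).
    gray[0:40, :] = 0

    # Binarize: pixels brighter than the threshold become candidate spots.
    _, binary = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)

    # Label connected regions of foreground pixels (label 0 is the background).
    n_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(
        binary, connectivity=8)

    # Reject grids and large noise areas by component size (assumed bounds).
    spots = [i for i in range(1, n_labels)
             if min_area <= stats[i, cv2.CC_STAT_AREA] <= max_area]
    return len(spots), centroids[spots]
```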

2.2.2. Analysis of Diffraction Spots with Resolution

Resolution is a metric that determines the level of detail or sharpness of an image. In X-ray diffraction experiments, resolution refers to the ability to distinguish the smallest structural features, and it is typically expressed in angstroms (Å, 10⁻¹⁰ m).
The quality classification standard for protein crystal diffraction depends not only on the number of diffraction spots observed in the crystal diffraction results but also on the resolution of the diffraction spots. Diffraction spots at varying resolutions hold varying importance in the analysis of protein crystal structures. Protein crystals exhibiting clear, dense, and uniformly distributed diffraction spots are more conducive to the analysis of protein structures. For researchers, detecting more diffraction spots in regions with smaller resolution values signifies better diffraction quality of the crystal. In diffraction images, resolution information can be obtained from the pixel distance, calculated via Equation (1):
$$\mathrm{Resolution} = \frac{\lambda}{2\sin\left(\dfrac{\tan^{-1}\left(\sqrt{x^{2}+y^{2}}\cdot d_{px}\,/\,D\right)}{2}\right)} \qquad (1)$$
where $D$ represents the distance between the detector and the object being imaged; $(x, y)$ represent the relative pixel coordinates of each point with respect to the center point; $d_{px}$ represents the physical size of each pixel; and $\lambda$ represents the wavelength of the light used in the imaging process.
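As a worked illustration, Equation (1) maps a spot's pixel offset from the beam center to a resolution in angstroms. The detector distance, pixel size, and wavelength below are illustrative defaults, not the parameters actually used in the experiment.

```python
# Equation (1) as a function: d = lambda / (2 sin(theta)), with
# 2*theta = arctan(r * d_px / D). All default values are illustrative.
import math

def spot_resolution(x, y, D=200.0, d_px=0.075, wavelength=1.0):
    """Resolution (Angstrom) of a spot at pixel offset (x, y) from the center.

    D          -- crystal-to-detector distance in mm (assumed value)
    d_px       -- physical pixel size in mm (assumed value)
    wavelength -- X-ray wavelength in Angstrom (assumed value)
    """
    r = math.hypot(x, y) * d_px          # radial distance on the detector, mm
    theta = 0.5 * math.atan2(r, D)       # Bragg angle from Equation (1)
    return wavelength / (2.0 * math.sin(theta))
```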

2.2.3. Establish a Scoring Mechanism

Typically, at high resolution (better than 2.0 Å), the protein and bound water molecules are well defined, and it is unlikely that the structure will contain any serious errors; such protein crystals are recognized as high-resolution protein crystals [17]. In the scoring mechanism developed in this paper to evaluate diffraction results, a resolution of 2 Å is therefore selected as the reference point and assigned a score of 1. This means that diffraction spots reaching a resolution of 2 Å or better are treated as the standard of excellence and receive a higher score in the evaluation process. The 3D visualization of the scoring mechanism is shown in Figure 3.
As can be seen, a diffraction spot's score increases with its distance from the diffraction center. In this paper, a scoring method is proposed to quantitatively analyze the diffraction quality of crystals: for each diffraction spot, the resolution is calculated using Equation (1), and twice the reciprocal of this value serves as the spot's score. The total score is obtained by summing the scores of all diffraction spots in the diffraction result map, as expressed in Equation (2):
$$\mathrm{Total\;Score} = \sum_{i=1}^{N} \frac{2}{\mathrm{Resolution}_i} \qquad (2)$$
where $N$ represents the number of diffraction spots and $\mathrm{Resolution}_i$ represents the resolution of the $i$-th diffraction spot computed by Equation (1).
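Combining the two previous sketches, Equation (2) amounts to summing each spot's weight of 2/Resolution_i; `spot_resolution` is the illustrative helper defined above, and the beam-center coordinates are assumed inputs.

```python
# Equation (2): each spot contributes 2 / Resolution_i, so a 2 A spot scores 1.
def total_score(centroids, center, **geometry):
    cx, cy = center                       # beam center in pixel coordinates
    score = 0.0
    for px, py in centroids:
        res = spot_resolution(px - cx, py - cy, **geometry)
        score += 2.0 / res                # weight grows as resolution improves
    return score
```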

2.2.4. Establish a Classification Dataset

After calculating the scores for each protein crystal diffraction result image, the crystals are categorized into three levels based on their respective scores. These levels are designated as level 1 (good), level 2 (normal), and level 3 (bad), with an attempt to balance the number of images in each level. Each class is then subjected to boundary filtering to improve the accuracy and consistency of the classification.
Following this initial classification process, the crystal-XRD dataset is divided into three portions for network training. The testing set is composed of 10 images from each class, which are reserved for network evaluation. The remaining images are randomly distributed into the training set and the validation set in a ratio of 4:1. The number of original images contained within each class is shown in Table 1.
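A minimal sketch of this split, assuming the images are already grouped by class label; reserving 10 images per class for testing and dividing the remainder 4:1 follows the description above, while the fixed seed is only for reproducibility of the illustration.

```python
# Split each class: fixed test images first, then a 4:1 train/val division.
import random

def split_dataset(images_by_class, test_per_class=10, train_ratio=0.8, seed=0):
    rng = random.Random(seed)
    splits = {"train": [], "val": [], "test": []}
    for label, images in images_by_class.items():
        images = images[:]
        rng.shuffle(images)
        splits["test"] += [(p, label) for p in images[:test_per_class]]
        rest = images[test_per_class:]
        n_train = int(len(rest) * train_ratio)          # 4:1 -> 80% training
        splits["train"] += [(p, label) for p in rest[:n_train]]
        splits["val"] += [(p, label) for p in rest[n_train:]]
    return splits
```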

2.3. Data Augmentation

Given that crystals are easily lost in the process from culture to X-ray diffraction and that the experimental data are accordingly valuable, it is necessary to expand the classified crystal image data. To avoid destroying internal details of the protein crystal images, pixel-level augmentation methods such as noise addition and image blurring are avoided; only horizontal flipping, vertical flipping, diagonal flipping, and rotation are applied to enlarge the image dataset. The augmentation yields 1450 protein crystal images across the three diffraction levels, with the per-class counts shown in Table 1.
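The geometric-only augmentation can be expressed with PIL's lossless transforms. Which rotation angles were used is not specified, so the 90° rotation below is one plausible choice; the flips follow the text.

```python
# Flip/rotate-only augmentation: no noise or blur, so internal crystal
# details are preserved exactly.
from PIL import Image

def augment(img: Image.Image) -> list:
    return [
        img,
        img.transpose(Image.FLIP_LEFT_RIGHT),   # horizontal flip
        img.transpose(Image.FLIP_TOP_BOTTOM),   # vertical flip
        img.transpose(Image.TRANSPOSE),         # diagonal flip
        img.transpose(Image.ROTATE_90),         # rotation (assumed angle)
    ]
```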

3. Algorithm Design

With the development of deep learning, various algorithmic networks in the field of image classification have emerged. Among them, Vision Transformer (ViT) [18] models, particularly Swin transformer [19], have demonstrated superior performance compared to traditional convolutional neural networks (CNNs) [20]. Building upon inspiration derived from networks like Swin transformer, ResNeXt [21], MobileNetv2 [22], and others, ConvNeXt [23] further enhances classification accuracy by introducing modifications to layer structures, down-sampling methods, activation functions, inverted bottlenecks, and deep convolutions. This reinstates the significance of CNNs in image-classification tasks. In this study, ConvNeXt is employed to extract protein crystal features for distinguishing diffraction quality. Additionally, to enhance the model’s attention to important features within protein crystal images, the CBAM attention module is integrated at different convolutional stages in ConvNeXt. The subsequent section provides a detailed description of our network architecture.

3.1. Network Introduction

A protein-crystal-quality-classification network based on ConvNeXt is proposed, which adopts a multi-stage design in which different stages are responsible for extracting feature maps of different resolutions. The network consists of four stages, with the number of ConvNeXt blocks set as (3, 3, 9, 3) for each respective stage. After each of the first three stages, a down-sampling module is applied, comprising a regularization module and a convolutional layer. Subsequently, the feature maps obtained from the four stages are transformed into probability-distribution outputs through global average pooling and a fully connected layer. To enhance the network's attention towards important features within protein crystal images, the CBAM module is integrated into the ConvNeXt network. Additionally, inspired by ResNet [24], skip connections are included to retain crucial crystal features from the previous feature map during the convolution process. The improved network structure is illustrated in Figure 4.
In this paper, modifications are made to the structure of the ConvNeXt block to improve its effectiveness in classifying the quality of protein crystals. The original ConvNeXt block uses an inverted bottleneck structure, where the number of channels first increases and then decreases to prevent the information loss caused by compressed dimensionality. To reduce computational complexity, the depth-wise convolution layer is moved to the beginning of the block. Furthermore, ReLU is replaced with GELU as the activation function, and Batch Norm is replaced with Layer Norm, to avoid a negative impact on network performance. The structure of the modified ConvNeXt block is shown in Figure 5.
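Under the description above and the standard ConvNeXt design, the block can be sketched in PyTorch as follows; the 4x expansion ratio and channels-last LayerNorm handling follow the public ConvNeXt reference, so any further author-specific changes in Figure 5 may differ.

```python
# Sketch of the ConvNeXt block: depth-wise 7x7 conv first, LayerNorm,
# inverted bottleneck (expand 4x, GELU, project back), residual connection.
import torch
import torch.nn as nn

class ConvNeXtBlock(nn.Module):
    def __init__(self, dim: int, expansion: int = 4):
        super().__init__()
        self.dwconv = nn.Conv2d(dim, dim, kernel_size=7, padding=3, groups=dim)
        self.norm = nn.LayerNorm(dim)                    # applied channels-last
        self.pwconv1 = nn.Linear(dim, expansion * dim)   # 1x1 conv as Linear
        self.act = nn.GELU()
        self.pwconv2 = nn.Linear(expansion * dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        shortcut = x
        x = self.dwconv(x)
        x = x.permute(0, 2, 3, 1)        # (N, C, H, W) -> (N, H, W, C)
        x = self.pwconv2(self.act(self.pwconv1(self.norm(x))))
        x = x.permute(0, 3, 1, 2)        # back to (N, C, H, W)
        return shortcut + x
```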

3.2. CBAM

The CBAM (Convolutional Block Attention Module) [25] is an attention mechanism used for computer vision tasks. The CBAM module consists of two sub-modules (as shown in Figure 6): a Channel Attention Module and a Spatial Attention Module. The intermediate feature map is adaptively refined at every convolutional block.
The Channel Attention Module computes global average-pooled and max-pooled descriptors for each channel and passes them through a shared multi-layer perceptron to generate a channel weight vector. This weight vector is then multiplied with the input feature map, effectively adjusting the weight of each channel. By learning the inter-dependencies between channels, the network can dynamically adapt the significance of each channel, thereby enhancing the quality of the feature map.
Similarly, the Spatial Attention Module pools the feature map along the channel axis with average pooling and max pooling, concatenates the two resulting maps, and passes them through a convolutional layer to produce a spatial attention map. This map is normalized and multiplied with the input feature map to adjust the weight of each position. By learning the relationships among different positions, the network can modulate the importance of spatial information, thereby enhancing the accuracy of feature maps.
To further amplify the network's attention to important features within protein crystal images, the CBAM module is integrated into our protein-crystal-quality-classification network; the resulting network structure is shown in Figure 4. A compact implementation sketch is given below.
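This sketch is consistent with [25]: channel attention from average- and max-pooled descriptors through a shared MLP, followed by spatial attention from a 7x7 convolution over channel-wise average/max maps. Wrapping each stage as `y = x + cbam(stage(x))` reflects our reading of Figure 4's skip connection and is an assumption.

```python
# Compact CBAM: channel attention, then spatial attention.
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels: int, reduction: int = 16, kernel_size: int = 7):
        super().__init__()
        self.mlp = nn.Sequential(               # shared MLP for avg/max vectors
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, _, _ = x.shape
        # Channel attention: shared MLP over global avg- and max-pooled vectors.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(n, c, 1, 1)
        # Spatial attention: conv over stacked channel-wise avg/max maps.
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))
```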

4. Results and Discussion

4.1. Experiment Platform

A computing device running Windows 10 with an Intel Core i7-10700K CPU, 32 GB of RAM, and an NVIDIA GeForce RTX 3090 GPU with 24 GB of video memory is used in our experiments. The model is implemented in the PyTorch framework, and training runs for 300 epochs. Each image is padded with white edges to a square canvas to maintain its scale before being input into the network for training.
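The white-edge padding can be sketched as follows; centering the crystal on the square canvas is an assumption, since the exact placement is not specified.

```python
# Pad each image to a square white canvas so resizing to the network input
# does not distort the crystal's aspect ratio.
from PIL import Image

def pad_to_square(img: Image.Image, fill=(255, 255, 255)) -> Image.Image:
    side = max(img.size)
    canvas = Image.new("RGB", (side, side), fill)   # white background
    canvas.paste(img, ((side - img.width) // 2, (side - img.height) // 2))
    return canvas
```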

4.2. Evaluation Indicators

The evaluation indicators for multi-classification tasks are selected based on the needs and nature of the task. The following are some commonly used multi-classification evaluation indicators; a minimal computation sketch is given after the list.
  • Accuracy: accuracy is one of the most commonly used evaluation indicators, which represents the ratio of the number of correctly classified samples to the total number of samples. The model’s accuracy is calculated as shown in Equation (3).
    $$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \qquad (3)$$
    where TP means true positive; FP means false positive; TN means true negative; and FN means false negative.
  • Precision: precision refers to the ratio of the number of correctly classified samples to the total number of samples classified into a certain class. Precision measures the classifier’s ability to judge positive examples. The model’s precision is calculated as shown in Equation (4).
    $$\mathrm{Precision} = \frac{TP}{TP + FP} \qquad (4)$$
  • Recall rate: the recall rate refers to the ratio of the number of correctly classified samples among all samples that belong to a certain class to the total number of samples that actually belong to that class. The recall rate measures the classifier’s ability to cover positive examples. The model’s recall rate is calculated as shown in Equation (5).
    $$\mathrm{Recall} = \frac{TP}{TP + FN} \qquad (5)$$
  • F1 score: the F1 score combines the precision rate and the recall rate, which is the harmonic mean of the precision rate and the recall rate. The higher the F1 score, the better the classifier performs in terms of accuracy and recall. The model’s F1 value is calculated as shown in Equation (6).
    $$F1 = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}} \qquad (6)$$
  • Confusion matrix: the confusion matrix is a visual tool used to display the classification results of classifiers in various categories. It can display the classification accuracy and misclassification status of the classifier.
  • ROC curve and AUC value: the ROC curve is drawn with the true positive rate as the vertical axis and the false positive rate as the horizontal axis. The AUC (Area Under Curve) value represents the area under the ROC curve, and it is used to evaluate the performance of the classifier.
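As noted above, the listed indicators can be computed in a few lines with scikit-learn, using the macro averaging convention of Table 2. The labels and per-class probabilities below are placeholders, not the study's data.

```python
# Accuracy, macro precision/recall/F1, confusion matrix, and macro AUC.
from sklearn.metrics import (accuracy_score, precision_recall_fscore_support,
                             confusion_matrix, roc_auc_score)

y_true = [0, 1, 2, 1, 0, 2]                    # ground-truth class labels
y_pred = [0, 1, 1, 1, 0, 2]                    # predicted class labels
y_prob = [[0.8, 0.1, 0.1], [0.2, 0.7, 0.1],    # per-class probabilities,
          [0.1, 0.6, 0.3], [0.1, 0.8, 0.1],    # needed for the AUC
          [0.7, 0.2, 0.1], [0.2, 0.2, 0.6]]

acc = accuracy_score(y_true, y_pred)
prec, rec, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="macro")
cm = confusion_matrix(y_true, y_pred)
auc = roc_auc_score(y_true, y_prob, multi_class="ovr", average="macro")
print(f"acc={acc:.3f} precision={prec:.3f} recall={rec:.3f} f1={f1:.3f} auc={auc:.3f}")
```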

4.3. Experiment Results and Analysis

To assess the feasibility and reliability of the proposed approach, four additional image-classification networks (Vision Transformer, DenseNet [26], ShuffleNet [27], and ResNet50) are employed for comparative experiments alongside the ConvNeXt classification network utilized in this study. The impact of three different attention mechanisms (SA [28], ECA [29], and CBAM) on model performance is also investigated using the test dataset. The experimental findings pertaining to the model evaluation metrics are presented in Table 2.
Because the aim of this experiment is to compare evaluation indicators across multi-classification models, Table 2 presents macro-average values for precision, recall, and the F1 score. The test set contains 50 protein crystal images in each of the three diffraction quality categories, so TP + FN is the same for every class. From Equations (3) and (5), it follows that with a class-balanced test set, accuracy is numerically identical to the macro-average recall. As shown in the table, ConvNeXt exhibits superior ability in extracting protein crystal features, with classification accuracy improved by at least 10% over ViT, DenseNet, and ShuffleNet, and by 2.00% over ResNet50. Subsequently, different attention modules are incorporated after each convolutional stage of the original ConvNeXt network for comparison. Introducing the Spatial Attention (SA) module decreases network accuracy by 4.00%, whereas the ECA and CBAM modules significantly enhance classification accuracy. The ConvNeXt + CBAM structure employed in this study achieves an accuracy of 75.33%: compared to the ConvNeXt network without attention mechanisms, the CBAM brings an 8.00% improvement in classification accuracy, surpassing the other attention mechanisms and networks used in this experiment.
In terms of inference speed, the ViT model is much slower than the other models due to its large structure. ResNet and DenseNet have similar layer counts, but DenseNet adopts a densely connected structure instead of ResNet's residual structure and takes 59 ms per inference, slightly slower than ResNet's 54 ms. ShuffleNet uses pointwise group convolution to cut network computation multiplicatively, preserving accuracy through channel shuffle, and achieves an inference speed of 10 ms per image. ConvNeXt adopts a number of design choices from other network models and substantially reduces its computational cost in GFLOPs (giga floating-point operations), achieving an inference speed of 7 ms per image with little change in parameter count. Adding SA, ECA, and CBAM increases the inference time by 2 ms, 2 ms, and 4 ms, respectively, which still preserves a clear speed advantage while improving the model's performance.
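Per-image inference time on a GPU is commonly measured as sketched below; the warm-up and iteration counts are arbitrary choices, and the explicit synchronization ensures asynchronous CUDA kernels finish before the clock is read. This is a generic recipe, not the authors' exact protocol.

```python
# Time a model's forward pass; returns milliseconds per call.
import time
import torch

@torch.no_grad()
def time_inference(model, x, warmup=10, iters=100):
    model.eval()
    for _ in range(warmup):                  # warm up caches and CUDA kernels
        model(x)
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        model(x)
    torch.cuda.synchronize()                 # wait for all queued kernels
    return (time.perf_counter() - start) / iters * 1000.0
```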
Figure 7 shows the confusion matrices of the ConvNeXt network after embedding different attention mechanisms. Notably, the classification network is least effective at classifying protein crystals at level 2 (normal). Crystals with similar numbers of diffraction spots pose a genuine challenge for differentiation based solely on image information. Therefore, this study boundary-filters the dataset beforehand, reducing the number of images close to the classification boundaries in the training phase to enhance interclass differences; however, the data loss this filtering causes on a small dataset leaves the classification accuracy for level 2 crystals below that of level 1 and level 3.
As for each network's performance, the ViT network demonstrates a higher frequency of misclassifications towards level 1. In comparison to ConvNeXt, the ResNet50 network shows inferior performance across all categories. Incorporating the SA module [28] into the original ConvNeXt network enhances the classification accuracy for level 1 crystals but diminishes it for level 2 and level 3 crystals. Remarkably, the ConvNeXt + CBAM network achieves classification accuracy comparable to the ConvNeXt + ECA network for protein crystals at level 1 and level 3, while significantly surpassing it and all other networks in classifying protein crystals at level 2.
Figure 8 shows the ROC curves and AUC values of the different models. The faster a curve converges towards 1, the closer its AUC value is to 1, indicating better classification performance. The ViT network performs poorly on the ROC curves for level 1 and level 2 because it misclassifies crystal images from other categories as level 1 and, especially, level 2, as its confusion matrix in Figure 7 shows. Comparatively, ResNet50 shows slightly higher AUC values than ConvNeXt, but both are lower than those of the three ConvNeXt networks incorporating an attention mechanism. Adding the Spatial Attention (SA) module increases the AUC value of the macro-average ROC curve slightly, by 2.97%, whereas embedding the ECA module and the CBAM module between the convolutional stages of the original ConvNeXt network yields AUC improvements of 6.18% and 9.38%, respectively. The ROC curves also show that the CBAM outperforms the ECA module in improving the performance of the ConvNeXt classification network.

5. Conclusions

The intention of this study is to select crystals worth harvesting before diffraction experiments. A protein-crystal-quality-classification network based on the ConvNeXt architecture is proposed to address the inefficiency of manually choosing protein crystals for X-ray diffraction experiments; to our knowledge, predicting crystal diffraction quality before the diffraction experiment is a new line of study. A novel method is proposed that combines the number of diffraction spots with diffraction resolutions to categorize protein crystal quality. The ConvNeXt network is selected as the classification model after experimental comparison with other networks, and an attention mechanism and skip connections are incorporated into the architecture to enhance its performance. Compared with different network models and attention mechanisms, the ConvNeXt + CBAM structure classifies protein crystals by X-ray diffraction quality with good accuracy. The practicality of this method lies in its ability to help researchers rapidly and accurately select high-quality protein crystals suitable for X-ray diffraction experiments, thereby improving experimental efficiency and the reliability of results.
However, due to inherent uncertainties in protein crystallization, significant variations exist among protein crystal images from different batches. The entire process, from protein crystallization to the completion of X-ray diffraction experiments at a synchrotron light source, is both costly and time-consuming. Consequently, the size of the crystal-XRD database used in this study is limited, and the absence of historical, publicly available data also affects the model's performance to some extent. Future experiments should therefore improve the accuracy and robustness of network predictions in two ways: first, by collecting a wider range of protein crystal images and corresponding X-ray diffraction results across various crystal types; and second, by implementing additional optimization mechanisms to prevent overfitting.

Author Contributions

Conceptualization, B.S.; investigation, Y.W.; project administration, Y.W.; methodology, Y.S. and Z.Z.; data curation, Y.S.; funding acquisition, B.S., Q.X. and Y.W.; resources, Z.Z. and Q.W.; validation, Q.X. and K.Y.; writing—original draft, Y.S.; writing—review and editing, Y.W. and B.S. All authors contributed to the study conception and design. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (Grant Nos. 32271248, 32200988), the Major Project in Basic and Applied Basic Research of Guangdong Province (Grant No. 2023B0303000003); and the Natural Science Foundation of Zhejiang Province (Grant No. LTGG23F010002).

Data Availability Statement

The data that support findings of this study are openly available on Kaggle at https://www.kaggle.com/datasets/superredworld/crystals-diffraction, accessed on 25 August 2024. The codes and pre-trained models are available on GitHub at https://github.com/superredworld/crystal-diffraction, accessed on 25 August 2024.

Acknowledgments

The authors thank the staff from BL18U1 beamline and the Experimental Auxiliary System of the Shanghai Synchrotron Radiation Facility (SSRF) for on-site assistance.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Abola, E.; Kuhn, P.; Earnest, T.; Stevens, R.C. Automation of X-ray crystallography. Nat. Struct. Biol. 2000, 7, 973–977. [Google Scholar] [CrossRef] [PubMed]
  2. Maveyraud, L.; Mourey, L. Protein X-ray Crystallography and Drug Discovery. Molecules 2020, 25, 1030. [Google Scholar] [CrossRef] [PubMed]
  3. McCarthy, A.A.; Barrett, R.; Beteva, A.; Caserotto, H.; Dobias, F.; Felisaz, F.; Giraud, T.; Guijarro, M.; Janocha, R.; Khadrouche, A.; et al. ID30B—A versatile beamline for macromolecular crystallography experiments at the ESRF. J. Synchrotron Radiat. 2018, 25, 1249–1260. [Google Scholar] [CrossRef] [PubMed]
  4. Qin, J.; Zhang, Y.; Zhou, H.; Yu, F.; Sun, B.; Wang, Q. Protein crystal instance segmentation based on mask R-CNN. Crystals 2021, 11, 157. [Google Scholar] [CrossRef]
  5. Elez, K.; Bonvin, A.M.J.J.; Vangone, A. Distinguishing crystallographic from biological interfaces in protein complexes: Role of intermolecular contacts and energetics for classification. BMC Bioinf. 2018, 19, 19–28. [Google Scholar] [CrossRef] [PubMed]
  6. Wang, S.; Zhao, H. SADeepcry: A deep learning framework for protein crystallization propensity prediction using self-attention and auto-encoder networks. Briefings Bioinf. 2022, 23, bbac352. [Google Scholar] [CrossRef] [PubMed]
  7. Bruno, A.E.; Charbonneau, P.; Newman, J.; Snell, E.H.; So, D.R.; Vanhoucke, V.; Watkins, C.J.; Williams, S.; Wilson, J. Classification of crystallization outcomes using deep convolutional neural networks. PLoS ONE 2018, 13, e0198883. [Google Scholar] [CrossRef] [PubMed]
  8. Elbasir, A.; Moovarkumudalvan, B.; Kunji, K.; Kolatkar, P.R.; Mall, R.; Bensmail, H. DeepCrystal: A deep learning framework for sequence-based protein crystallization prediction. Bioinformatics 2019, 35, 2216–2225. [Google Scholar] [CrossRef] [PubMed]
  9. Luft, J.R.; Collins, R.J.; Fehrman, N.A.; Lauricella, A.M.; Veatch, C.K.; DeTitta, G.T. A deliberate approach to screening for initial crystallization conditions of biological macromolecules. J. Struct. Biol. 2003, 142, 170–179. [Google Scholar] [CrossRef] [PubMed]
  10. Leslie, A.G.W.; Powell, H.R. Processing diffraction data with mosflm. In Evolving Methods for Macromolecular Crystallography; NATO Science Series; Springer: Dordrecht, The Netherlands, 2007; Volume 245. [Google Scholar] [CrossRef]
  11. Kabsch, W. XDS. Acta Crystallogr. Sect. D Biol. Crystallogr. 2010, 66 Pt 2, 125–132. [Google Scholar] [CrossRef] [PubMed]
  12. Waterman, D.G.; Winter, G.; Gildea, R.J.; Parkhurst, J.M.; Brewster, A.S.; Sauter, N.K.; Evans, G. Diffraction-geometry refinement in the DIALS framework. Acta Crystallogr. Sect. D Struct. Biol. 2016, 72, 558–575. [Google Scholar] [CrossRef] [PubMed]
  13. White, T.A. Processing serial crystallography data with CrystFEL: A step-by-step guide. Acta Crystallogr. Sect. D Struct. Biol. 2019, 75, 219–233. [Google Scholar] [CrossRef]
  14. Melnikov, I.; Svensson, O.; Bourenkov, G.; Leonard, G.; Popov, A. The complex analysis of X-ray mesh scans for macromolecular crystallography. Acta Crystallogr. Sect. D Struct. Biol. 2018, 74 Pt 4, 355–365. [Google Scholar] [CrossRef] [PubMed]
  15. McPherson, A.; Gavira, J.A. Introduction to protein crystallization. Acta Crystallogr. Sect. F Struct. Biol. Commun. 2014, 70, 2–20. [Google Scholar] [CrossRef] [PubMed]
  16. Wu, K.; Otoo, E.; Suzuki, K. Optimizing two-pass connected-component labeling algorithms. Pattern Anal. Applic. 2009, 12, 117–135. [Google Scholar] [CrossRef]
  17. Rondeau, J.M.; Schreuder, H. Protein crystallography and drug discovery. In The Practice of Medicinal Chemistry; Camille, G.W., Ed.; Elsevier Ltd.: Amsterdam, The Netherlands, 2008; pp. 605–634. [Google Scholar]
  18. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929. [Google Scholar]
  19. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; pp. 10012–10022. [Google Scholar]
  20. LeCun, Y.; Boser, B.; Denker, J.S.; Henderson, D.; Howard, R.E.; Hubbard, W.; Jackel, L.D. Backpropagation applied to handwritten zip code recognition. Neural Comput. 1989, 1, 541–551. [Google Scholar] [CrossRef]
  21. Xie, S.; Girshick, R.; Dollár, P.; Tu, Z.; He, K. Aggregated residual transformations for deep neural networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 1492–1500. [Google Scholar]
  22. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018; pp. 4510–4520. [Google Scholar]
  23. Liu, Z.; Mao, H.; Wu, C.Y.; Feichtenhofer, C.; Darrell, T.; Xie, S. A convnet for the 2020s. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 19–24 June 2022; pp. 11976–11986. [Google Scholar]
  24. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  25. Woo, S.; Park, J.; Lee, J.Y.; Kweon, I.S. Cbam: Convolutional block attention module. In Proceedings of the 2018 European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19. [Google Scholar]
  26. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708. [Google Scholar]
  27. Zhang, X.; Zhou, X.; Lin, M.; Sun, J. Shufflenet: An extremely efficient convolutional neural network for mobile devices. In Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018; pp. 6848–6856. [Google Scholar]
  28. Zhu, X.; Cheng, D.; Zhang, Z.; Lin, S.; Dai, J. An empirical study of spatial attention mechanisms in deep networks. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 6688–6697. [Google Scholar]
  29. Wang, Q.; Wu, B.; Zhu, P.; Li, P.; Zuo, W.; Hu, Q. ECA-Net: Efficient channel attention for deep convolutional neural networks. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Virtual, 14–19 June 2020; pp. 11534–11542. [Google Scholar]
Figure 1. A pair of images of one lysozyme crystal and its X-ray diffraction results. (a) Lysozyme crystals under microscope; (b) X-ray diffraction result. The red circle and blue line in (a) are used to locate the protein crystal, and the red cross in (b) is used to determine the center of diffraction.
Figure 2. The series of processing steps for diffraction images: (a) original diffraction image; (b) shadow area masking and grayscale conversion; (c) binarization; (d) partial visualization of the connected components analysis results, in which each diffraction spot is assigned a different color, showing that each spot can be accurately located.
Figure 3. Three-dimensional visualization of the scoring mechanism. The red circle in (a) represents the projection of points with a score equal to 1 in (b) on the diffraction result map. The score of the diffraction spot circled by blue circles in (a) can be obtained from the score surface in (b).
Figure 4. Network structure of ConvNeXt-Tiny with CBAM. The red CBAM blocks are added after four convolutional stages, and they form a skip-connection structure with the input before convolution.
Figure 5. Internal structure of the ConvNeXt block.
Figure 6. Overview of CBAM.
Figure 7. Classification confusion matrix of each network: (a) Vision Transformer; (b) ConvNeXt; (c) ConvNeXt + SA; (d) ResNet50; (e) ConvNeXt + ECA; (f) ConvNeXt + CBAM.
Figure 8. The ROC curve and the AUC value of each network: (a) Vision Transformer; (b) ConvNeXt; (c) ConvNeXt + SA; (d) ResNet50; (e) ConvNeXt + ECA; (f) ConvNeXt + CBAM. The black dashed line is the baseline, and the farther the ROC curve is from the baseline, the better the model’s prediction is.
Table 1. Comparison before and after dataset expansion.

Data Category           Original                      Enhanced
                     Train   Val   Test   Total    Train   Val   Test   Total
Level 1 (good)          72    18     10     100      360    90     50     500
Level 2 (normal)        64    16     10      90      320    80     50     450
Level 3 (bad)           72    18     10     100      360    90     50     500
Total                  208    52     30     290     1040   260    150    1450
Table 2. Evaluation metrics of the different models.

Network                    Accuracy/Recall (%)   Precision (%)   F1 Score (%)   Inference Time (ms)
Vision Transformer [18]    45.33                 48.50           41.93          129
DenseNet [26]              53.33                 50.57           49.47          59
ShuffleNet [27]            57.33                 54.03           54.90          10
ResNet50 [24]              65.33                 68.25           62.11          54
ConvNeXt [23]              67.33                 71.30           66.19          7
ConvNeXt + SA [28]         63.33                 67.42           62.20          9
ConvNeXt + ECA [29]        69.33                 69.05           68.91          9
ConvNeXt + CBAM [25]       75.33                 76.11           75.31          11