Classification of HEp-2 Staining Pattern Images Using Adapted Multilayer Perceptron Neural Network-Based Intra-Class Variation of Cell Shape
Abstract
1. Introduction
2. Related Work
3. Proposed Method
3.1. Segmentation of HEp-2 Cell Staining Images
3.2. Feature Extraction
- (1) Radon projection: We use the Radon projection to convert a two-dimensional image into a set of one-dimensional vectors. In this paper, the Radon projection is calculated using the procedure described in [20].
- (2) Bispectrum: The bispectrum is the triple product of Fourier coefficients at component frequencies [35]. In the frequency domain, the bispectrum S is written as S(f1, f2) = X(f1) X(f2) X*(f1 + f2), where X denotes the Fourier transform of the Radon projection R at each angle in the range 0 to 180 degrees, and f1 and f2 are frequencies normalised by one half of the sampling frequency, lying in the range [0, 1] [35]. A computational sketch of these two steps is given after this list.
- (3)
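As a minimal illustration of steps (1) and (2), the sketch below converts a segmented cell image into Radon projections and samples bispectrum values from each projection using standard scientific-Python routines. The number of angles, the FFT length, the frequency sampling and the use of skimage.transform.radon are illustrative assumptions; this does not reproduce the authors' MATLAB pipeline or the exact 23,040-dimensional feature set described in Section 4.2.

```python
import numpy as np
from skimage.transform import radon

def hos_features(cell_image, n_angles=180, nfft=1024):
    """Sketch: Radon projections of a 2D cell image followed by
    bispectrum samples B(f1, f2) = X(f1) X(f2) X*(f1 + f2)."""
    thetas = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sinogram = radon(cell_image, theta=thetas, circle=False)  # one column per angle

    features = []
    for col in sinogram.T:                     # 1D Radon vector R for each angle
        X = np.fft.fft(col, nfft)
        half = nfft // 2                       # keep the lower half of the spectrum
        # a few illustrative bispectrum samples along the diagonal f1 = f2
        for k in range(1, half // 2, half // 16):
            b = X[k] * X[k] * np.conj(X[2 * k])
            features.append(np.abs(b))
    return np.asarray(features)
```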
3.3. MultiLayer Perceptron
3.3.1. L-Moment Measuring
- L-Mean (λ1), which considers the location features of the cell, where λ1 = E[X(1:1)];
- L-Scale (λ2), which measures variation in the scaling of the cell, where λ2 = (1/2) E[X(2:2) - X(1:2)];
- L-Skewness (τ3), which measures variation in the concavity of the cell, where τ3 = λ3/λ2 and λ3 = (1/3) E[X(3:3) - 2X(2:3) + X(1:3)];
- L-Kurtosis (τ4), which measures variation in the sharpness of the cell, where τ4 = λ4/λ2 and λ4 = (1/4) E[X(4:4) - 3X(3:4) + 3X(2:4) - X(1:4)].
Here X(k:n) denotes the k-th order statistic of a sample of size n. A computational sketch of these sample L-moments follows this list.
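The sample L-moments above can be estimated from a 1D shape vector via probability-weighted moments. The sketch below uses the standard Hosking estimators; it is an assumed implementation for illustration, not the authors' code.

```python
import numpy as np

def l_moments(x):
    """Sample L-moments via probability-weighted moments (Hosking).
    Returns (l1, l2, tau3, tau4) = (L-mean, L-scale, L-skewness, L-kurtosis)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)                          # ranks 1..n of the sorted sample

    b0 = x.mean()
    b1 = np.sum((i - 1) * x) / (n * (n - 1))
    b2 = np.sum((i - 1) * (i - 2) * x) / (n * (n - 1) * (n - 2))
    b3 = np.sum((i - 1) * (i - 2) * (i - 3) * x) / (n * (n - 1) * (n - 2) * (n - 3))

    l1 = b0                                          # L-mean (location)
    l2 = 2 * b1 - b0                                 # L-scale (dispersion)
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    return l1, l2, l3 / l2, l4 / l2                  # tau3 = L-skewness, tau4 = L-kurtosis
```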
3.3.2. Softmax Activation Function
3.3.3. Cross-Entropy Function
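For reference, a minimal NumPy sketch of the Softmax activation (Section 3.3.2) and the averaged cross-entropy loss (Section 3.3.3) is given below. The function names, the numerical-stability shift and the epsilon term are assumptions for illustration rather than the paper's exact implementation.

```python
import numpy as np

def softmax(z):
    """Row-wise Softmax of net inputs z (n_samples x n_classes)."""
    z = z - z.max(axis=1, keepdims=True)     # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, y_onehot, eps=1e-12):
    """Average cross-entropy between predicted probabilities and one-hot labels."""
    return -np.mean(np.sum(y_onehot * np.log(probs + eps), axis=1))
```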
4. Experimental Analysis
4.1. Description of Dataset
4.2. Implementation of Proposed Method
- 1. Pre-processing and segmentation: Pre-processing is performed by adjusting the image intensity to increase contrast. A level set method via edge-based GACs is then applied to extract the HEp-2 cell shape information from the original microscope images, following [22]. A segmentation sketch is given after this list.
- 2. Feature extraction using HOS: The HOS technique is applied to the segmentation results to extract features. The segmented HEp-2 cell image is then converted into a set of 1D vectors using the MATLAB Radon projection function, which produces a Radon vector R for each angle from 0 to 180 degrees. A total of 256 features are extracted, and the FFT length used for each Radon projection is 1024. Finally, we obtain a set of 23,040 features for each image.
- 3. MLP using Softmax regression via gradient descent: For the work presented in this paper, the neural network is implemented in Python 3.5.8. The MLP classifier model has four layers. Firstly, the class labels are one-hot encoded: a sample belonging to Class-1 has the value 1 in the first cell of its row, a sample belonging to Class-2 has the value 1 in the second cell of its row, and so on. The input layer is a vector of 23,040 features per image, giving a training matrix of size 10,833 × 23,040. We then initialise the weight matrix of size 10,833 × 23,040 × 6 (one column for each class and one row per feature), where k represents four weights for each node. For example, the first row of the weight matrix is [0.1 0.2 0.3 0.4 0.5 0.6]. We construct a neural network with two hidden layers. The first hidden layer is calculated by summing the L-moment functions (L-mean, L-scale, L-skewness and L-kurtosis); this sum is multiplied by the weight matrix w and the bias unit [0.01 0.1 0.1 0.1 0.1 0.1] is added, giving a 10,833 × 1024 matrix. The second hidden layer is calculated using a Softmax activation function. Following this, the cross-entropies are averaged over all training images in order to learn the Softmax model, and the ("regularised") weight coefficients are determined using the gradient descent method. The learning rate (eta) lies in the range [0.0, 1.0] and takes its default value. Using the learned parameters, the prediction label is then created. The output layer is a vector of six classes. Figure 5 shows the classification performance of the adapted MLP classifier using Softmax-based gradient descent with and without data augmentation. Figure 6 shows the cost against iteration for the adapted MLP classifier using Softmax-based gradient descent, with the best result indicated. A training sketch for this step is also given after this list.
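As referenced in step 1, the following sketch shows one possible realisation of the pre-processing and edge-based GAC (level set) segmentation using scikit-image. The contrast-stretching call, the morphological GAC routine, the initial level set and all parameter values are assumptions for illustration; they are not the authors' implementation from [22].

```python
import numpy as np
from skimage import exposure, img_as_float
from skimage.segmentation import (inverse_gaussian_gradient,
                                  morphological_geodesic_active_contour)

def segment_cell(gray_image):
    """Sketch of step 1: contrast adjustment followed by an edge-based
    geometric-active-contour (level set) segmentation of the cell shape."""
    img = img_as_float(gray_image)
    # pre-processing: stretch the intensity range to increase contrast
    img = exposure.rescale_intensity(img)

    # edge indicator for the geodesic active contour
    gimg = inverse_gaussian_gradient(img, alpha=100.0, sigma=2.0)

    # initial level set covering most of the image; the contour shrinks onto edges
    init = np.zeros(img.shape, dtype=np.int8)
    init[5:-5, 5:-5] = 1
    mask = morphological_geodesic_active_contour(gimg, 200, init_level_set=init,
                                                 smoothing=1, balloon=-1)
    return mask.astype(bool)
```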
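As referenced in step 3, the sketch below trains the Softmax output stage with batch gradient descent on the averaged cross-entropy loss. The shapes, learning rate, regularisation strength, iteration count and random initialisation are illustrative assumptions, and the L-moment hidden layer described above is abstracted into a pre-computed feature matrix X.

```python
import numpy as np

def train_softmax_layer(X, y, n_classes=6, eta=0.01, l2=1e-4, n_iter=500):
    """Sketch of step 3: one-hot encode labels, then learn the Softmax output
    layer with batch gradient descent on the averaged cross-entropy loss
    (the L-moment hidden layer is assumed pre-computed in X)."""
    n_samples, n_features = X.shape

    # one-hot encoding: row i has a 1 in the column of its class label
    Y = np.zeros((n_samples, n_classes))
    Y[np.arange(n_samples), y] = 1.0

    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.01, size=(n_features, n_classes))
    b = np.zeros(n_classes)

    for _ in range(n_iter):
        Z = X @ W + b                                # net input
        Z -= Z.max(axis=1, keepdims=True)            # numerical stability
        P = np.exp(Z)
        P /= P.sum(axis=1, keepdims=True)            # Softmax probabilities

        grad_W = X.T @ (P - Y) / n_samples + l2 * W  # regularised gradient
        grad_b = (P - Y).mean(axis=0)
        W -= eta * grad_W                            # gradient descent update
        b -= eta * grad_b
    return W, b

# usage sketch: predicted class labels for new feature vectors X_test
# y_pred = np.argmax(X_test @ W + b, axis=1)
```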
5. Discussion
5.1. Evaluation Results
5.2. Benchmarking and Comparison with Other Techniques
6. Conclusions and Future Work
6.1. Proposed Methodology Advances
6.2. Proposed Methodology Limitation
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
HEp-2 | Human Epithelial Type-2 |
CAD | Computer Aided Diagnoses |
MLP | Multilayer Perceptron |
GACs | Geometric Active Contours |
SNP | Sullivan Nicolaides Pathology |
MCA | Mean Class Accuracy |
SVMs | Support Vector Machines |
CNNs | Convolutional Neural Networks |
DCAE | Deep Convolutional AutoEncoder |
FFT | Fast Fourier Transform |
RNNs | Recurrent Neural Networks |
CCR | Correct Classification Rate |
TP | True Positive |
TN | True Negative |
References
- Di Cataldo, S.; Tonti, S.; Bottino, A.; Ficarra, E. ANAlyte: A modular image analysis tool for ANA testing with indirect immunofluorescence. Comput. Methods Programs Biomed. 2016, 128, 86–99. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Manivannan, S.; Li, W.; Akbar, S.; Wang, R.; Zhang, J.; McKenna, S.J. An automated pattern recognition system for classifying indirect immunofluorescence images of HEp-2 cells and specimens. Pattern Recognit. 2016, 51, 12–26. [Google Scholar] [CrossRef] [Green Version]
- Shen, L.; Jia, X.; Li, Y. Deep cross residual network for HEp-2 cell staining pattern classification. Pattern Recognit. 2018, 82, 68–78. [Google Scholar] [CrossRef]
- Srinidhi, C.L.; Kim, S.W.; Chen, F.D.; Martel, A.L. Self-supervised driven consistency training for annotation efficient histopathology image analysis. Med. Image Anal. 2022, 75, 102256. [Google Scholar] [CrossRef]
- Al-Dulaimi, K.; Chandran, V.; Nguyen, K.; Banks, J.; Tomeo-Reyes, I. Benchmarking HEp-2 Specimen Cells Classification Using Linear Discriminant Analysis on Higher Order Spectra Features of Cell Shape. Pattern Recognit. Lett. 2019, 125, 534–541. [Google Scholar] [CrossRef]
- Hobson, P.; Lovell, B.C.; Percannella, G.; Vento, M.; Wiliem, A. Benchmarking human epithelial type 2 interphase cells classification methods on a very large dataset. Artif. Intell. Med. 2015, 65, 239–250. [Google Scholar] [CrossRef] [Green Version]
- Hobson, P.; Lovell, B.C.; Percannella, G.; Saggese, A.; Vento, M.; Wiliem, A. HEp-2 staining pattern recognition at cell and specimen levels datasets, algorithms and results. Pattern Recognit. Lett. 2016, 82, 12–22. [Google Scholar] [CrossRef] [Green Version]
- Ponomarev, G.V.; Arlazarov, V.L.; Gelfand, M.S.; Kazanov, M.D. ANA HEp-2 cells image classification using number, size, shape and localization of targeted cell regions. Pattern Recognit. 2014, 47, 2360–2366. [Google Scholar] [CrossRef] [Green Version]
- Ensafi, S.; Lu, S.; Kassim, A.A.; Tan, C.L. Automatic CAD System for HEp-2 Cell Image Classification. In Proceedings of the 2014 22nd International Conference on Pattern Recognition, Stockholm, Sweden, 24–28 August 2014; pp. 3321–3326. [Google Scholar]
- Larsen, A.B.L.; Vestergaard, J.S.; Larsen, R. HEp-2 cell classification using shape index histograms with donut-shaped spatial pooling. IEEE Trans. Med. Imaging 2014, 33, 1573–1580. [Google Scholar] [CrossRef]
- Di Cataldo, S.; Bottino, A.; Islam, I.U.; Vieira, T.F.; Ficarra, E. Subclass discriminant analysis of morphological and textural features for hep-2 staining pattern classification. Pattern Recognit. 2014, 47, 2389–2399. [Google Scholar] [CrossRef]
- Cordelli, E.; Soda, P. Methods for greyscale representation of HEp-2 colour images. In Proceedings of the IEEE 23rd International Symposium on Computer-Based Medical Systems (CBMS), Bentley, Australia, 12–15 October 2010; pp. 383–388. [Google Scholar]
- Carvalho, E.D.; Antonio Filho, O.; Silva, R.R.; Araujo, F.H.; Diniz, J.O.; Silva, A.C.; Paiva, A.C.; Gattass, M. Breast cancer diagnosis from histopathological images using textural features and CBIR. Artif. Intell. Med. 2020, 105, 101845. [Google Scholar] [CrossRef] [PubMed]
- de Melo Cruvinel, W.; Andrade, L.E.C.; von Muhlen, C.A.; Dellavance, A.; Ximenes, A.C.; Bichara, C.D.; Bueno, C.; Mangueira, C.L.P.; Bonfa, E.; de Almeida Brito, F.; et al. V Brazilian consensus guidelines for detection of anti-cell autoantibodies on HEp-2 cells. Adv. Rheumatol. 2022, 59, 28. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Jungo, A.; Scheidegger, O.; Reyes, M.; Balsiger, F. pymia: A Python package for data handling and evaluation in deep learning-based medical image analysis. Comput. Methods Programs Biomed. 2021, 198, 105796. [Google Scholar] [CrossRef] [PubMed]
- Khan, A.I.; Shah, J.L.; Bhat, M.M. CoroNet: A deep neural network for detection and diagnosis of COVID-19 from chest x-ray images. Comput. Methods Programs Biomed. 2020, 196, 105581. [Google Scholar] [CrossRef]
- Zhou, X.; Li, Z.; Xue, Y.; Chen, S.; Zheng, M.; Chen, C.; Yu, Y.; Nie, X.; Lin, X.; Wang, L.; et al. CUSS-Net: A Cascaded Unsupervised-based Strategy and Supervised Network for Biomedical Image Diagnosis and Segmentation. IEEE J. Biomed. Health Inform. 2023. [Google Scholar] [CrossRef]
- AL-Dulaimi, K.; Al-Sabaawi, A.; Resen, R.D.; Stephan, J.J.; Zwayen, A. Using adapted JSEG algorithm with Fuzzy C Mean for segmentation and counting of white blood cell and nucleus images. In Proceedings of the 6th IEEE Asia-Pacific Conference on Computer Science and Data Engineering (CSDE), Melbourne, Australia, 9–11 December 2019; Volume 2019, pp. 1–7. [Google Scholar]
- AL-Dulaimi, K.; Tomeo-Reyes, I.; Banks, J.; Chandran, V. Evaluation and benchmarking of level set-based three forces via geometric active contours for segmentation of white blood cell nuclei shape. Comput. Biol. Med. 2020, 116, 103568. [Google Scholar] [CrossRef]
- AL-Dulaimi, K.; Chandran, V.; Banks, J.; Tomeo-Reyes, I.; Nguyen, K. Classification of white blood cells using bispectral invariant features of nuclei shape. In Proceedings of the International Conference on Digital Image Computing: Techniques and Applications (DICTA), Canberra, ACT, Australia, 10–13 December 2018; pp. 19–26. [Google Scholar]
- Al-Dulaimi, K.; Nguyen, K.; Banks, J.; Chandran, V.; Tomeo-Reyes, I. Classification of White Blood Cells Using L-Moments Invariant Features of Nuclei Shape. In Proceedings of the International Conference on Image and Vision Computing New Zealand (IVCNZ), Auckland, New Zealand, 19–21 November 2018; pp. 1–6. [Google Scholar]
- AL-Dulaimi, K.; Banks, J.; Tomeo-Reyes, I.; Chandran, V. Automatic segmentation of HEp-2 cell Fluorescence microscope images using level set method via geometric active contours. In Proceedings of the 23rd International Conference on Pattern Recognition (ICPR), Cancun, Mexico, 4–8 December 2016; pp. 81–83. [Google Scholar]
- Hobson, P.; Percannella, G.; Vento, M.; Wiliem, A. International Competition on Cells Classification by Fluorescent Image Analysis. In Proceedings of the 20th IEEE International Conference on Image Processing (ICIP), Melbourne, Australia, 15–18 September 2013. [Google Scholar]
- Kastaniotis, D.; Fotopoulou, F.; Theodorakopoulos, I.; Economou, G.; Fotopoulos, S. HEp-2 cell classification with Vector of Hierarchically Aggregated Residuals. Pattern Recognit. 2017, 65, 47–57. [Google Scholar] [CrossRef]
- Stoklasa, R.; Majtner, T.; Svoboda, D. Efficient K-NN based HEp-2 cells classifier. Pattern Recognit. 2014, 47, 2409–2418. [Google Scholar] [CrossRef]
- Ensafi, S.; Lu, S.; Kassim, A.A.; Tan, C.L. Accurate HEp-2 cell classification based on sparse coding of superpixels. Pattern Recognit. Lett. 2016, 82, 64–71. [Google Scholar] [CrossRef]
- Gragnaniello, D.; Sansone, C.; Verdoliva, L. Cell image classification by a scale and rotation invariant dense local descriptor. Pattern Recognit. Lett. 2016, 82, 72–78. [Google Scholar] [CrossRef]
- Sarrafzadeh, O.; Rabbani, H.; Dehnavi, A.M.; Talebi, A. Analyzing features by SWLDA for the classification of HEp-2 cell images using GMM. Pattern Recognit. Lett. 2016, 82, 44–55. [Google Scholar] [CrossRef]
- Paisitkriangkrai, S.; Shen, C.; van den Hengel, A. A scalable stagewise approach to large-margin multiclass loss-based boosting. IEEE Trans. Neural Netw. Learn. Syst. 2014, 25, 1002–1013. [Google Scholar]
- Codrescu, C. Quadratic recurrent finite impulse response MLP for indirect immunofluorescence image recognition. In Proceedings of the 2014 1st Workshop on Pattern Recognition Techniques for Indirect Immunofluorescence Images, Stockholm, Sweden, 24 August 2014; pp. 49–52. [Google Scholar]
- Vununu, C.; Lee, S.H.; Kwon, K.R. A Strictly Unsupervised DL Method for HEp-2 Cell Image Classification. Sensors 2020, 20, 2717. [Google Scholar] [CrossRef] [PubMed]
- Li, H.; Huang, H.; Zheng, W.-S.; Xie, X.; Zhang, J. HEp-2 specimen classification via deep CNNs and pattern histogram. In Proceedings of the 23rd International Conference on Pattern Recognition (ICPR), Cancun, Mexico, 4–8 December 2016; pp. 2145–2149. [Google Scholar]
- Gao, Z.; Wang, L.; Zhou, L.; Zhang, J. HEp-2 cell image classification with deep convolutional neural networks. IEEE J. Biomed. Health Inform. 2017, 21, 416–428. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- AL-Dulaimi, K.; Tomeo-Reyes, I.; Banks, J.; Chandran, V. White blood cell nuclei segmentation using level set methods and geometric active contours. In Proceedings of the International Conference on Digital Image Computing: Techniques and Applications (DICTA), Gold Coast, QLD, Australia, 30 November–2 December 2016; pp. 1–7. [Google Scholar]
- Chandran, V.; Carswell, B.; Boashash, B.; Elgar, S. Pattern recognition using invariants defined from higher order spectra: 2-D image inputs. IEEE Trans. Image Process. 1997, 6, 703–712. [Google Scholar]
- Raschka, S. Softmax Regression-Gradient Descent. 2014–2019. Available online: http://rasbt.github.io/mlxtend/user_guide/classifier/SoftmaxRegression/ (accessed on 10 December 2022).
- Gibson, E.; Li, W.; Sudre, C.; Fidon, L.; Shakir, D.I.; Wang, G.; Eaton-Rosen, Z.; Gray, R.; Doel, T.; Hu, Y.; et al. NiftyNet: A deep-learning platform for medical imaging. Comput. Methods Programs Biomed. 2018, 158, 113–122. [Google Scholar] [CrossRef]
- Oei, R.W.; Hou, G.; Liu, F.; Zhong, J.; Zhang, J.; An, Z.; Xu, L.; Yang, Y. Convolutional neural network for cell classification using microscope images of intracellular actin networks. PLoS ONE 2019, 14, e0213626. [Google Scholar] [CrossRef]
- Qi, X.; Zhao, G.; Chen, J.; Pietikäinen, M. HEp-2 cell classification: The role of gaussian scale space theory as a pre-processing approach. Pattern Recognit. Lett. 2016, 82, 36–43. [Google Scholar] [CrossRef] [Green Version]
- Han, X.H.; Chen, Y.W.; Xu, G. Integration of spatial and orientation contexts in local ternary patterns for HEp-2 cell classification. Pattern Recognit. Lett. 2016, 82, 23–27. [Google Scholar] [CrossRef]
- Nanni, L.; Lumini, A.; dos Santos, F.L.C.; Paci, M.; Hyttinen, J. Ensembles of dense and dense sampling descriptors for the HEp-2 cells classification problem. Pattern Recognit. Lett. 2016, 82, 28–35. [Google Scholar] [CrossRef]
- Cascio, D.; Taormina, V.; Cipolla, M.; Bruno, S.; Fauci, F.; Raso, G. A multi-process system for HEp-2 cells classification based on SVM. Pattern Recognit. Lett. 2016, 82, 56–63. [Google Scholar] [CrossRef]
- Theodorakopoulos, I.; Kastaniotis, D.; Economou, G.; Fotopoulos, S. HEp-2 cells classification using morphological features and a bundle of local gradient descriptors. In Proceedings of the 1st Workshop on Pattern Recognition Techniques for Indirect Immunofluorescence Images, Stockholm, Sweden, 24 August 2014; pp. 33–36. [Google Scholar]
- Meng, N.; Lam, E.Y.; Tsia, K.K.; So, H.K.H. Large-scale multi-class image-based cell classification with deep learning. IEEE J. Biomed. Health Inform. 2018, 23, 2091–2098. [Google Scholar] [CrossRef]
- Nguyen, L.D.; Gao, R.; Lin, D.; Lin, Z. Biomedical image classification based on a feature concatenation and ensemble of deep CNNs. J. Ambient Intell. Humaniz. Comput. 2019, 1–13. [Google Scholar] [CrossRef]
- Jorgensen, B.; AL-Dulaimi, K.; Banks, J. HEp-2 Specimen Cell Detection and Classification Using Very Deep Convolutional Neural Networks-Based Cell Shape. In Proceedings of the International Conference on Digital Image Computing: Techniques and Applications (DICTA), Gold Coast, Australia, 30 November–2 December 2021; Volume 2021. [Google Scholar]
- Manivannan, S.; Li, W.; Akbar, S.; Wang, R.; Zhang, J.; McKenna, S.J. HEp-2 cell classification using multi-resolution local patterns and ensemble SVMs. In Proceedings of the 1st Workshop on Pattern Recognition Techniques for Indirect Immunofluorescence Images, Stockholm, Sweden, 24 August 2014; pp. 37–40. [Google Scholar]
- El-kenawy, E.S.M.; Abutarboush, H.F.; Mohamed, A.W.; Ibrahim, A. Advance artificial intelligence technique for designing double T-shaped monopole antenna. Comput. Mater. Contin. 2021, 69, 2983–2995. [Google Scholar] [CrossRef]
- Yu, Z.; Guindani, M.; Grieco, S.F.; Chen, L.; Holmes, T.C.; Xu, X. Beyond t test and ANOVA: Applications of mixed-effects models for more rigorous statistical analysis in neuroscience research. Neuron 2022, 110, 21–35. [Google Scholar] [CrossRef] [PubMed]
Actual Class | Positive Cell Images | Positive Cell Rate (%) | Intermediate Cell Images | Intermediate Cell Rate (%)
---|---|---|---|---
Homogeneous | 722/815 | 88.59% | 902/1055 | 85.50% |
Speckled | 900/1092 | 82.42% | 900/1030 | 87.38% |
Nucleolar | 450/500 | 90.00% | 1000/1248 | 80.13%
Centromere | 955/1033 | 92.45% | 950/1022 | 92.96% |
Nuclear Membrane | 655/707 | 92.65% | 810/948 | 85.44% |
Golgi | 300/366 | 81.97% | 200/280 | 71.43% |
Overall | 3982/4513 | 88.23% | 4762/5583 | 85.30% |
References | Feature Extraction and Selection | Classifier | Data Augmentation | Train Set (%) | Test Set (%)
---|---|---|---|---|---
[30] | Trainable features | QR-FIRMLP | Mirroring and rotation | 98.94 | 74.68 |
[29] | CoALBP, STR, LPC | Multiclass boosting | Rotation | 100.00 | 81.50 |
[26] | SIFT and SURF with BoW | Linear SVMs & Majority voting | – | 98.07 | 80.84 |
[27] | SID with soft BoW | Linear SVM (one-vs-all) | – | 95.47 | 83.85 |
[33] | Trainable features | Deep CNNs | Rotation | 89.02 | 76.26 |
[39] | LOAD with IFV | Linear SVM (one-vs-all) | – | 99.91 | 84.26 |
[2] | Multi-resolution LP & Root-SIFT | SVMs with Platt re-scaling | Rotation | 95 | 87.42
[40] | RICWLTP | Linear SVM (one-vs-all) | – | 94.68 | 68.37 |
[41] | LCP, RIC-LBP, ELBP, PLBP, STR | SVM with Kernel RBF | Resizing & Rotation | 100.00 | 79.91 |
[42] | Geometry, morphology & entropy | SVM (one-vs-one) cell level | – | 90.25 | 80.45 |
[43] | Morphological & textural features | Linear SVM (one-vs-all) | – | 93.82 | 83.06 |
[28] | Statistical, spectral & LDA | Gaussian mixture model | – | 88.59 | 73.78 |
[24] | SIFT descriptors | Vector of hierarchically residuals | – | – | 82.80 |
[38] | – | CNN-based Softmax | rotation, cropping & flipping | 95.32 | 91.33 |
[44] | – | CNNs | – | 94.01 | 89.52 |
[45] | feature concatenation & ensemble | CNNs | – | 96.56 | 89.00 |
[46] | – | Very deep CNNs | Rotation | 89.36 | |
MLP method | Higher order spectra | Plain MLP | No augmentation | 90.22 | 84.32 |
Our proposed method | Higher order spectra | AMLP based L-moment | No augmentation | 95.82 | 87.55 |
Our proposed method | Higher order spectra | AMLP based L-moment | Rotation | 97.11 | 90.83 |
References | Hm (Homogeneous) | Sp (Speckled) | Cn (Centromere) | Nu (Nucleolar) | Go (Golgi) | Nm (Nuclear Membrane)
---|---|---|---|---|---|---
[30] | 69.16 | 72.59 | 68.68 | 67.08 | 94.21 | 76.15 |
[29] | 75.84 | 82.93 | 76.4 | 75.56 | 94.51 | 83.78 |
[26] | 75.53 | 81.43 | 76.61 | 73.7 | 94.18 | 83.57 |
[27] | 83.02 | 82.26 | 85.37 | 78.4 | 95.37 | 78.68 |
[33] | 80.79 | 64.65 | 73.51 | 67.62 | 85.52 | 73.3 |
[39] | 89.91 | 80.67 | 86.84 | 81.53 | 85.5 | 80.11 |
[2] | 87.47 | 80.51 | 83.04 | 91.01 | 89.84 | 92.09 |
[40] | 68.58 | 53.51 | 63.03 | 64.74 | 83.02 | 77.36 |
[41] | 89.69 | 76.21 | 70.31 | 78.05 | 84.46 | 80.05 |
[42] | 89.19 | 76.3 | 70.17 | 77.95 | 86.28 | 82.33 |
[43] | 91.11 | 79.24 | 75.05 | 78.06 | 87.29 | 87.12 |
[28] | 80.72 | 63.62 | 71.11 | 66.6 | 84.83 | 75.15 |
[24] | 88.93 | 77.7 | 79.25 | 83.00 | 90.40 | 77.05 |
[38] | 93.28 | 90.01 | 88.08 | 91.36 | 91.56 | 93.57 |
[44] | 92.12 | 87.80 | 86.51 | 88.05 | 91.62 | 91.01 |
[45] | 84.52 | 90.01 | 91.33 | 80.04 | 91.97 | 96.31 |
[46] | - | - | - | - | - | - |
plain MLP | 82.13 | 82.14 | 90.12 | 88.13 | 90.08 | 82.10 |
Proposed method-1 | 84.53 | 85.34 | 91.32 | 80.13 | 91.80 | 91.15 |
Proposed method-2 | 91.91 | 89.81 | 89.51 | 85.97 | 91.67 | 96.16 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).