Enhancing Medical Image Quality Using Fractional Order Denoising Integrated with Transfer Learning
Abstract
1. Introduction
1.1. Image Denoising Perspective
1.2. Transfer Learning Perspective
1.3. Identified Research Gaps
How Our Work Addresses These Gaps
1.4. Objective of the Proposed Work
- Utilize Fractional Order Techniques: Implement fractional order techniques in the ETLFOD model to overcome the limitations of traditional denoising methods, enhancing the robustness and accuracy of noise removal in medical images.
- Leverage Transfer Learning: Use transfer learning to incorporate pre-trained architectures such as VGG16, DenseNet121, ResNet50, and Inception V3, trained on large-scale datasets, alongside a custom CNN, to significantly improve denoising performance while preserving important image features.
- Mitigate Overfitting: Improve both training and validation accuracy to mitigate overfitting, ensuring the ETLFOD model generalizes well to new, unseen medical image data.
- Validate with Diverse Datasets: Utilize a variety of medical imaging datasets, including MRI, CT scans, and X-ray images, to validate the ETLFOD model’s ability to accurately identify infected areas and confirm its robustness across different medical conditions.
- Comprehensive Performance Comparison: Compare the ETLFOD model’s performance with existing models using multiple evaluation metrics such as sensitivity, specificity, and accuracy, targeting an improvement of at least 6% over current state-of-the-art techniques. For a more comprehensive analysis, we evaluated the proposed model on all measurement data and performed 10-fold cross-validation (sketched below) to reduce bias in the evaluation, and we provide a detailed discussion of the results to deepen the understanding of the model’s performance.
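A minimal sketch of the 10-fold protocol referred to above, assuming images and integer class labels are already loaded as NumPy arrays and that `build_model()` (an illustrative name, not from the paper) returns a freshly compiled classifier:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score

def cross_validate(images, labels, build_model, n_splits=10):
    """Stratified 10-fold cross-validation; returns mean and std of fold accuracies."""
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=42)
    scores = []
    for train_idx, test_idx in skf.split(images, labels):
        model = build_model()  # fresh model per fold to avoid information leakage
        # assumes integer labels and a model compiled with sparse categorical cross-entropy
        model.fit(images[train_idx], labels[train_idx],
                  epochs=50, batch_size=32, verbose=0)
        preds = np.argmax(model.predict(images[test_idx]), axis=1)
        scores.append(accuracy_score(labels[test_idx], preds))
    return float(np.mean(scores)), float(np.std(scores))
```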
2. Proposed System
2.1. Benchmark Dataset Description
2.2. System Description
2.3. An Efficient Fractional-Order-Based Image Denoising
- Riemann–Liouville Derivative:
- Caputo Derivative:
- Grünwald–Letnikov Derivative:
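For reference, the standard textbook forms of these three operators, for $n-1 < \alpha < n$ and a sufficiently smooth $f$, are given below (the paper's exact presentation may differ):

```latex
% Riemann–Liouville derivative of order \alpha
{}_{a}D_{t}^{\alpha} f(t) = \frac{1}{\Gamma(n-\alpha)}\,\frac{d^{n}}{dt^{n}}
  \int_{a}^{t} \frac{f(\tau)}{(t-\tau)^{\alpha-n+1}}\, d\tau

% Caputo derivative of order \alpha
{}^{C}_{a}D_{t}^{\alpha} f(t) = \frac{1}{\Gamma(n-\alpha)}
  \int_{a}^{t} \frac{f^{(n)}(\tau)}{(t-\tau)^{\alpha-n+1}}\, d\tau

% Grünwald–Letnikov derivative of order \alpha
{}_{a}D_{t}^{\alpha} f(t) = \lim_{h \to 0} \frac{1}{h^{\alpha}}
  \sum_{k=0}^{\lfloor (t-a)/h \rfloor} (-1)^{k} \binom{\alpha}{k}\, f(t - kh)
```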
Optimization Problem for Image Processing
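To make the fractional-order formulation concrete, the sketch below applies truncated Grünwald–Letnikov coefficients as separable masks inside a simple gradient-descent denoiser. The quadratic (Tikhonov-style) regularizer, the 3-term truncation, and the step size are illustrative assumptions; they stand in for, and are not, the paper's EFOD scheme.

```python
import numpy as np
from scipy.ndimage import convolve, correlate

def gl_mask(alpha, n_terms=3):
    """Truncated Grunwald-Letnikov coefficients w_k = (-1)^k * C(alpha, k)."""
    w = np.zeros(n_terms)
    w[0] = 1.0
    for k in range(1, n_terms):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return w

def fractional_denoise(noisy, alpha=1.2, mu=2.0, step=0.05, iters=50):
    """Gradient descent on 0.5*||u - f||^2 + 0.5*mu*(||Dx^a u||^2 + ||Dy^a u||^2),
    where Dx^a and Dy^a are the separable GL masks along image rows and columns."""
    w = gl_mask(alpha)
    kx, ky = w.reshape(1, -1), w.reshape(-1, 1)
    f = noisy.astype(np.float64)
    u = f.copy()
    for _ in range(iters):
        dx = convolve(u, kx, mode="reflect")   # Dx^a u
        dy = convolve(u, ky, mode="reflect")   # Dy^a u
        # correlate() applies the flipped kernel, approximately the adjoint operator
        grad = (u - f) + mu * (correlate(dx, kx, mode="reflect")
                               + correlate(dy, ky, mode="reflect"))
        u -= step * grad
    return np.clip(u, 0.0, 255.0)
```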
2.4. Distribution of Pixel Intensity to Calculate Mean and Standard Deviation
2.4.1. Discussion on MRI Brain Images
2.4.2. Discussion on Lung CT Images
2.4.3. Discussion on Pneumonia X-ray Images
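Section 2.4 characterizes each dataset by the distribution of pixel intensities. A minimal sketch of that computation, assuming a grayscale image loaded as a NumPy array in the range 0–255:

```python
import numpy as np

def intensity_statistics(image, bins=256, value_range=(0, 255)):
    """Histogram, mean, and standard deviation of pixel intensities for one image."""
    hist, _ = np.histogram(image, bins=bins, range=value_range)
    return {
        "histogram": hist,             # per-bin pixel counts
        "mean": float(image.mean()),   # mean intensity
        "std": float(image.std()),     # intensity standard deviation
    }
```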
2.5. Transfer Learning
2.5.1. Convolutional Neural Networks
Detailed Description of the Custom CNN Architecture in ETLFOD Model
- Custom CNN Architecture Overview
  - Input Layer:
    - Description: The input layer accepts medical images with dimensions H × W × C, where H is the height, W is the width, and C is the number of channels (e.g., grayscale or RGB).
    - Input Shape: Typically 224 × 224 × 3 for RGB images.
  - Convolutional Layers:
    - First Convolutional Layer: 32 filters, 3 × 3 kernel, ReLU (Rectified Linear Unit) activation, stride 1, same padding.
    - Second Convolutional Layer: 64 filters, 3 × 3 kernel, ReLU activation, stride 1, same padding.
    - Third Convolutional Layer: 128 filters, 3 × 3 kernel, ReLU activation, stride 1, same padding.
  - Pooling Layers:
    - First Pooling Layer: max pooling, 2 × 2 pool size, stride 2.
    - Second Pooling Layer: max pooling, 2 × 2 pool size, stride 2.
  - Batch Normalization: Batch normalization layers are added after each convolutional layer to normalize the inputs and improve training stability and performance.
  - Dropout Layers:
    - First Dropout Layer: dropout rate 0.25, applied after the first pooling layer.
    - Second Dropout Layer: dropout rate 0.5, applied after the second pooling layer.
  - Fully Connected (Dense) Layers:
    - First Dense Layer: 512 units, ReLU activation.
    - Second Dense Layer: 256 units, ReLU activation.
  - Output Layer: number of units equal to the number of classes (for classification tasks) or 1 (for regression tasks), with softmax activation for classification or linear activation for regression.
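A minimal Keras sketch of the custom CNN just described; the exact placement of the two pooling/dropout stages among the three convolutional blocks is an assumption, since the text lists them separately:

```python
from tensorflow.keras import layers, models

def build_custom_cnn(input_shape=(224, 224, 3), num_classes=2):
    """Custom CNN per the specification above (layer ordering partially assumed)."""
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, strides=1, padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.Conv2D(64, 3, strides=1, padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.MaxPooling2D(pool_size=2, strides=2),
        layers.Dropout(0.25),                       # after first pooling layer
        layers.Conv2D(128, 3, strides=1, padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.MaxPooling2D(pool_size=2, strides=2),
        layers.Dropout(0.5),                        # after second pooling layer
        layers.Flatten(),
        layers.Dense(512, activation="relu"),
        layers.Dense(256, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),  # or 1 unit, linear, for regression
    ])
```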
- Custom CNN Architecture with Pre-Trained Backbones: The custom CNN architecture for the ETLFOD model is based on the principles of transfer learning, employing several well-known pre-trained models. The detailed architecture for each of these models is as follows:
  - DenseNet121: Each layer is connected to every other layer in a feed-forward fashion, allowing maximum information flow between layers; this connectivity pattern improves gradient flow and makes the network more efficient. It uses multiple dense blocks, each containing several convolutional layers. Pretraining dataset: ImageNet. Layer mapping: y = F(x, W). The pre-trained model is fine-tuned on the target dataset.
  - VGG16: Composed of groups of 2–3 convolutional layers followed by a pooling layer, two hidden fully connected layers with 4096 nodes each, and an output layer with 1000 nodes; all convolutional layers use 3 × 3 filters. In total: 13 convolutional layers, 5 pooling layers, and 3 fully connected layers. Pretraining dataset: ImageNet.
  - ResNet50: Fifty layers deep, built from residual blocks combining convolutional layers, batch normalization, ReLU activation, and max pooling; in total, 49 convolutional layers and one fully connected layer. The number of filters grows within each group of residual blocks (from 64 up to 256 in the first group). Pretraining dataset: ImageNet. The network uses residual learning, y = F(x, W) + x, with shortcut connections that skip over some layers.
  - Inception V3: Designed with inception modules that perform 1 × 1, 3 × 3, and 5 × 5 convolutions in parallel, followed by max pooling; it stacks multiple inception modules with batch normalization and auxiliary classifiers. Pretraining dataset: ImageNet.
- Pretraining Dataset:
- For transfer learning, the custom CNN architecture was initialized with pre-trained ImageNet weights; the adaptation process is as follows:
- Algorithm Steps:
- Apply Gaussian noise to the images in the preprocessing stage.
- Denoise the images using fractional order techniques.
- Train and test the model using the pre-trained architectures (DenseNet121, VGG16, ResNet50, Inception V3).
- Perform data augmentation to reduce the impact of artifacts and enhance model robustness.
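A minimal sketch of the train/test step for one backbone (DenseNet121 with ImageNet weights, as described above); freezing the base and the size of the new classification head are illustrative choices, not details reported in the paper:

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet121

def build_transfer_model(num_classes=2, input_shape=(224, 224, 3)):
    """DenseNet121 backbone pre-trained on ImageNet with a new classification head."""
    base = DenseNet121(weights="imagenet", include_top=False, input_shape=input_shape)
    base.trainable = False  # freeze backbone for the initial fine-tuning stage
    return models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
```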
- Performance Metrics:
- Evaluated using precision, recall, F1-score, training accuracy, and testing accuracy.
- Models like DenseNet121 showed significant improvement in performance metrics after applying denoising techniques.
- Training Details
- Optimizer: The Adam optimizer was used with a learning rate of 1 × 10⁻⁴.
- Loss Function: Cross-entropy loss for classification tasks or mean squared error (MSE) for regression tasks.
- Batch Size: 32
- Epochs: 50, with early stopping based on validation loss to prevent overfitting.
- Data Augmentation: Techniques such as rotation, flipping, and scaling were applied to the training images to increase the dataset’s diversity and improve the model’s generalization ability.
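A sketch of the training configuration listed above (Adam at 1 × 10⁻⁴, cross-entropy loss, batch size 32, up to 50 epochs with early stopping, and rotation/flip/scaling augmentation); the augmentation ranges and early-stopping patience are illustrative assumptions:

```python
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator

def train(model, x_train, y_train, x_val, y_val):
    """Compile and fit with the training settings described in the text."""
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
                  loss="categorical_crossentropy",  # assumes one-hot labels; MSE for regression
                  metrics=["accuracy"])
    augmenter = ImageDataGenerator(rotation_range=15,    # rotation
                                   horizontal_flip=True, # flipping
                                   zoom_range=0.1)       # scaling
    early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss",
                                                  patience=5,  # patience is an illustrative choice
                                                  restore_best_weights=True)
    return model.fit(augmenter.flow(x_train, y_train, batch_size=32),
                     validation_data=(x_val, y_val),
                     epochs=50,
                     callbacks=[early_stop])
```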
2.5.2. DenseNet121
2.5.3. VGG16
2.5.4. ResNet50
2.5.5. Inception V3
Algorithm 1: ETLFOD
Input: Image acquisition from benchmark datasets
Output: Prediction of the infected region in medical images.
3. Experimental Results and Discussion
- Precision
- Recall
- F1-score
- Accuracy
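These metrics can be computed per class from the predictions; a compact sketch using scikit-learn, assuming integer ground-truth labels and predicted labels:

```python
from sklearn.metrics import (accuracy_score, classification_report,
                             precision_recall_fscore_support)

def evaluate(y_true, y_pred):
    """Per-class precision, recall, and F1-score, plus overall accuracy."""
    precision, recall, f1, _ = precision_recall_fscore_support(y_true, y_pred)
    return {
        "precision": precision,  # one value per class
        "recall": recall,
        "f1": f1,
        "accuracy": accuracy_score(y_true, y_pred),
        "report": classification_report(y_true, y_pred),
    }
```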
3.1. Results and Discussion of ETLFOD_model for Brain Dataset
3.2. Experimental Analysis of ETLFOD Model for Lung CT Dataset
3.3. Experimental Analysis of ETLFOD Model for Pneumonia Dataset
4. Conclusions and Future Enhancement
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
| Details | Type of Dataset | Total Images | Malignant/Infected | Benign/Non-Infected | Resolution |
|---|---|---|---|---|---|
| Brain | MRI | 3762 | 1683 | 2079 | 256 × 256 |
| COVID-19 Lung (SARS-CoV-2) | CT | 2074 | 1130 | 944 | 512 × 512 |
| Pneumonia (RT-PCR) | X-ray | 5856 | 4273 | 1583 | 512 × 512 |
Denoising performance with noise level σ = 10 and fractional order α = 1.2 (integer-order model vs. EFOD model):

| Images | PSNR (Integer-Order Model) | MSE (Integer-Order Model) | Avg. Time (s)/10 Iterations (Integer-Order Model) | PSNR (EFOD Model) | MSE (EFOD Model) | Avg. Time (s)/10 Iterations (EFOD Model) |
|---|---|---|---|---|---|---|
| Brain (MRI) | 26.4821 | 113.39 | 1032 | 41.1533 | 110.85 | 885 |
| Lung (CT) | 28.5263 | 61.04 | 968 | 48.8827 | 53.79 | 726 |
| Pneumonia (X-ray) | 27.4821 | 68.41 | 904 | 46.3242 | 72.14 | 766 |
| Preprocess | Models | Class | Precision | Recall | F1-Score | Training Accuracy | Testing Accuracy |
|---|---|---|---|---|---|---|---|
| Without Denoising | CNN | 0 | 0.7351 | 0.7000 | 0.7224 | 0.6986 | 0.7214 |
| | | 1 | 0.6344 | 0.6840 | 0.6423 | 0.6235 | 0.6923 |
| | DenseNet121 | 0 | 0.8469 | 0.9380 | 0.9153 | 0.9021 | 0.9012 |
| | | 1 | 0.9438 | 0.8722 | 0.8936 | 0.8963 | 0.8935 |
| | VGG16 | 0 | 0.3562 | 0.0032 | 0.0086 | 0.4153 | 0.4123 |
| | | 1 | 0.3125 | 0.8536 | 0.4025 | 0.4235 | 0.4263 |
| | ResNet50 | 0 | 0.8126 | 0.8123 | 0.8365 | 0.8102 | 0.7962 |
| | | 1 | 0.8032 | 0.7965 | 0.7825 | 0.7693 | 0.7922 |
| | Inception V3 | 0 | 0.8235 | 0.8425 | 0.8254 | 0.8125 | 0.7953 |
| | | 1 | 0.8123 | 0.7236 | 0.7856 | 0.7953 | 0.8123 |
| With Denoising | CNN | 0 | 0.7758 | 0.7500 | 0.7627 | 0.7386 | 0.7719 |
| | | 1 | 0.6947 | 0.7242 | 0.7091 | 0.7353 | 0.7500 |
| | DenseNet121 | 0 | 0.9346 | 0.9880 | 0.9606 | 0.9509 | 0.9546 |
| | | 1 | 0.9836 | 0.9121 | 0.9465 | 0.9535 | 0.9544 |
| | VGG16 | 0 | 0.4000 | 0.0047 | 0.0094 | 0.4566 | 0.4486 |
| | | 1 | 0.4389 | 0.9909 | 0.6083 | 0.4978 | 0.4386 |
| | ResNet50 | 0 | 0.8506 | 0.8952 | 0.8723 | 0.8684 | 0.8533 |
| | | 1 | 0.8571 | 0.8000 | 0.8275 | 0.8476 | 0.8533 |
| | Inception V3 | 0 | 0.8397 | 0.8857 | 0.8621 | 0.8669 | 0.8413 |
| | | 1 | 0.8436 | 0.7848 | 0.8131 | 0.8352 | 0.8413 |
| Preprocess | Models | Class | Precision | Recall | F1-Score | Training Accuracy | Testing Accuracy |
|---|---|---|---|---|---|---|---|
| Without Denoising | CNN | 0 | 0.7952 | 0.6000 | 0.6915 | 0.8123 | 0.7123 |
| | | 1 | 0.6935 | 0.7136 | 0.7153 | 0.6941 | 0.7164 |
| | DenseNet121 | 0 | 0.8950 | 0.7241 | 0.7952 | 0.9382 | 0.8461 |
| | | 1 | 0.7953 | 0.8951 | 0.8246 | 0.8235 | 0.8125 |
| | VGG16 | 0 | 0.0596 | 0.1203 | 0.2238 | 0.7953 | 0.5947 |
| | | 1 | 0.5523 | 1.5230 | 0.7140 | 0.7247 | 0.5932 |
| | ResNet50 | 0 | 0.8906 | 0.4610 | 0.6124 | 0.8423 | 0.7123 |
| | | 1 | 0.6513 | 0.9123 | 0.7459 | 0.7956 | 0.7214 |
| | Inception V3 | 0 | 0.8935 | 0.5960 | 0.7231 | 0.8961 | 0.8542 |
| | | 1 | 0.7126 | 0.9120 | 0.7956 | 0.7952 | 0.7912 |
| With Denoising | CNN | 0 | 0.8225 | 0.6500 | 0.7262 | 0.8806 | 0.7545 |
| | | 1 | 0.7513 | 0.8833 | 0.8122 | 0.7840 | 0.7666 |
| | DenseNet121 | 0 | 0.9042 | 0.7600 | 0.8267 | 0.9384 | 0.8545 |
| | | 1 | 0.8229 | 0.9333 | 0.8750 | 0.8645 | 0.8641 |
| | VGG16 | 0 | 1.0000 | 0.1700 | 0.2908 | 0.8081 | 0.6227 |
| | | 1 | 0.5913 | 1.0000 | 0.7434 | 0.7967 | 0.6227 |
| | ResNet50 | 0 | 0.9104 | 0.5100 | 0.6536 | 0.9013 | 0.7545 |
| | | 1 | 0.7010 | 0.9583 | 0.8099 | 0.8096 | 0.7964 |
| | Inception V3 | 0 | 0.9535 | 0.6100 | 0.7439 | 0.9121 | 0.8090 |
| | | 1 | 0.7500 | 0.9750 | 0.8478 | 0.8433 | 0.8090 |
| Preprocess | Models | Class | Precision | Recall | F1-Score | Training Accuracy | Testing Accuracy |
|---|---|---|---|---|---|---|---|
| Without Denoising | CNN | 0 | 0.8923 | 0.8213 | 0.7903 | 0.8962 | 0.8823 |
| | | 1 | 0.8532 | 0.8952 | 0.8862 | 0.8752 | 0.8825 |
| | DenseNet121 | 0 | 0.9123 | 0.7852 | 0.8563 | 0.8932 | 0.8752 |
| | | 1 | 0.8742 | 0.9236 | 0.8932 | 0.8652 | 0.8825 |
| | VGG16 | 0 | 0.6236 | 0.8236 | 0.7412 | 0.7236 | 0.7923 |
| | | 1 | 0.8752 | 0.6236 | 0.7236 | 0.7921 | 0.7203 |
| | ResNet50 | 0 | 0.7132 | 0.1102 | 0.2234 | 0.7214 | 0.5936 |
| | | 1 | 0.5963 | 0.9536 | 0.6931 | 0.5936 | 0.5532 |
| | Inception V3 | 0 | 0.6325 | 0.9125 | 0.7536 | 0.7512 | 0.7452 |
| | | 1 | 0.9125 | 0.6362 | 0.7963 | 0.7852 | 0.7953 |
| With Denoising | CNN | 0 | 0.9233 | 0.8656 | 0.8935 | 0.9299 | 0.9130 |
| | | 1 | 0.9085 | 0.9488 | 0.9282 | 0.9109 | 0.9142 |
| | DenseNet121 | 0 | 0.9746 | 0.8406 | 0.9026 | 0.9524 | 0.9195 |
| | | 1 | 0.8967 | 0.9844 | 0.9385 | 0.9206 | 0.9246 |
| | VGG16 | 0 | 0.6741 | 0.9518 | 0.7898 | 0.7944 | 0.8287 |
| | | 1 | 0.9520 | 0.6755 | 0.7908 | 0.8137 | 0.7993 |
| | ResNet50 | 0 | 0.7727 | 0.15937 | 0.2642 | 0.7720 | 0.6311 |
| | | 1 | 0.6178 | 0.96666 | 0.7538 | 0.6311 | 0.6311 |
| | Inception V3 | 0 | 0.6941 | 0.97187 | 0.8098 | 0.8103 | 0.8103 |
| | | 1 | 0.9720 | 0.69555 | 0.8108 | 0.8103 | 0.8337 |
Paired t-test results:

| Model | Brain Dataset: t-statistic | Brain Dataset: p-value | Lung CT Dataset: t-statistic | Lung CT Dataset: p-value | Pneumonia Dataset: t-statistic | Pneumonia Dataset: p-value |
|---|---|---|---|---|---|---|
| CNN | 4.0415 | 0.1544 | 2.7902 | 0.2191 | 3.5514 | 0.1747 |
| DenseNet121 | 12.9211 | 0.0492 | 2.0000 | 0.2952 | 2.1307 | 0.2794 |
| VGG16 | 1.0078 | 0.4975 | 1.0865 | 0.4736 | 4.8403 | 0.1297 |
| ResNet50 | 8.7826 | 0.0722 | 2.3244 | 0.2586 | 2.1316 | 0.2793 |
| Inception V3 | 6.9783 | 0.0906 | 4.3097 | 0.1451 | 57.6667 | 0.011 |

McNemar’s test results:

| Model | Brain Dataset: chi2-statistic | Brain Dataset: p-value | Lung CT Dataset: chi2-statistic | Lung CT Dataset: p-value | Pneumonia Dataset: chi2-statistic | Pneumonia Dataset: p-value |
|---|---|---|---|---|---|---|
| CNN | 0.64 | 0.4237 | 1.0667 | 0.3017 | 1.0667 | 0.3017 |
| DenseNet121 | 1.0667 | 0.3017 | 4.05 | 0.0442 | 4.05 | 0.0442 |
| VGG16 | 56.7364 | 0 | 52.1524 | 0 | 52.1524 | 0 |
| ResNet50 | 0.4571 | 0.499 | 0.3556 | 0.551 | 0.3556 | 0.551 |
| Inception V3 | 0.3556 | 0.551 | 2.7 | 0.1003 | 2.7 | 0.1003 |
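A sketch of how the two significance tests in the tables above can be computed, assuming matched per-fold accuracies for the paired t-test and per-sample correctness indicators (boolean arrays) for McNemar's test; all variable names are illustrative:

```python
import numpy as np
from scipy.stats import ttest_rel
from statsmodels.stats.contingency_tables import mcnemar

def paired_ttest(acc_without_denoising, acc_with_denoising):
    """Paired t-test on matched accuracy measurements (e.g., per cross-validation fold)."""
    return ttest_rel(acc_with_denoising, acc_without_denoising)

def mcnemar_test(correct_without, correct_with):
    """McNemar's chi-square test on per-sample correctness of the two model variants."""
    both = int(np.sum(correct_without & correct_with))
    only_without = int(np.sum(correct_without & ~correct_with))
    only_with = int(np.sum(~correct_without & correct_with))
    neither = int(np.sum(~correct_without & ~correct_with))
    table = [[both, only_without],
             [only_with, neither]]
    return mcnemar(table, exact=False, correction=True)  # chi2 statistic and p-value
```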