Research on the Quality Grading Method of Ginseng with Improved DenseNet121 Model
Abstract
1. Introduction
2. Dataset
2.1. Dataset Composition
2.2. Dataset Preprocessing
2.3. Dataset Partitioning
3. Building the Network Model
3.1. Original Network Model
3.2. Ginseng Network Classification Model
3.3. CA Attention Mechanism
3.4. Group Convolution
- (1) Combination of grouped convolution with TCN (Temporal Convolutional Network) and spatiotemporal interleaved networks [33]
- (2) Application of grouped convolution in action recognition [34]
- (3) Application of grouped convolution in gesture recognition [33]
3.5. ELU Activation Function
4. Experiment and Result Analysis
4.1. Experimental Environment
4.2. Experiments on Attention Mechanisms
4.2.1. The Performance of Attention Mechanisms in Different Positions
4.2.2. Comparison Between Different Attention Mechanisms
4.3. Experimental Study on Grouped Convolution
The Effect of Grouped Convolution at Different Positions
4.4. Experiment on Activation Function
Comparison Between Different Activation Functions
4.5. Performance Comparison of Different Network Models
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
1. Li, F.; Bao, H. Herbal textual research and progress on pharmacological actions of Ginseng Radix et Rhizoma. Ginseng Res. 2017, 29, 43–46.
2. Liu, W.; Li, W. Review on industrialization development status and prospect of Panax ginseng processing. J. Jilin Agric. Univ. 2023, 45, 639–648.
3. Chen, J.; Yang, L.; Li, R.; Zhang, J. Identification of Panax japonicus and its related species or adulterants using ITS2 sequence. Chin. Tradit. Herb. Drugs 2018, 49, 9.
4. Chen, K.; Huang, L.; Liu, Y. Development history of methodology of Chinese medicines' authentication. China J. Chin. Mater. Med. 2014, 39, 1203–1208.
5. Xu, S.; Sun, G.; Mu, S.; Sun, Q. Fingerprint comparison of mountain cultivated ginseng and wild ginseng by HPLC. J. Chin. Med. Mater. 2013, 36, 213–216.
6. Hua, Y.; Geng, C.; Wang, S.; Liu, X. Analysis of gene expression of Pseudostellariae Radix from different provenances and habitats based on cDNA-AFLP. Nat. Prod. Res. Dev. 2016, 28, 188.
7. Geng, L.; Huang, Y.; Guo, Y. Apple variety classification method based on fusion attention mechanism. Trans. Chin. Soc. Agric. Mach. 2022, 53, 304–310.
8. Huang, F.; Yu, L.; Shen, T.; Xu, H. Research and implementation of Chinese herbal medicine plant image classification based on the AlexNet deep learning model. J. Qilu Univ. Technol. 2020, 34, 44–49.
9. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90.
10. Li, L.P.; Shi, F.P.; Tian, W.B.; Chen, L. Wild plant image recognition method based on residual network and transfer learning. Radio Eng. 2021, 51, 857–863.
11. Ghosal, P.; Nandanwar, L.; Kanchan, S.; Bhadra, A.; Chakraborty, J.; Nandi, D. Brain tumor classification using ResNet-101 based squeeze and excitation deep neural network. In Proceedings of the 2019 Second International Conference on Advanced Computational and Communication Paradigms (ICACCP), Gangtok, India, 25–28 February 2019; pp. 1–6.
12. Pereira, C.S.; Morais, R.; Reis, M.J. Deep learning techniques for grape plant species identification in natural images. Sensors 2019, 19, 4850.
13. Gui, Y. Classification and Recognition of Crop Seedlings and Weeds Based on Attention Mechanism. Master's Thesis, Anhui Agricultural University, Hefei, China, 2020.
14. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
15. Chen, J.; Chen, J.; Zhang, D.; Sun, Y.; Nanehkaran, Y.A. Using deep transfer learning for image-based plant disease identification. Comput. Electron. Agric. 2020, 173, 105393.
16. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
17. Kadir, A.; Nugroho, L.E.; Susanto, A.; Santosa, P.I. Leaf classification using shape, color, and texture features. arXiv 2013, arXiv:1401.4447.
18. Li, D.; Zhai, M.; Piao, X.; Li, W.; Zhang, L. A ginseng appearance quality grading method based on an improved ConvNeXt model. Agronomy 2023, 13, 1770.
19. Ding, X.; Chen, H.; Zhang, X.; Han, J. RepMLPNet: Hierarchical vision MLP with re-parameterized locality. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 578–587.
20. He, K.; Zhang, X.; Ren, S.; Sun, J. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 1026–1034.
21. Li, D.; Piao, X.; Lei, Y.; Li, W.; Zhang, L.; Ma, L. A grading method of ginseng (Panax ginseng C. A. Meyer) appearance quality based on an improved ResNet50 model. Agronomy 2022, 12, 2925.
22. Kim, M.; Kim, J.; Kim, J.S.; Lim, J.; Moon, K. Automated grading of red ginseng using DenseNet121 and image preprocessing techniques. Agronomy 2023, 13, 2943.
23. Chen, B.; Zhu, L.; Kong, C.; Zhu, H.; Wang, S.; Li, Z. No-reference image quality assessment by hallucinating pristine features. IEEE Trans. Image Process. 2022, 31, 6139–6151.
24. Wu, H.; Zhu, H.; Zhang, Z.; Zhang, E.; Chen, C.; Liao, L.; Li, C.; Wang, A.; Sun, W.; Yan, Q. Towards open-ended visual quality comparison. arXiv 2024, arXiv:2402.16641.
25. Kong, C.; Luo, A.; Wang, S.; Li, H.; Rocha, A.; Kot, A.C. Pixel-inconsistency modeling for image manipulation localization. arXiv 2023, arXiv:2310.00234.
26. Zhu, H.; Chen, B.; Zhu, L.; Wang, S. Learning spatiotemporal interactions for user-generated video quality assessment. IEEE Trans. Circuits Syst. Video Technol. 2022, 33, 1031–1042.
27. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708.
28. Hou, Q.; Zhou, D.; Feng, J. Coordinate attention for efficient mobile network design. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 13708–13717.
29. Clevert, D.-A.; Unterthiner, T.; Hochreiter, S. Fast and accurate deep network learning by exponential linear units (ELUs). arXiv 2015, arXiv:1511.07289.
30. Ge, C.; Song, Y.; Ma, C.; Qi, Y.; Luo, P. Rethinking attentive object detection via neural attention learning. IEEE Trans. Image Process. 2023, 33, 1726–1739.
31. Chen, W.; Hong, D.; Qi, Y.; Han, Z.; Wang, S.; Qing, L.; Huang, Q.; Li, G. Multi-attention network for compressed video referring object segmentation. In Proceedings of the 30th ACM International Conference on Multimedia, Lisboa, Portugal, 10–14 October 2022; pp. 4416–4425.
32. Phan, V.M.H.; Xie, Y.; Zhang, B.; Qi, Y.; Liao, Z.; Perperidis, A.; Phung, S.L.; Verjans, J.W.; To, M. Structural attention: Rethinking Transformer for unpaired medical image synthesis. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Marrakesh, Morocco, 6–10 October 2024; Springer: Berlin/Heidelberg, Germany, 2024; pp. 690–700.
33. Yi, Y.; Ni, F.; Ma, Y.; Zhu, X.; Qi, Y.; Qiu, R.; Zhao, S.; Li, F.; Wang, Y. High performance gesture recognition via effective and efficient temporal modeling. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, Macao, China, 10–16 August 2019; pp. 1003–1009.
34. Jiang, S.; Zhang, H.; Qi, Y.; Liu, Q. Spatial-temporal interleaved network for efficient action recognition. IEEE Trans. Ind. Inform. 2024, 1–10.
35. Wang, Q.; Wu, B.; Zhu, P.; Li, P.; Zuo, W.; Hu, Q. ECA-Net: Efficient channel attention for deep convolutional neural networks. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 11531–11539.
36. Li, X.; Wang, W.; Hu, X.; Yang, J. Selective kernel networks. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 510–519.
37. Woo, S.; Park, J.; Lee, J.; Kweon, I. CBAM: Convolutional block attention module. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018.
38. Elfwing, S.; Uchibe, E.; Doya, K. Sigmoid-weighted linear units for neural network function approximation in reinforcement learning. Neural Netw. 2017, 107, 3–11.
39. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9.
40. Liu, Z.; Yang, C.; Huang, J.; Liu, S.; Zhuo, Y.; Lu, X. Deep learning framework based on integration of S-Mask R-CNN and Inception-v3 for ultrasound image-aided diagnosis of prostate cancer. Future Gener. Comput. Syst. 2021, 114, 358–367.
Table: Appearance quality grading criteria for ginseng.

Item | Principal | First-Class | Second-Class |
---|---|---|---|
Main Root | Cylindrical | | |
Branch Root | Two to three distinct branch roots of relatively uniform thickness | One to four branch roots of uneven thickness | |
Reed Head and Fibrous Roots | Reed head and fibrous roots complete | Reed head and fibrous roots largely complete | Reed head and fibrous roots incomplete |
Grooves | Clear, well-defined grooves | Grooves present but less distinct | No grooves |
Sections | Sections neat and clear | Sections visible | Sections not obvious |
Surface | Yellowish-white or grayish-yellow, no water rust, no draw grooves | Yellowish-white or grayish-yellow, slight water rust or with draw grooves | Yellowish-white or grayish-yellow, somewhat more water rust, with draw grooves |
Texture | Relatively hard, powdery, not hollow | | |
Overall Shape | Square or rectangular | Conical or cylindrical | Irregular |
Insect Damage, Mildew, Impurities | None | Slight | Present |
Table: Dataset partitioning by grade (approximately an 80:20 training/validation split).

Grade | Training | Validation |
---|---|---|
Principal | 1428 | 357 |
First-class | 1564 | 391 |
Second-class | 1500 | 375 |
Total | 4492 | 1123 |
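If one wants to reproduce a partition of this shape, the sketch below shows a stratified 80/20 split using scikit-learn. The image paths and labels are synthetic stand-ins, and whether the authors used exactly this stratified random procedure is not stated in the table; this is only one way to obtain the counts shown above.

```python
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for the real image paths and grade labels
# (0 = principal, 1 = first-class, 2 = second-class).
paths = [f"img_{i}.jpg" for i in range(5615)]
labels = [i % 3 for i in range(5615)]

# A stratified 80/20 split keeps the three grades balanced between
# the training and validation sets.
train_paths, val_paths, train_labels, val_labels = train_test_split(
    paths, labels, test_size=0.2, stratify=labels, random_state=42)

print(len(train_paths), len(val_paths))  # 4492 / 1123, matching the table totals
```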
Table: Experimental parameter settings.

Parameter | Setting |
---|---|
Optimizer | Adam |
Learning rate | 0.0001 |
Weight decay | 0.0001 |
Batch size | 32 |
Epochs | 100 |
Loss function | CrossEntropyLoss |
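The settings above can be wired together as in the minimal PyTorch sketch below. The stock torchvision DenseNet121 (with a 3-class head) and a fake image dataset are placeholders for the improved model and the ginseng images, which are not part of this table; only the optimizer, learning rate, weight decay, batch size, epoch count, and loss function are taken from it.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Placeholder model and data: substitute the improved DenseNet121 and the real dataset.
model = models.densenet121(weights=None)
model.classifier = nn.Linear(model.classifier.in_features, 3)  # 3 ginseng grades

fake_train = datasets.FakeData(size=64, image_size=(3, 224, 224), num_classes=3,
                               transform=transforms.ToTensor())
train_loader = DataLoader(fake_train, batch_size=32, shuffle=True)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

# Settings taken directly from the parameter table.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-4)
criterion = nn.CrossEntropyLoss()
num_epochs = 100

for epoch in range(num_epochs):
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```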
Table: Performance of the attention module inserted at different positions.

Num | Location | Accuracy | AUC | Loss |
---|---|---|---|---|
1 | No-Attention | 0.931 | 0.942 | 0.029 |
2 | BN-Before | 0.944 | 0.953 | 0.028 |
3 | BN-After | 0.936 | 0.941 | 0.034 |
4 | Conv1-After | 0.942 | 0.951 | 0.031 |
5 | Conv2-Before | 0.929 | 0.938 | 0.036 |
Table: Comparison of different attention mechanisms.

Num | Module | Accuracy | AUC | Loss |
---|---|---|---|---|
1 | ECA | 0.941 | 0.949 | 0.028 |
2 | SK | 0.928 | 0.932 | 0.034 |
3 | CBAM | 0.935 | 0.939 | 0.031 |
4 | CA | 0.944 | 0.953 | 0.029 |
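As a reference for what the CA module computes, below is a minimal PyTorch sketch of a Coordinate Attention block in the spirit of Hou et al. [28]. The reduction ratio and the ReLU nonlinearity (the original design uses a hard-swish variant) are simplifying assumptions, not the configuration used in this article.

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Minimal Coordinate Attention block; reduction ratio is an illustrative choice."""

    def __init__(self, channels: int, reduction: int = 32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))   # pool along width  -> (B, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))    # pool along height -> (B, C, 1, W)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn1 = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)                  # the original CA uses hard-swish here
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.size()
        x_h = self.pool_h(x)                              # (B, C, H, 1)
        x_w = self.pool_w(x).permute(0, 1, 3, 2)          # (B, C, W, 1)
        y = self.act(self.bn1(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        y_w = y_w.permute(0, 1, 3, 2)                     # back to (B, C, 1, W)
        a_h = torch.sigmoid(self.conv_h(y_h))             # attention along height
        a_w = torch.sigmoid(self.conv_w(y_w))             # attention along width
        return x * a_h * a_w

# Example: the block preserves the input shape.
out = CoordinateAttention(64)(torch.randn(2, 64, 56, 56))  # -> (2, 64, 56, 56)
```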
Table: Effect of applying grouped convolution at different positions; Params is the total number of model parameters, in millions (M).

Num | Location | Accuracy | Loss | AUC | Params |
---|---|---|---|---|---|
A | D1D2 | 0.944 | 0.029 | 0.953 | 6.95 M |
B | G1D2 | 0.948 | 0.027 | 0.959 | 4.93 M |
C | D1G2 | 0.945 | 0.031 | 0.951 | 5.88 M |
D | G1G2 | 0.931 | 0.035 | 0.944 | 3.87 M |
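The parameter savings in the table follow from how grouped convolution factors the channel dimension: each group only connects a 1/g slice of the input channels to its share of the output channels, so the weight count drops by the group factor. The short sketch below illustrates this with arbitrary example channel sizes, not the exact layers of the modified dense blocks.

```python
import torch.nn as nn

def param_count(module: nn.Module) -> int:
    return sum(p.numel() for p in module.parameters())

in_ch, out_ch, groups = 128, 128, 4   # example sizes, not the article's exact layers

standard = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False)
grouped = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, groups=groups, bias=False)

# Standard conv weights: in_ch * out_ch * 3 * 3.
# Grouped conv weights: (in_ch / groups) * out_ch * 3 * 3, i.e. 1/groups of the above.
print(param_count(standard))  # 147456
print(param_count(grouped))   # 36864
```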
Table: Comparison of different activation functions.

Activation Function | Accuracy | Loss | AUC | Epochs |
---|---|---|---|---|
ReLU | 0.948 | 0.028 | 0.959 | 40 |
PReLU | 0.953 | 0.026 | 0.951 | 37 |
SiLU | 0.951 | 0.027 | 0.957 | 38 |
ELU | 0.955 | 0.025 | 0.961 | 35 |
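For reference, ELU is defined below in its standard form from Clevert et al. [29], with α a positive constant commonly set to 1; the specific α used in this article is not stated in the table. Unlike ReLU, ELU produces small negative outputs for negative inputs, which pushes mean activations closer to zero and is consistent with the faster convergence reported above.

```latex
\mathrm{ELU}(x) =
\begin{cases}
  x, & x > 0,\\
  \alpha\,(e^{x} - 1), & x \le 0,
\end{cases}
\qquad
\mathrm{ReLU}(x) = \max(0, x).
```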
Table: Performance comparison of different network models.

Model | Accuracy | Precision | Recall | F1-Score | AUC |
---|---|---|---|---|---|
DenseNet121 | 0.931 | 0.935 | 0.926 | 0.928 | 0.941 |
ResNet50 | 0.885 | 0.869 | 0.862 | 0.864 | 0.882 |
ResNet101 | 0.861 | 0.851 | 0.849 | 0.867 | 0.869 |
GoogLeNet | 0.924 | 0.933 | 0.925 | 0.927 | 0.931 |
InceptionV3 | 0.943 | 0.944 | 0.939 | 0.941 | 0.952 |
Our Model | 0.955 | 0.954 | 0.948 | 0.949 | 0.963 |
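To make the comparison metrics concrete, the sketch below computes accuracy, macro-averaged precision, recall, and F1, and one-vs-rest AUC with scikit-learn on placeholder predictions. The macro averaging and one-vs-rest AUC are assumptions for illustration, since the table does not state how the per-class scores were aggregated.

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

# Placeholder arrays: true grades (0 = principal, 1 = first-class, 2 = second-class),
# predicted grades, and per-class softmax probabilities from a model.
y_true = [0, 1, 2, 1, 0]
y_pred = [0, 1, 2, 2, 0]
y_prob = [[0.90, 0.05, 0.05],
          [0.10, 0.80, 0.10],
          [0.05, 0.10, 0.85],
          [0.20, 0.30, 0.50],
          [0.70, 0.20, 0.10]]

print(accuracy_score(y_true, y_pred))
print(precision_score(y_true, y_pred, average="macro"))
print(recall_score(y_true, y_pred, average="macro"))
print(f1_score(y_true, y_pred, average="macro"))
print(roc_auc_score(y_true, y_prob, multi_class="ovr", average="macro"))
```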