Non-Destructive Detection Method of Apple Watercore: Optimization Using Optical Property Parameter Inversion and MobileNetV3
Abstract
1. Introduction
2. Materials and Methods
2.1. Data Acquisition
2.2. Simulation Data Collection
2.2.1. Three-Layer Model of Apples
- Determine whether the photon crossed a layer boundary. If it did not, the photon was absorbed and scattered within the current layer and its weight was updated. If it did, it was then determined whether the photon escaped from the apple tissue surface;
- Determine whether the photon escaped. If it did not escape, it was determined whether it was refracted or reflected at the boundary and its weight was updated accordingly; if it escaped, its remaining weight was recorded and tracking of that photon ended;
- Determine whether this was the last photon. If so, the simulation was terminated; otherwise, the next photon was launched.
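The per-photon loop above can be sketched in simplified form. This is not the authors' implementation: the three-layer geometry and boundary tests are omitted, the optical coefficients below are illustrative assumptions, and only the weight update and Henyey–Greenstein angle sampling [89] are shown.

```python
import math
import random

def simulate_photons(mu_a=1.0, mu_s=5.5, g=0.9, n_photons=1000,
                     w_min=1e-4, seed=0):
    """Simplified single-layer photon random walk: sample a free path,
    deposit part of the weight as absorption, keep the rest, and sample a
    Henyey-Greenstein scattering angle. Returns the absorbed fraction."""
    rng = random.Random(seed)
    mu_t = mu_a + mu_s  # total attenuation coefficient
    absorbed = 0.0
    for _ in range(n_photons):
        w = 1.0
        while w > w_min:  # terminate low-weight photons (roulette omitted)
            s = -math.log(rng.random() or 1e-12) / mu_t  # free path length
            # (a full simulation would move the photon by s and test boundaries)
            absorbed += w * mu_a / mu_t  # weight deposited at this event
            w *= mu_s / mu_t             # weight surviving the event
            # Henyey-Greenstein sampling of the scattering angle cosine
            tmp = (1 - g * g) / (1 - g + 2 * g * rng.random())
            cos_theta = (1 + g * g - tmp * tmp) / (2 * g)
    return absorbed / n_photons
```

Because this sketch has no escape boundary, essentially all launched weight is eventually deposited; in the full three-layer model part of it would instead exit the surface and be recorded as emitted light.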
2.2.2. Intersection Point Calculation
Algorithm 1: Intersection point calculation
Input: photon
Output: position where the photon crosses the layer boundary
1: current_pos = photon.current.position
2: previous_pos = photon.previous.position
3: current_layer = layer(current_pos)
4: previous_layer = layer(previous_pos)
5: if abs(current_layer − previous_layer) == 1 then
6:   repeat
7:     midpoint = (current_pos + previous_pos)/2
8:     if layer(midpoint) == current_layer then
9:       current_pos = midpoint
10:    else
11:      previous_pos = midpoint
12:  until |current_pos − previous_pos| < ε
13:  return midpoint
14: end if
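A runnable sketch of this bisection search follows. The `layer()` function here is a hypothetical stand-in that maps a scalar depth to a layer index; the paper's three-layer apple geometry is spherical, and a one-dimensional coordinate is used only to keep the example short.

```python
def layer(depth, boundaries=(2.0, 5.0)):
    """Hypothetical layer lookup: layer 0 above 2.0 mm, layer 1 between
    2.0 and 5.0 mm, layer 2 below (stand-in for skin/flesh/core)."""
    return sum(depth >= b for b in boundaries)

def boundary_intersection(prev_pos, curr_pos, eps=1e-6):
    """Bisection search (Algorithm 1): shrink the segment between the
    previous and current photon positions until it brackets the layer
    boundary to within eps, then return the midpoint."""
    prev_layer, curr_layer = layer(prev_pos), layer(curr_pos)
    if abs(curr_layer - prev_layer) != 1:
        return None  # no single-boundary crossing between the two positions
    while abs(curr_pos - prev_pos) > eps:
        mid = (curr_pos + prev_pos) / 2
        if layer(mid) == curr_layer:
            curr_pos = mid   # the boundary lies in the other half
        else:
            prev_pos = mid
    return (curr_pos + prev_pos) / 2
```

Because the interval is halved on every iteration, the search converges to the boundary in O(log(1/ε)) steps, the standard behaviour of the bisection method [88].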
2.2.3. Photon Cross-Layer Constraint Algorithm
Algorithm 2: Photon cross-layer constraint
Input: photon
Output: photon constrained to cross layer boundaries correctly
1: current_pos = photon.current.position
2: previous_pos = photon.previous.position
3: current_layer = layer(current_pos)
4: previous_layer = layer(previous_pos)
5: while abs(current_layer − previous_layer) == 1 do
6:   midpoint = Intersection point calculation(photon)  ▷ Algorithm 1
7:   current_pos = midpoint
8:   current_layer = layer(midpoint)
9: end while
2.2.4. Absorption and Scattering of Photons
2.3. Data Augmentation
- For the training set, images are randomly cropped and the crops are resized to 224 × 224 pixels. Image diversity is increased by flipping each image horizontally with 50% probability and vertically mirroring it with 50% probability.
- For the test set, the longer side of each image is resized to 256 pixels while keeping the aspect ratio constant, and a 224 × 224 pixel region is cropped from the centre of the resized image. This keeps the test-set preprocessing consistent with the training set while leaving the image features unchanged, so that model performance can be evaluated accurately.
- The mean and standard deviation computed on the ImageNet dataset are used to normalise the pixel values of both the training and test sets, giving the data a distribution better suited to training before it is fed into the neural network.
2.4. Experimental Method
2.4.1. MobileNetV3 Model and Dilated Convolution
2.4.2. Transfer Learning
2.4.3. Comparison Algorithm
- SVM [50] is a classical machine learning method that excels at handling small samples and high-dimensional data. It is used to compare the improvement of deep learning models in feature extraction and classification tasks.
- ResNet50 [51] is a deep residual network that addresses the vanishing gradient problem in deep neural networks by introducing residual connections. It serves as a good reference for evaluating the performance of new models in complex tasks.
- VGG16 [52] increases the depth of the network by stacking small convolutional kernels to enhance feature extraction capabilities. It performs well in large-scale image classification tasks and is a classic benchmark in comparative experiments.
- ShuffleNetV2 is a lightweight neural network whose efficiency and low computational requirements make it ideal for comparison with MobileNetV3-small, especially in resource-constrained environments.
- InceptionNetV3 [53] utilises the Inception module, whose complex architecture and multi-scale feature extraction capabilities provide a contrast with our model in terms of feature diversity and complexity.
- MobileNetV3_large is another version of MobileNetV3, and comparing its performance with our model helps us understand the trade-off between model complexity and performance.
2.4.4. Evaluation Index
2.5. Experimental Environment and Parameter Setting
3. Results and Discussion
3.1. Data Augmentation Effect and Comparative Analysis
3.2. Comparison of the Model Pre-Training Algorithms
3.3. Comparison Experiments of Transfer Learning Methods
3.4. Ablation Experiment
3.5. Confusion Matrix
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
References
- Hong, J.; Zhang, T.; Shen, X.; Zhai, Y.; Bai, Y.; Hong, J. Water, energy, and carbon integrated footprint analysis from the environmental-economic perspective for apple production in China. J. Clean. Prod. 2022, 368, 133184. [Google Scholar] [CrossRef]
- Moriya, S.; Kunihisa, M.; Okada, K.; Iwanami, H.; Iwata, H.; Minamikawa, M.; Katayose, Y.; Matsumoto, T.; Mori, S.; Sasaki, H. Identification of QTLs for flesh mealiness in apple (Malus × domestica Borkh.). Hortic. J. 2017, 86, 159–170. [Google Scholar] [CrossRef]
- Liu, Z.; Du, M.; Liu, H.; Zhang, K.; Xu, X.; Liu, K.; Tu, J.; Liu, Q. Chitosan films incorporating litchi peel extract and titanium dioxide nanoparticles and their application as coatings on watercored apples. Prog. Org. Coat. 2021, 151, 106103. [Google Scholar] [CrossRef]
- Li, W.; Liu, Z.; Wang, H.; Zheng, Y.; Zhou, Q.; Duan, L.; Tang, Y.; Jiang, Y.; Li, X.; Jiang, Y. Harvest maturity stage affects watercore dissipation and postharvest quality deterioration of watercore ‘Fuji’ apples. Postharvest Biol. Technol. 2024, 210, 112736. [Google Scholar] [CrossRef]
- Itai, A. Watercore in fruits. In Abiotic Stress Biology in Horticultural Plants; Springer: Berlin/Heidelberg, Germany, 2015; pp. 127–145. [Google Scholar]
- Zupan, A.; Mikulic-Petkovsek, M.; Stampar, F.; Veberic, R. Sugar and phenol content in apple with or without watercore. J. Sci. Food Agric. 2016, 96, 2845–2850. [Google Scholar] [CrossRef] [PubMed]
- Arnold, M.; Gramza-Michałowska, A. Enzymatic browning in apple products and its inhibition treatments: A comprehensive review. Compr. Rev. Food Sci. Food Saf. 2022, 21, 5038–5076. [Google Scholar] [CrossRef] [PubMed]
- Herremans, E.; Melado-Herreros, A.; Defraeye, T.; Verlinden, B.; Hertog, M.; Verboven, P.; Val, J.; Fernández-Valle, M.E.; Bongaers, E.; Estrade, P. Comparison of X-ray CT and MRI of watercore disorder of different apple cultivars. Postharvest Biol. Technol. 2014, 87, 42–50. [Google Scholar] [CrossRef]
- Rittiron, R.; Narongwongwattana, S.; Boonprakob, U.; Seehalak, W. Rapid and nondestructive detection of watercore and sugar content in Asian pear by near infrared spectroscopy for commercial trade. J. Innov. Opt. Health Sci. 2014, 7, 1350073. [Google Scholar] [CrossRef]
- Prananto, J.A.; Minasny, B.; Weaver, T. Near infrared (NIR) spectroscopy as a rapid and cost-effective method for nutrient analysis of plant leaf tissues. Adv. Agron. 2020, 164, 1–49. [Google Scholar]
- Fatihoglu, E.; Aydin, S.; Gokharman, F.D.; Ece, B.; Kosar, P.N. X-ray use in chest imaging in emergency department on the basis of cost and effectiveness. Acad. Radiol. 2016, 23, 1239–1245. [Google Scholar] [CrossRef]
- Sun, X.-L.; Zhou, T.-T.; Sun, Z.-Z.; Li, Z.-M.; Hu, D. Research progress into optical property-based nondestructive fruit and vegetable quality assessment. Food Res. Dev. 2022, 43, 208–209. [Google Scholar]
- Wang, S.; Huang, X.; Lyu, R.; Pan, S. Research progress of nondestructive detection methods in fruit quality. Food Ferment. Ind. 2018, 44, 319–324. [Google Scholar]
- Pan, L.; Wei, K.; Cao, N.; Sun, K.; Liu, Q.; Tu, K.; Zhu, Q. Measurement of optical parameters of fruits and vegetables and its application in quality detection. J. Nanjing Agric. Univ. 2018, 41, 26–37. [Google Scholar]
- Mondal, A.; Mandal, A. Stratified random sampling for dependent inputs in Monte Carlo simulations from computer experiments. J. Stat. Plan. Inference 2020, 205, 269–282. [Google Scholar] [CrossRef]
- Guan, T.; Zhao, H.; Wang, Z.; Yu, D. Optical properties reconstruction of layered tissue and experimental demonstration. In Proceedings of the Complex Dynamics and Fluctuations in Biomedical Photonics IV, San Jose, CA, USA, 20–25 January 2007; SPIE: Paris, France, 2007; pp. 227–234. [Google Scholar]
- Zhang, M.; Li, C.; Yang, F. Optical properties of blueberry flesh and skin and Monte Carlo multi-layered simulation of light interaction with fruit tissues. Postharvest Biol. Technol. 2019, 150, 28–41. [Google Scholar] [CrossRef]
- Mahesh, B. Machine learning algorithms-a review. Int. J. Sci. Res. IJSR 2020, 9, 381–386. [Google Scholar] [CrossRef]
- Shruthi, U.; Nagaveni, V.; Raghavendra, B. A review on machine learning classification techniques for plant disease detection. In Proceedings of the 2019 5th International Conference on Advanced Computing & Communication Systems (ICACCS), Coimbatore, India, 15–16 March 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 281–284. [Google Scholar]
- Guan, Z.; Tang, J.; Yang, B.; Zhou, Y.; Fan, D.; Yao, Q. Study on recognition method of rice disease based on image. Chin. J. Rice Sci. 2010, 24, 497. [Google Scholar]
- Kattenborn, T.; Leitloff, J.; Schiefer, F.; Hinz, S. Review on Convolutional Neural Networks (CNN) in vegetation remote sensing. ISPRS J. Photogramm. Remote Sens. 2021, 173, 24–49. [Google Scholar] [CrossRef]
- Anagnostis, A.; Asiminari, G.; Papageorgiou, E.; Bochtis, D. A convolutional neural networks based method for anthracnose infected walnut tree leaves identification. Appl. Sci. 2020, 10, 469. [Google Scholar] [CrossRef]
- Rachmad, A.; Fuad, M.; Rochman, E.M.S. Convolutional Neural Network-Based Classification Model of Corn Leaf Disease. Math. Model. Eng. Probl. 2023, 10, 530–536. [Google Scholar] [CrossRef]
- Weiss, K.; Khoshgoftaar, T.M.; Wang, D. A survey of transfer learning. J. Big Data 2016, 3, 1345–1459. [Google Scholar] [CrossRef]
- Deng, L.; Li, J.; Han, Z. Online defect detection and automatic grading of carrots using computer vision combined with deep learning methods. LWT 2021, 149, 111832. [Google Scholar] [CrossRef]
- Mansheng, L.; Chunjuan, O.; Huan, L.; Qing, F. Image recognition of Camellia oleifera diseases based on convolutional neural network & transfer learning. Trans. Chin. Soc. Agric. Eng. Trans. CSAE 2018, 34, 194–201. [Google Scholar]
- Elharrouss, O.; Akbari, Y.; Almaadeed, N.; Al-Maadeed, S. Backbones-review: Feature extraction networks for deep learning and deep reinforcement learning approaches. arXiv 2022, arXiv:2206.08016. [Google Scholar] [CrossRef]
- Zhang, X.; Zhou, X.; Lin, M.; Sun, J. Shufflenet: An extremely efficient convolutional neural network for mobile devices. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 6848–6856. [Google Scholar]
- Türkmen, S.; Heikkilä, J. An efficient solution for semantic segmentation: ShuffleNet V2 with atrous separable convolutions. In Proceedings of the Scandinavian Conference on Image Analysis, Norrköping, Sweden, 11–13 June 2019; Springer: Cham, Switzerland, 2019; pp. 41–53. [Google Scholar]
- Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
- Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.-C. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520. [Google Scholar]
- Howard, A.; Sandler, M.; Chu, G.; Chen, L.-C.; Chen, B.; Tan, M.; Wang, W.; Zhu, Y.; Pang, R.; Vasudevan, V. Searching for mobilenetv3. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 1314–1324. [Google Scholar]
- Yang, L.; Quan, F.; Shuzhi, W. Plant disease identification method and mobile application based on lightweight CNN. Trans. Chin. Soc. Agric. Eng. 2019, 35, 194–204. [Google Scholar]
- Si, H.; Wang, Y.; Zhao, W.; Wang, M.; Song, J.; Wan, L.; Song, Z.; Li, Y.; Fernando, B.; Sun, C. Apple Surface Defect Detection Method Based on Weight Comparison Transfer Learning with MobileNetV3. Agriculture 2023, 13, 824. [Google Scholar] [CrossRef]
- Peng, Z.; Cai, C. An effective segmentation algorithm of apple watercore disease region using fully convolutional neural networks. In Proceedings of the 2017 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Kuala Lumpur, Malaysia, 12–15 December 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1292–1299. [Google Scholar]
- Pan, L.-Q.; Fang, L.; Hou, B.-J.; Zhang, B.; Peng, J.; Tu, K. System and principle of optical properties measurement and advances on quality detection of fruits and vegetables. J. Nanjing Agric. Univ. 2021, 44, 401–411. [Google Scholar]
- Xu, Z.; Wang, Z.; Huang, L.; Liu, Z.; Hou, R.; Wang, C. Double-integrating-sphere system for measuring optical properties of farm products and its application. Trans. CSAE 2006, 22, 244–249. [Google Scholar]
- Xu, H.; Sun, Y.; Cao, X.; Ji, C.; Chen, L.; Wang, H. Apple quality detection based on photon transmission simulation and convolutional neural network. Trans. Chin. Soc. Agric. Mach. 2021, 52, 338–345. [Google Scholar]
- Li, J.; Xue, J.; Li, J.; Zhao, L. Study of the Changes in Optical Parameters of Diseased Apple Pulps Based on the Integrating Sphere Technique. Spectroscopy 2020, 35, 32–38. [Google Scholar]
- Solanki, C.; Thapliyal, P.; Tomar, K. Role of bisection method. Int. J. Comput. Appl. Technol. Res. 2014, 3, 535. [Google Scholar] [CrossRef]
- Toublanc, D. Henyey–Greenstein and Mie phase functions in Monte Carlo radiative transfer computations. Appl. Opt. 1996, 35, 3270–3274. [Google Scholar] [CrossRef] [PubMed]
- Maharana, K.; Mondal, S.; Nemade, B. A review: Data pre-processing and data augmentation techniques. Glob. Transit. Proc. 2022, 3, 91–99. [Google Scholar] [CrossRef]
- Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141. [Google Scholar]
- Ma, L.; Liu, X.; Li, H.; Duan, K.; Niu, B. Neural network lightweight method with dilated convolution. Comput. Eng. Appl. 2022, 58, 85–93. [Google Scholar] [CrossRef]
- Niu, S.; Liu, Y.; Wang, J.; Song, H. A decade survey of transfer learning (2010–2020). IEEE Trans. Artif. Intell. 2020, 1, 151–166. [Google Scholar] [CrossRef]
- Mormont, R.; Geurts, P.; Marée, R. Comparison of deep transfer learning strategies for digital pathology. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–23 June 2018; pp. 2262–2271. [Google Scholar]
- Soekhoe, D.; Van Der Putten, P.; Plaat, A. On the impact of data set size in transfer learning using deep neural networks. In Proceedings of the Advances in Intelligent Data Analysis XV: 15th International Symposium, IDA 2016, Stockholm, Sweden, 13–15 October 2016; Proceedings 15. Springer: Berlin/Heidelberg, Germany, 2016; pp. 50–60. [Google Scholar]
- Kruithof, M.C.; Bouma, H.; Fischer, N.M.; Schutte, K. Object recognition using deep convolutional neural networks with complete transfer and partial frozen layers. In Proceedings of the Optics and Photonics for Counterterrorism, Crime Fighting, and Defence XII, Edinburgh, UK, 26–27 September 2016; SPIE: Cergy, France, 2016; pp. 159–165. [Google Scholar]
- Noor, A.; Zhao, Y.; Koubâa, A.; Wu, L.; Khan, R.; Abdalla, F.Y. Automated sheep facial expression classification using deep transfer learning. Comput. Electron. Agric. 2020, 175, 105528. [Google Scholar] [CrossRef]
- Li, Z.; Niu, B.; Peng, F.; Li, G.; Yang, Z.; Wu, J. Classification of peanut images based on multi-features and SVM. IFAC-Pap. 2018, 51, 726–731. [Google Scholar] [CrossRef]
- Mukti, I.Z.; Biswas, D. Transfer learning based plant diseases detection using ResNet50. In Proceedings of the 2019 4th International Conference on Electrical Information and Communication Technology (EICT), Khulna, Bangladesh, 20–22 December 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–6. [Google Scholar]
- Qassim, H.; Feinzimer, D.; Verma, A. Residual squeeze vgg16. arXiv 2017, arXiv:1705.03004. [Google Scholar]
- Xia, X.; Xu, C.; Nan, B. Inception-v3 for flower classification. In Proceedings of the 2017 2nd International Conference on Image Vision and Computing (ICIVC), Chengdu, China, 2–4 June 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 783–787. [Google Scholar]
- Shorten, C.; Khoshgoftaar, T.M. A survey on image data augmentation for deep learning. J. Big Data 2019, 6, 60. [Google Scholar] [CrossRef]
- Cang, H.; Yan, T.; Duan, L.; Yan, J.; Zhang, Y.; Tan, F.; Lv, X.; Gao, P. Jujube quality grading using a generative adversarial network with an imbalanced data set. Biosyst. Eng. 2023, 236, 224–237. [Google Scholar] [CrossRef]
- Li, Y.; Wang, H.; Zhang, Y.; Wang, J.; Xu, H. Inversion of the optical properties of apples based on the convolutional neural network and transfer learning methods. Appl. Eng. Agric. 2022, 38, 931–939. [Google Scholar] [CrossRef]
- Korzhebin, T.A.; Egorov, A.D. Comparison of combinations of data augmentation methods and transfer learning strategies in image classification used in convolution deep neural networks. In Proceedings of the 2021 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (ElConRus), St. Petersburg, Moscow, Russia, 26–29 January 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 479–482. [Google Scholar]
- Guo, L.; Lei, Y.; Xing, S.; Yan, T.; Li, N. Deep convolutional transfer learning network: A new method for intelligent fault diagnosis of machines with unlabeled data. IEEE Trans. Ind. Electron. 2018, 66, 7316–7325. [Google Scholar] [CrossRef]
- Haoyun, W.; Yiba, L.; Yuzhuo, Z.; Xiaoli, Z.; Huanliang, X. Research on hyperspectral light and probe source location on apple for quality detection based on photon transmission simulation. Trans. Chin. Soc. Agric. Eng. 2019, 35, 281–289. [Google Scholar]
- Ribani, R.; Marengoni, M. A survey of transfer learning for convolutional neural networks. In Proceedings of the 2019 32nd SIBGRAPI Conference on Graphics, Patterns and Images Tutorials (SIBGRAPI-T), Rio de Janeiro, Brazil, 28–31 October 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 47–57. [Google Scholar]
- Yang, M.; He, Y.; Zhang, H.; Li, D.; Bouras, A.; Yu, X.; Tang, Y. The research on detection of crop diseases ranking based on transfer learning. In Proceedings of the 2019 6th International Conference on Information Science and Control Engineering (ICISCE), Shanghai, China, 20–22 December 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 620–624. [Google Scholar]
- Xu, C.; Wang, X.; Zhang, S. Dilated convolution capsule network for apple leaf disease identification. Front. Plant Sci. 2022, 13, 1002312. [Google Scholar] [CrossRef]
- Dou, S.; Wang, L.; Fan, D.; Miao, L.; Yan, J.; He, H. Classification of Citrus huanglongbing degree based on cbam-mobilenetv2 and transfer learning. Sensors 2023, 23, 5587. [Google Scholar] [CrossRef]
- Flach, P.A. ROC analysis. In Encyclopedia of Machine Learning and Data Mining; Springer: Berlin/Heidelberg, Germany, 2016; pp. 1–8. [Google Scholar]
- Lobo, J.M.; Jiménez-Valverde, A.; Real, R. AUC: A misleading measure of the performance of predictive distribution models. Glob. Ecol. Biogeogr. 2008, 17, 145–151. [Google Scholar] [CrossRef]
Classification | Classification Criteria |
---|---|
Two-class | Area = 0; Area ≠ 0 |
Three-class | Area = 0; 0 < Area ≤ 0.15; Area > 0.15 |
Four-class | Area = 0; 0 < Area ≤ 0.1; 0.1 < Area ≤ 0.2; Area > 0.2 |
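The criteria above map a watercore area ratio to a class label; a minimal sketch (the function name and integer labels are illustrative, not from the paper):

```python
def watercore_class(area, n_classes=4):
    """Map a watercore area ratio to a class label (0 = sound fruit)
    following the two-, three-, and four-class criteria in the table."""
    if n_classes == 2:
        return 0 if area == 0 else 1
    if n_classes == 3:
        if area == 0:
            return 0
        return 1 if area <= 0.15 else 2
    if n_classes == 4:
        if area == 0:
            return 0
        if area <= 0.1:
            return 1
        if area <= 0.2:
            return 2
        return 3
    raise ValueError("n_classes must be 2, 3, or 4")
```

Using area = 0 as its own class in every scheme keeps "no watercore" separate from even the mildest disorder.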
Optical Parameter | Interval Range | Value (mm−1) |
---|---|---|
 | [0.40, 1.60) | 1.00 |
 | [1.60, 2.80) | 2.20 |
 | [2.80, 4.00) | 3.40 |
 | [4.00, 5.20) | 4.60 |
 | [5.20, 6.40) | 5.80 |
 | [0.03, 2.20) | 1.10 |
 | [2.20, 4.40) | 3.30 |
 | [4.40, 6.60) | 5.50 |
 | [6.60, 8.70) | 7.70 |
 | [0.01, 0.65) | 0.30 |
 | [0.65, 1.30) | 0.90 |
 | [1.30, 1.95) | 1.50 |
 | [1.95, 2.70) | 2.10 |
 | [1.69, 60.00) | 30.00 |
 | [60.00, 75.00) | 67.50 |
 | [75.00, 90.00) | 82.50 |
 | [90.00, 110.00) | 100.00 |
 | [110.00, 260.00) | 190.00 |
 | [0.01, 15.00) | 7.50 |
 | [15.00, 30.00) | 22.50 |
 | [30.00, 45.00) | 37.50 |
 | [45.00, 60.00) | 52.50 |
 | [60.00, 75.00) | 67.50 |
 | [0.01, 7.50) | 5.00 |
 | [7.50, 15.00) | 12.50 |
 | [15.00, 22.50) | 20.00 |
 | [22.50, 30.00) | 27.50 |
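The representative values above define a discrete grid of optical-parameter settings for the simulations. Assuming the intervals group into six parameters with 5, 4, 4, 5, 5, and 4 levels (the parameter names in the first column did not survive extraction, so this grouping is an assumption), the full factorial grid can be enumerated as:

```python
from itertools import product

# Representative values from the table, grouped by parameter (grouping assumed)
levels = [
    [1.00, 2.20, 3.40, 4.60, 5.80],
    [1.10, 3.30, 5.50, 7.70],
    [0.30, 0.90, 1.50, 2.10],
    [30.00, 67.50, 82.50, 100.00, 190.00],
    [7.50, 22.50, 37.50, 52.50, 67.50],
    [5.00, 12.50, 20.00, 27.50],
]

# Full factorial grid of simulation settings: 5*4*4*5*5*4 combinations
grid = list(product(*levels))
print(len(grid))  # 8000
```

Each tuple in `grid` would parameterise one Monte Carlo run under this assumed grouping.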
Input | Operator | Exp Size | Output | SE | NL | Stride |
---|---|---|---|---|---|---|
224 × 224 × 3 | conv2d,3 × 3 | - | 16 | - | HS | 2 |
112 × 112 × 16 | bneck,3 × 3 | 16 | 16 | √ | RE | 2 |
56 × 56 × 16 | bneck,3 × 3 | 72 | 24 | - | RE | 2 |
28 × 28 × 24 | bneck,3 × 3 | 88 | 24 | - | RE | 1 |
28 × 28 × 24 | bneck,5 × 5 | 96 | 40 | √ | HS | 2 |
14 × 14 × 40 | bneck,5 × 5 | 240 | 40 | √ | HS | 1 |
14 × 14 × 40 | bneck,5 × 5 | 240 | 40 | √ | HS | 1 |
14 × 14 × 40 | bneck,5 × 5 | 120 | 48 | √ | HS | 1 |
14 × 14 × 48 | bneck,5 × 5 | 144 | 48 | √ | HS | 1 |
14 × 14 × 48 | bneck,5 × 5 | 288 | 96 | √ | HS | 2 |
7 × 7 × 96 | bneck,5 × 5 | 576 | 96 | √ | HS | 1 |
7 × 7 × 96 | bneck,5 × 5 | 576 | 96 | √ | HS | 1 |
7 × 7 × 96 | conv2d,1 × 1 | - | 576 | √ | HS | 1 |
7 × 7 × 576 | pool,7 × 7 | - | - | - | - | 1 |
1 × 1 × 576 | conv2d,1 × 1, NBN | - | 1024 | - | HS | 1 |
1 × 1 × 1024 | conv2d,1 × 1, NBN | - | k | - | - | 1 |
Model | Classification | Loss | Accuracy (%) | Precision (%) | Recall (%) | F1-Score (%) | Model Size (M) |
---|---|---|---|---|---|---|---|
Our Method | Two-class | 0.0291 | 99.05 | 98.43 | 98.58 | 98.29 | 18.89 |
Three-class | 0.0327 | 96.77 | 96.30 | 96.45 | 96.12 | ||
Four-class | 0.1048 | 94.45 | 94.12 | 94.06 | 93.87 | ||
SVM | Two-class | 0.3941 | 80.16 | 79.76 | 79.21 | 79.45 | 114.6 |
Three-class | 0.4315 | 73.18 | 72.69 | 72.17 | 72.43 | ||
Four-class | 0.4819 | 71.99 | 71.74 | 71.53 | 71.68 | ||
ResNet50 | Two-class | 0.1032 | 96.14 | 95.53 | 95.74 | 96.03 | 97.7 |
Three-class | 0.1396 | 94.43 | 93.82 | 94.06 | 94.11 | ||
Four-class | 0.2311 | 92.43 | 92.16 | 92.63 | 91.95 | ||
VGG16 | Two-class | 0.1504 | 96.69 | 96.47 | 96.31 | 96.48 | 526.3 |
Three-class | 0.1685 | 94.81 | 94.59 | 94.43 | 94.61 | ||
Four-class | 0.2381 | 92.12 | 91.88 | 91.42 | 91.92 | ||
InceptionNetV3 | Two-class | 0.1283 | 94.28 | 94.10 | 93.85 | 93.71 | 91.16 |
Three-class | 0.1611 | 93.19 | 93.01 | 92.94 | 92.88 | ||
Four-class | 0.2524 | 91.98 | 91.70 | 91.59 | 91.29 | ||
ShuffleNetV2 | Two-class | 0.1471 | 93.83 | 83.31 | 93.22 | 93.48 | 28.8 |
Three-class | 0.1936 | 91.64 | 91.12 | 92.04 | 91.99 | ||
Four-class | 0.2736 | 87.76 | 88.14 | 87.31 | 87.14 | ||
MobileNetV3-large | Two-class | 0.0748 | 97.86 | 97.19 | 97.92 | 97.38 | 20.6 |
Three-class | 0.0912 | 95.45 | 95.17 | 95.96 | 95.16 | ||
Four-class | 0.1758 | 93.62 | 93.13 | 93.34 | 93.10 |
Number of Unfrozen Network Layers | Loss Value | Accuracy (%) | Precision (%) | Recall (%) | F1-Score (%) | Params (M) |
---|---|---|---|---|---|---|
1 | 0.0308 | 97.20 | 96.65 | 97.12 | 96.89 | 7.54 |
2 | 0.0304 | 97.60 | 97.24 | 97.33 | 96.92 | 7.54 |
3 | 0.0304 | 97.60 | 97.24 | 97.33 | 96.92 | 7.54 |
4 | 0.0304 | 97.60 | 97.24 | 97.33 | 96.92 | 7.54 |
5 | 0.0304 | 97.60 | 97.24 | 97.33 | 96.92 | 7.52 |
6 | 0.0322 | 96.70 | 96.18 | 96.27 | 96.23 | 7.50 |
7 | 0.0322 | 96.70 | 96.18 | 96.27 | 96.23 | 7.47 |
8 | 0.0349 | 95.90 | 95.47 | 95.25 | 95.28 | 7.49 |
9 | 0.0322 | 96.70 | 96.18 | 96.27 | 96.23 | 7.46 |
10 | 0.0349 | 95.90 | 95.47 | 95.25 | 95.28 | 7.43 |
Transfer Learning Methods | Classification | Loss Value | Accuracy (%) | Precision (%) | Recall (%) | F1-Score (%) | Params (M) |
---|---|---|---|---|---|---|---|
Freezing of all network layers | Two-class | 0.1768 | 92.25 | 91.53 | 91.74 | 91.48 | 4.24 |
Three-class | 0.2136 | 89.70 | 89.82 | 89.25 | 88.46 | 4.24 | |
Four-class | 0.2415 | 87.53 | 87.12 | 87.09 | 86.94 | 4.24 | |
Partial freezing of the network layers | Two-class | 0.0288 | 99.13 | 98.67 | 98.51 | 98.70 | 7.52 |
Three-class | 0.0304 | 97.60 | 97.19 | 97.23 | 97.41 | 7.52 | |
Four-class | 0.0982 | 95.32 | 94.91 | 94.95 | 95.13 | 7.52 | |
No freezing of network layers | Two-class | 0.0341 | 98.55 | 98.23 | 98.36 | 97.92 | 7.54 |
Three-class | 0.0376 | 96.20 | 95.97 | 96.68 | 96.45 | 7.54 | |
Four-class | 0.0419 | 93.97 | 93.60 | 93.34 | 93.65 | 7.54 |
Methods | Classification | Loss Value | Accuracy (%) | Precision (%) | Recall (%) | F1-Score (%) | Params (M) |
---|---|---|---|---|---|---|---|
MobileNetV3 | Two-class | 0.3251 | 81.23 | 81.68 | 81.80 | 81.42 | 2.54 |
Three-class | 0.4069 | 78.76 | 78.35 | 78.44 | 78.94 | 2.54 | |
Four-class | 0.4782 | 72.43 | 72.02 | 71.78 | 72.11 | 2.54 | |
MobileNetV3 + Dilated convolution | Two-class | 0.2471 | 90.58 | 90.26 | 89.97 | 90.12 | 24.10 |
Three-class | 0.2949 | 86.97 | 86.64 | 86.55 | 86.34 | 24.10 | |
Four-class | 0.3182 | 81.35 | 81.03 | 81.52 | 80.93 | 24.10 | |
MobileNetV3 + Dilated convolution + transfer learning | Two-class | 0.0288 | 99.13 | 98.67 | 98.51 | 98.70 | 7.52 |
Three-class | 0.0304 | 97.60 | 97.19 | 97.23 | 97.41 | 7.52 | |
Four-class | 0.0982 | 95.32 | 94.91 | 94.95 | 95.13 | 7.52 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Chen, Z.; Wang, H.; Wang, J.; Xu, H.; Mei, N.; Zhang, S. Non-Destructive Detection Method of Apple Watercore: Optimization Using Optical Property Parameter Inversion and MobileNetV3. Agriculture 2024, 14, 1450. https://doi.org/10.3390/agriculture14091450