Determination Model of Epidermal Wettability for Apple Rootstock Cutting Based on the Improved U-Net
Abstract
1. Introduction
2. Materials and Methods
2.1. Acquisition of Datasets
2.2. Data Enhancement and Expansion
2.3. Selection of Color Space
2.4. U-DSE-AG-Net Neural Network
2.4.1. Neural Network Backbone Structure and Improvement
- (1) Squeeze
- (2) Excitation
- (3) Scale Operation
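The three operations above follow the standard squeeze-and-excitation design. A minimal Keras sketch of the baseline SE block is given below; the paper's DSE variant is not reproduced here, so this only illustrates the squeeze, excitation, and scale steps that DSE builds on:

```python
from tensorflow.keras import layers

def se_block(x, reduction=16):
    """Baseline SE block: squeeze -> excitation -> scale."""
    channels = x.shape[-1]
    # (1) Squeeze: global average pooling compresses each channel map to a scalar.
    s = layers.GlobalAveragePooling2D()(x)
    # (2) Excitation: a bottleneck MLP learns a weight in (0, 1) per channel.
    e = layers.Dense(max(channels // reduction, 1), activation="relu")(s)
    e = layers.Dense(channels, activation="sigmoid")(e)
    # (3) Scale: broadcast the channel weights back over the spatial dimensions.
    e = layers.Reshape((1, 1, channels))(e)
    return layers.Multiply()([x, e])
```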
2.4.2. Loss Function Selection and Improvement
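The improved loss function itself is defined in the full text. As a generic illustration only, a segmentation loss that combines a pixel-wise term with a region-wise term, here binary cross-entropy plus Dice, might look like the following; this is an assumption for illustration, not the paper's exact formulation:

```python
import tensorflow as tf

def bce_dice_loss(y_true, y_pred, smooth=1.0):
    """Illustrative loss: pixel-wise binary cross-entropy plus a Dice term."""
    bce = tf.reduce_mean(tf.keras.losses.binary_crossentropy(y_true, y_pred))
    y_t = tf.reshape(tf.cast(y_true, tf.float32), [-1])
    y_p = tf.reshape(tf.cast(y_pred, tf.float32), [-1])
    intersection = tf.reduce_sum(y_t * y_p)
    dice = (2.0 * intersection + smooth) / (
        tf.reduce_sum(y_t) + tf.reduce_sum(y_p) + smooth)
    return bce + (1.0 - dice)  # both terms decrease as segmentation improves
```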
2.5. Surface Humidification System for Cuttings
2.5.1. Design for Humidification System
2.5.2. Test of Humidification System
3. Results
3.1. Configuration of Model Training Environment Parameters
3.1.1. Evaluation Index and Result Based on Loss Value
3.1.2. Evaluation Indices and Results Based on Confusion Matrix
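For reference, the confusion-matrix-based indices (accuracy, precision, recall, F1-Score) can be computed as in the short sketch below; the example matrix values are hypothetical:

```python
import numpy as np

def indices_from_confusion(cm):
    """Accuracy and per-class precision/recall/F1 from a confusion matrix
    (rows = ground truth, columns = prediction)."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    precision = tp / np.maximum(cm.sum(axis=0), 1e-12)
    recall = tp / np.maximum(cm.sum(axis=1), 1e-12)
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    accuracy = tp.sum() / cm.sum()
    return accuracy, precision, recall, f1

# Hypothetical 2x2 wet/dry matrix for illustration:
acc, p, r, f1 = indices_from_confusion([[95, 5], [8, 92]])
```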
4. Discussion
- In this study, the epidermal wetness of the cuttings was classified into only two categories, wet and dry; production practice may require a finer grading. The factors affecting epidermal wetness include not only exchange with the external environment but also the consumption and production of water by the cuttings' own physiological activity (transpiration consumes water, while photosynthesis produces it). Epidermal wetness is therefore a dynamic process, and characterizing it requires observing both changes in the external environment and the physiological activity of the cuttings themselves. Spectroscopy can accurately reflect the chlorophyll and carotene content of plant cells and thus indirectly indicate the intensity of photosynthesis in the epidermal cells of cuttings, while microscopy imaging can be used to observe the degree of stomatal opening and closing and thereby monitor the intensity of transpiration. To establish a model of the change in epidermal wetness, image segmentation should therefore be combined with spectral and microscopy imaging technologies to fully capture the dynamic exchange between the cuttings and the external environment, yielding a dynamic classification model of epidermal wetness.
- When machine vision is applied to monitoring the moisture of the epidermis of cuttings, the intensity and color of the fill light directly affect the imaging results. When constructing a model of epidermal wetness, it is therefore necessary to further explore the relationship among the spectral composition, transmission mode, and transmission path of external light and the imaging of epidermal wetness. The physical characteristics of the epidermis differ across genotypes, such as roughness and the presence or absence of fluff; these differences not only alter how the water film adheres but also change the transmission of the fill light and the expression of the epidermis in the color space, so the mechanisms by which epidermal physical properties and the light source influence apparent wetness need to be explored. To further improve the neural network's understanding of wetness information, the complementary strengths of multiple color spaces, such as RGB, HSV, Lab, and HSI, could be combined in the future to extract epidermal-wetness features more efficiently (a minimal channel-stacking sketch follows this list).
- Regarding the classification of epidermal wetness, to further improve the transferability of the neural network model, the supervised training used in this study should be replaced with unsupervised training in the future, so that the network can divide the wetness levels of the epidermis on its own and improve classification performance. In addition, a lightweight design of the U-DSE-AG-Net model is needed so that it can be embedded in mobile terminals to supply data to the humidification system.
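As a concrete illustration of the multi-color-space idea raised above, the preprocessing sketch below stacks RGB, HSV, and Lab representations into one input tensor. This is a hypothetical preprocessing step, not the pipeline used in this study, and the file name is a placeholder:

```python
import cv2
import numpy as np

def stacked_color_input(bgr_image):
    """Stack RGB, HSV, and Lab representations into one 9-channel tensor,
    a simple way to hand several color spaces to a network at once."""
    rgb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2RGB)
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
    stacked = np.concatenate([rgb, hsv, lab], axis=-1).astype(np.float32)
    return stacked / 255.0  # coarse normalization; note OpenCV hue spans 0-179

img = cv2.imread("cutting.jpg")   # hypothetical sample image
x = stacked_color_input(img)      # shape (H, W, 9)
```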
5. Conclusions
- This study converted RGB images of the cuttings into the HSV color space, achieving an effective expression of the wetness information of the cutting epidermis. The Hue and Saturation channels can serve as the basis for classifying epidermal wetness under supplemental lighting, improving the model's classification ability in complex light environments.
- The DSE module strengthens the model's ability to capture Hue- and Saturation-channel information relative to the baseline SE module. Integrated with the improved AG module, it assigns non-negative weights to important features, which reduces prediction error. Embedding the DSE and AG modules in the skip connection layers of U-Net yields U-DSE-AG-Net, a structure that weakens lighting-noise interference in the skip connections. The comparative and ablation tests show that the U-DSE-AG-Net model performs best: Loss and Val_Loss are the smallest, at 0.033 and 0.037, respectively, and the F1-Score improves by 3.2% over U-Net. Under supplementary blue-purple light, the model's accuracy in predicting wet and dry epidermis increases by 45.41% and 40.62%, respectively, demonstrating a solid ability to resist light-noise interference.
- An experiment on identifying the epidermal moisture of cuttings of three genotypes, N40, N59, and G935, was carried out. The model's average accuracy was 91.69%, and its detection speed was 38.45 fps. The average moisture retention rate of the humidification system for cuttings was 92.51%. The system can monitor the epidermal moisture of cuttings in real time and maintain consistent moisturizing. The model shows good generalization and practicability and qualifies as an economical, non-contact, and non-destructive monitoring method (a sketch of the corresponding monitoring-to-humidification decision logic follows this list).
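To make the monitoring-to-humidification link concrete, a minimal sketch of the decision logic is given below. The label convention (0 = background, 1 = wet epidermis, 2 = dry epidermis) and the trigger threshold are assumptions for illustration, not the system's actual parameters:

```python
import numpy as np

DRY_FRACTION_THRESHOLD = 0.2  # assumed trigger level, not the system's tuned value

def dry_fraction(mask: np.ndarray) -> float:
    """Share of cutting pixels labeled dry.
    Assumed labels: 0 = background, 1 = wet epidermis, 2 = dry epidermis."""
    n_cutting = np.count_nonzero(mask > 0)
    return np.count_nonzero(mask == 2) / n_cutting if n_cutting else 0.0

def should_mist(mask: np.ndarray) -> bool:
    """One monitoring cycle: mist when dryness crosses the threshold."""
    return dry_fraction(mask) > DRY_FRACTION_THRESHOLD

# Toy mask: 4 of 12 cutting pixels are dry (fraction 0.33), so misting triggers.
toy = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 1, 1],
                [2, 2, 1, 1]])
print(should_mist(toy))  # True
```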
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
| Encoder | Skip Connection Layer | Decoder |
|---|---|---|
| Input (512, 512, 3) | | |
| f1 | | |
| Conv2d filters = 64 (512, 512, 64) | | |
| Conv2d filters = 64 (512, 512, 64) | → DSE (512, 512, 64) → AG (512, 512, 128) → | Conv2d filters = 64 (512, 512, 64) |
| Conv2d filters = 64 (512, 512, 64) | ↑ | Concatenate (512, 512, 192) |
| Maxpooling s = 2 (256, 256, 64) | UpSampling2D (512, 512, 128) | f1′ |
| f2 | | |
| Conv2d filters = 128 (256, 256, 128) | | |
| Conv2d filters = 128 (256, 256, 128) | → DSE (256, 256, 128) → AG (256, 256, 256) → | Conv2d filters = 128 (256, 256, 128) |
| Conv2d filters = 128 (256, 256, 128) | ↑ | Concatenate (256, 256, 384) |
| Maxpooling s = 2 (128, 128, 128) | UpSampling2D (256, 256, 256) | f2′ |
| f3 | | |
| Conv2d filters = 256 (128, 128, 256) | | |
| Conv2d filters = 256 (128, 128, 256) | → DSE (128, 128, 256) → AG (128, 128, 512) → | Conv2d filters = 256 (128, 128, 256) |
| Conv2d filters = 256 (128, 128, 256) | ↑ | Concatenate (128, 128, 768) |
| Maxpooling s = 2 (64, 64, 256) | UpSampling2D (128, 128, 512) | f3′ |
| f4 | | |
| Conv2d filters = 512 (64, 64, 512) | | |
| Conv2d filters = 512 (64, 64, 512) | → DSE (64, 64, 512) → AG (64, 64, 512) → | Conv2d filters = 512 (64, 64, 512) |
| Conv2d filters = 512 (64, 64, 512) | ↑ | Concatenate (64, 64, 1024) |
| Maxpooling s = 2 (32, 32, 512) | UpSampling2D (64, 64, 512) | f4′ |
| f5 | | |
| Conv2d filters = 512 (32, 32, 512) | | |
| Conv2d filters = 512 (32, 32, 512) | ↑ | |
| Conv2d filters = 512 (32, 32, 512) | → | f5′ |
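The table can be read as the Keras sketch below. The internals of DSE and AG are placeholders here (a plain SE block and an additive attention gate stand in, so the gated skip-path channel counts differ slightly from the AG cells above), and the 3-class softmax head is an assumption; the encoder shapes and concatenation widths match the table:

```python
from tensorflow.keras import Input, Model, layers

def conv_block(x, filters, n=3):
    """n stacked 3x3 ReLU convolutions, matching the encoder rows above."""
    for _ in range(n):
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def dse(x):
    """Placeholder for the DSE module: a plain SE block stands in."""
    c = x.shape[-1]
    s = layers.GlobalAveragePooling2D()(x)
    e = layers.Dense(max(c // 16, 1), activation="relu")(s)
    e = layers.Dense(c, activation="sigmoid")(e)
    return layers.Multiply()([x, layers.Reshape((1, 1, c))(e)])

def ag(skip, gate):
    """Placeholder attention gate: additive attention producing a
    non-negative mask that re-weights the skip features."""
    a = layers.Add()([layers.Conv2D(gate.shape[-1], 1)(skip),
                      layers.Conv2D(gate.shape[-1], 1)(gate)])
    a = layers.Conv2D(1, 1, activation="sigmoid")(layers.Activation("relu")(a))
    return layers.Multiply()([skip, a])

inputs = Input((512, 512, 3))
f1 = conv_block(inputs, 64);  p1 = layers.MaxPooling2D(2)(f1)
f2 = conv_block(p1, 128);     p2 = layers.MaxPooling2D(2)(f2)
f3 = conv_block(p2, 256);     p3 = layers.MaxPooling2D(2)(f3)
f4 = conv_block(p3, 512);     p4 = layers.MaxPooling2D(2)(f4)
f5 = conv_block(p4, 512)                              # bottleneck, (32, 32, 512)

u4 = layers.UpSampling2D(2)(f5)                       # (64, 64, 512)
d4 = conv_block(layers.Concatenate()([ag(dse(f4), u4), u4]), 512, n=1)  # 1024 -> 512
u3 = layers.UpSampling2D(2)(d4)                       # (128, 128, 512)
d3 = conv_block(layers.Concatenate()([ag(dse(f3), u3), u3]), 256, n=1)  # 768 -> 256
u2 = layers.UpSampling2D(2)(d3)                       # (256, 256, 256)
d2 = conv_block(layers.Concatenate()([ag(dse(f2), u2), u2]), 128, n=1)  # 384 -> 128
u1 = layers.UpSampling2D(2)(d2)                       # (512, 512, 128)
d1 = conv_block(layers.Concatenate()([ag(dse(f1), u1), u1]), 64, n=1)   # 192 -> 64

outputs = layers.Conv2D(3, 1, activation="softmax")(d1)  # assumed 3-class head
model = Model(inputs, outputs)
```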
Accuracy rating (A)/% under no supplementary lighting:

| Neural Network | Wet Epidermis N40 | Wet Epidermis N59 | Wet Epidermis G935 | Dry Epidermis N40 | Dry Epidermis N59 | Dry Epidermis G935 |
|---|---|---|---|---|---|---|
| DeeplabV3+ | 79.03 | 78.55 | 89.63 | 69.02 | 67.77 | 89.54 |
| PSPNet | 44.23 | 55.21 | 65.03 | 33.38 | 37.46 | 63.21 |
| U-Net | 87.22 | 85.69 | 90.04 | 85.23 | 86.52 | 90.42 |
| U-SE-Net | 90.89 | 90.11 | 92.15 | 89.33 | 88.29 | 92.64 |
| U-DSE-Net | 92.11 | 90.01 | 93.14 | 90.31 | 91.28 | 93.47 |
| U-DSE-AG-Net | 95.01 | 94.99 | 95.25 | 94.73 | 95.05 | 95.44 |

Accuracy rating (A)/% under supplementary lighting:

| Neural Network | Wet Epidermis N40 | Wet Epidermis N59 | Wet Epidermis G935 | Dry Epidermis N40 | Dry Epidermis N59 | Dry Epidermis G935 |
|---|---|---|---|---|---|---|
| DeeplabV3+ | 23.99 | 34.15 | 37.84 | 32.57 | 35.09 | 35.12 |
| PSPNet | 20.11 | 10.56 | 27.49 | 24.16 | 23.33 | 27.47 |
| U-Net | 31.28 | 29.55 | 42.75 | 35.26 | 45.07 | 49.06 |
| U-SE-Net | 69.24 | 79.02 | 78.58 | 71.59 | 65.22 | 78.84 |
| U-DSE-Net | 79.62 | 81.18 | 83.35 | 82.32 | 80.58 | 84.93 |
| U-DSE-AG-Net | 85.62 | 88.44 | 88.16 | 89.09 | 88.97 | 89.68 |
| Neural Network | Number of Images | Average Detection Time (ms) | Frame Rate (fps) | Moisture Retention Rate (%), No Supplementary Lighting | Moisture Retention Rate (%), Supplementary Lighting |
|---|---|---|---|---|---|
| DeeplabV3+ | 300 | 85.86 | 34.93 | 85.43 | 32.04 |
| PSPNet | 300 | 164.07 | 18.28 | 61.31 | 23.46 |
| U-Net | 300 | 78.99 | 37.98 | 90.22 | 41.04 |
| U-SE-Net | 300 | 75.53 | 40.80 | 90.62 | 73.53 |
| U-DSE-Net | 300 | 77.16 | 38.88 | 92.37 | 83.66 |
| U-DSE-AG-Net | 300 | 78.03 | 38.45 | 95.14 | 89.87 |
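A simple way to reproduce the timing columns is to average per-image inference time over a fixed image count. The sketch below assumes a trained Keras model and derives fps as 1000 divided by the per-image milliseconds, which may differ from the exact timing protocol used for the table above:

```python
import time
import numpy as np

def benchmark(model, n_images=300, shape=(1, 512, 512, 3)):
    """Average per-image inference time (ms) and the derived frame rate,
    mirroring the 300-image protocol of the table above."""
    x = np.random.rand(*shape).astype("float32")
    model.predict(x, verbose=0)            # warm-up pass, excluded from timing
    start = time.perf_counter()
    for _ in range(n_images):
        model.predict(x, verbose=0)
    ms = 1000.0 * (time.perf_counter() - start) / n_images
    return ms, 1000.0 / ms                 # (detection time in ms, fps)
```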