Based on FCN and DenseNet Framework for the Research of Rice Pest Identification Methods
Abstract
1. Introduction
2. Materials and Methods
2.1. Experimental Data Set
2.2. Data Augmentation
3. Model Refinement
3.1. Feature Extraction Based on Encoder–Decoder Network
3.2. Long-Hop and Short-Hop Connections
3.3. CRF
3.4. Feature Extraction Network DenseNet
3.5. Channel Attention Mechanism
3.6. Lab Environment
3.7. Evaluation Standard
- TP (true positive): positive samples correctly predicted as positive by the model.
- TN (true negative): negative samples correctly predicted as negative.
- FP (false positive): negative samples incorrectly predicted as positive.
- FN (false negative): positive samples incorrectly predicted as negative.
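The four counts above determine the standard evaluation metrics. A minimal sketch in plain Python (the function name is illustrative, not from the paper):

```python
def classification_metrics(tp, tn, fp, fn):
    """Standard metrics computed from the four confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)   # fraction of predicted positives that are correct
    recall = tp / (tp + fn)      # fraction of actual positives that are recovered
    f1 = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}
```

For example, with TP = 90, TN = 85, FP = 10, FN = 15, accuracy is (90 + 85)/200 = 87.5% and precision is 90/100 = 90%.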
4. Results and Analysis
4.1. Parameter Settings
4.2. Segmentation Process Method
4.3. The Comparison of Different Classification Models
4.4. Impact of ECA on Model Performance
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Zhang, B.; Zhang, M.; Chen, Y. Crop pest identification based on spatial pyramid pooling and deep convolutional neural network. Chin. J. Agric. Eng. 2019, 35, 209–215. [Google Scholar]
- Zhang, Y.; Jiang, M.; Yu, P.; Yao, Q.; Yang, B.; Tang, J. Agricultural pest image recognition method based on multi feature fusion and sparse representation. China Agric. Sci. 2018, 51, 2084–2093. [Google Scholar]
- Juan, C.; Liangyong, C.; Shengsheng, W.; Huiying, Z.; Changji, W. Landscape pest image recognition based on improved residual network. J. Agric. Mach. 2019, 50, 187–195. [Google Scholar]
- Haitong, P.; Weiming, C.; Longhua, M.; Hongye, S. A review of pest identification techniques based on deep learning. Agric. Eng. 2020, 10, 19–24. [Google Scholar]
- Gondal, M.D.; Khan, Y.N. Early pest detection from crop using image processing and computational intelligence. FAST-NU Res. J. 2015, 1, 59–68. [Google Scholar]
- Hongzhen, Y.; Jianwei, Z.; Xiangtao, L.; Zuorui, S. Research on Image-Based Remote Automatic Recognition System of Insects. J. Agric. Eng. 2008, 1, 188–192. [Google Scholar]
- Yinsong, Z.; Yindi, Z.; Muce, Y. Recognition and counting of insects on sticky board images based on the improved Faster-RCNN model. J. China Agric. Univ. 2019, 24, 115–122. [Google Scholar]
- Feng, C.; Juntao, G.; Yulei, L.; Tianjia, H. A corn pest identification method based on machine vision and convolutional neural network in the cold region of Northeast China. Jiangsu Agric. Sci. 2020, 48, 237–244. [Google Scholar]
- Xi, C.; Yunzhi, W.; Youhua, Z.; Yi, L. Image recognition of stored grain pests based on deep convolutional neural network. Chin. Agric. Sci. Bull. 2016, 34, 154–158. [Google Scholar]
- Peng, S.; Guifei, C.; Liying, C. Image recognition of soybean pests based on attention convolutional neural network. Chin. J. Agric. Mech. 2020, 41, 171–176. [Google Scholar]
- Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440. [Google Scholar]
- Shi, J.; Malik, J. Normalized cuts and image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 888–905. [Google Scholar]
- Chen, M.; Wang, X.; Feng, B.; Liu, W. Structured random forest for label distribution learning. Neurocomputing 2018, 320, 171–182. [Google Scholar]
- Boswell, D. Introduction to Support Vector Machines; Department of Computer Science and Engineering, University of California: San Diego, CA, USA, 2002. [Google Scholar]
- Bao, W.; Yang, X.; Liang, D.; Hu, G.; Yang, X. Lightweight convolutional neural network model for field wheat ear disease identification. Comput. Electron. Agric. 2021, 189, 106367. [Google Scholar]
- Wang, Q.; Wu, B.; Zhu, P.; Li, P.; Zuo, W.; Hu, Q. Supplementary material for "ECA-Net: Efficient channel attention for deep convolutional neural networks". In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 13–19. [Google Scholar]
- Lafferty, J.; McCallum, A.; Pereira, F.C.N. Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data; Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, 2001. [Google Scholar]
- Jin, X.B.; Su, T.L.; Kong, J.L.; Bai, Y.T.; Miao, B.B.; Dou, C. State-of-the-art mobile intelligence: Enabling robots to move like humans by estimating mobility with artificial intelligence. Appl. Sci. 2018, 8, 379. [Google Scholar]
- Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708. [Google Scholar]
- Hu, F.; Xia, G.-S.; Hu, J.; Zhang, L. Transferring deep convolutional neural networks for the scene classification of high-resolution remote sensing imagery. Remote Sens. 2015, 7, 14680–14707. [Google Scholar]
- Chaib, S.; Liu, H.; Gu, Y.; Yao, H. Deep feature fusion for VHR remote sensing scene classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 4775–4784. [Google Scholar]
- Li, E.; Xia, J.; Du, P.; Lin, C.; Samat, A. Integrating multilayer features of convolutional neural networks for remote sensing scene classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 5653–5665. [Google Scholar]
- Glorot, X.; Bordes, A.; Bengio, Y. Deep sparse rectifier neural networks. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, JMLR Workshop and Conference Proceedings, Fort Lauderdale, FL, USA, 11–13 April 2011; pp. 315–323. [Google Scholar]
- Zhipeng, L.; Libo, Z.; Qiongshui, W. Enhanced cervical cell image data based on generative adversarial network. Sci. Technol. Eng. 2020, 20, 11672–11677. [Google Scholar]
- Al-Masni, M.A.; Al-Antari, M.A.; Choi, M.T.; Han, S.M.; Kim, T.S. Skin lesion segmentation in dermoscopy images via deep full resolution convolutional networks. Comput. Methods Programs Biomed. 2018, 162, 221–231. [Google Scholar]
- Xie, F.; Yang, J.; Liu, J.; Jiang, Z.; Zheng, Y.; Wang, Y. Skin lesion segmentation using high-resolution convolutional neural network. Comput. Methods Programs Biomed. 2020, 186, 105241. [Google Scholar]
- Öztürk, Ş.; Özkaya, U. Skin lesion segmentation with improved convolutional neural network. J. Digit. Imaging 2020, 33, 958–970. [Google Scholar]
- Bi, L.; Kim, J.; Ahn, E.; Kumar, A.; Feng, D.; Fulham, M. Step-wise integration of deep class-specific learning for dermoscopic image segmentation. Pattern Recognit. 2019, 85, 78–89. [Google Scholar]
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
Item | Configuration |
---|---|
CPU | Intel(R) Core(TM) i7-7700K @ 4.20 GHz (Santa Clara, CA, USA) |
Memory | 32 GB |
GPU | NVIDIA GeForce GTX TITAN Xp (Santa Clara, CA, USA) |
Operating system | Ubuntu 16.04 (5.4.0-6ubuntu1~16.04.5) |
CUDA | CUDA 8.0 with cuDNN |
Data processing | Python 2.7, OpenCV, and TensorFlow |
Techniques | Accuracy (%) | Dice Score (%) |
---|---|---|
Proposed model | 95.50 | 92.10 |
FrCN [25] | 94.03 | 87.08 |
CNN-HRFB [26] | 93.80 | 86.20 |
FCN [26] | 92.70 | 82.30 |
iFCN [27] | 95.30 | 88.64 |
DCL-PSI [28] | 94.08 | 85.66 |
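The Dice scores reported above measure the overlap between the predicted segmentation mask and the ground-truth mask, Dice = 2|A ∩ B| / (|A| + |B|). A minimal NumPy sketch (the function name is illustrative):

```python
import numpy as np

def dice_score(pred, target):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum())
```

Identical masks give a score of 1.0; disjoint masks give 0.0, so a Dice score of 92.10% indicates near-complete overlap between prediction and ground truth.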
Techniques | Size (MB) | Loss (%) | Test Accuracy (%) | Test Time/Image (ms) |
---|---|---|---|---|
VGG-16 | 800.33 | 35.45 | 81.98 | 153.30 |
VGG-19 | 832.45 | 33.34 | 87.94 | 163.10 |
ResNet50 | 95.23 | 19.33 | 90.74 | 104.50 |
DenseNet | 75.43 | 29.34 | 91.37 | 98.80 |
FCN-ECAD | 143.50 | 6.55 | 98.28 | 69.20 |
Label | FCN-ECAD (%) | VGG-16 (%) | VGG-19 (%) | ResNet50 (%) | DenseNet (%) |
---|---|---|---|---|---|
C. suppressalis | 98.89 | 81.23 | 86.56 | 88.97 | 90.43 |
N. aenescens | 98.99 | 80.38 | 87.88 | 89.99 | 91.58 |
C. medinalis | 97.28 | 79.99 | 87.56 | 89.69 | 90.43 |
N. lugens | 96.42 | 75.25 | 80.95 | 85.63 | 87.63 |
A. ypsilon | 97.33 | 81.98 | 86.89 | 90.45 | 91.22 |
G. sp. | 99.53 | 85.65 | 92.33 | 94.21 | 94.38 |
M. separata | 97.35 | 78.65 | 88.15 | 91.59 | 89.92 |
H. armigera | 98.49 | 82.79 | 87.68 | 90.24 | 90.34 |
Gryllidae | 98.98 | 86.78 | 91.58 | 92.99 | 93.87 |
H. diomphalia | 99.58 | 87.11 | 89.87 | 93.65 | 93.99 |
Average | 98.28 | 81.98 | 87.94 | 90.74 | 91.37 |
Techniques | Test Accuracy (%) | Size (MB) | Test Time/Image (ms) |
---|---|---|---|
Proposed model without ECA | 94.34 | 142.67 | 68.40 |
Proposed model with ECA | 98.28 | 143.50 | 69.20 |
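The table above shows the ECA module raising test accuracy from 94.34% to 98.28% for under 1 MB of added weight. ECA-Net (Wang et al., cited above) replaces SE-style fully connected layers with a cheap 1D convolution across neighbouring channels of the pooled descriptor, with a kernel size adapted to the channel count. A hedged NumPy sketch with placeholder uniform convolution weights (in the trained model these weights are learned, and the function names are illustrative):

```python
import math
import numpy as np

def eca_kernel_size(channels, gamma=2, b=1):
    """Adaptive kernel size from ECA-Net: nearest odd |log2(C)/gamma + b/gamma|."""
    k = int(abs(math.log2(channels) / gamma + b / gamma))
    return k if k % 2 else k + 1

def eca_attention(features):
    """Apply an ECA-style channel gate to an (H, W, C) feature map."""
    c = features.shape[-1]
    k = eca_kernel_size(c)
    # Global average pooling -> one descriptor value per channel
    y = features.mean(axis=(0, 1))
    # 1D convolution across neighbouring channels (edge-padded to keep length C)
    pad = k // 2
    y_pad = np.pad(y, pad, mode="edge")
    kernel = np.ones(k) / k  # placeholder weights; learned in the real module
    z = np.convolve(y_pad, kernel, mode="valid")
    # Sigmoid gate, then reweight each channel of the feature map
    w = 1.0 / (1.0 + np.exp(-z))
    return features * w
```

The small parameter footprint in the table is consistent with this design: the module adds only the k weights of the 1D convolution per attention block, rather than the C²/r weights of an SE-style bottleneck.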
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Gong, H.; Liu, T.; Luo, T.; Guo, J.; Feng, R.; Li, J.; Ma, X.; Mu, Y.; Hu, T.; Sun, Y.; et al. Based on FCN and DenseNet Framework for the Research of Rice Pest Identification Methods. Agronomy 2023, 13, 410. https://doi.org/10.3390/agronomy13020410