Precision Corn Pest Detection: Two-Step Transfer Learning for Beetles (Coleoptera) with MobileNet-SSD
Abstract
1. Introduction
- To introduce a new two-step transfer learning approach that significantly enhances the accuracy of MobileNet-SSD networks. As mentioned above, this procedure starts with training on a broader dataset, followed by fine-tuning the network's parameters on a specific dataset;
- To thoroughly assess, in this context, two versions of the MobileNet-SSD network—MobileNet-SSD-v1 and MobileNet-SSD-v2-Lite—providing insights into their relative performance. This comparative analysis is important for understanding the balance between model complexity and accuracy on low-power mobile systems;
- To assess the practicality and feasibility of deploying the proposed neural network models on low-power Jetson devices as a solution for real-world applications. MobileNet-SSD networks combined with NVIDIA Jetson platforms are known as a highly optimized solution in terms of computational speed and energy efficiency, making them well-suited for embedded and mobile applications;
- To apply the trained models to detect harmful beetle species, and also to distinguish them from a beneficial species (Coccinella), in order to demonstrate the potential utility of neural networks in real-time pest control. Emphasizing the preservation of beneficial species adds ecological significance to the research.
2. Materials and Methods
2.1. Selected Neural Networks
2.2. The Training Platform and Framework
2.3. PyTorch Scripts
- train_ssd.py [63]: this script serves as a tool for training Single Shot MultiBox Detector (SSD) object detection models. It offers flexibility in configuring dataset types, network architectures and training parameters. The script incorporates data preprocessing, model training, validation and checkpointing, with support for various learning rate scheduling strategies. It also enables optional mean average precision (mAP) evaluation during validation. We used this script for re-training our SSD-MobileNet networks;
- eval_ssd.py [64]: this script is an evaluation tool for assessing the performance of trained SSD models. It allows for customizable evaluation parameters, including dataset type, model architecture and evaluation metrics, such as mean average precision (mAP). The script loads the dataset and the model checkpoint (.pth) file, performs inference and computes class-specific and/or overall accuracy and mAP values. We used this script to test the model inference performance on the test set before converting it from PyTorch to Open Neural Network Exchange (ONNX) format;
- onnx_export.py [65]: this script converts PyTorch deep-learning models into the ONNX format, an open standard that facilitates model interoperability and deployment across diverse inference platforms and environments. The script offers command-line customization options, including the choice of neural network architecture, input model checkpoint path, class labels file and output ONNX model path. It dynamically selects the inference device, loads the PyTorch model and exports it to ONNX. In order to conduct testing and real-time inference using our re-trained SSD-MobileNet models with TensorRT version 8.5.2, it was necessary to convert the PyTorch models into the ONNX format so that TensorRT can load them. TensorRT is a specialized library designed for high-performance inference on NVIDIA graphics processing units (GPUs), currently considered the fastest way to run a trained model [66]. It automatically optimizes DNN models by fusing multiple layers and operations into a single, efficient kernel, which reduces memory bandwidth usage and minimizes the number of compute operations, resulting in faster inference. To further improve efficiency, TensorRT applies various graph optimizations, such as layer pruning, to eliminate unnecessary operations. It also supports batched inference, allowing multiple inputs to be processed in parallel, which enhances throughput in real-time applications [67];
- detectnet.py [68]: this script creates an NVIDIA detectNet object and uses TensorRT to run inference with the trained models on actual images and video sequences. We used ‘detectnet.py’ with the ONNX versions of the trained models to visualize detection boxes and confidence scores on the images from the test set; a minimal usage sketch is shown after this list. This script was particularly important during error analysis and the identification of the model’s strengths and limitations.
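To make the inference path concrete, the following is a minimal sketch of loading a re-trained SSD-MobileNet ONNX model through the detectNet Python API (which builds a TensorRT engine under the hood) and running detection on one test image. File paths are placeholders; the blob names match those written by onnx_export.py, and 0.5 is the confidence threshold used in this study.

```python
from jetson_inference import detectNet
from jetson_utils import loadImage

# Load a re-trained SSD-MobileNet model exported to ONNX; detectNet compiles
# it into a TensorRT engine on first use. Paths are placeholders.
net = detectNet(model="models/mb1-ssd.onnx", labels="models/labels.txt",
                input_blob="input_0", output_cvg="scores", output_bbox="boxes",
                threshold=0.5)

img = loadImage("test/sample.jpg")  # hypothetical test image
detections = net.Detect(img)        # boxes, class IDs and confidences

for det in detections:
    print(net.GetClassDesc(det.ClassID), f"{det.Confidence:.2f}",
          (det.Left, det.Top, det.Right, det.Bottom))
```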
2.4. Image Datasets
- It has a large number of labeled images, standing as one of the most extensive sources of data available for training computer vision models;
- Each image is already annotated and comes with detailed information about the objects, which makes it readily suitable for training object detection models;
- Its images are provided under an open license for sharing and use.
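For reference, the two Open Images classes used in the first transfer learning step can be fetched with the downloader helper distributed alongside train_ssd.py. This is a sketch only: we assume the open_images_downloader.py script from the same pytorch-ssd repository and its documented flags, and the data directory is a placeholder.

```python
import subprocess

# Sketch: download only the two Open Images classes used in the first
# transfer learning step. 'Ladybug' and 'Beetle' are the assumed Open Images
# class names (the paper refers to the 'Ladybug' and 'Beetles' classes).
subprocess.run([
    "python3", "open_images_downloader.py",
    "--class-names", "Ladybug,Beetle",
    "--data=data/open_images",
], check=True)
```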
2.5. Transfer Learning Procedures
- dataset-type: the dataset format to be used; in this case, the Open Images format;
- net: used to select MobileNet-SSD-v1 or MobileNet-SSD-v2-Lite;
- data: the directory containing the training images;
- model-dir: the directory where the trained models are to be stored;
- batch-size: the number of images per batch; after experimentation on the Jetson AGX Orin platform, equipped with 64 GB of memory, we determined that a batch size of 32 provided optimal training performance in terms of training time and accuracy;
- workers: the number of PyTorch dataloader threads employed; in our setup, we allocated one thread for each of the 12 CPU cores of the Jetson AGX Orin platform;
- epochs: the number of training epochs; through iterative testing, we determined that the highest accuracy was achieved after more than 100 training epochs.
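Assembled into a complete invocation, the first transfer learning step looks like the following sketch. Directory names and the exact epoch count are placeholders, not values taken from the paper's configuration; ‘mb1-ssd’ selects MobileNet-SSD-v1, while ‘mb2-ssd-lite’ would select MobileNet-SSD-v2-Lite.

```python
import subprocess

# Sketch of the first transfer learning step, on the Open Images subset.
subprocess.run([
    "python3", "train_ssd.py",
    "--dataset-type=open_images",  # Open Images dataset format
    "--net=mb1-ssd",               # MobileNet-SSD-v1
    "--data=data/open_images",     # directory with the training images
    "--model-dir=models/step1",    # where checkpoints are stored
    "--batch-size=32",             # optimal on the 64 GB Jetson AGX Orin
    "--workers=12",                # one dataloader thread per CPU core
    "--epochs=120",                # accuracy peaked after more than 100 epochs
], check=True)
```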
- For the second transfer learning step, the Pascal VOC format is specified for the dataset, by means of the dataset-type argument: --dataset-type=voc;
- The pretrained-ssd argument selects, as the starting point for retraining, the checkpoint from the previous transfer learning phase with the highest accuracy and the lowest cost function value; for instance: --pretrained-ssd=models/mb1-ssd-Epoch-62-Loss-0.97135413.pth.
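With these two changes, the second step re-runs the same script; a sketch follows, where the data directory is a placeholder and the checkpoint name follows the example above.

```python
import subprocess

# Sketch of the second transfer learning step: fine-tuning the best step-1
# checkpoint (highest accuracy, lowest loss) on the augmented custom dataset
# in Pascal VOC format. 'data/custom_beetles' is a hypothetical directory.
subprocess.run([
    "python3", "train_ssd.py",
    "--dataset-type=voc",
    "--net=mb1-ssd",
    "--data=data/custom_beetles",
    "--model-dir=models/step2",
    "--pretrained-ssd=models/mb1-ssd-Epoch-62-Loss-0.97135413.pth",
    "--batch-size=32",
    "--workers=12",
    "--epochs=120",
], check=True)
```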
2.6. Evaluation Metrics
The average precision (AP) is computed as $AP = \sum_{k=1}^{n} P(k)\,\Delta r(k)$, where:
- $P(k)$ represents the precision at the kth retrieved item;
- $\Delta r(k)$ is the change in recall from the (k−1)th to the kth retrieved item;
- n is the total number of retrieved items.
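In code, this sum reduces to precision-weighted recall increments over the ranked detections. The following is a minimal sketch with toy values (illustrative only, not data from our experiments):

```python
import numpy as np

def average_precision(precision, recall):
    """AP = sum over k of P(k) * delta_r(k), with items in rank order."""
    precision = np.asarray(precision, dtype=float)
    recall = np.asarray(recall, dtype=float)
    delta_r = np.diff(recall, prepend=0.0)  # change in recall at each item
    return float(np.sum(precision * delta_r))

# Toy example: three retrieved detections, two ground-truth objects.
# Precision after each item: 1/1, 1/2, 2/3; recall: 0.5, 0.5, 1.0.
print(average_precision([1.0, 0.5, 2 / 3], [0.5, 0.5, 1.0]))  # ~0.83
```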
3. Results and Discussion
1. Initially, the original downloaded MobileNet-SSD-v1 neural network was retrained directly on the custom image dataset, without augmentation. The performance of this model on the testing set served as the baseline reference level;
2. A single transfer learning step was performed, retraining the original MobileNet-SSD-v1 and MobileNet-SSD-v2-Lite models directly on the augmented custom dataset;
3. A two-step transfer learning procedure was performed. Each of the two original models was retrained twice: first on the Open Images dataset (containing only the ‘Ladybug’ and ‘Beetles’ classes), and then each resulting model was further retrained on the augmented custom dataset;
4. The models obtained in steps (2) and (3) were compared, in terms of accuracy and mAP, with the baseline reference model from step (1).
3.1. Evaluation Metrics and Models Ranking
3.2. Error Analysis
- Misclassification errors: As illustrated in Figure 9a, there is a single misclassification error over the entire test set, in which the model confuses Zabrus with Anoxia. This could be attributed to Zabrus being photographed from an unusual lateral position, a scenario represented by only two images in the training set;
- Non-identification errors: Errors in which the model fails to make an identification, as illustrated in Figure 9b–f. Most of these failures are likely due to the training set containing an extremely limited number of images with a very particular background; the model may therefore struggle when the background differs significantly from the majority of training images. Neural networks are expected to generalize across different backgrounds, but extreme variations can pose difficulties. For instance, the image in Figure 9b has a blue sky background, and the training set contains only four images with a blue background. Similarly, for Figure 9c, there are only two such images in the training set;
- Another circumstance involves instances where the object is considerably small relative to the overall image dimensions, as in Figure 9d, or barely visible, as in Figure 9f. A particular scenario is presented by images featuring Opatrum on the ground: the model correctly identifies Opatrum in Figure 8e but fails to do so in Figure 9e, where the confidence level of 45% falls just below the 50% detection threshold;
- Duplication errors: Figure 9g,h show duplication errors, where the same pest is detected twice in images containing a single specimen. Figure 9g highlights potential network confusion during the feature extraction stage, caused by the corn silk, which bears visual similarities to the legs and antennae of the pest species;
- False positive errors: Errors where a pest was detected in an area containing only background. There is a single such case over the entire test set: a false Diabrotica detected in the upper-right section of Figure 9h, with a confidence level of 62.5%.
4. Conclusions
- To address the issue of generalization error, we plan to employ a custom dataset comprising a minimum of 8000 images, ensuring representative sampling for real-world conditions;
- The top-performing model from current research will undergo retraining using the two-step transfer learning procedure on the new dataset. To gain a more comprehensive understanding of performance improvement, both the pretrained and untrained versions of the model will be trained on the non-augmented dataset. The outcomes will then be compared with the results obtained from training the same versions of the model on the augmented dataset;
- Our dataset enhancement strategy will involve incorporating more images that led to errors in the current study. This includes instances of small-scale beetles relative to image size, partially visible beetles, and images featuring Opatrum on the ground. The goal is to enhance the model’s feature extraction capabilities, particularly in scenarios where the color of Opatrum blends with the soil color;
- Evaluation of model performance will be extended to scenarios with two or more pest species coexisting in the same image, including scenarios involving both pests and beneficial species.
Author Contributions
Funding
Institutional Review Board Statement
Data Availability Statement
Conflicts of Interest
References
- Çakmakçı, R.; Salık, M.A.; Çakmakçı, S. Assessment and Principles of Environmentally Sustainable Food and Agriculture Systems. Agriculture 2023, 13, 1073. [Google Scholar] [CrossRef]
- Flint, M.L.; Van den Bosch, R. Introduction to Integrated Pest Management; Springer: New York, NY, USA, 2012; pp. 1–256. [Google Scholar]
- Jensen, S.E. Insecticide Resistance in the Western Flower Thrips, Frankliniella occidentalis. Integr. Pest Manag. Rev. 2000, 5, 131–146. [Google Scholar] [CrossRef]
- Kranthi, K.R.; Jadhav, D.R.; Kranthi, S.; Wanjari, R.R.; Ali, S.S.; Russell, D.A. Insecticide Resistance in Five Major Insect Pests of Cotton in India. Crop Prot. 2002, 21, 449–460. [Google Scholar] [CrossRef]
- Ngegba, P.M.; Cui, G.; Khalid, M.Z.; Zhong, G. Use of Botanical Pesticides in Agriculture as an Alternative to Synthetic Pesticides. Agriculture 2022, 12, 600. [Google Scholar] [CrossRef]
- Krupke, C.; Holland, J.D.; Long, E.; Eitzer, B.D. Planting of Neonicotinoid-Treated Maize Poses Risks for Honey Bees and Other Non-Target Organisms Over a Wide Area without Consistent Crop Yield Benefit. J. Appl. Ecol. 2017, 54, 1449–1458. [Google Scholar] [CrossRef]
- Krupke, C.H.; Long, E.Y. Intersections Between Neonicotinoid Seed Treatments and Honey Bees. Curr. Opin. Insect Sci. 2015, 10, 8–13. [Google Scholar] [CrossRef]
- Bonmatin, J.M.; Giorio, C.; Girolami, V.; Goulson, D.; Kreutzweiser, D.P.; Krupke, C.; Liess, M.; Long, E.; Marzaro, M.; Mitchell, E.A.D.; et al. Environmental Fate and Exposure; Neonicotinoids and Fipronil. Environ. Sci. Pollut. Res. 2015, 22, 35–67. [Google Scholar] [CrossRef] [PubMed]
- Sánchez-Bayo, F.; Goka, K.; Hayasaka, D. Contamination of the Aquatic Environment with Neonicotinoids and its Implication for Ecosystems. Front. Environ. Sci. 2016, 4, 71. [Google Scholar] [CrossRef]
- Ghaderi, S.; Fathipour, Y.; Asgari, S.; Reddy, G. Economic Injury Level and Crop Loss Assessment for Tuta absoluta (Lepidoptera: Gelechiidae) on Different Tomato Cultivars. J. Appl. Entomol. 2019, 143, 493–507. [Google Scholar] [CrossRef]
- Saha, T.; Chandran, N. Chemical Ecology and Pest Management: A Review. Int. J. Chem. Stud. 2017, 5, 618–621. Available online: https://www.chemijournal.com/archives/2017/vol5issue6/PartI/5-5-449-329.pdf (accessed on 7 April 2023).
- Føre, M.; Frank, K.; Norton, T.; Svendsen, E.; Alfredsen, J.; Dempster, T.; Eguiraun, H.; Watson, W.; Stahl, A.; Sunde, L. Precision Fish Farming: A New Framework to Improve Production in Aquaculture. Biosyst. Eng. 2018, 173, 176–193. [Google Scholar] [CrossRef]
- Eli-Chukwu, N. Applications of Artificial Intelligence in Agriculture: A Review. Eng. Technol. Appl. Sci. Res. 2019, 9, 4377–4383. [Google Scholar] [CrossRef]
- Smith, M. Getting Value from Artificial Intelligence in Agriculture. Anim. Prod. Sci. 2018, 60, 46–54. [Google Scholar] [CrossRef]
- Bannerjee, G.; Sarkar, U.; Das, S.; Ghosh, I. Artificial Intelligence in Agriculture: A Literature Survey. Int. J. Sci. Res. Comput. Sci. Appl. Manag. Stud. 2018, 7, 1–6. [Google Scholar]
- Jha, K.; Doshi, A.; Patel, P.; Shah, M.A. Comprehensive Review on Automation in Agriculture using Artificial Intelligence. Artif. Intell. Agric. 2019, 2, 1–12. [Google Scholar] [CrossRef]
- Gulzar, Y.; Ünal, Z.; Aktaş, H.; Mir, M. Harnessing the Power of Transfer Learning in Sunflower Disease Detection: A Comparative Study. Agriculture 2023, 13, 1479. [Google Scholar] [CrossRef]
- Gulzar, Y. Fruit Image Classification Model Based on MobileNetV2 with Deep Transfer Learning Technique. Sustainability 2023, 15, 1906. [Google Scholar] [CrossRef]
- Dhiman, P.; Kaur, A.; Balasaraswathi, V.; Gulzar, Y.; Alwan, A.; Hamid, Y. Image Acquisition, Preprocessing and Classification of Citrus Fruit Diseases: A Systematic Literature Review. Sustainability 2023, 15, 9643. [Google Scholar] [CrossRef]
- Kalfas, I.; De Ketelaere, B.; Bunkens, K.; Saeys, W. Towards Automatic Insect Monitoring on Witloof Chicory Fields using Sticky Plate Image Analysis. Ecol. Inf. 2023, 75, 102037. [Google Scholar] [CrossRef]
- Yang, S.; Xing, Z.; Wang, H.; Dong, X.; Gao, X.; Liu, Z.; Zhang, X.; Li, S.; Zhao, Y. Maize-YOLO: A New High-Precision and Real-Time Method for Maize Pest Detection. Insects 2023, 14, 278–291. [Google Scholar] [CrossRef]
- Wu, X.; Zhan, C.; Lai, Y.-K.; Cheng, M.-M.; Yang, J. IP102: A Large-Scale Benchmark Dataset for Insect Pest Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019. [Google Scholar]
- Albanese, A.; Nardello, M.; Brunelli, D. Automated Pest Detection with DNN on the Edge for Precision Agriculture. IEEE J. Emerg. Sel. Top. Circuits Syst. 2021, 11, 458–467. [Google Scholar] [CrossRef]
- Wang, C.; Grijalva, I.; Caragea, D.; McCornack, B. Detecting Common Coccinellids Found in Sorghum Using Deep Learning Models. Sci. Rep. 2023, 13, 9748. [Google Scholar] [CrossRef] [PubMed]
- Salamut, C.; Kohnert, I.; Landwehr, N.; Pflanz, M.; Schirrmann, M.; Zare, M. Deep Learning Object Detection for Image Analysis of Cherry Fruit Fly (Rhagoletis cerasi L.) on Yellow Sticky Traps. Gesunde Pflanz. 2023, 75, 37–48. [Google Scholar] [CrossRef]
- Rustia, D.J.A.; Chao, J.J.; Chiu, L.-Y.; Wu, Y.-F.; Chung, J.-Y.; Hsu, J.-C.; Lin, T.-T. Automatic Greenhouse Insect Pest Detection and Recognition Based on a Cascaded Deep Learning Classification Method. J. Appl. Entomol. 2020, 145, 206–222. [Google Scholar] [CrossRef]
- Wang, Q.-J.; Zhang, S.-Y.; Dong, S.-F.; Zhang, G.-C.; Yang, J.; Li, R.; Wang, H.-Q. Pest24: A Large-Scale Very Small Object Data Set of Agricultural Pests for Multi-Target Detection. Comput. Electron. Agric. 2020, 175, 105585. [Google Scholar] [CrossRef]
- Li, W.; Wang, D.; Li, M.; Gao, Y.; Wu, J.; Yang, X. Field Detection of Tiny Pests from Sticky Trap Images Using Deep Learning in Agricultural Greenhouse. Comput. Electron. Agric. 2021, 183, 106048. [Google Scholar] [CrossRef]
- Hong, S.-J.; Nam, I.; Kim, S.-Y.; Kim, E.; Lee, C.-H.; Ahn, S.; Park, I.-K.; Kim, G. Automatic Pest Counting from Pheromone Trap Images Using Deep Learning Object Detectors for Matsucoccus thunbergianae Monitoring. Insects 2021, 12, 342–358. [Google Scholar] [CrossRef] [PubMed]
- Wang, R.; Jiao, L.; Xie, C.; Chen, P.; Du, J.; Li, R. S-rpn: Sampling-Balanced Region Proposal Network for Small Crop Pest Detection. Comput. Electron. Agric. 2021, 187, 106290. [Google Scholar] [CrossRef]
- Jiao, L.; Xie, C.; Chen, P.; Du, J.; Li, R.; Zhang, J. Adaptive Feature Fusion Pyramid Network for Multi-Classes Agricultural Pest Detection. Comput. Electron. Agric. 2022, 195, 106827. [Google Scholar] [CrossRef]
- Zhang, W.; Huang, H.; Sun, Y.; Wu, X. Agripest-YOLO: A Rapid Light-Trap Agricultural Pest Detection Method Based on Deep Learning. Front. Plant Sci. 2022, 13, 1079384. [Google Scholar] [CrossRef]
- Sava, A.; Ichim, L.; Popescu, D. Detection of Halyomorpha halys using Neural Networks. In Proceedings of the IEEE 8th International Conference on Control, Decision and Information Technologies (CoDIT), Istanbul, Turkey, 17–20 May 2022. [Google Scholar]
- Takimoto, H.; Sato, Y.; Nagano, A.J.; Shimizu, K.K.; Kanagawa, A. Using a Two-Stage Convolutional Neural Network to Rapidly Identify Tiny Herbivorous Beetles in the Field. Ecol. Inf. 2021, 66, 101466. [Google Scholar] [CrossRef]
- Ozdemir, D.; Kunduraci, M.S. Comparison of Deep Learning Techniques for Classification of the Insects in Order Level with Mobile Software Application. IEEE Access 2022, 10, 35675–35684. [Google Scholar] [CrossRef]
- Butera, L.; Ferrante, A.; Jermini, M.; Prevostini, M.; Alippi, C. Precise Agriculture: Effective Deep Learning Strategies to Detect Pest Insects. IEEE/CAA J. Autom. Sin. 2022, 9, 246–258. [Google Scholar] [CrossRef]
- Ahmad, I.; Yang, Y.; Yue, Y.; Ye, C.; Hassan, M.; Cheng, X.; Wu, Y.; Zhang, Y. Deep Learning Based Detector YOLOv5 for Identifying Insect Pests. Appl. Sci. 2022, 12, 10167. [Google Scholar] [CrossRef]
- Ratnayake, M.N.; Dyer, A.G.; Dorin, A. Tracking Individual Honeybees Among Wildflower Clusters with Computer Vision-Facilitated Pollinator Monitoring. PLoS ONE 2021, 16, e0239504. [Google Scholar] [CrossRef] [PubMed]
- Bjerge, K.; Alison, J.; Dyrmann, M.; Frigaard, C.E.; Mann, H.M.R.; Høye, T.T. Accurate Detection and Identification of Insects from Camera Trap Images with Deep Learning. PLOS Sustain. Transform. 2023, 2, e0000051. [Google Scholar] [CrossRef]
- Spanier, R. Pollination AI: Deep Learning Approach to Identify Pollinators and Their Taxa Using the YOLO Architecture. Ph.D. Thesis, RWTH Aachen University, Aachen, Germany, 2022. [Google Scholar]
- Bjerge, K.; Frigaard, C.; Karstoft, H. Motion Informed Object Detection of Small Insects in Time-lapse Camera Recordings. Sensors 2023, 23, 7242. [Google Scholar] [CrossRef]
- Venegas, P.; Calderon, F.; Riofrío, D.; Benítez, D.; Ramón, G.; Cisneros-Heredia, D.; Coimbra, M.; Rojo-Álvarez, J.-L.; Perez, N. Automatic Ladybird Beetle Detection Using Deep-Learning Models. PLoS ONE 2021, 16, e0253027. [Google Scholar] [CrossRef]
- Vega, M.; Benitez, D.; Perez, N.P.; Riofrio, D.; Ramón-Cabrera, G.; Cisneros-Heredia, D. Coccinellidae Beetle Specimen Detection Using Convolutional Neural Networks. In Proceedings of the IEEE Colombian Conference on Applications of Computational Intelligence (ColCACI), Cali, Colombia, 26–28 May 2021. [Google Scholar]
- Amarathunga, D.C.; Grundy, J.; Parry, H.; Dorin, A. Methods of Insect Image Capture and Classification: A Systematic Literature Review. Smart Agric. Technol. 2021, 1, 100023. [Google Scholar] [CrossRef]
- Cheng, X.; Zhang, Y.; Chen, Y.; Wu, Y.; Yue, Y. Pest Identification via Deep Residual Learning in Complex Background. Comput. Electron. Agric. 2017, 141, 351–356. [Google Scholar] [CrossRef]
- Kasinathan, T.; Singaraju, D.; Uyyala, S.R. Insect Classification and Detection in Field Crops using Modern Machine Learning Techniques. Inf. Proc. Agric. 2021, 8, 446–457. [Google Scholar] [CrossRef]
- Li, Y.; Yang, J. Few-Shot Cotton Pest Recognition and Terminal Realization. Comput. Electron. Agric. 2020, 169, 105240. [Google Scholar] [CrossRef]
- Nanni, L.; Maguolo, G.; Pancino, F. Insect Pest Image Detection and Recognition Based on Bio-Inspired Methods. Ecol. Inf. 2020, 57, 101089. [Google Scholar] [CrossRef]
- Pattnaik, G.; Shrivastava, V.K.; Parvathi, K. Transfer Learning-Based Framework for Classification of Pest in Tomato Plants. Appl. Artif. Intell. 2020, 34, 981–993. [Google Scholar] [CrossRef]
- Wang, R.J.; Zhang, J.; Dong, W.; Yu, J.; Xie, C.J.; Li, R.; Chen, T.J.; Chen, H.B. Crop Pests Image Classification Algorithm Based on Deep Convolutional Neural Network. Telkomnika 2017, 15, 1239–1246. [Google Scholar] [CrossRef]
- Wang, J.; Li, Y.; Feng, H.; Ren, L.; Du, X.; Wu, J. Common Pests Image Recognition Based on Deep Convolutional Neural Network. Comput. Electron. Agric. 2020, 179, 105834. [Google Scholar] [CrossRef]
- You, Y.; Zeng, Z.; Zheng, J.; Zhao, J.; Luo, F.; Chen, Y.; Xie, M.; Liu, X.; Wei, H. The Toxicity Response of Coccinella septempunctata L. (Coleoptera: Coccinellidae) after Exposure to Sublethal Concentrations of Acetamiprid. Agriculture 2022, 12, 1642. [Google Scholar] [CrossRef]
- Ovsyannikova, E.I. Zabrus tenebrioides Goeze-Corn Ground Beetle. 2008. Available online: http://agroatlas.ru/en/content/pests/Zabrus_tenebrioides/index.html (accessed on 7 April 2023).
- Afonin, A.N.; Greene, S.L.; Dzyubenko, N.I.; Frolov, A.N. Interactive Agricultural Ecological Atlas of Russia and Neighboring Countries. Economic Plants and their Diseases, Pests and Weeds. 2008. Available online: http://www.agroatlas.ru (accessed on 7 April 2023).
- Ovsyannikova, E.I.; Grichanov, I.Y. Opatrum sabulosum (L.)-Darkling Beetle. 2008. Available online: http://agroatlas.ru/en/content/pests/Opatrum_sabulosum/index.html (accessed on 7 April 2023).
- Fătu, A.-C.; Dinu, M.M.; Andrei, A.M. Susceptibility of some melolonthine scarab species to entomopathogenic fungus Beauveria brongniartii (Sacc.) Petch and Metarhizium anisopliae (Metsch.). Sci. Bull. Ser. F Biotech. 2018, 22, 42–49. [Google Scholar]
- Grozea, I.; Trusca, R.; Virteiu, A.M.; Stef, R.; Butnariu, M. Interaction between Diabrotica virgifera virgifera and host plants determined by feeding behavior and chemical composition. Rom. Agric. Res. 2017, 34, 329–337. [Google Scholar]
- CABI. Diabrotica virgifera virgifera (Western Corn Rootworm). 2021. Available online: https://www.cabidigitallibrary.org/doi/full/10.1079/cabicompendium.18637 (accessed on 24 May 2023).
- Franklin, D. NVIDIA: DNN Vision Library (Jetson-Inference): detectNet. 2023. Available online: https://rawgit.com/dusty-nv/jetson-inference/master/docs/html/group__detectNet.html (accessed on 14 July 2023).
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.-Y.; Berg, A. SSD: Single Shot MultiBox Detector. In Proceedings of the 14th European Conference on Computer Vision–ECCV 2016, Amsterdam, The Netherlands, 11–14 October 2016. [Google Scholar]
- Teng, T.W.; Veerajagadheswar, P.; Ramalingam, B.; Yin, J.; Mohan, R.E.; Gómez, B.F. Vision Based Wall Following Framework: A Case Study with HSR Robot for Cleaning Application. Sensors 2020, 20, 3298. [Google Scholar] [CrossRef]
- Girshick, R. Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015. [Google Scholar]
- Franklin, D. SSD-Based Object Detection in PyTorch: Model Training. 2023. Available online: https://github.com/dusty-nv/pytorch-ssd/blob/master/train_ssd.py (accessed on 14 July 2023).
- Franklin, D. SSD-Based Object Detection in PyTorch: Model Evaluation. 2023. Available online: https://github.com/dusty-nv/pytorch-ssd/blob/master/eval_ssd.py (accessed on 14 July 2023).
- Franklin, D. SSD-Based Object Detection in PyTorch: Export ONNX. 2023. Available online: https://github.com/dusty-nv/pytorch-ssd/blob/master/onnx_export.py (accessed on 14 July 2023).
- Nelson, J. What is TensorRT. 2021. Available online: https://blog.roboflow.com/what-is-tensorrt/ (accessed on 21 June 2021).
- NVIDIA TensorRT. 2023. Available online: https://docs.nvidia.com/deeplearning/tensorrt/pdf/TensorRT-Developer-Guide.pdf (accessed on 21 June 2023).
- Franklin, D. SSD-Based Object Detection in PyTorch: Detectnet. 2023. Available online: https://github.com/dusty-nv/jetson-inference/blob/master/python/examples/detectnet.py (accessed on 21 July 2023).
- Open Images Dataset V7 and Extensions. 2022. Available online: https://storage.googleapis.com/openimages/web/factsfigures_v7.html (accessed on 21 June 2023).
- Coccinella Linnaeus, 1758 in GBIF Secretariat. GBIF Backbone Taxonomy. Checklist Dataset accessed via GBIF.org. Available online: https://www.gbif.org/search?q=Coccinella%20sp. (accessed on 14 July 2023).
- Anoxia villosa (Fabricius, 1781) in GBIF Secretariat. GBIF Backbone Taxonomy. Checklist Dataset accessed via GBIF.org. Available online: https://www.gbif.org/species/1054733 (accessed on 14 July 2023).
- Diabrotica virgifera LeConte, 1868 in GBIF Secretariat. GBIF Backbone Taxonomy. Checklist Dataset accessed via GBIF.org. Available online: https://www.gbif.org/species/1048497 (accessed on 14 July 2023).
- Opatrum sabulosum (Linnaeus, 1761) in GBIF Secretariat. GBIF Backbone Taxonomy. Checklist Dataset accessed via GBIF.org. Available online: https://www.gbif.org/species/4454749 (accessed on 14 July 2023).
- Zabrus tenebrioides (Goeze, 1777) in GBIF Secretariat. GBIF Backbone Taxonomy. Checklist Dataset accessed via GBIF.org. Available online: https://www.gbif.org/species/4473277 (accessed on 14 July 2023).
- GBIF.org, GBIF Home Page. 2023. Available online: https://www.gbif.org (accessed on 14 July 2023).
- Everingham, M.; Ali Eslami, S.M.; Van Gool, L.; Williams, C.K.I.; Winn, J.; Zisserman, A. The PASCAL Visual Object Classes Challenge: A Retrospective. Int. J. Comput. Vision 2014, 111, 98–136. [Google Scholar] [CrossRef]
- NVIDIA Transfer Learning Toolkit for Intelligent Video Analytics-Getting Started Guide. 2020. Available online: https://docs.nvidia.com/metropolis/TLT/archive/tlt-10/pdf/Transfer-Learning-Toolkit-Getting-Started-Guide-IVA.pdf (accessed on 24 May 2023).
- Geron, A. Hands-on Machine Learning with Scikit-Learn, Keras, and TensorFlow, 2nd ed.; O’Reilly Media: Sebastopol, CA, USA, 2019; p. 491. [Google Scholar]
- BBirgit, iNaturalist Contributors, iNaturalist (2023). iNaturalist Research-Grade Observations. iNaturalist.org. Occurrence dataset accessed via GBIF.org. 2023. Available online: https://www.gbif.org/occurrence/3338144902 (accessed on 30 July 2023).
- Miquet, A. iNaturalist Contributors, iNaturalist (2023). iNaturalist Research-Grade Observations. iNaturalist.org. Occurrence Dataset accessed via GBIF.org. 2023. Available online: https://www.gbif.org/occurrence/4039229776 (accessed on 30 July 2023).
- Ferreira, R. iNaturalist Contributors, iNaturalist (2023). iNaturalist Research-Grade Observations. iNaturalist.org. Occurrence Dataset accessed via GBIF.org. 2023. Available online: https://www.gbif.org/occurrence/4121193187 (accessed on 30 July 2023).
- Mobbini. iNaturalist Contributors, iNaturalist (2023). iNaturalist Research-Grade Observations. iNaturalist.org. Occurrence Dataset accessed via GBIF.org. 2020. Available online: https://www.gbif.org/occurrence/2901580832 (accessed on 30 July 2023).
- Jeltov, P. iNaturalist Contributors, iNaturalist (2023). iNaturalist Research-Grade Observations. iNaturalist.org. Occurrence Dataset accessed via GBIF.org. 2023. Available online: https://www.gbif.org/occurrence/4075854369 (accessed on 30 July 2023).
- Le Mao, P. iNaturalist Contributors, iNaturalist (2023). iNaturalist Research-Grade Observations. iNaturalist.org. Occurrence Dataset accessed via GBIF.org. 2023. Available online: https://www.gbif.org/occurrence/4018220177 (accessed on 30 July 2023).
- Levon, A. iNaturalist Contributors, iNaturalist (2023). iNaturalist Research-Grade Observations. iNaturalist.org. Occurrence Dataset accessed via GBIF.org. 2022. Available online: https://www.gbif.org/occurrence/3903140984 (accessed on 30 July 2023).
- Barileva, N. iNaturalist Contributors, iNaturalist (2023). iNaturalist Research-grade Observations. iNaturalist.org. Occurrence Dataset accessed via GBIF.org. 2023. Available online: https://www.gbif.org/occurrence/4014953025 (accessed on 30 July 2023).
- Danielle. iNaturalist Contributors, iNaturalist (2023). iNaturalist Research-grade Observations. iNaturalist.org. Occurrence Dataset accessed via GBIF.org. 2023. Available online: https://www.gbif.org/occurrence/4018183044 (accessed on 30 July 2023).
- Mednii, A. iNaturalist Contributors, iNaturalist (2023). iNaturalist Research-grade Observations. iNaturalist.org. Occurrence Dataset accessed via GBIF.org. 2023. Available online: https://www.gbif.org/occurrence/4091424606 (accessed on 30 July 2023).
- Fogliato, S. iNaturalist Contributors, iNaturalist (2023). iNaturalist Research-grade Observations. iNaturalist.org. Occurrence Dataset accessed via GBIF.org. 2023. Available online: https://www.gbif.org/occurrence/3874204663 (accessed on 30 July 2023).
Study | Model/Backbone | Dataset | Performance Metrics |
---|---|---|---|
Kalfas et al. [20] | YOLOv5 | 731 sticky plates containing 74,616 bounding boxes | mAP = {0.76 across the full dataset; 0.73 woolly aphid; 0.86 chicory leaf-miner; 0.61 grass-fly; 0.67 wasp} |
Yang et al. [21] | YOLOv7 with insertion of CSPResNeXt-50 and VoVGSCSP modules | 4533 annotated images; 13 maize pests | mAP@0.5 = 76.3%; mAP@0.5:0.95 = 51.2%; recall = 77.3%; F1 = 75.2% |
Wu et al. [22] | Faster R-CNN, FPN, SSD300, RefineDet, YOLOv3 | IP102 dataset (18,983 images) | FPN with ResNet-50: mAP@0.5 = 54.93%, YOLOv3: mAP@0.5 = 50.64% |
Albanese et al. [23] | Modified LeNet-5, VGG16, MobileNetV2 | 4400 images; 2 classes: codling moth and general insect | LeNet-5: Acc. 96.1%, Prec. 99.6% VGG16: Acc. 97.9%, Prec. 99.6% MobileNetV2: Acc. 95.1%, Prec. 98.5% |
Wang et al. [24] | Faster R-CNN-FPN with ResNet-50 or ResNet-101, YOLOv5, YOLOv7 | 4865 images; seven species of coccinellids | YOLOv7: mAP@0.5 up to 97.31 and AP up to 74.5; YOLOv5: mAP@0.5 up to 96 and AP up to 73.8; Faster R-CNN: mAP@0.5 up to 94.3 and AP up to 65.6 |
Salamut et al. [25] | Faster R-CNN, YOLOv5 [26] | 1600 annotated images of cherry fruit flies | Faster R-CNN: mAP@0.5 = 0.88; YOLOv5: mAP@0.5 = 0.76 |
Rustia et al. (2021) [26] | Multi-stage deep learning method | Greenhouse images | F1-scores up to 0.92 |
Wang et al. [27] | Faster R-CNN/VGG-16, Cascade R-CNN/ResNet-50-FPN, YOLOv3/Darknet-53 | Pest24 dataset; 24 field pests; 25,000 images | YOLOv3: mAP@0.5 = 59.79%; Cascade R-CNN: mAP@0.5 = 57.23%; Faster R-CNN: mAP@0.5 = 51.10% |
Li et al. [28] | Faster R-CNN (COCO pre-trained) | 1500 sticky trap images; 2 classes: whitefly and thrips | Faster R-CNN (pre-trained): NA (More accurate than direct training) |
Hong et al. (2021) [29] | AI-based pest counting method | Black pine bast scale images | Counting accuracy = 95% |
Wang et al. [30] | Improved Faster R-CNN/Attention | AgriPest21 dataset; 21 types of pests; 25,000 images | Improved Faster R-CNN: mAP = 78.7% |
Jiao et al. [31] | Faster R-CNN/ResNet50 | AgriPest21 dataset 21 types of pests; 25,000 images | Faster R-CNN: mAP = 77.4% |
Zhang et al. [32] | YOLO models with attention mechanism | Pest24 dataset; 25,000 images of small pests | AgriPest-YOLO: mAP@0.5 = 71.3% |
Sava et al. [33] | YOLOv5m | Dataset from Maryland Biodiversity Project | YOLOv5m: mAP = 99.2% |
Takimoto et al. [34] | Faster R-CNN | Web and field-collected images of herbivorous beetles | Faster R-CNN: NA |
Ozdemir and Kunduraci [35] | Faster R-CNN (Inception-v3) | 25,820 training images of various insect orders | Faster R-CNN: NA |
Butera et al. [36] | Faster R-CNN (MobileNet-v3) | 36,000 web images of Beetle-type pests and non-harmful beetles | Faster R-CNN: mAP = 92.66% |
Ahmad et al. [37] | YOLO models | 7046 images with 23 pests | YOLOv5-X: mAP@0.5 = 98.3%, mAP@0.5:0.95 = 79.8% |
Ratnayake et al. [38] | YOLOv2 and hybrid approach | 22,260 video frames with honeybees in wildflower clusters | HyDaT: Detection rate = 86.6%, YOLOv2: Detection rate = 60.7% |
Bjerge et al. [39] | YOLOv5 | 29,960 beneficial insects | YOLOv5: mAP@0.5:0.05:0.95 = 0.592, F1-score = 0.932 |
Spanier [40] | YOLOv5 variant | 17,000 pollinator insect images | YOLOv5 variant: Accuracy = 0.9294, F1-score = 0.9294 |
Bjerge et al. [41] | YOLOv5, Faster R-CNN | 100,000 annotated images of small insects | YOLOv5: mAP@0.5 = 0.924, Faster R-CNN: mAP@0.5 = 0.900 |
Venegas et al. [42] | Deep CNN and traditional methods | 2300 coccinellid beetle images | CNN model AUC = 0.977 |
Vega et al. [43] | CNN with weighted Hausdorff distance | 2633 beetle images | Mean accuracy = 94.30% |
Family/Species/ Common Name | Body Length (mm) | Distribution | Flight Period/ Optimum Temperature Range | Affected Plants | References |
---|---|---|---|---|---|
Carabidae/ Zabrus tenebrioides Goeze, 1777 (Corn Ground Beetle) | 14–16 | England, Southern Sweden, Northern Africa, Asia Minor, Cyprus, Ukraine, Moldova, Transcaucasia | May–June/20–26 °C | Winter wheat, corn, rye, barley, oat | [53,54] |
Tenebrionidae/ Opatrum sabulosum Linnaeus, 1761 (Darkling Beetle) | 7–10 | Western Europe, Northwestern Iran, Northwestern China; the European part of the former USSR, the Caucasus, South and Middle Siberia, Kazakhstan, Central Asia | -/they feed actively at 17–20 °C; below 25 °C they prefer dry plants, while above 27 °C they eat green plants almost exclusively | Polyphagous (corn, sugar beet, flax, sunflower, tobacco, cotton, pumpkin, fennel, anise, castor-oil plant, safflower, buckwheat, bean) | [54,55] |
Scarabaeidae/ Anoxia villosa (Fabricius, 1781) (Cockchafer, Steppe Beetle) | 20–25 | Europe, especially Mediterranean area | May–August, especially July/- | Polyphagous (corn, sunflower, wheat, barley, woody plants: vine, orchards, forest nurseries) | [56] |
Chrysomelidae/ Diabrotica virgifera virgifera LeConte, 1868 (Western Corn Rootworm) | ~5 | Europe, North America | End of June to mid-October/high temperatures, but not above 30 °C | Preferred: corn; also white squash, alfalfa, clover, rape, bean, soybean, sunflower | [57,58] |
GPU | NVIDIA® Ampere architecture; 2048 NVIDIA CUDA cores; 64 Tensor cores |
CPU | 12-core Arm Cortex-A78AE v8.2 64-bit CPU; 3 MB L2 + 6 MB L3 |
DL Accelerator | 2× NVDLA v2.0 |
Vision Accelerator | PVA v2.0 |
Memory | 64 GB 256-bit LPDDR5; 204.8 GB/s |
Storage | 64 GB eMMC 5.1 + 2 TB SSD, model Samsung 980 PRO Gen.4 NVMe (added) |
Dataset Class | Images Before Augmentation | Images After Augmentation | Relative Distribution
---|---|---|---
Anoxia | 154 | 462 | 17.4% |
Ladybug | 173 | 519 | 19.6% |
Diabrotica | 206 | 618 | 23.3% |
Opatrum | 180 | 536 | 20.2% |
Zabrus | 171 | 513 | 19.5% |
Total | 884 | 2648 | 100%
Class Name | Number of Images | Relative Distribution | Train (75%) | Test (10%) | Validation (15%) |
---|---|---|---|---|---|
Anoxia | 462 | 17.4% | 347 | 46 | 69 |
Diabrotica | 618 | 23.3% | 463 | 63 | 92 |
Ladybug | 519 | 19.6% | 389 | 52 | 78 |
Opatrum | 536 | 20.2% | 402 | 54 | 80 |
Zabrus | 513 | 19.5% | 385 | 51 | 77 |
Total | 2648 | 100% | 1986 | 266 | 396 |
Trained Model | Accuracy: Anoxia | Accuracy: Diabrotica | Accuracy: Ladybug | Accuracy: Opatrum | Accuracy: Zabrus | mAP | Notes
---|---|---|---|---|---|---|---
SSD-MB-v1 (baseline) | 0.8432 | 0.7323 | 0.8276 | 1.0000 | 0.8030 | 0.841 | Retrained once, on non-augmented custom dataset
SSD-MB-v1 TL on custom dataset | 0.9866 | 0.7259 | 0.8737 | 0.9240 | 0.9475 | 0.887 | Retrained once, on augmented custom dataset |
SSD-MB-v1 2-step TL | 0.9072 | 0.7770 | 0.9436 | 0.9056 | 0.9033 | 0.908 | Two-step transfer learning |
SSD-MB-v2 Lite TL on custom dataset | 0.9912 | 0.7273 | 0.8906 | 0.9091 | 0.9030 | 0.884 | Retrained once, on augmented custom dataset |
SSD-MB-v2-Lite 2-step TL | 0.9851 | 0.8066 | 0.8939 | 0.9514 | 0.9053 | 0.892 | Two-step transfer learning |
Trained Model | Rank: Anoxia | Rank: Diabrotica | Rank: Ladybug | Rank: Opatrum | Rank: Zabrus | Rank: mAP
---|---|---|---|---|---|---
SSD-MB-v1 (baseline) | - | - | - | (1) | - | - |
SSD-MB-v1 TL on custom dataset | (2) | - | - | (2) | (1) | - |
SSD-MB-v1 2-step TL | - | (2) | (1) | - | - | (1) |
SSD-MB-v2 Lite TL on custom dataset | (1) | - | - | - | - | - |
SSD-MB-v2-Lite 2-step TL | (3) 0.6% less than the 1st place | (1) | (2) 5.3% less than the 1st place | (1) excluding the baseline | (2) 4.5% less than the 1st place | (2) 1.8% less than the 1st place