Article

A Deep-Learning-Based Detection Approach for the Identification of Insect Species of Economic Importance

Michael Tannous, Cesare Stefanini and Donato Romano
1 The BioRobotics Institute, Sant’Anna School of Advanced Studies, Viale Rinaldo Piaggio 34, 56025 Pontedera, Italy
2 Department of Excellence in Robotics and AI, Sant’Anna School of Advanced Studies, 56127 Pisa, Italy
* Author to whom correspondence should be addressed.
Insects 2023, 14(2), 148; https://doi.org/10.3390/insects14020148
Submission received: 13 November 2022 / Revised: 22 January 2023 / Accepted: 27 January 2023 / Published: 31 January 2023
(This article belongs to the Special Issue Breakthrough Technologies for Future Entomology)

Simple Summary

This study aims at developing a machine-learning-based classification approach to recognize insect species of economic importance. Two tephritid pest species with similar shape and locomotory patterns (i.e., the Mediterranean fruit fly Ceratitis capitata and the olive fruit fly Bactrocera oleae) were used as model organisms. The proposed method, based on a convolutional neural network (CNN), accurately detects and discriminates moving C. capitata and B. oleae adults in real time. These results contribute to the development of autonomous pest monitoring methods that allow tailored measures to be applied instantaneously and remotely. Overall, this study promotes sustainable and efficient crop protection approaches based on integrated pest management and precision techniques.

Abstract

Artificial Intelligence (AI) and automation are fostering more sustainable and effective solutions for a wide spectrum of agricultural problems. Pest management is a major challenge for crop production that can benefit from machine learning techniques to detect and monitor specific pests and diseases. Traditional monitoring is labor intensive, time demanding, and expensive, while machine learning paradigms may support cost-effective crop protection decisions. However, previous studies mainly relied on morphological images of stationary or immobilized animals. Other features related to living animals behaving in the environment (e.g., walking trajectories, different postures, etc.) have been overlooked so far. In this study, we developed a detection method based on a convolutional neural network (CNN) that can accurately classify, in real time, two tephritid species (Ceratitis capitata and Bactrocera oleae) free to move and change their posture. Results showed successful real-time automatic detection (i.e., a precision rate of about 93%) of C. capitata and B. oleae adults using a camera sensor at a fixed height. In addition, the similar shape and movement patterns of the two insects did not interfere with the network precision. The proposed method can be extended to other pest species, requiring minimal data pre-processing and a similar architecture.

1. Introduction

The recent rapid progress in Artificial Intelligence (AI) and automation is producing a new wave of technological advancement with tangible impact on social, health, industrial, and environmental contexts [1,2,3]. AI offers robust applicability to complex problems, with performance that, in some scenarios, rivals human results. Agriculture represents an application field of crucial importance for AI, given the benefits that machine learning strategies may provide in facing challenges related to pest and disease monitoring, weed management, chemical use, irrigation, yield prediction, precision livestock farming, and more [2,4,5,6]. AI techniques can provide the best-fitting solution for specific agricultural problems, which are generally highly dynamic and do not lend themselves to a single common solution [2,7].
Crop protection is one of the most expensive practices in agribusiness, often owing to improper strategies, as well as to the difficulty of adequately recognizing and preventing severe parasite infestations and pathogen infections [8,9]. In addition, growing environmental concern and public demand for reducing the use of toxic insecticides make pest management even more challenging [10]. Integrated pest management (IPM) aims at ensuring more sustainable and efficient crop protection programs based on adequate strategies to control pests [8]. One of the core components of IPM is the monitoring of pest activity and density [11]. However, monitoring operations are still mainly based on human experts who analyse traps in loco, or their digital images, to recognize and count pests [12]. This monitoring approach is labour intensive, time demanding, and expensive [13]. Furthermore, the lack of a standardized counting process frequently makes monitoring operations error prone [14].
The option of autonomously and remotely monitoring pest organisms is gaining momentum due to the advantage of intervening to protect crops in real time [15,16,17,18]. Several previous studies have used insect specimens as models, since these are generally well preserved and their images can be captured at high resolution under ideal laboratory conditions [19,20,21,22]. In other research, insects collected in nature have been classified under laboratory conditions [23,24,25]. Image quality was lower than in the specimen case, but the laboratory environment made it possible to adjust several experimental settings. Different machine learning models have been used, such as support vector machines (SVM) [21], artificial neural networks (ANN) [21,26], k-nearest neighbours (KNN) [27], and ensemble methods (e.g., adaptive boosting [28,29]). From a general perspective, the state of the art for insect ecology investigation and monitoring includes four main technologies [30]: computer vision, acoustic monitoring [31], radar technologies [32,33], and molecular methods [34].
However, besides the morphology of stationary or immobilized animals, other features related to living animals behaving in the environment (e.g., walking trajectories, different postures, etc.) have been overlooked so far. Developing a technology that considers these features can strongly contribute to improving automatic monitoring in real scenarios. Furthermore, localizing and classifying insects that behave naturally in the environment can pave the way to a new generation of monitoring stations that are more selective than traditional sticky traps. For instance, Bjerge et al. [35] constructed a deep-learning-based system that performs real-time classification and tracking of pollinators. Sticky traps, by contrast, also capture non-target and/or beneficial organisms [36,37,38,39], with obvious negative effects on biodiversity and ecosystem stability.
Herein, we developed a detection method based on a convolutional neural network (CNN) that can accurately classify, in real time, living true fruit fly species (Diptera: Tephritidae) moving and constantly changing their posture. Specifically, an artificial neural network (ANN) was trained on two species belonging to the family Tephritidae: the Mediterranean fruit fly, Ceratitis capitata Wiedemann, a polyphagous pest attacking more than 200 fruit species [40], and the olive fruit fly, Bactrocera oleae (Rossi), the major pest of commercial olives worldwide [41]. Our approach can be extended to other pest species with minimal data pre-processing. In a broader context, the technology can be used for a wide range of applications with other species of economic and medical importance, such as pollinators and mosquitoes [42].

2. Materials and Methods

2.1. Ethics Statement

This research is compliant with the guidelines for the treatment of animals in behavioral research and teaching [43], as well as with the Italian (D.M. 116192) and the European Union regulations [44].

2.2. Animal Rearing

C. capitata and B. oleae adult flies were maintained in separate cylindrical PVC cages under controlled conditions (21 ± 1 °C, 55 ± 5% relative humidity, 16:8 h light:dark) at The BioRobotics Institute. Adults were fed a dry diet of yeast extract and sucrose mixed at a ratio of 1:10 (w:w), while water was provided through a cotton wick.

2.3. Image Acquisition Setup

Flies were individually transferred into a transparent petri dish (60 mm diameter) turned upside down to prevent escape. An equal proportion of males and females of each species was tested. Although sexual dimorphism exists in both species, the colours and morphology of the two species are clearly distinguishable regardless of sex.
The petri dish was framed by an image sensor (specifically, a camera module with a 1/2.7” colour CMOS sensor, Full-HD maximum resolution, and 60 FPS) mounting fixed 2.8–12 mm optics (i.e., without automatic focus), Figure 1. Flies were allowed to move freely inside the plate, enabling the camera to capture each insect in different poses and positions in order to build a relevant dataset. At the maximum resolution (i.e., 1920 × 1080), 1 pixel covers 0.6 mm.
Flies were recorded both individually and in pairs (i.e., one individual of each species) in the same petri plate, without additional illumination other than the laboratory lights, Figure 2. Recording time for each video varied from a few seconds up to 3 min depending on the insects’ behaviour; for instance, some recordings were stopped when the insect remained motionless for several seconds. Video recording simplifies the data collection phase, avoiding the need to set a timer for image capture, which can lead to blurred images or missed insect motion. In addition, video recordings are easier to analyse than single images and faster to trim where insects are motionless. The image selection criteria maximized the variety of insect poses and minimized near-duplicates to improve the training process.
Although C. capitata and B. oleae adult flies are chromatically and morphologically distinct, as shown in Figure 2, these differences are reduced in the captured images given the insects’ small size (i.e., 4–6 mm long). Hence, the images display two similar-looking insects, making the recognition process challenging.

2.4. Dataset Creation and Pre-Processing

Frames were extracted from the captured videos, and the region of interest (ROI) of each fly was manually labelled in multiple poses to obtain high-quality annotations. The datasets were then pre-processed to generate different resolution sizes for better network generalization. Multi-scale feature extraction is a method to improve the training accuracy of neural network models: the input image is first scaled to several different sizes, features are extracted at each scale, and the extracted features are assembled into a feature pyramid that is fed to the neural network model. In this work, three resolutions were used: 1920 × 1080, 640 × 360, and 416 × 234.
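As an illustration only (not the authors’ exact pipeline), the following sketch shows how frames could be pulled from a recorded video with OpenCV and saved at the three resolutions listed above; the file names and the frame-sampling interval are assumptions.

```python
# Illustrative sketch: extract frames from a recorded video and save each
# kept frame at the three resolutions used in this work.
import os
import cv2

VIDEO_PATH = "fly_recording.mp4"                 # hypothetical video file
SCALES = [(1920, 1080), (640, 360), (416, 234)]  # resolutions from Section 2.4

for w, h in SCALES:
    os.makedirs(f"frames/{w}x{h}", exist_ok=True)

cap = cv2.VideoCapture(VIDEO_PATH)
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Keep every 30th frame to limit near-duplicate poses (arbitrary choice).
    if frame_idx % 30 == 0:
        for w, h in SCALES:
            resized = cv2.resize(frame, (w, h), interpolation=cv2.INTER_AREA)
            cv2.imwrite(f"frames/{w}x{h}/frame_{frame_idx:06d}.jpg", resized)
    frame_idx += 1
cap.release()
```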
The dataset comprised 912 images, split into a 70% training set, a 20% validation set, and a 10% testing set. The training set was almost equally distributed, with 49% of the images showing C. capitata and 51% showing B. oleae. Frames used for training included only individual insects in the petri dish; the dataset with multiple insects in the same image, called the performance set (914 images), was used afterwards to assess the network performance (Table 1). Since the results are evaluated on the performance set, the class distributions of the validation and testing sets are irrelevant. A schematic workflow of the developed approach is shown in Figure 3.
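A minimal sketch of the 70/20/10 split described above, assuming a flat list of annotated image paths; the random seed is arbitrary and integer rounding may differ slightly from the exact counts reported in Table 1.

```python
# Shuffle and split labelled image paths into train/val/test subsets.
import random

def split_dataset(image_paths, seed=0):
    rng = random.Random(seed)
    paths = list(image_paths)
    rng.shuffle(paths)
    n_train = int(0.7 * len(paths))
    n_val = int(0.2 * len(paths))
    return {
        "train": paths[:n_train],
        "val": paths[n_train:n_train + n_val],
        "test": paths[n_train + n_val:],
    }

# Example: with 912 images this yields roughly 638/182/92 images per subset.
```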

2.5. YOLO Network

Convolutional neural networks of the YOLO (You Only Look Once) family [45,46,47] are one-stage object detection systems that are computationally inexpensive, achieve good performance metrics, and outperform other target recognition algorithms [48]. These networks have been used in a variety of recognition tasks: apple detection during different growth stages [49], detection of uneaten feed pellets in underwater images for aquaculture [50], simultaneous detection and classification of breast masses in digital mammograms [51], automatic license plate recognition [52], and even medical face mask detection in COVID-19 scenarios [53]. Compared with the Faster R-CNN network, the YOLO network transforms detection into a regression problem without requiring region proposals. The network applies a single CNN to the entire image, divides the input image into sub-regions, and predicts multiple bounding box coordinates and class probabilities directly through regression, thereby increasing detection speed.
The YOLOv5 network [54] is a version of the YOLO architecture series. This model has proven to significantly improve detection accuracy and inference speed (the fastest configurations reaching up to 140 frames per second); these attributes are of great importance when scaling the system to bigger datasets and real-time detection. Moreover, the weight file of the target detection model is small, nearly 90% smaller than that of the previous YOLOv4, meaning that YOLOv5 is lightweight and suitable for deployment on embedded devices implementing real-time detection. For the sake of completeness, YOLOv5 comprises four architectures: YOLOv5s, YOLOv5m, YOLOv5l, and YOLOv5x. Each architecture has a different number of feature extraction modules and convolution kernels; in short, model size and number of parameters increase across the four architectures in that order. Since only two targets are to be identified in this study, and the recognition model has strict real-time and lightweight requirements, we based our experimental architecture on YOLOv5s. The network model size is about 14 MB and its inference time with PyTorch is 2.2 ms.
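For context, the snippet below sketches how a trained YOLOv5s model can be loaded and queried through the standard Ultralytics torch.hub interface; the weight file name ("best.pt") and test image are hypothetical, and this is not necessarily the authors’ deployment code.

```python
import torch

# "custom" loads user-trained weights (e.g., a two-class fly detector);
# "yolov5s" would instead load the COCO-pretrained small model.
model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")

results = model("test_frame.jpg")        # single forward pass (detection as regression)
results.print()                          # summary of classes, confidences, and speed
detections = results.pandas().xyxy[0]    # one row per detected bounding box
print(detections[["name", "confidence", "xmin", "ymin", "xmax", "ymax"]])
```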
The network architecture of YOLOv5, shown in Figure 4, is divided into three main parts: Backbone, Neck, and Head, built on CSPDarknet (a cross stage partial network within Darknet), PANet (path aggregation network), and the YOLO layer, respectively. Images are fed into CSPDarknet for feature extraction, PANet then performs feature fusion, and, lastly, the YOLO layer returns the detection results: class, probability, location, and size.

2.6. Network Training and Testing Results

The network was built with PyTorch [55] and trained on an Intel Core i7-7820X with a 3.60 GHz processor, 32 GB RAM, a 500 GB SSD, and an 8 GB NVIDIA GTX 1070, running Windows OS. Results are classified as True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN) [56]. TP and TN indicate correctly recognized Mediterranean fruit fly and olive fruit fly detections, respectively, while the FP and FN classes capture detection errors, i.e., false identifications and missed detections. These binary classification counts are collected in the so-called confusion matrix, which allows quick visualization of the algorithm performance, as shown in Table 2.
Three indices are taken into account to assess the algorithm performance. The first is the precision percentage, or positive predictive value (PPV), calculated as:
$$\mathrm{PPV}\,[\%] = \frac{TP}{TP + FP} \times 100$$
which approaches its highest values when the network returns more relevant than irrelevant results. The second index is the sensitivity, true positive rate (TPR), or recall, defined as
$$\mathrm{TPR}\,[\%] = \frac{TP}{TP + FN} \times 100 = 100 - \mathrm{FNR}\,[\%]$$
where $\mathrm{FNR}\,[\%]$ is the false negative rate.
Finally, the accuracy in unbalanced classes is measured by the F1 score (also F-score or F-measure) defined as:
$$F_1 = 2 \cdot \frac{\mathrm{precision} \times \mathrm{sensitivity}}{\mathrm{precision} + \mathrm{sensitivity}} = \frac{2\,TP}{2\,TP + FP + FN}$$
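To make the definitions concrete, the following sketch computes the three indices from raw counts; the example values are taken from the Mediterranean fruit fly column of the training confusion matrix (Table 4) for illustration only and are not the epoch-level metrics plotted in Figure 6.

```python
# Compute PPV, TPR, and F1 from raw TP/FP/FN counts.
def detection_metrics(tp: int, fp: int, fn: int):
    ppv = 100.0 * tp / (tp + fp)         # precision / positive predictive value [%]
    tpr = 100.0 * tp / (tp + fn)         # sensitivity / recall [%]
    f1 = 2.0 * tp / (2 * tp + fp + fn)   # F1 score
    return ppv, tpr, f1

# Illustrative counts from Table 4: TP = 213, FP = 16, FN = 11
ppv, tpr, f1 = detection_metrics(213, 16, 11)
print(f"PPV = {ppv:.1f}%, TPR = {tpr:.1f}%, F1 = {f1:.2f}")
```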
YOLOv5 model evaluation includes the mean average precision (mAP), detection time, intersection over union (IoU), and floating-point operations (FLOPS). IoU measures the overlap ratio between the predicted bounding box (pred) and the ground-truth box (gt) [57]:
$$\mathrm{IoU} = \frac{\mathrm{Area}_{pred} \cap \mathrm{Area}_{gt}}{\mathrm{Area}_{pred} \cup \mathrm{Area}_{gt}}$$
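A minimal implementation of this overlap measure for axis-aligned boxes in (x1, y1, x2, y2) format, a common convention assumed here for illustration, could look as follows.

```python
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (zero area if the boxes do not overlap)
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

# Example: a predicted box shifted by half its width against the ground truth.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 0.333...
```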
The YOLO training options are summarized in Table 3; network parameters not listed there are set to zero.
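For readers reproducing the setup, the sketch below shows one way the Table 3 values could be written into a YOLOv5-style hyperparameter file; the key names follow the standard Ultralytics hyp.yaml convention, and this mapping is our assumption rather than material from the paper.

```python
import yaml

hyp = {
    "lr0": 0.01, "lrf": 0.2,
    "momentum": 0.973, "weight_decay": 0.0005,
    "warmup_epochs": 3.0, "warmup_momentum": 0.8, "warmup_bias_lr": 0.1,
    "box": 0.05, "cls": 0.5, "cls_pw": 1.0, "obj": 1.0, "obj_pw": 1.0,
    "iou_t": 0.2, "anchor_t": 4.0,
    "hsv_h": 0.015, "hsv_s": 0.7, "hsv_v": 0.4,
    "translate": 0.1, "scale": 0.5, "fliplr": 0.5,
    # Remaining augmentation parameters left at zero, as stated above
    "degrees": 0.0, "shear": 0.0, "perspective": 0.0,
    "flipud": 0.0, "mosaic": 0.0, "mixup": 0.0,
}

with open("hyp_flies.yaml", "w") as f:   # hypothetical file name
    yaml.safe_dump(hyp, f, sort_keys=False)
```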

3. Results

The training confusion matrix and the F1 score are shown in Table 4 and Figure 5, respectively. In some figures, species names are abbreviated for convenience as fruit fly (i.e., Mediterranean fruit fly) and olive fly (i.e., olive fruit fly). Training results are summarized in Figure 6. In particular, the progressions of the box, objectness, and classification losses during training and validation are shown, and metrics such as precision, recall, and mAP0.5 after each epoch are also plotted. The network performed very well in terms of precision (95%), recall (97%), and mAP0.5 (95%). Examples of the testing set results are presented in Figure 7, showing the network precision in classifying labelled images. The YOLO network was trained with a dataset containing the olive fruit fly and the Mediterranean fruit fly separately (i.e., the training set in Table 1). Afterwards, the network was tested with the performance set (i.e., the dataset with two flies in the same image, Table 1). Figure 8 shows some network outputs on images of the performance set. The precision rate on the performance set was about 93% overall, for both configurations with two insects in the same petri dish or in separate dishes.

4. Discussion

The results demonstrated the feasibility of using the proposed method for real-time and accurate detection of C. capitata and B. oleae adult flies. In the experimental setup, the network performed well, as shown in Figure 6, with a precision of 95%, a recall of 97%, and a mAP0.5 of 95%. The software, based on the YOLOv5s architecture, requires 14 MB of memory, with a PyTorch inference time of 2.2 ms. Hence, compatible on-field hardware could be a DSP or a single-core microcontroller with relatively little memory. The comparison between these two fly species, belonging to the same family, at a fixed camera height demonstrated that even insects with similar visual and motor patterns can be detected by YOLO with a good precision rate (about 93%). Nevertheless, this similarity challenges the network in some angular poses of the insect. For instance, in Figure 9 the network was unable to recognize the Mediterranean fruit fly, classifying it as an olive fruit fly with high confidence. This image is not easy to classify even manually by an operator, and only the insect’s size may help in the classification. Insects that are small and morphologically similar (on a macro scale) remain challenging for deep learning techniques.
The white background enhances image contrast in both the training and inference phases for these species, whose body colours are prevalently black and grey. In general, the background colour is an important parameter to tune, both for enhancing certain insects and for filtering out others, effectively making them invisible to network inference. For instance, if monitoring targets insects with a dominant white colour, a white background would likely hinder network training.
The results obtained with YOLO suggest that innovative systems for monitoring pest population dynamics can be developed relying on machine learning techniques. These systems may include a network of low-cost microcontrollers connected to CMOS sensors in a configuration similar to that in Figure 1. In addition, the integration of a wireless communication module can enable information exchange with field monitoring platforms. Energy consumption has to be estimated and is strongly related to the selected hardware. However, micro-solar panels (roughly 0.3 W to 3 W of electrical power) can provide a valid solution, at least during the warmer seasons.
This study may promote the development of innovative approaches based on a distributed network of automatic monitoring platforms in the field to investigate the population dynamics of key insect species. This would contribute to increasing the sustainability and efficiency of control measures and to reducing negative effects on non-target or even beneficial organisms. The approach can be extended to other species of interest, with minimal data pre-processing and a similar architecture.

5. Conclusions

A CNN based on the YOLO network was used as a detection method to classify, in real time, living fruit fly species (Diptera: Tephritidae) while moving and constantly changing their posture. We trained an artificial neural network on two species belonging to the family Tephritidae: the Mediterranean fruit fly C. capitata, a highly polyphagous phytophagous species widely distributed throughout the world, and the olive fruit fly B. oleae, the major pest of commercial olives worldwide.
The comparison between these two fly species at a fixed camera height demonstrated that even insects with similar shape and movement patterns can be detected by the network with a precision rate of about 93%. The software, based on the YOLOv5s architecture, requires 14 MB of memory, with a PyTorch inference time of 2.2 ms. Low-cost monitoring systems can therefore be developed with minimal hardware, creating a sensing network to monitor insect population dynamics in the field.
Our approach can be extended to other pests, pollinators, and hematophagous species with minimal data pre-processing. Further studies will focus on tests under field conditions with a custom hardware configuration.

Author Contributions

Conceptualization, M.T. and D.R.; methodology, M.T. and D.R.; validation, M.T., D.R. and C.S.; formal analysis, M.T. and D.R.; investigation, M.T., D.R. and C.S.; resources C.S. and D.R.; writing—original draft preparation, M.T. and D.R.; writing—review and editing, M.T., D.R. and C.S.; supervision D.R. All authors have read and agreed to the published version of the manuscript.

Funding

Partial financial support was received from the H2020 FETOPEN Project ‘‘Robocoenosis-ROBOts in cooperation with a bioCOENOSIS’’ [899520]. Funders had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Institutional Review Board Statement

This study was carried out in accordance with the Guidelines for the Use of Animals in Research and the legal requirements of Italian and EU legislation. All experiments consisted of behavioral observations, and no specific permits were needed in the country where the experiments were conducted.

Data Availability Statement

Data are available on request.

Conflicts of Interest

The authors declare that they have no known competing financial interest or personal relationship that could have appeared to influence the work reported in this paper.

References

  1. Russell, S.J. Artificial Intelligence a Modern Approach; Pearson Education, Inc.: London, UK, 2010. [Google Scholar]
  2. Bannerjee, G.; Sarkar, U.; Das, S.; Ghosh, I. Artificial intelligence in agriculture: A literature survey. Int. J. Sci. Res. Comp. Sci. Appl. Manag. Stud. 2018, 7, 1–6. [Google Scholar]
  3. Dharmaraj, V.; Vijayanand, C. Artificial intelligence (AI) in agriculture. Int. J. Cur. Microb. Appl. Sci. 2018, 7, 2122–2128. [Google Scholar] [CrossRef]
  4. Føre, M.; Frank, K.; Norton, T.; Svendsen, E.; Alfredsen, J.A.; Dempster, T.; Eguiraun, H.; Watson, W.; Stahl, A.; Sunde, L.F. Precision fish farming: A new framework to improve production in aquaculture. Biosyst. Eng. 2018, 173, 176–193. [Google Scholar] [CrossRef]
  5. Eli-Chukwu, N.C. Applications of artificial intelligence in agriculture: A review. Eng. Tech. Appl. Sci. Res. 2019, 9, 4377–4383. [Google Scholar] [CrossRef]
  6. Smith, M.J. Getting value from artificial intelligence in agriculture. Anim. Prod. Sci. 2018, 60, 46–54. [Google Scholar] [CrossRef]
  7. Jha, K.; Doshi, A.; Patel, P.; Shah, M. A comprehensive review on automation in agriculture using artificial intelligence. Artif. Intell. Agric. 2019, 2, 1–12. [Google Scholar] [CrossRef]
  8. Dara, S.K. The new integrated pest management paradigm for the modern age. J. Integr. Pest Manage. 2019, 10, 12. [Google Scholar] [CrossRef] [Green Version]
  9. Ghaderi, S.; Fathipour, Y.; Asgari, S.; Reddy, G.V. Economic injury level and crop loss assessment for Tuta absoluta (Lepidoptera: Gelechiidae) on different tomato cultivars. J. Appl. Entomol. 2019, 143, 493–507. [Google Scholar] [CrossRef]
  10. Saha, T.; Chandran, N. Chemical ecology and pest management: A review. Int. J. Card. Sc. 2017, 5, 618–621. [Google Scholar]
  11. Prasad, Y.; Prabhakar, M. Pest monitoring and forecasting. Integrated Pest Management: Principles and Practice; Cabi: Oxfordshire, UK, 2012; pp. 41–57. [Google Scholar]
  12. Witzgall, P.; Kirsch, P.; Cork, A. Sex pheromones and their impact on pest management. J. Chem. Ecol. 2010, 36, 80–100. [Google Scholar] [CrossRef]
  13. Silva, D.; Salamanca, J.; Kyryczenko-Roth, V.; Alborn, H.T.; Rodriguez-Saona, C. Comparison of trap types, placement, and colors for monitoring Anthonomus musculus (Coleoptera: Curculionidae) adults in highbush blueberries. J. Insect Sci. 2018, 18, 19. [Google Scholar] [CrossRef]
  14. Liu, H.; Lee, S.H.; Chahl, J.S. A review of recent sensing technologies to detect invertebrates on crops. Precis. Agric. 2017, 18, 635–666. [Google Scholar] [CrossRef]
  15. Ding, W.; Taylor, G. Automatic moth detection from trap images for pest management. Comput. Electron. Agric. 2016, 123, 17–28. [Google Scholar] [CrossRef] [Green Version]
  16. Durgabai, R.P.L.; Bhargavi, P. Pest management using machine learning algorithms: A review. Int. J. Com. Sc. Eng. Inf. Tech. Res. 2018, 8, 13–22. [Google Scholar]
  17. Rustia, D.J.A.; Lin, C.E.; Chung, J.Y.; Zhuang, Y.J.; Hsu, J.C.; Lin, T.T. Application of an image and environmental sensor network for automated greenhouse insect pest monitoring. J. Asia-Pac. Entomol. 2020, 23, 17–28. [Google Scholar] [CrossRef]
  18. Clark, R.D. Putting deep learning in perspective for pest management scientists. Pest. Manage. Sci. 2020, 76, 2267–2275. [Google Scholar] [CrossRef]
  19. Arbuckle, T.; Schröder, S.; Steinhage, V.; Wittmann, D. Biodiversity informatics in action: Identification and monitoring of bee species using ABIS. In Proceedings of the 15th International Symposium Informatics for Environmental Protection, Zurich, Switzerland, 10–12 October 2001; Volume 1, pp. 425–430. [Google Scholar]
  20. Tofilski, A. DrawWing, a program for numerical description of insect wings. J. Insect Sci. 2004, 4, 1–5. [Google Scholar] [CrossRef]
  21. Wang, J.; Lin, C.; Ji, L.; Liang, A. A new automatic identification system of insect images at the order level. Knowl. Based Syst. 2012, 33, 102–110. [Google Scholar] [CrossRef]
  22. Kang, S.H.; Cho, J.H.; Lee, S.H. Identification of butterfly based on their shapes when viewed from different angles using an artificial neural network. J. Asia Pac. Entomol. 2014, 17, 143–149. [Google Scholar] [CrossRef]
  23. Mayo, M.; Watson, A.T. Automatic species identification of live moths. Knowl. Based Syst. 2007, 20, 195–202. [Google Scholar] [CrossRef] [Green Version]
  24. Larios, N.; Soran, B.; Shapiro, L.G.; Martínez-Muñoz, G.; Lin, J.; Dietterich, T.G. Haar random forest features and SVM spatial matching kernel for stonefly species identification. In Proceedings of the 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; IEEE: Piscataway, NJ, USA; pp. 2624–2627. [Google Scholar] [CrossRef] [Green Version]
  25. Lytle, D.A.; Martínez-Muñoz, G.; Zhang, W.; Larios, N.; Shapiro, L.; Paasch, R.; Moldenke, A.; Mortensen, E.N.; Todorovic, S.; Dietterich, T.G. Automated processing and identification of benthic invertebrate samples. J. N. Am. Benthol. Soc. 2010, 29, 867–874. [Google Scholar] [CrossRef]
  26. Kaya, Y.; Kayci, L. Application of artificial neural network for automatic detection of butterfly species using color and texture features. Vis. Comp. 2014, 30, 71–79. [Google Scholar] [CrossRef]
  27. Li, X.L.; Huang, S.G.; Zhou, M.Q.; Geng, G.H. KNN-spectral regression LDA for insect recognition. In Proceedings of the 2009 First International Conference on Information Science and Engineering, Nanjing, China, 26–28 December 2009; IEEE: Piscataway, NJ, USA; pp. 1315–1318. [Google Scholar] [CrossRef]
  28. Tuda, M.; Luna-Maldonado, A.I. Image-based insect species and gender classification by trained supervised machine learning algorithms. Ecol. Informat. 2020, 60, 101135. [Google Scholar] [CrossRef]
  29. Wen, C.; Guyer, D. Image-based orchard insect automated identification and classification method. Comput. Electron. Agric. 2012, 89, 110–115. [Google Scholar] [CrossRef]
  30. van Klink, R.; August, T.; Bas, Y.; Bodesheim, P.; Bonn, A.; Fossøy, F.; Høye, T.T.; Jongejans, E.; Menz, M.H.M.; Bowler, D.E.; et al. Emerging technologies revolutionise insect ecology and monitoring. Trends Ecol. Evol. 2022, 10, 872–885. [Google Scholar] [CrossRef]
  31. Kawakita, S.; Ichikawa, K. Automated classification of bees and hornet using acoustic analysis of their flight sounds. Apidologie 2019, 50, 71–79. [Google Scholar] [CrossRef] [Green Version]
  32. Brydegaard, M.; Jansson, S.; Malmqvist, E.; Mlacha, Y.P.; Gebru, A.; Okumu, F.; Killeen, G.F.; Kirkeby, C. Lidar reveals activity anomaly of malaria vectors during pan-African eclipse. Sci. Adv. 2020, 6, eaay5487. [Google Scholar] [CrossRef] [PubMed]
  33. Genoud, A.P.; Basistyy, R.; Williams, G.M.; Thomas, B.P. Optical remote sensing for monitoring flying mosquitoes, gender identification and discussion on species identification. Appl. Phys. B Lasers Opt. 2018, 124, 46. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  34. Batovska, J.; Piper, A.M.; Valenzuela, I.; Cunningham, J.P.; Blacket, M.J. Developing a non-destructive metabarcoding protocol for detection of pest insects in bulk trap catches. Sci. Rep. 2021, 11, 1–14. [Google Scholar] [CrossRef]
  35. Bjerge, K.; Mann, H.M.; Høye, T.T. Real-time insect tracking and monitoring with computer vision and deep learning. Remote Sens. Ecol. Conserv. 2022, 8, 315–327. [Google Scholar] [CrossRef]
  36. Clare, G.; Suckling, D.M.; Bradley, S.J.; Walker, J.T.S.; Shaw, P.W.; Daly, J.M.; McLaren, G.F.; Wearing, C.H.; Wearing, C.H. Pheromone trap colour determines catch of nontarget insects. New Zealand Plant Prot. 2000, 53, 216–220. [Google Scholar] [CrossRef] [Green Version]
  37. Wallis, D.R.; Shaw, P.W. Evaluation of coloured sticky traps for monitoring beneficial insects in apple orchards. New Zealand Plant Prot. 2008, 61, 328–332. [Google Scholar] [CrossRef]
  38. Blackmer, J.L.; Byers, J.A.; Rodriguez-Saona, C. Evaluation of color traps for monitoring Lygus spp.: Design, placement, height, time of day, and non-target effects. Crop Prot. 2008, 27, 171–181. [Google Scholar] [CrossRef]
  39. Broughton, S.; Harrison, J. Evaluation of monitoring methods for thrips and the effect of trap colour and semiochemicals on sticky trap capture of thrips (Thysanoptera) and beneficial insects (Syrphidae, Hemerobiidae) in deciduous fruit trees in Western Australia. Crop Prot. 2012, 42, 156–163. [Google Scholar] [CrossRef]
  40. Benelli, G.; Romano, D. Does indirect mating trophallaxis boost male mating success and female egg load in Mediterranean fruit flies? J. Pest Sc. 2018, 91, 181–188. [Google Scholar] [CrossRef]
  41. Daane, K.M.; Johnson, M.W. Olive fruit fly: Managing an ancient pest in modern times. Annu. Rev. Entomol. 2010, 55, 151–169. [Google Scholar] [CrossRef]
  42. Pegoraro, L.; Hidalgo, O.; Leitch, I.J.; Pellicer, J.; Barlow, S.E. Automated video monitoring of insect pollinators in the field. Emerg. Top. Life Sci. 2020, 4, 87–97. [Google Scholar] [CrossRef]
  43. ASAB/ABS. Guidelines for the treatment of animals in behavioural research and teaching. Anim. Behav. 2020, 183, 1–11. [Google Scholar] [CrossRef]
  44. European Commission. 2007. Commission Recommendations of 18 June 2007 on Guidelines for the Accommodation and Care of Animals Used for Experimental and other Scientific Purposes. Annex II to European Council Directive 86/609. See 2007/526/EC. Available online: https://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=OJ:L:2007:197:0001:0089:EN:PDF (accessed on 12 November 2022).
  45. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar]
  46. Redmon, J.; Farhadi, A. YOLO9000: Better, faster, stronger. In Proceedings of the Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 7263–7271. [Google Scholar] [CrossRef] [Green Version]
  47. Farhadi, A.; Redmon, J. Yolov3: An incremental improvement. arXiv Prepr. 2018, arXiv:1804.02767. [Google Scholar] [CrossRef]
  48. Bochkovskiy, A.; Wang, C.Y.; Liao, H.Y.M. Yolov4: Optimal speed and accuracy of object detection. arXiv Prepr. 2020, arXiv:2004.10934. [Google Scholar] [CrossRef]
  49. Tian, Y.; Yang, G.; Wang, Z.; Wang, H.; Li, E.; Liang, Z. Apple detection during different growth stages in orchards using the improved YOLO-V3 model. Comput. Electron. Agric. 2019, 157, 417–426. [Google Scholar] [CrossRef]
  50. Hu, X.; Liu, Y.; Zhao, Z.; Liu, J.; Yang, X.; Sun, C.; Chen, S.; Li, B.; Zhou, C. Real-time detection of uneaten feed pellets in underwater images for aquaculture using an improved YOLO-V4 network. Comput. Electron. Agric. 2021, 185, 106135. [Google Scholar] [CrossRef]
  51. Al-Masni, M.A.; Al-Antari, M.A.; Park, J.M.; Gi, G.; Kim, T.Y.; Rivera, P.; Valarezo, E.; Choi, M.-T.; Han, S.-M.; Kim, T.S. Simultaneous detection and classification of breast masses in digital mammograms via a deep learning YOLO-based CAD system. Comput. Methods Prog. Biomed. 2018, 157, 85–94. [Google Scholar] [CrossRef] [PubMed]
  52. Chen, R.C. Automatic License Plate Recognition via sliding-window darknet-YOLO deep learning. Image Vis. Comp. 2019, 87, 47–56. [Google Scholar] [CrossRef]
  53. Loey, M.; Manogaran, G.; Taha, M.H.N.; Khalifa, N.E.M. Fighting against COVID-19: A novel deep learning model based on YOLO-v2 with ResNet-50 for medical face mask detection. Sustain. Cities Soc. 2021, 65, 102600. [Google Scholar] [CrossRef]
  54. GitHub Inc. 2021. Available online: https://github.com/ultralytics/yolov5 (accessed on 12 November 2022).
  55. Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Chintala, S.; et al. Pytorch: An imperative style, high-performance deep learning library. Adv. Neural Inf. Process Syst. 2019, 32, 1–12. [Google Scholar]
  56. Colquhoun, D. The reproducibility of research and the misinterpretation of p-values. R. Soc. Open Sci. 2017, 4, 171085. [Google Scholar] [CrossRef] [Green Version]
  57. Dewi, C.; Chen, R.C.; Liu, Y.T.; Jiang, X.; Hartomo, K.D. Yolo V4 for advanced traffic sign recognition with synthetic training data generated by various GAN. IEEE Access 2021, 9, 97228–97242. [Google Scholar] [CrossRef]
Figure 1. Experimental setup for image acquisition, with the camera pointing at the fly inside a petri dish.
Figure 2. Dataset composition: (a) C. capitata adult fly in a petri dish; (b) B. oleae adult fly in a petri dish; (c) C. capitata and B. oleae in the same petri dish (performance set); (d) C. capitata and B. oleae in different petri dishes (performance set). Photographs of (e) C. capitata adult fly and (f) B. oleae adult fly.
Figure 3. A schematic representation of the developed steps from data capturing (1) and data preprocessing (2–4) towards network setup, training, and performance evaluation (5–7).
Figure 4. YOLOv5 network architecture split into three main parts: Backbone, Neck, and Head.
Figure 5. YOLO training F1 score results.
Figure 6. Visualization of various metrics (e.g., precision, recall, mAP0.5, etc.) as a function of the number of epochs (x-axis) during training and validation.
Figure 7. Training results: (a) labelled images of the olive fruit fly (i.e., B. oleae in the orange box) and the Mediterranean fruit fly (i.e., C. capitata in the blue box), (b) prediction results of the testing set with inference precision percentage.
Figure 8. YOLO testing results using the performance set. Colored boxes indicate the olive fruit fly (pink and green) and the Mediterranean fruit fly (purple and red) recognized by the network, along with the precision rate percentage.
Figure 9. Failed detection: the Mediterranean fruit fly (right) was not recognized in an angular, slightly blurred pose of the insect’s body.
Table 1. Dataset composition, size, and type distribution.
Dataset Composition       | Dataset Size | Dataset Type
C. capitata               | 309          | Training set
B. oleae                  | 330          | Training set
C. capitata               | 66           | Validation set
B. oleae                  | 117          | Validation set
C. capitata               | 63           | Testing set
B. oleae                  | 27           | Testing set
C. capitata and B. oleae  | 914          | Performance set
Table 2. Confusion matrix table layout.
                        | Predicted: Positive | Predicted: Negative
Actual class: Positive  | TP                  | FN
Actual class: Negative  | FP                  | TN
Table 3. YOLO training options and parameters setting.
Parameter       | Value  | Parameter        | Value
lr0             | 0.01   | lrf              | 0.2
momentum        | 0.973  | weight decay     | 0.0005
warmup epochs   | 3.0    | warmup momentum  | 0.8
warmup bias lr  | 0.1    | box              | 0.05
cls             | 0.5    | clspw            | 1.0
obj             | 1.0    | objpw            | 1.0
IoUt            | 0.2    | anchort          | 4.0
hsvh            | 0.015  | hsvs             | 0.7
hsvv            | 0.4    | translate        | 0.1
scale           | 0.5    | fliplr           | 0.5
Table 4. YOLO training confusion matrix results.
                                              True Condition
Training                            | Mediterranean Fruit Fly | Olive Fruit Fly
Predicted: Mediterranean Fruit Fly  | TP = 213 (100%)         | FP = 16 (0%)
Predicted: Olive Fruit Fly          | FN = 11 (93%)           | TN = 698 (5%)