Article

Using Image Analysis and Regression Modeling to Develop a Diagnostic Tool for Peanut Foliar Symptoms

Department of Plant and Environmental Sciences, Edisto Research and Education Center, Clemson University, Blackville, SC 29817, USA
*
Author to whom correspondence should be addressed.
Agronomy 2022, 12(11), 2712; https://doi.org/10.3390/agronomy12112712
Submission received: 13 October 2022 / Revised: 26 October 2022 / Accepted: 30 October 2022 / Published: 1 November 2022
(This article belongs to the Special Issue Application of Image Processing in Agriculture)

Abstract

Peanut foliar diseases and disorders can be difficult to diagnose rapidly with little experience because some abiotic and biotic incitants produce similar symptoms. Developing algorithms for the automated identification of peanut foliar diseases and disorders could provide a quick, affordable, and easy method for diagnosing peanut symptoms. To examine this, images of peanut leaves were captured from various angles, distances, and lighting conditions using various cameras. Color space data were subsequently extracted from all images and subjected to logistic regression. Separate algorithms were developed for each symptom: healthy, hopperburn, late leaf spot, Provost injury, tomato spotted wilt, paraquat injury, and surfactant injury. The majority of these symptoms are not included in currently available disease identification mobile apps. All of the algorithms developed for peanut foliar diagnostics were ≥ 86% accurate. These diagnostic algorithms have the potential to be a valuable tool for growers if made available via a web-accessible platform, which is the next step of this work.

1. Introduction

Peanut is an important food crop in the U.S., where annual production is valued at more than USD 1.1 billion [1]. Numerous pathogens have the ability to cause yield-reducing disease, with disease often considered the most limiting factor with respect to peanut production [2]. In South Carolina, late leaf spot and tomato spotted wilt are among the most economically important diseases causing predominantly foliar symptoms [3]. Several abiotic incitants also cause foliar symptoms in peanut, including nutrient deficiency and pesticide injury, some of which can resemble those caused by diseases. Diagnosis of foliar symptoms can be time-consuming when visual assessments are conducted manually or when lab-based molecular methods are used [4]. Early and accurate diagnosis of peanut diseases or disorders is vital for reducing economic losses from unnecessary purchases, wasted time and labor, and yield losses resulting from suboptimal management. Furthermore, misdiagnosis can contribute to the development of fungicide resistance among populations of pathogenic fungi through additional exposure to non-target fungicide applications [5]. Although reference resources are available to assist growers in the manual identification of peanut foliar symptoms (e.g., [2] and https://images.bugwood.org/, accessed on 8 April 2022), an economically accessible and easy-to-use tool to aid in diagnostics would be beneficial when making management decisions.
Imaging analysis and processing techniques for assessing plant phenotypes have previously been developed and used by plant scientists and researchers for classifying phenotypes ranging from basic morphology to quantifying plant disease severity [6,7,8]. The general process is image collection, image analysis (or processing), and then model development. For image acquisition, several types of images have been used for detecting biotic or abiotic symptoms in plants, such as visible spectrum (VIS) [8,9,10,11], fluorescence [12,13,14], infrared [15,16,17], normalized difference vegetation index (NDVI) [18,19,20], multi-spectral [21,22], and hyperspectral [12,23]; however, all imaging methods beyond simple VIS require specialized cameras or equipment. The image processing step normally requires software or a program to analyze and extract color space data from the uploaded image(s). A few examples of software used to process symptomatic plant images include Assess (American Phytopathological Society, St. Paul, MN, USA) [24,25], Adobe Photoshop (Adobe Inc., San Jose, CA, USA) [22,26], and ImageJ [8,10,27]. An advantage of ImageJ over Assess and Adobe Photoshop is that it is free and open-source; however, in this study we used the Batch Load Image Processor v.1.1 (BLIP) software developed by Clemson University [28]. Although BLIP is not currently accessible to the public, it was preferable over other processing programs because of our ability to customize the type of color space data extracted from images. Segmentation is a step during image analysis in which image pixels are classified by color or texture to facilitate differentiating between diseased and healthy tissue [11,29]; this process can be manual or automated depending on the individual software used. Edge detection is another method of segmentation in which adjacent pixels with abrupt changes in color space values are detected as an edge or boundary [29]. Machine learning techniques are commonly used for the development of image-based models, including artificial neural networks (ANN), support vector machines (SVM), random forest (RF), linear discriminant analysis (LDA), regression analysis, and convolutional neural networks (CNN) [11,20,30,31]. Models for classifying disease symptoms have been developed for tomato [32,33], rice [34], wheat [35], avocado [20], lettuce [18], and citrus leaves [36], as well as potato tubers [9].
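To make the color-space-extraction step concrete, the short Python sketch below shows one way to derive simple per-image color summaries (RGB, hue, brightness, pixel count) from a single VIS image with Pillow and NumPy. It is only an illustration of the general step described above; it is not the BLIP software used in this study, and the function name and the choice of summary statistics are assumptions.

```python
# Illustrative sketch of the color-space-extraction step, not the BLIP
# software used in the study. Function name and summary statistics are
# assumptions chosen for brevity.
import numpy as np
from PIL import Image

def extract_color_features(path):
    """Return simple per-image color space summaries for one VIS image."""
    img = Image.open(path).convert("RGB")
    rgb = np.asarray(img, dtype=float)                 # shape (H, W, 3)
    hsv = np.asarray(img.convert("HSV"), dtype=float)  # hue, saturation, value

    return {
        "mean_r": rgb[..., 0].mean(),
        "mean_g": rgb[..., 1].mean(),
        "mean_b": rgb[..., 2].mean(),
        "mean_hue": hsv[..., 0].mean(),
        "mean_sat": hsv[..., 1].mean(),
        "brightness": rgb.mean(),                      # simple brightness proxy
        "n_pixels": rgb.shape[0] * rgb.shape[1],
    }

# Example usage (hypothetical file name):
# features = extract_color_features("leaflet_0001.jpg")
```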
Image analysis using regression modeling for peanut foliar symptoms is one such means that can be used to automate the process of diagnosis to further facilitate information-based crop management decisions. Using image analysis and regression modeling for identifying (or classifying) peanut diseases based upon field-collected, smartphone images with native backgrounds would be a novel development and one that could be accessible with a greater degree of flexibility for end-users. The objective of this project was to develop algorithms for automated identification of peanut foliar symptoms as an initial step towards this effort.

2. Materials and Methods

2.1. Image Acquisition

Still images of peanut foliage were taken with various smartphones (e.g., Samsung Galaxy S5 and S10e, Google Pixel 3) and a camera (Canon EOS Rebel T6i) at various angles, light conditions, and distances from the canopy in a field setting (Figure 1). Image sizes ranged from 2397 to 15,872,256 pixels, with approximately 74% of the images used having ≤ 150,000 pixels. Images were sorted by symptomology into seven symptom types: healthy, hopperburn, late leaf spot (caused by Nothopassalora personata (Berk. and M.A. Curtis) S.A. Khan and M. Kamal), Provost injury, tomato spotted wilt, paraquat injury, and surfactant injury (Figure 2). From each original (composite) image, individual leaflets were identified and cropped as a new image with the native background retained. Leaflets and canopy images from a range of presentation angles and shadow coverage were included.

2.2. Image Processing

Each image was manually rated on a binary scale for each of the seven foliar symptoms, where 0 = non-event (symptom absent) and 1 = event (applicable symptom present), and these ratings were recorded in a master database along with the picture name, type (canopy or cropped), and location. Images were processed in BLIP, which extracts 544 color space variables from the pixels of each image, such as RGB, hue, Hope Color Index (HCI), and edge detection values. The HCI was developed by manually recording more than 70 unique RGB values, which were grouped to generally characterize greens, yellows, browns, dark browns or black, and whites. These colors were recorded by manually assessing images of healthy and symptomatic leaves and collecting the RGB values from healthy areas, lesions, chlorotic areas, injury, etc., so that the colors were specific to asymptomatic and symptomatic tissue on peanut leaves. The colors were analyzed using K-means clustering to produce more specific groupings, which were added to the image processing software as additional data columns (Table 1). The manual binary ratings and the BLIP output variables for each image were compiled into a master database to be used for model development.
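As a rough illustration of how hand-sampled RGB values can be grouped into a small set of reference colors, the sketch below applies K-means clustering with scikit-learn. The sample values are placeholders drawn from Table 1, and the cluster count is reduced so the snippet runs as-is; the actual HCI derivation used more than 70 manually collected colors and 13 final clusters, and the authors' exact workflow may have differed.

```python
# Sketch of grouping sampled RGB values into reference colors with K-means,
# analogous to how the HCI groups were derived. Sample values and cluster
# count are placeholders, not the authors' actual data.
import numpy as np
from sklearn.cluster import KMeans

# A few example colors taken from Table 1, used here only as placeholders.
rgb_samples = np.array([
    [185, 152, 49], [81, 116, 60], [50, 37, 41],
    [159, 201, 81], [193, 204, 72], [150, 168, 137],
    [231, 238, 218], [202, 213, 155], [96, 73, 75],
])

# With the full set of > 70 sampled colors, n_clusters=13 would reproduce the
# 13 HCI groups in Table 1; k is kept small so the sketch runs on this data.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(rgb_samples)
print(kmeans.cluster_centers_.round())  # one reference color per cluster

# Each pixel of a new image can then be assigned its nearest reference color:
# labels = kmeans.predict(pixel_array.reshape(-1, 3))
```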

2.3. Data Analysis

The final master database comprised 4208 peanut images. Models were developed for asymptomatic (or healthy) leaves and for symptomatic leaves, including paraquat injury, hopperburn, late leaf spot, Provost injury, surfactant injury, and tomato spotted wilt, for a total of seven individual models using the numerical color space values generated by BLIP. The data were subjected to logistic regression analysis (PROC LOGISTIC) in SAS 9.4. Best subset selection was used to reduce the number of parameters for each model. Logistic regression was selected as the classifier of choice because of our interest in statistically modeling the binary nature of the data and because of the reported success of other researchers in developing similar diagnostic models using logistic regression [11,13].
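For readers who want a sense of what this step looks like in code, the minimal sketch below fits one symptom model using pandas and scikit-learn as stand-ins for SAS PROC LOGISTIC, including the 80/20 train/validation split described in the following paragraph. The file name, column names, and the omission of best subset selection are illustrative assumptions only; the published models were developed in SAS as described above.

```python
# Hedged sketch of fitting one symptom model; scikit-learn is a stand-in for
# SAS PROC LOGISTIC, and the file/column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical master database: one row per image, BLIP color space columns
# plus a 0/1 manual rating column for each of the seven symptoms.
master = pd.read_csv("master_database.csv")
symptom_cols = ["healthy", "hopperburn", "late_leaf_spot", "provost_injury",
                "tomato_spotted_wilt", "paraquat_injury", "surfactant_injury"]
id_cols = ["picture_name", "image_type", "location"]   # assumed identifiers

X = master.drop(columns=symptom_cols + id_cols)  # numeric color space predictors
y = master["late_leaf_spot"]                     # binary rating for one symptom

# 80/20 train/validation split, as described in the following paragraph.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.20, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
p_val = model.predict_proba(X_val)[:, 1]         # probability of the symptom
```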
Models were trained using 80% of the dataset (or 3367 images), and the remaining 20% (or 841 images) were used for validation. The probability of an event was calculated using the following equation:
Probability of symptom (event = 1) = exp(L)/(1 + exp(L)),
where L is the linear predictor, i.e., the fitted intercept plus the sum of the selected color space variables multiplied by their estimated coefficients. The probabilities were on a scale of 0 to 1 for each image, with images assigned a probability ≥ 0.50 classified as an event for each respective model. The models were assessed based on model accuracy, defined as the percentage of images that were correctly classified; sensitivity, defined as the ‘true positive’ rate, or the proportion of event images correctly classified as an event (1); and specificity, defined as the ‘true negative’ rate, or the proportion of non-event images correctly classified as a non-event (0); as well as receiver operating characteristic (ROC) curves, Akaike’s information criterion (AIC), and loess calibration plots.
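The classification rule and the performance metrics just described can be written compactly; the following Python sketch is a minimal, self-contained illustration of the logistic link and of accuracy, sensitivity, and specificity as defined above (variable names are assumed; ROC curves, AIC, and loess calibration plots are not shown).

```python
# Minimal illustration of the classification rule and the metric definitions
# given in the text; not the SAS code used in the study.
import numpy as np

def event_probability(L):
    """Logistic link: probability of symptom = exp(L) / (1 + exp(L))."""
    return np.exp(L) / (1.0 + np.exp(L))

def performance(y_true, p, threshold=0.50):
    """Accuracy, sensitivity, and specificity for binary predictions."""
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(p) >= threshold).astype(int)  # event if p >= 0.50
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return {
        "accuracy (%)": 100 * (tp + tn) / y_true.size,
        "sensitivity (%)": 100 * tp / (tp + fn),
        "specificity (%)": 100 * tn / (tn + fp),
    }

# Example with made-up linear predictors and manual ratings:
# p = event_probability(np.array([-2.1, 0.3, 4.7, -0.8]))
# print(performance([0, 1, 1, 0], p))
```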
With the goal of making the algorithms available to the public, and given the likelihood that low-quality images will be analyzed, it was important to test the algorithms for accuracy on images with reduced pixel counts and various levels of brightness. The validation dataset was sorted by image size and classified into four pixel classes for each model, where: Class 1 = number of pixels < 50,000, Class 2 = 50,000 < number of pixels < 100,000, Class 3 = 100,000 < number of pixels < 150,000, and Class 4 = number of pixels > 150,000. In a separate analysis, the validation dataset was split into four brightness classes where: Class 1 = brightness value < 150 (dark images), Class 2 = brightness value > 150 and < 190, Class 3 = brightness value > 190 and < 240, and Class 4 = brightness value > 240 (very bright images).
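A minimal sketch of how a validation image could be binned into the pixel-count and brightness classes defined above is shown below; the cut points follow the text, while the helper names are assumptions and the brightness value is treated as the single number produced by the image processing step.

```python
# Sketch of binning a validation image into the pixel-count and brightness
# classes defined in the text; helper names are illustrative assumptions.
def pixel_class(n_pixels):
    if n_pixels < 50_000:
        return 1
    if n_pixels < 100_000:
        return 2
    if n_pixels < 150_000:
        return 3
    return 4

def brightness_class(brightness):
    if brightness < 150:      # dark images
        return 1
    if brightness < 190:
        return 2
    if brightness < 240:
        return 3
    return 4                  # very bright images

# Example: pixel_class(74_000) -> 2, brightness_class(205) -> 3
```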

3. Results

3.1. Model Performance

The algorithms developed for peanut foliar diagnostics (Table S1) were accurate (89.4 to 98.7%), specific (94.9 to 99.3%), and sensitive (70.0 to 93.1%; Table 2). The area under the ROC curve for training and validation performance was > 93% for all models. Of the incorrect predictions for the late leaf spot (LLS) model (89.4% accuracy), only eight images were of Provost injury incorrectly predicted to be LLS. For the Provost injury model (97.8% accuracy), only two images that were actually LLS were incorrectly classified as Provost injury. None of the hopperburn images misclassified by the hopperburn model (96.3% accuracy) were incorrectly predicted to be paraquat injury; however, one image of hopperburn was incorrectly classified as paraquat injury when the dataset was analyzed using the paraquat injury model (98.7% accuracy).

3.2. Impact of Image Quality on Model Accuracy

For each symptomology, the majority of images in the validation dataset had ≤ 150,000 pixels (Classes 1–3), mostly single leaflet images that were cropped from larger canopy images (Table 3). For each model, Class 3 images were the least accurately categorized, ranging from 58.6% to 85.9%. Model accuracies ranged from 82.0% to 95.2% for Class 1 images, from 75.6% to 95.7% for Class 2 images, and from 76.6% to 96.0% for Class 4 images.
Overall, it was observed that models were less likely to accurately categorize images in brightness Classes 1 and 4 compared to brightness Classes 2 and 3 (Table 4). Accuracy of the models’ classification of images ranged from 61.6% to 86.4%, and from 54.3% to 87.6% for brightness Classes 1 and 4, respectively. For brightness Classes 2 and 3, accuracy ranged from 81.5% to 93.1% and from 83.0% to 96.6%, respectively, among the peanut symptomology models.

4. Discussion

The algorithms developed for peanut foliar symptom diagnostics using logistic regression analysis were accurate and acceptable when compared to similar diagnostic models developed using either traditional modeling or machine learning. Pérez-Bueno et al. [20] developed similar logistic regression algorithms for detecting white root rot of avocado trees using NDVI data extracted from images. Their reported logistic regression model was 82% accurate, 79% sensitive, and 85% specific, and performed better than the other classifiers (ANN, linear discriminant analysis, and SVM models) developed using the same images [20]. Oppenheim et al. [9] used a CNN to identify and differentiate between healthy tubers and symptomatic tubers for four diseases with an accuracy of approximately 96%, using an 80/20 train-test split as was also used to develop our algorithms.
Olivoto et al. [10] used logistic regression to develop models for measuring foliar disease severity for six individual diseases. In their study, images with an average size ranging from 84,480 to 972,774 pixels were used, and image resolution did not significantly affect the accuracy of their analysis; however, a slight reduction in accuracy was observed when images were reduced to ≤ 45,900 pixels [10]. Steddom et al. [24] compared the effects of image resolution and format on the accuracy of their predictions for wheat foliar disease severity. They found that resized images with low resolution (858–337,000 pixels/image) had little to no effect on accuracy when uploaded to the Assess software for analysis. Retention of accurate classification across a large range of image resolutions represents a potential means of increasing throughput for the final server-side framework, though the exact magnitude of time savings would depend on the specific framework involved. Overall, a reduction in accuracy was not observed with low resolution images for these peanut foliar symptomology models. In fact, for the majority of the models, images in pixel Class 1 (<50,000 pixels) were the most accurately classified. This was most likely because the majority of the image dataset consisted of cropped leaflet images with low resolution. Image brightness did have some impact on model accuracy, with images having brightness values > 150 and < 240 being the most likely to be classified correctly. Such information pertaining to image quality will be useful in developing the website (or app) for model availability, where the user could be alerted that an uploaded image is outside the desired range and that there is a corresponding decrease in confidence in the result.
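As an example of the kind of upload pre-check suggested here, the hypothetical function below warns a user when an image falls outside the pixel-count and brightness ranges that classified most accurately in this study. It is a sketch only, not part of the published models, and it uses the mean of the RGB values as a stand-in brightness measure, which may differ from the brightness value computed by BLIP.

```python
# Hypothetical upload pre-check for a future web app; thresholds follow the
# Results, and the mean RGB value is an assumed stand-in for brightness.
import numpy as np
from PIL import Image

def quality_warnings(path):
    arr = np.asarray(Image.open(path).convert("RGB"), dtype=float)
    n_pixels = arr.shape[0] * arr.shape[1]
    brightness = arr.mean()

    warnings = []
    if brightness < 150 or brightness > 240:
        warnings.append("Image is very dark or very bright; confidence in "
                        "the result may be reduced.")
    if 100_000 <= n_pixels <= 150_000:
        warnings.append("Images of this size classified least accurately "
                        "during validation; consider retaking the photo.")
    return warnings
```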
Though a range in accuracy was observed, the most accurate algorithm was that for peanut leaves symptomatic of paraquat injury, and the least accurate was that for peanut leaves symptomatic of late leaf spot. This may be partly because late leaf spot images varied greatly in the severity and color of symptoms (e.g., lesions can vary in shade, and a yellow halo is irregularly present around lesions), whereas paraquat injury symptoms were generally more uniform in severity and color. Regardless, models were able to accurately classify their respective symptoms against a photo database of all foliar symptoms. Additionally, the individual models were accurate when used to ‘diagnose’ or differentiate symptoms that appear similar, such as the late leaf spot and Provost injury models and the hopperburn and paraquat injury models. With respect to image classification, each model was more specific than sensitive, with the sensitivity and specificity of these models being comparable to those developed by Pérez-Bueno et al. [20]. Although the models for healthy, hopperburn, late leaf spot, and tomato spotted wilt exhibited a greater chance of ‘false negative’ predictions (i.e., lower sensitivities of circa 70 to 76% compared with 87 to 93%), the sensitivity of those four models’ predictions was functionally acceptable. This is aided by the availability of readily observable and characteristic features that can be paired with image collection. Hopperburn symptoms, in addition to being recognizable, often first occur together near field edges and, when feeding is active, are accompanied in the field by leafhoppers whose movement may be seen. Image identification of late leaf spot lesions may be supplemented with in-field examination for the presence of bumpy conidiophores on abaxial leaf surfaces. Tomato spotted wilt infection can produce a wide variety of symptoms; when infections occur, they are rarely limited to an individual plant, which consequently provides multiple symptomatic plants from which to collect images in a given field for subsequent analysis. Lastly, a tool aimed at declaring or predicting “healthy” is limited by design to identifying instances where there is an absence of the previously identified problems that were (explicitly or implicitly) incorporated into its development.
While the app Plantix offers some models for peanut symptom identification, it is only available for Android users and does not include models for paraquat injury, tomato spotted wilt (caused by Tomato spotted wilt virus), surfactant injury, or Provost injury on peanut [37]. Therefore, there are currently no automated tools available to identify all of the peanut foliar symptoms presented in this paper, meaning that these diagnostic algorithms have the potential to be a valuable tool for growers if made available via a web-accessible platform. In anticipation of making these models accessible in a user-friendly manner, images with reduced quality (≤150,000 pixels) relative to the average resolution of most devices (≥1 million pixels) and with native backgrounds were used in order to eliminate the need for special equipment or for users to perform additional modification of images prior to upload. Similar to how traditional machine learning techniques use segmentation to remove backgrounds or classify diseased areas, BLIP uses edge detection procedures to differentiate between plant and soil pixels, along with the HCI colors, both of which were important in the development of the presented models.
The availability of a simple, quick, and accurate tool to correctly identify symptoms could aid producers in decision-making regarding the potential need for pesticide applications. Although diagnostics are a crucial part of the decision-making process, it is important for growers to consider all available information before finalizing management decisions. Factors such as cultivar, growth stage, incidence of symptoms throughout the field, and management decisions prior to diagnosis add invaluable context to help inform how individual situations should be addressed.

5. Conclusions

In this study, image analysis and regression modeling were successfully used to develop peanut foliar symptom diagnostic algorithms that are not currently available, to our knowledge, via other disease identification tools on the market. The development of peanut foliar models using point-and-click, field-based images is a novel approach to automated diagnostics that could be a great resource for growers wanting to quickly identify foliar symptoms found in their fields. Making these models accessible to process images would be an advancement in the current market of automated plant diagnostic applications and is an effort that is currently being pursued.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/agronomy12112712/s1, Table S1: Models for healthy, hopperburn, Provost injury, tomato spotted wilt, late leaf spot, paraquat injury, and surfactant injury symptoms on peanut leaves.

Author Contributions

Conceptualization, H.R.-B., K.R.K. and D.J.A.; methodology, H.R.-B., K.R.K. and D.J.A.; software, K.R.K.; validation, H.R.-B., K.R.K. and D.J.A.; formal analysis, H.R.-B. and D.J.A.; investigation, H.R.-B.; resources, K.R.K. and D.J.A.; data curation, H.R.-B. and D.J.A.; writing—original draft preparation, H.R.-B.; writing—review and editing, H.R.-B., K.R.K. and D.J.A.; visualization, H.R.-B.; supervision, K.R.K. and D.J.A.; project administration, D.J.A.; funding acquisition, D.J.A. All authors have read and agreed to the published version of the manuscript.

Funding

Funding was provided by USDA NIFA CPPM EIP Project No. SC-2017-04383. This material is based upon work supported by NIFA/USDA, under project number SC-1700592.

Data Availability Statement

Not applicable.

Acknowledgments

Technical Contribution No. 7005 of the Clemson University Experiment Station.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. USDA/NASS QuickStats Query Tool. Available online: https://quickstats.nass.usda.gov/ (accessed on 30 August 2022).
  2. Porter, D.M. Peanut Diseases. In Compendium of Peanut Diseases, 2nd ed.; Kokalis-Burelle, N., Porter, D.M., Rodrigues-Kabana, R., Smith, D.H., Subrahmanyam, P., Eds.; American Phytopathological Society Press: St. Paul, MN, USA, 1997. [Google Scholar]
  3. Anco, D.; Thomas, J.S.; Marshall, M.; Kirk, K.R.; Plumblee, M.T.; Smith, N.; Farmaha, B.; Payero, J. Peanut Money-Maker 2021 Production Guide; Circular 588; Clemson University Extension: Clemson, SC, USA, 2021. [Google Scholar]
  4. Fang, Y.; Ramasamy, R.P. Current and Prospective Methods for Plant Disease Detection. Biosensors 2015, 5, 537–561. [Google Scholar] [CrossRef] [Green Version]
  5. Hahn, M. The rising threat of fungicide resistance in plant pathogenic fungi: Botrytis as a case study. J. Chem. Biol. 2014, 7, 133–141. [Google Scholar] [CrossRef] [Green Version]
  6. Li, L.; Zhang, Q.; Huang, D. A Review of Imaging Techniques for Plant Phenotyping. Sensors 2014, 14, 20078–20111. [Google Scholar] [CrossRef]
  7. Mutka, A.; Bart, R.S. Image-based phenotyping of plant disease symptoms. Front. Plant Sci. 2015, 5, 734. [Google Scholar] [CrossRef] [Green Version]
  8. Barbedo, J.G.A. An Automatic Method to Detect and Measure Leaf Disease Symptoms Using Digital Image Processing. Plant Dis. 2014, 98, 1709–1716. [Google Scholar] [CrossRef] [Green Version]
  9. Oppenheim, D.; Shani, G.; Erlich, O.; Tsror, L. Using Deep Learning for Image-Based Potato Tuber Disease Detection. Phytopathology 2019, 109, 1083–1087. [Google Scholar] [CrossRef]
  10. Olivoto, T.; Andrade, S.M.; Del Ponte, E.M. Measuring plant disease severity in R: Introducing and evaluating the pliman package. Trop. Plant Pathol. 2022, 47, 95–104. [Google Scholar] [CrossRef]
  11. Bock, C.H.; Barbedo, J.G.A.; Del Ponte, E.M.; Bohnenkamp, D.; Mahlein, A.-K. From visual estimates to fully automated sensor-based measurements of plant disease severity: Status and challenges for improving accuracy. Phytopathol. Res. 2020, 2, 9. [Google Scholar] [CrossRef] [Green Version]
  12. Bauriegel, E.; Herppich, W.B. Hyperspectral and Chlorophyll Fluorescence Imaging for Early Detection of Plant Diseases, with Special Reference to Fusarium spec. Infections on Wheat. Agriculture 2014, 4, 32–57. [Google Scholar] [CrossRef] [Green Version]
  13. Belasque, J.J.; Gasparoto, M.C.G.; Marcassa, L.G. Detection of mechanical and disease stresses in citrus plants by fluorescence spectroscopy. Appl. Opt. 2008, 47, 1922–1926. [Google Scholar] [CrossRef]
  14. Daley, P.F. Chlorophyll fluorescence analysis and imaging in plant stress and disease. Can. J. Plant Pathol. 1995, 17, 167–173. [Google Scholar] [CrossRef]
  15. Nilsson, H.E. Hand-held radiometry and IR-thermography of plant diseases in field plot experiments†. Int. J. Remote Sens. 1991, 12, 545–557. [Google Scholar] [CrossRef]
  16. Oerke, E.; Steiner, U.; Dehne, H.; Lindenthal, M. Thermal imaging of cucumber leaves affected by downy mildew and environmental conditions. J. Exp. Bot. 2006, 57, 2121–2132. [Google Scholar] [CrossRef]
  17. Wang, M.; Ling, N.; Dong, X.; Zhu, Y.; Shen, Q.; Guo, S. Thermographic visualization of leaf response in cucumber plants infected with the soil-borne pathogen Fusarium oxysporum f. sp. cucumerinum. Plant Physiol. Biochem. 2012, 61, 153–161. [Google Scholar] [CrossRef]
  18. Sandmann, M.; Grosch, R.; Graefe, J. The Use of Features from Fluorescence, Thermography, and NDVI Imaging to Detect Biotic Stress in Lettuce. Plant Dis. 2018, 102, 1101–1107. [Google Scholar] [CrossRef] [Green Version]
  19. Wang, L.; Duan, Y.; Zhang, L.; Rehman, T.U.; Ma, D.; Jin, J. Precise Estimation of NDVI with a Simple NIR Sensitive RGB Camera and Machine Learning Methods for Corn Plants. Sensors 2020, 20, 3208. [Google Scholar] [CrossRef]
  20. Pérez-Bueno, M.L.; Pineda, M.; Vida, C.; Fernández-Ortuño, D.; Torés, J.A.; de Vicente, A.; Cazorla, F.M.; Barón, M. Detection of White Root Rot in Avocado Trees by Remote Sensing. Plant Dis. 2019, 103, 1119–1125. [Google Scholar] [CrossRef]
  21. Raikes, C.; Burpee, L.L. Use of multispectral radiometry for assessment of Rhizoctonia Blight in Creeping Bentgrass. Phytopathology 1998, 88, 446–449. [Google Scholar] [CrossRef] [Green Version]
  22. Cui, D.; Zhang, Q.; Li, M.; Hartman, G.L.; Zhao, Y. Image processing methods for quantitatively detecting soybean rust from multispectral images. Biosyst. Eng. 2010, 107, 186–193. [Google Scholar] [CrossRef]
  23. Mahlein, A.-K.; Oerke, E.C.; Steiner, U.; Dehne, H.W. Recent advances in sensing plant diseases for precision crop protection. Eur. J. Plant Pathol. 2012, 133, 197–209. [Google Scholar] [CrossRef]
  24. Steddom, K.; McMullen, M.; Schatz, B.; Rush, C.M. Comparing Image Format and Resolution for Assessment of Foliar Diseases of Wheat. Plant Health Prog. 2005, 6, 11. [Google Scholar] [CrossRef] [Green Version]
  25. Bock, C.H.; Parker, P.E.; Cook, A.Z.; Gottwald, T.R. Visual Rating and the Use of Image Analysis for Assessing Different Symptoms of Citrus Canker on Grapefruit Leaves. Plant Dis. 2008, 92, 530–541. [Google Scholar] [CrossRef] [Green Version]
  26. Kwack, M.S.; Kim, E.N.; Lee, H.; Kim, J.-W.; Chun, S.-C.; Kim, K.D. Digital image analysis to measure lesion area of cucumber anthracnose by Colletotrichum orbiculare. J. Gen. Plant Pathol. 2005, 71, 418–421. [Google Scholar] [CrossRef]
  27. Peressotti, E.; Duchêne, E.; Merdinoglu, D.; Mestre, P. A semi-automatic non-destructive method to quantify grapevine downy mildew sporulation. J. Microbiol. Methods 2011, 84, 265–271. [Google Scholar] [CrossRef]
  28. Kirk, K.R. Batch Load Image Processor; v.1.1.; Clemson University: Clemson, SC, USA, 2022. [Google Scholar]
  29. Muthukrishnan, R.; Radha, M. Edge detection techniques for image segmentation. Int. J. Comput. Sci. Inf. Technol. 2011, 3, 259–267. [Google Scholar] [CrossRef]
  30. Kamilaris, A.; Prenafeta-Boldú, F.X. Deep learning in agriculture: A survey. Comput. Electron. Agric. 2018, 147, 70–90. [Google Scholar] [CrossRef] [Green Version]
  31. Loddo, A.; Di Ruberto, C.; Vale, A.M.P.G.; Ucchesu, M.; Soares, J.M.; Bacchetta, G. An effective and friendly tool for seed image analysis. Vis. Comput. 2022, 1–18. [Google Scholar] [CrossRef]
  32. Brahimi, M.; Boukhalfa, K.; Moussaoui, A. Deep learning for tomato diseases: Classification and symptoms visualization. Appl. Artif. Intell. 2017, 31, 299–315. [Google Scholar] [CrossRef]
  33. Wang, X.; Zhang, M.; Zhu, J.; Geng, S. Spectral prediction of Phytophthora infestans infection on tomatoes using artificial neural network (ANN). Int. J. Remote Sens. 2008, 29, 1693–1706. [Google Scholar] [CrossRef]
  34. Sanyal, P.; Patel, S.C. Pattern recognition method to detect two diseases in rice plants. Imaging Sci. J. 2008, 56, 319–325. [Google Scholar] [CrossRef]
  35. Lu, J.; Hu, J.; Zhao, G.; Mei, F.; Zhang, C. An in-field automatic wheat disease diagnosis system. Comput. Electron. Agric. 2017, 142, 369–379. [Google Scholar] [CrossRef] [Green Version]
  36. Barman, U.; Choudhury, R.D.; Sahu, D.; Barman, G.G. Comparison of convolution neural networks for smartphone image based real time classification of citrus leaf disease. Comput. Electron. Agric. 2020, 177, 105661. [Google Scholar] [CrossRef]
  37. Plantix, Version 3.6.0. Mobile Application Software. PEAT GmbH: Berlin, Germany, 2015. Available online: https://plantix.net/en/ (accessed on 15 March 2022).
Figure 1. Examples of canopy and leaflet images from different angles, resolution quality, and lighting included for developing the models.
Figure 2. Symptoms used to build peanut diagnostic models and their causal agents: (A). Paraquat injury, abiotic, (B). Healthy, (C). Hopperburn, Empoasca fabae, (D). Tomato spotted wilt, Tomato spotted wilt virus, (E). Late leaf spot, Nothopassalora personata, (F). Provost injury, abiotic, and (G). Surfactant injury, abiotic.
Table 1. Color clusters collected from healthy and symptomatic peanut images that were added to Batch Load Image Processor v.1.1 software as the Hope Color Index for analysis of the images used to develop the diagnostic algorithms.
HCI Color a | Red | Green | Blue
0 | 185 | 152 | 49
1 | 81 | 116 | 60
2 | 50 | 37 | 41
3 | 159 | 201 | 81
4 | 193 | 204 | 72
5 | 12941736
6 | 150 | 168 | 137
7 | 209 | 195 | 114
8 | 96 | 73 | 75
9 | 237 | 227 | 49
10 | 231 | 238 | 218
11 | 202 | 213 | 155
12 | 152 | 111 | 104
a Final 13 distinct color groupings based on K-means cluster analysis of over 70 unique colors (RGB) specific to peanut leaves.
Table 2. Performance of paraquat injury, healthy, hopperburn, late leaf spot, Provost injury, surfactant injury, and tomato spotted wilt diagnostic models.
Model | Accuracy (%) a | Sensitivity (%) b | Specificity (%) c
Paraquat injury | 98.7 | 86.6 | 99.3
Healthy | 91.6 | 76.3 | 95.8
Hopperburn | 96.3 | 70.0 | 98.6
Late leaf spot | 89.4 | 71.7 | 94.9
Provost injury | 97.8 | 93.1 | 98.7
Surfactant injury | 94.3 | 91.3 | 96.1
Tomato spotted wilt | 95.2 | 73.5 | 98.1
a Accuracy is defined as the percentage of images that were correctly classified. b Sensitivity is defined as the proportion of events correctly classified as an event, or true positives. c Specificity is defined as the proportion of non-event images being correctly classified as a non-event, or true negatives.
Table 3. The total number of images, the number of event images and the accuracy of each pixel size class by model.
Model | Size Class a | Total Images | Event Images b | Accuracy (%) c
Paraquat injury | Class 1 | 377 | 4 | 94.2
 | Class 2 | 164 | 11 | 95.7
 | Class 3 | 99 | 11 | 85.9
 | Class 4 | 201 | 20 | 96.0
Healthy | Class 1 | 377 | 169 | 86.7
 | Class 2 | 164 | 14 | 82.3
 | Class 3 | 99 | 6 | 72.7
 | Class 4 | 201 | 8 | 82.1
Hopperburn | Class 1 | 377 | 26 | 95.2
 | Class 2 | 164 | 6 | 84.1
 | Class 3 | 99 | 9 | 72.7
 | Class 4 | 201 | 23 | 83.1
Late leaf spot | Class 1 | 377 | 115 | 82.0
 | Class 2 | 164 | 42 | 75.6
 | Class 3 | 99 | 12 | 58.6
 | Class 4 | 201 | 31 | 76.6
Provost injury | Class 1 | 377 | 36 | 95.2
 | Class 2 | 164 | 28 | 82.9
 | Class 3 | 99 | 19 | 65.7
 | Class 4 | 201 | 49 | 92.5
Surfactant injury | Class 1 | 377 | 44 | 89.7
 | Class 2 | 164 | 77 | 78.7
 | Class 3 | 99 | 60 | 61.6
 | Class 4 | 201 | 124 | 78.1
Tomato spotted wilt | Class 1 | 377 | 19 | 90.5
 | Class 2 | 164 | 24 | 81.7
 | Class 3 | 99 | 12 | 71.7
 | Class 4 | 201 | 27 | 93.0
a Class 1 includes images with < 50,000 pixels, Class 2 images have > 50,000 and < 100,000 pixels, Class 3 images have > 100,000 and < 150,000 pixels, and Class 4 images have > 150,000 pixels. b Event images are defined as images that contain the respective symptomology for the model. c Accuracy is defined as the percentage of images that were correctly classified.
Table 4. The total number of images, the number of event images and the accuracy of each brightness class by model.
Model | Brightness Class a | Total Images | Event Images b | Accuracy (%) c
Paraquat injury | Class 1 | 125 | 1 | 84.0
 | Class 2 | 259 | 4 | 92.7
 | Class 3 | 352 | 31 | 96.6
 | Class 4 | 105 | 10 | 83.8
Healthy | Class 1 | 125 | 16 | 72.8
 | Class 2 | 259 | 66 | 83.8
 | Class 3 | 352 | 87 | 87.5
 | Class 4 | 105 | 28 | 66.7
Hopperburn | Class 1 | 125 | 9 | 74.4
 | Class 2 | 259 | 25 | 83.8
 | Class 3 | 352 | 27 | 90.3
 | Class 4 | 105 | 3 | 76.2
Late leaf spot | Class 1 | 125 | 34 | 61.6
 | Class 2 | 259 | 51 | 81.5
 | Class 3 | 352 | 80 | 83.0
 | Class 4 | 105 | 35 | 54.3
Provost injury | Class 1 | 125 | 42 | 72.8
 | Class 2 | 259 | 35 | 93.1
 | Class 3 | 352 | 47 | 92.6
 | Class 4 | 105 | 8 | 87.6
Surfactant injury | Class 1 | 125 | 62 | 67.2
 | Class 2 | 259 | 93 | 85.3
 | Class 3 | 352 | 123 | 91.8
 | Class 4 | 105 | 27 | 72.4
Tomato spotted wilt | Class 1 | 125 | 5 | 86.4
 | Class 2 | 259 | 26 | 92.3
 | Class 3 | 352 | 40 | 88.1
 | Class 4 | 105 | 11 | 76.2
a Class 1 includes images with brightness values < 150; Class 2 images have brightness values > 150 and < 190; Class 3 images have brightness values > 190 and < 240; and Class 4 images have brightness values > 240. b Event images are defined as images that contain the respective symptomology for the model. c Accuracy is defined as the percentage of images that were correctly classified.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
