Techniques for Canopy to Organ Level Plant Feature Extraction via Remote and Proximal Sensing: A Survey and Experiments
Abstract
1. Introduction
1.1. Background and Motivation
1.2. Scope and Objectives
1.3. Contributions of This Paper
1.4. Structure of the Paper
2. Overview of Plant Phenotyping Using Remote/Proximal Sensing Technologies
2.1. Satellite-Based Plant Feature Extraction
2.1.1. Multispectral and Hyperspectral Imaging with Respect to Satellites
2.1.2. Thermal Imaging with Respect to Satellites
2.1.3. Synthetic Aperture Radar (SAR)
2.1.4. LiDAR (Light Detection and Ranging)
2.2. UAV-Based Plant Feature Extraction
2.2.1. Multispectral and Hyperspectral Imaging with Respect to UAVs
2.2.2. Thermal Imaging with Respect to UAVs
2.2.3. RGB Imaging and Structure from Motion (SfM)
2.2.4. LiDAR and Depth Sensing
2.3. Phenotyping Scope in Remote and Proximal Sensing
2.3.1. Plant-Level Phenotyping: Plant Height, Biomass Estimation, Plant Health Monitoring
2.3.2. Organ-Level Phenotyping: Leaf Area Index, Stem Thickness, and Flower and Fruit Counting
3. Technical Progress in 3D Point Cloud Classification in Remote Sensing
3.1. Conventional Methods
3.2. Deep Learning Approaches
3.3. Deep Learning Advances in 3D Point Cloud Processing
3.4. Progress in Performance (Complexity/Data Size)
- Processing Time—the time it takes to process a single point cloud frame.
- Frame Rate—the number of point cloud frames that can be processed per second.
- Data Size—the size of a single point cloud frame.
- Storage Requirements—the amount of storage needed to store one minute of point cloud data (see the sketch after this list).
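To make these four metrics concrete, the following is a minimal profiling sketch in Python (NumPy assumed). The synthetic frame generator, the 50,000-point frame size, and the placeholder `process_fn` are illustrative assumptions, not code from any of the surveyed pipelines.

```python
import time
import numpy as np

def profile_point_cloud(frames, process_fn):
    """Estimate the four metrics above for a per-frame processing function.

    frames     -- iterable of (N, 3) float32 arrays, one point cloud frame each
    process_fn -- any per-frame routine (e.g., a segmentation model); here a
                  placeholder, not a method from the survey
    """
    times, sizes = [], []
    for frame in frames:
        start = time.perf_counter()
        process_fn(frame)
        times.append(time.perf_counter() - start)  # processing time per frame
        sizes.append(frame.nbytes)                 # data size of one raw frame

    processing_time = float(np.mean(times))        # seconds per frame
    frame_rate = 1.0 / processing_time             # frames processed per second
    data_size = float(np.mean(sizes))              # bytes per frame
    # Storage for one minute of data, assuming capture at the achieved rate.
    storage_per_minute = data_size * frame_rate * 60.0
    return processing_time, frame_rate, data_size, storage_per_minute

# Usage: 100 synthetic frames of 50,000 points with a trivial "processing" step.
frames = [np.random.rand(50_000, 3).astype(np.float32) for _ in range(100)]
print(profile_point_cloud(frames, lambda f: f.mean(axis=0)))
```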
3.5. Exploring Point Cloud Segmentation via PointNet and SVM
3.5.1. Data Processing
3.5.2. Methodology
3.5.3. Results
4. Plant Feature Extraction via 2D and 3D Data Modalities
4.1. Two-Dimensional (2D) Image-Based Plant Organ-Level Feature Extraction
4.1.1. Color Space Analysis and Conversion
4.1.2. Shape Analysis and Morphological Features
4.1.3. Segmentation Techniques
4.1.4. Deep Learning for 2D Image-Based Feature Extraction
4.2. Three-Dimensional (3D) Point Cloud-Based Plant Feature Extraction
5. Remote Sensing and Plant Phenotyping: Insights
5.1. Canopy-Level Remote Sensing Data Collection and Analysis
5.2. Plant-Level 3D Data Collection for Phenotyping
5.3. Two-Dimensional (2D) Image-Based Organ-Level Feature Extraction
5.3.1. Image Annotation and Segmentation
5.3.2. Model Training and Implementation
5.3.3. Future Directions in 2D Analysis
5.4. Organ-Level Feature Extraction Using High-Resolution 3D Data
6. Discussion
7. Conclusions and Future Outlook
Author Contributions
Funding
Conflicts of Interest
References
- Sishodia, R.P.; Ray, R.L.; Singh, S.K. Applications of Remote Sensing in Precision Agriculture: A Review. Remote Sens. 2020, 12, 3136. [Google Scholar] [CrossRef]
- Liu, J.; Zhou, Z.; Li, B. Editorial: Remote Sensing for Field-Based Crop Phenotyping. Front. Plant Sci. 2024, 15, 1368694. [Google Scholar] [CrossRef]
- Araus, J.L.; Cairns, J.E. Field high-throughput phenotyping: The new crop breeding frontier. Trends Plant Sci. 2014, 19, 52–61. [Google Scholar] [CrossRef] [PubMed]
- Zero, E.; Sacile, R.; Trinchero, D.; Fossa, M. Smart Sensors and Smart Data for Precision Agriculture: A Review. Sensors 2024, 24, 2647. [Google Scholar] [CrossRef] [PubMed]
- Ghahremani, M.; Williams, K. Direct and accurate feature extraction from 3D point clouds of plants using RANSAC. Comput. Electron. Agric. 2022, 184, 40–47. [Google Scholar] [CrossRef]
- Huu, T.N.; Lee, S. Development of volumetric image descriptor for urban object classification using 3D LiDAR based on convolutional neural network. In Proceedings of the 2022 22nd International Conference on Control, Automation, and Systems, Seoul, Republic of Korea, 27–30 November 2022; pp. 1442–1448. [Google Scholar]
- Weyler, J.; Milioto, A.; Falck, T.; Behley, J.; Stachniss, C. Joint plant instance detection and leaf count estimation for in-field plant phenotyping. IEEE Robot. Autom. Lett. 2021, 6, 314–321. [Google Scholar] [CrossRef]
- Xin, B.; Sun, J.; Bartholomeus, H.; Kootstra, G. 3D data-augmentation methods for semantic segmentation of tomato plant parts. Front. Plant Sci. 2023, 14, 1045545. [Google Scholar] [CrossRef]
- Munisami, T.; Ramsurn, M.; Kishnah, S.; Pudaruth, S.C. Plant leaf recognition using shape features and colour histogram with K-nearest neighbour classifiers. Procedia Comput. Sci. 2015, 58, 740–747. [Google Scholar] [CrossRef]
- Giang, T.T.H.; Ryoo, Y.-J. Sweet pepper leaf area estimation using semantic 3D point clouds based on semantic segmentation neural network. AgriEngineering 2024, 6, 645–656. [Google Scholar] [CrossRef]
- Kalyoncu, C.; Toygar, Ö. GTCLC: Leaf classification method using multiple descriptors. IET Comput. Vis. 2016, 10, 700–708. [Google Scholar] [CrossRef]
- Fuentes-Peñailillo, F.; Gutter, K.; Vega, R.; Carrasco Silva, G. Transformative Technologies in Digital Agriculture: Leveraging IoT, Remote Sensing, and AI for Smart Crop Management. J. Sens. Actuator Netw. 2024, 13, 39. [Google Scholar] [CrossRef]
- Xu, Y.; Shrestha, V.; Piasecki, C.; Wolfe, B.; Hamilton, L.; Millwood, R.J.; Mazarei, M.; Stewart, C.N. UAV-Based Remote Sensing for Automated Phenotyping of Field-Grown Switchgrass: Modeling Sustainability Traits. Plants 2024, 10, 2726. [Google Scholar] [CrossRef]
- Borah, K.; Das, H.S.; Seth, S. A review on advancements in feature selection and feature extraction for high-dimensional NGS data analysis. Funct. Integr. Genom. 2024, 24, 139. [Google Scholar] [CrossRef] [PubMed]
- Wang, C.; Liu, B.; Liu, L. A review of deep learning used in the hyperspectral image analysis for agriculture. Artif. Intell. Rev. 2021, 54, 5205–5253. [Google Scholar] [CrossRef]
- Pineda, M.; Barón, M.; Pérez-Bueno, M.-L. Thermal Imaging for Plant Stress Detection and Phenotyping. Remote Sens. 2021, 13, 68. [Google Scholar] [CrossRef]
- Dong, L.; Jiao, N.; Zhang, T.; Liu, F.; You, H. GPU Accelerated Processing Method for Feature Point Extraction and Matching in Satellite SAR Images. Appl. Sci. 2024, 14, 1528. [Google Scholar] [CrossRef]
- Farhan, S.M.; Yin, J.; Chen, Z.; Memon, M.S. A Comprehensive Review of LiDAR Applications in Crop Management for Precision Agriculture. Sensors 2024, 24, 5409. [Google Scholar] [CrossRef]
- Gano, B.; Bhadra, S.; Vilbig, J.M.; Ahmed, N.; Sagan, V.; Shakoor, N. Drone-based imaging sensors, techniques, and applications in plant phenotyping for crop breeding: A comprehensive review. Plant Phenome J. 2024, 7, e20100. [Google Scholar] [CrossRef]
- Wen, T.; Li, J.-H.; Wang, Q.; Gao, Y.; Hao, G.-F.; Song, B.-A. Thermal imaging: The digital eye facilitates high-throughput phenotyping traits of plant growth and stress responses. Sci. Total Environ. 2023, 899, 165626. [Google Scholar] [CrossRef]
- Yuan, H.; Bennett, R.S.; Wang, N.; Chamberlin, K. Development of a Peanut Canopy Measurement System Using a Ground-Based LiDAR Sensor. Front. Plant Sci. 2019, 10, 203. [Google Scholar] [CrossRef]
- Zhou, J.; Li, F.; Wang, X.; Yin, H.; Zhang, W.; Du, J.; Pu, H. Hyperspectral and Fluorescence Imaging Approaches for Nondestructive Detection of Rice Chlorophyll. Plants 2024, 13, 1270. [Google Scholar] [CrossRef] [PubMed]
- Ijaz, S.; Ul Haq, I.; Babar, M. Fluorescent Imaging System-Based Plant Phenotyping for Disease Recognition. In Trends in Plant Disease Assessment; Ul Haq, I., Ijaz, S., Eds.; Springer: Singapore, 2022; pp. 189–206. [Google Scholar] [CrossRef]
- Rouse, J.W.; Haas, R.H.; Schell, J.A. Monitoring vegetation systems in the Great Plains with ERTS. NASA Spec. Publ. 1974, 351, 309. [Google Scholar]
- Vekerdy, Z.; Laake, P.E.; Timmermans, J.; Dost, R. Canopy Structural Modeling Based on Laser Scanning; International Institute for Geo-Information Science and Earth Observation: Enschede, The Netherlands, 2007. [Google Scholar]
- Gutierrez, M.; Reynolds, M.P.; Klatt, A.R. Association of water spectral indices with plant and soil water relations in contrasting wheat genotypes. J. Exp. Bot. 2010, 61, 3291–3303. [Google Scholar] [CrossRef] [PubMed]
- Musungu, K.; Dube, T.; Smit, J.; Shoko, M. Using UAV multispectral photography to discriminate plant species in a seep wetland of the Fynbos Biome. Wetl. Ecol. Manag. 2024, 32, 207–227. [Google Scholar] [CrossRef]
- Jiang, Z.; Huete, A.; Didan, K.; Miura, T. Development of a two-band enhanced vegetation index without a blue band. Remote Sens. Environ. 2008, 112, 3833–3845. [Google Scholar] [CrossRef]
- Huete, A.R. A soil-adjusted vegetation index (SAVI). Remote Sens. Environ. 1988, 25, 295–309. [Google Scholar] [CrossRef]
- Sharma, L.K.; Bu, H.; Denton, A.; Franzen, D.W. Active-optical sensors using red NDVI compared to red edge NDVI for prediction of corn grain yield in North Dakota, U.S.A. Sensors 2015, 15, 27832–27853. [Google Scholar] [CrossRef]
- Peñuelas, J.; Garbulsky, M.F.; Filella, I. Photochemical reflectance index (PRI) and remote sensing of plant CO₂ uptake. New Phytol. 2011, 191, 596–599. [Google Scholar] [CrossRef]
- Coswosk, G.G.; Gonçalves, V.M.L.; de Lima, V.J.; de Souza, G.A.R.; Junior, A.T.D.A.; Pereira, M.G.; de Oliveira, E.C.; Leite, J.T.; Kamphorst, S.H.; de Oliveira, U.A.; et al. Utilizing Visible Band Vegetation Indices from Unmanned Aerial Vehicle Images for Maize Phenotyping. Remote Sens. 2024, 16, 3015. [Google Scholar] [CrossRef]
- McFeeters, S.K. The use of the normalized difference Water Index (NDWI) in the delineation of open water features. Int. J. Remote Sens. 1996, 17, 1425–1432. [Google Scholar] [CrossRef]
- Tucker, C.J. Red and photographic infrared linear combinations for monitoring vegetation. Remote Sens. Environ. 1979, 8, 127–150. [Google Scholar] [CrossRef]
- Rasmussen, J.; Ntakos, G.; Nielsen, J.; Svensgaard, J.; Poulsen, R.N.; Christensen, S. Are vegetation indices derived from consumer-grade cameras mounted on UAVs sufficiently reliable for assessing experimental plots? Eur. J. Agron. 2016, 74, 75–92. [Google Scholar] [CrossRef]
- Sripada, R.P.; Heiniger, R.W.; White, J.G.; Meijer, A.D. Aerial Color Infrared Photography for Determining Early In-Season Nitrogen Requirements in Corn. Agron. J. 2006, 98, 968–977. [Google Scholar] [CrossRef]
- Gitelson, A.A.; Viña, A.; Ciganda, V.; Rundquist, D.C.; Arkebauer, T.J. Remote estimation of canopy chlorophyll content in crops. Geophys. Res. Lett. 2005, 32, L08403. [Google Scholar] [CrossRef]
- Numbisi, F.N.; Coillie, F.V.; Wulf, R.R. Delineation of Cocoa Agroforests Using Multiseason Sentinel-1 SAR Images: A Low Grey Level Range Reduces Uncertainties in GLCM Texture-Based Mapping. ISPRS Int. J. Geo-Inf. 2019, 8, 179. [Google Scholar] [CrossRef]
- Lhermitte, E.; Hilal, M.; Furlong, R.; O’Brien, V.; Humeau-Heurtier, A. Deep Learning and Entropy-Based Texture Features for Color Image Classification. Entropy 2022, 24, 1577. [Google Scholar] [CrossRef]
- Javidan, S.M.; Banakar, A.; Rahnama, K.; Vakilian, K.A.; Ampatzidis, Y. Feature engineering to identify plant diseases using image processing and artificial intelligence: A comprehensive review. Smart Agric. Technol. 2024, 8, 100480. [Google Scholar] [CrossRef]
- Ashfaq, M.; Ahmad, W.; Shah, S.M.; Ilyas, M. Citrus Leaf Disease Detection Using Joint Scale Local Binary Pattern. Sindh Univ. J. Inf. Commun. Technol. 2021, 4, 199–206. Available online: https://sujo.usindh.edu.pk/index.php/USJICT/article/view/3292 (accessed on 21 April 2024).
- Lang, N.; Jetz, W.; Schindler, K.; Wegner, J.D. A high-resolution canopy height model of the Earth. Nat. Ecol. Evol. 2023, 7, 1778–1789. [Google Scholar] [CrossRef]
- Hasselquist, N.J.; Benegas, L.; Roupsard, O.; Malmer, A.; Ilstedt, U. Canopy cover effects on local soil water dynamics in a tropical agroforestry system: Evaporation drives soil water isotopic enrichment. Hydrol. Process. 2018, 32, 994–1004. [Google Scholar] [CrossRef]
- Maimaitijiang, M.; Sagan, V.; Sidike, P.; Maimaitiyiming, M.; Hartling, S.; Peterson, K.T.; Maw, M.J.W.; Shakoor, N.; Mockler, T.; Fritschi, F.B. Vegetation Index Weighted Canopy Volume Model (CVMVI) for soybean biomass estimation from Unmanned Aerial System-based RGB imagery. ISPRS J. Photogramm. Remote Sens. 2019, 151, 27–41. [Google Scholar] [CrossRef]
- Adak, A.; Murray, S.C.; Washburn, J.D. Deciphering temporal growth patterns in maize: Integrative modeling of phenotype dynamics and underlying genomic variations. New Phytol. 2024, 242, 121–136. [Google Scholar] [CrossRef] [PubMed]
- Zeng, L.; Wardlow, B.D.; Xiang, D.; Hu, S.; Li, D. A review of vegetation phenological metrics extraction using time-series, multispectral satellite data. Remote Sens. Environ. 2020, 237, 111511. [Google Scholar] [CrossRef]
- Sabie, R.; Bawazir, A.S.; Buenemann, M.; Steele, C.; Fernald, A. Calculating Vegetation Index-Based Crop Coefficients for Alfalfa in the Mesilla Valley, New Mexico Using Harmonized Landsat Sentinel-2 (HLS) Data and Eddy Covariance Flux Tower Data. Remote Sens. 2024, 16, 2876. [Google Scholar] [CrossRef]
- Tang, B.; Xie, W.; Meng, Q.; Moorhead, R.J.; Feng, G. Soil Moisture Estimation Using Hyperspectral Imagery Based on Metric Learning. In Proceedings of the 21st IEEE International Conference on Machine Learning and Applications (ICMLA), Nassau, Bahamas, 12–14 December 2022; pp. 1392–1396. [Google Scholar]
- Guo, Z.; Cai, D.; Zhou, Y.; Xu, T.; Yu, F. Identifying rice field weeds from unmanned aerial vehicle remote sensing imagery using deep learning. Plant Methods 2024, 20, 105. [Google Scholar] [CrossRef] [PubMed]
- Selvaraj, M.G.; Valderrama, M.; Guzman, D.; Valencia, M.; Ruiz, H.; Acharjee, A. Machine learning for high-throughput field phenotyping and image processing provides insight into the association of above and below-ground traits in cassava (Manihot esculenta Crantz). Plant Methods 2020, 16, 87. [Google Scholar] [CrossRef]
- Bhugra, S.; Srivastava, S.; Kaushik, V.; Mukherjee, P.; Lall, B. Plant Data Generation with Generative AI: An Application to Plant Phenotyping. In Applications of Generative AI; Lyu, Z., Ed.; Springer: Cham, Switzerland, 2024. [Google Scholar] [CrossRef]
- Zhang, Y.; Wu, H.; Yang, W. Forests Growth Monitoring Based on Tree Canopy 3D Reconstruction Using UAV Aerial Photogrammetry. Forests 2019, 10, 1052. [Google Scholar] [CrossRef]
- Ni, Z.; Burks, T.F.; Lee, W.S. 3D Reconstruction of Plant/Tree Canopy Using Monocular and Binocular Vision. J. Imaging 2016, 2, 28. [Google Scholar] [CrossRef]
- Bhandari, M.; Baker, S.; Rudd, J.C.; Ibrahim, A.M.H.; Chang, A.; Xue, Q.; Jung, J.; Landivar, J.; Auvermann, B. Assessing the Effect of Drought on Winter Wheat Growth Using Unmanned Aerial System (UAS)-Based Phenotyping. Remote Sens. 2021, 13, 1144. [Google Scholar] [CrossRef]
- Wu, S.; Wen, W.; Xiao, B.; Guo, X.; Du, J.; Wang, C.; Wang, Y. An accurate skeleton extraction approach from 3D point clouds of maize plants. Front. Plant Sci. 2019, 10, 248. [Google Scholar] [CrossRef]
- Yu, S.; Fan, J.; Lu, X.; Wen, W.; Shao, S.; Guo, X.; Zhao, C. Hyperspectral Technique Combined with Deep Learning Algorithm for Prediction of Phenotyping Traits in Lettuce. Front. Plant Sci. 2022, 13, 927832. [Google Scholar] [CrossRef] [PubMed]
- Zhang, H.; Ge, Y.; Xie, X.; Atefi, A.; Wijewardane, N.K.; Thapa, S. High throughput analysis of leaf chlorophyll content in sorghum using RGB, hyperspectral, and fluorescence imaging and sensor fusion. Plant Methods 2022, 18, 60. [Google Scholar] [CrossRef] [PubMed]
- Luo, B.; Sun, H.; Zhang, L.; Chen, F.; Wu, K. Advances in the tea plants phenotyping using hyperspectral imaging technology. Front. Plant Sci. 2024, 15, 1442225. [Google Scholar] [CrossRef] [PubMed]
- Yuan, T.; Xu, C.G.; Ren, Y.X.; Feng, Q.C.; Tan, Y.Z.; Li, W. Detecting the information of cucumber in greenhouse for picking based on NIR image. Spectrosc. Spectr. Anal. 2009, 29, 2054–2058. [Google Scholar] [PubMed]
- Abboud, T.; Hedjam, R.; Noumeir, R.; Berinstain, A. Segmentation of imaged plants captured by a fluorescent imaging system. In Proceedings of the 25th IEEE Canadian Conference on Electrical and Computer Engineering (CCECE), Montreal, QC, Canada, 29 April–2 May 2012; pp. 1–4. [Google Scholar]
- Janssens, O.; Vylder, J.D.; Aelterman, J.; Verstockt, S.; Philips, W.; Straeten, D.V.; Hoecke, S.V.; Walle, R.V. Leaf segmentation and parallel phenotyping for the analysis of gene networks in plants. In Proceedings of the 21st European Signal Processing Conference (EUSIPCO 2013), Marrakech, Morocco, 9–13 September 2013; pp. 1–5. [Google Scholar]
- Wang, K.; Pu, X.; Li, B. Automated Phenotypic Trait Extraction for Rice Plant Using Terrestrial Laser Scanning Data. Sensors 2024, 24, 4322. [Google Scholar] [CrossRef] [PubMed]
- Andayani, U.; Sumantri, I.B.; Arisandy, B. Classification of Zingiber Plants Based on Stomate Microscopic Images Using Probabilistic Neural Network (PNN) Algorithm. J. Phys. Conf. Ser. 2020, 1566, 012122. [Google Scholar] [CrossRef]
- Uhrmann, F.; Hügel, C.; Paris, S.; Scholz, O.; Zollhöfer, M.; Greiner, G. A Model-based Approach to Extract Leaf Features from 3D Scans. In Proceedings of the 7th International Conference on Functional-Structural Plant Models, Saariselkä, Finland, 9–14 June 2013. [Google Scholar]
- Nguyen, T.T.; Slaughter, D.C.; Max, N.; Maloof, J.N.; Sinha, N. Structured Light-Based 3D Reconstruction System for Plants. Sensors 2015, 15, 18587–18612. [Google Scholar] [CrossRef]
- Son, H.; Kim, C.; Kim, C. Fully Automated As-Built 3D Pipeline Extraction Method from Laser-Scanned Data Based on Curvature Computation. J. Comput. Civ. Eng. 2015, 29, B4014003. [Google Scholar] [CrossRef]
- Bao, Y.; Tang, L.; Schnable, P.S.; Salas-Fernandez, M.G. Infield Biomass Sorghum Yield Component Traits Extraction Pipeline Using Stereo Vision. In Proceedings of the 2016 ASABE Annual International Meeting, Orlando, FL, USA, 17–20 July 2016. [Google Scholar]
- Paulus, S. Measuring Crops in 3D: Using Geometry for Plant Phenotyping. Plant Methods 2019, 15, 103. [Google Scholar] [CrossRef]
- Li, D.; Li, J.; Xiang, S.; Pan, A. PSegNet: Simultaneous Semantic and Instance Segmentation for Point Clouds of Plants. Plant Phenomics 2022, 2022, 9787643. [Google Scholar] [CrossRef]
- Heiwolt, K.; Öztireli, C.; Cielniak, G. Statistical shape representations for temporal registration of plant components in 3D. In Proceedings of the 2023 IEEE International Conference on Robotics and Automation (ICRA), London, UK, 29 May–2 June 2023; pp. 9587–9593. [Google Scholar]
- Peng, Y.; Lin, S.; Wu, H.; Cao, G. Point Cloud Registration Based on Fast Point Feature Histogram Descriptors for 3D Reconstruction of Trees. Remote Sens. 2023, 15, 3775. [Google Scholar] [CrossRef]
- Li, J.; Li, Q.; Qiao, J.; Li, L.; Yao, J.; Tu, J. Organ-Level Instance Segmentation of Oilseed Rape at Seedling Stage Based on 3D Point Cloud. Appl. Eng. Agric. 2024, 40, 151–164. [Google Scholar] [CrossRef]
- Akhtar, M.S.; Zafar, Z.; Nawaz, R.; Fraz, M.M. Unlocking Plant Secrets: A Systematic Review of 3D Imaging in Plant Phenotyping Techniques. Comput. Electron. Agric. 2024, 222, 109033. [Google Scholar] [CrossRef]
- Kim, J.Y.; Abdel-Haleem, H. Open-Source Electronics for Plant Phenotyping and Irrigation in Controlled Environment. Smart Agric. Technol. 2022, 3, 100093. [Google Scholar] [CrossRef]
- Clarke, J.L.; Qiu, Y.; Schnable, J.C. Experimental Design for Controlled Environment High-Throughput Plant Phenotyping. Methods Mol. Biol. 2022, 2539, 57–68. [Google Scholar] [CrossRef]
- Patil, M.; Soma, S. Identification of Growth Rate of Plant Based on Leaf Features Using Digital Image Processing Techniques. Int. J. Emerg. Technol. Adv. Eng. 2013, 3, 2250–2259. [Google Scholar]
- Khirade, S.D.; Patil, A.B. Plant Disease Detection Using Image Processing. In Proceedings of the 2015 International Conference on Computing Communication Control and Automation, Pune, India, 26–27 February 2015; pp. 768–771. [Google Scholar]
- Sabrol, H.; Satish, K.V. Tomato Plant Disease Classification in Digital Images Using Classification Tree. In Proceedings of the 2016 International Conference on Communication and Signal Processing (ICCSP), Lonere, India, 26–27 December 2016; pp. 1242–1246. [Google Scholar]
- Prakash, R.M.; Saraswathy, G.; Ramalakshmi, G.; Mangaleswari, K.; Kaviya, T. Detection of Leaf Diseases and Classification Using Digital Image Processing. In Proceedings of the 2017 International Conference on Innovations in Information, Embedded and Communication Systems (ICIIECS), Coimbatore, India, 17–18 March 2017; pp. 1–4. [Google Scholar]
- Bhujade, V.G.; Sambhe, V.; Banerjee, B. Digital Image Noise Removal Towards Soybean and Cotton Plant Disease Using Image Processing Filters. Expert Syst. Appl. 2024, 246, 123031. [Google Scholar] [CrossRef]
- Jagtap, S.B.; Hambarde, S.M. Agricultural Plant Leaf Disease Detection and Diagnosis Using Image Processing Based on Morphological Feature Extraction. IOSR J. VLSI Signal Process. 2014, 4, 24–30. [Google Scholar] [CrossRef]
- Meirista, E.; Mukhlash, I.; Setiyono, B.; Suryani, D.R.; Nurvitasari, E. Classification of Plants and Weeds in Multi-Leaf Image Using Support Vector Machine Based on Leaf Shape and Texture Features. In Proceedings of the International Conference on Science and Technology (ICST 2018), Yogyakarta, Indonesia, 7–8 August 2018. [Google Scholar]
- Liang, F.; Chen, H.; Cui, S.; Yang, L.; Wu, X. Detection Method of Vegetable Maturity Based on Neural Network and Bayesian Information Fusions. In Proceedings of the Sixth International Conference on Electronics and Information Engineering, Fukuoka, Japan, 24–26 March 2023. [Google Scholar]
- Thomkaew, J.; Intakosum, S. Plant Species Classification Using Leaf Edge Feature Combination with Morphological Transformations and SIFT Key Point. J. Image Graph. 2023, 2023, 91–97. [Google Scholar] [CrossRef]
- Labellapansa, A.; Yulianti, A.; Yuliani, A.T. Segmentation of Palm Oil Leaf Disease Using Zoning Feature Extraction. In Proceedings of the Proceedings of the Second International Conference on Science, Engineering and Technology, Oxford, UK, 8–10 November 2019. [Google Scholar]
- Prasad, S.; Kumar, P.; Hazra, R.; Kumar, A. Plant Leaf Disease Detection Using Gabor Wavelet Transform. In Swarm, Evolutionary, and Memetic Computing (SEMCCO 2012); Panigrahi, B.K., Das, S., Suganthan, P.N., Nanda, P.K., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; Volume 7677, pp. 435–443. [Google Scholar] [CrossRef]
- Ahmed, A.A.N.; Haque, H.M.F.; Rahman, A.; Shatabda, S. Wavelet and Pyramid Histogram Features for Image-Based Leaf Detection. In Emerging Technologies in Data Mining and Information Security; Abraham, A., Dutta, P., Mandal, J., Bhattacharya, A., Dutta, S., Eds.; Springer: Berlin/Heidelberg, Germany, 2019; Volume 814, pp. 245–251. [Google Scholar] [CrossRef]
- Kiran, S.M.; Chandrappa, N.D. Plant Disease Identification Using Discrete Wavelet Transforms and SVM. J. Univ. Shanghai Sci. Technol. 2021, 23, 108–112. [Google Scholar]
- Bhardwaj, A.; Kaur, M.; Kumar, A. Recognition of Plants by Leaf Image Using Moment Invariant and Texture Analysis. Int. J. Innov. Appl. Stud. 2013, 3, 237–248. [Google Scholar]
- Adam, S.; Amir, A. Fruit Plant Leaf Identification Feature Extraction Using Zernike Moment Invariant (ZMI) and Methods Backpropagation. In Proceedings of the 2019 International Conference on Informatics, Multimedia, Cyber and Information System (ICIMCIS), Jakarta, Indonesia, 24–25 October 2019; pp. 225–230. [Google Scholar]
- Sulc, M.; Matas, J. Texture-Based Leaf Identification. In Proceedings of the European Conference on Computer Vision (ECCV), Zurich, Switzerland, 6–12 September 2014; pp. 185–200. [Google Scholar]
- Chaki, J.; Parekh, R.; Bhattacharya, S. Plant Leaf Recognition Using Texture and Shape Features with Neural Classifiers. Pattern Recognit. Lett. 2015, 58, 61–68. [Google Scholar] [CrossRef]
- Mohan, A.; Peeples, J. Lacunarity Pooling Layers for Plant Image Classification using Texture Analysis. In Proceedings of the 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, 16–22 June 2024; pp. 5384–5392. [Google Scholar]
- Sangeetha, M.; Kannan, S.R.; Boopathi, S.; Ramya, J.; Ishrat, M.; Sabarinathan, G. Prediction of Fruit Texture Features Using Deep Learning Techniques. In Proceedings of the 2023 4th International Conference on Smart Electronics and Communication (ICOSEC), Trichy, India, 20–22 September 2023; pp. 762–768. [Google Scholar] [CrossRef]
- Anggraini, R.A.; Wati, F.F.; Shidiq, M.J.; Suryadi, A.; Fatah, H.; Kholifah, D.N. Identification of herbal plant based on leaf image using GLCM feature and K-Means. Techno Nusa Mandiri 2020, 17, 71–78. [Google Scholar] [CrossRef]
- Saha, D.; Hamer, G.; Lee, J.Y. Development of Inter-Leaves Weed and Plant Regions Identification Algorithm using Histogram of Oriented Gradient and K-Means Clustering. In Proceedings of the International Conference on Research in Adaptive and Convergent Systems, Krakow, Poland, 20–23 September 2017. [Google Scholar]
- Jafari, A.; Mohtasebi, S.S.; Jahromi, H.E.; Omid, M. Color Feature Extraction by Means of Discriminant Analysis for Weed Segmentation. In Proceedings of the 2004 ASAE Annual Meeting, Ottawa, ON, Canada, 1–4 August 2004. [Google Scholar]
- Shigang, C.; Hongdou, C.; Fan, L.; Lili, Y.; Xing-li, W. Veins feature extraction for LED plant growth cabinet. In Proceedings of the 2016 35th Chinese Control Conference (CCC), Chengdu, China, 27–29 July 2016; pp. 4917–4920. [Google Scholar]
- Ambarwari, A.; Adrian, Q.J.; Herdiyeni, Y.; Hermadi, I. Plant species identification based on leaf venation features using SVM. TELKOMNIKA Telecommun. Comput. Electron. Control 2020, 18, 726–732. [Google Scholar] [CrossRef]
- Chen, G.; Meng, Y.; Lu, J.; Wang, D. Research on Color and Shape Recognition of Maize Diseases Based on HSV and OTSU Method. In Proceedings of the Conference on Control Technology and Applications, Buenos Aires, Argentina, 19–22 September 2016. [Google Scholar]
- Rojanarungruengporn, K.; Pumrin, S. Early Stress Detection in Plant Phenotyping Using CNN and LSTM Architecture. In Proceedings of the 2021 9th International Electrical Engineering Congress (iEECON), Pattaya, Thailand, 10–12 March 2021; pp. 389–392. [Google Scholar]
- Giuffrida, M.V.; Doerner, P.; Tsaftaris, S.A. Pheno-Deep Counter: A unified and versatile deep learning architecture for leaf counting. Plant J. 2018, 96, 880–890. [Google Scholar] [CrossRef]
- Ullah, S.; Panzarová, K.; Trtílek, M.; Lexa, M.; Máčala, V.; Neumann, K.; Altmann, T.; Hejátko, J.; Pernisová, M.; Gladilin, E. High-Throughput Spike Detection in Greenhouse Cultivated Grain Crops with Attention Mechanisms-Based Deep Learning Models. Plant Phenomics 2024, 6, 0155. [Google Scholar] [CrossRef] [PubMed]
- Li, B.; Guo, C. MASPC_Transform: A Plant Point Cloud Segmentation Network Based on Multi-Head Attention Separation and Position Code. Sensors 2022, 22, 9225. [Google Scholar] [CrossRef]
- Qi, C.; Chen, K.; Gao, J. A Vision Transformer-Based Robotic Perception for Early Tea Chrysanthemum Flower Counting in Field Environments. J. Field Robot. 2024. [Google Scholar] [CrossRef]
- Alajas, O.J.; Concepcion, R.S.; Bandala, A.A.; Sybingco, E.; Dadios, E.P.; Mendigoria, C.H.; Aquino, H.L. Grape Phaeomoniella chlamydospora Leaf Blotch Recognition and Infected Area Approximation Using Hybrid Linear Discriminant Analysis and Genetic Programming. In Proceedings of the 2022 IEEE 14th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment, and Management (HNICEM), Boracay, Philippines, 1–4 December 2022; pp. 1–6. [Google Scholar]
- Wang, L.; Huang, Y.; Hou, Y.; Shan, J.; Zhang, S. Graph Attention Convolution for Point Cloud Semantic Segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 10296–10305. [Google Scholar]
- SK, P.K.; Sumithra, M.G.; Saranya, N. Particle Swarm Optimization (PSO) with Fuzzy C-Means (PSO-FCM)–Based Segmentation and Machine Learning Classifier for Leaf Diseases Prediction. Concurr. Comput. Pract. Exp. 2021, 33, e5312. [Google Scholar] [CrossRef]
- Muthukannan, K.; Latha, P. A PSO Model for Disease Pattern Detection on Leaf Surfaces. Image Anal. Stereol. 2015, 34, 209–216. [Google Scholar] [CrossRef]
- Cristin, R.; Kumar, B.S.; Priya, C.; Kanagarathinam, K. Deep Neural Network-Based Rider-Cuckoo Search Algorithm for Plant Disease Detection. Artif. Intell. Rev. 2020, 53, 4993–5018. [Google Scholar] [CrossRef]
- Patel, S.P.; Rungta, R. Automatic Detection of Plant Leaf Disease Using K-Means Clustering and Segmentation. Int. J. Sci. Res. Eng. Technol. 2017, 6, 774–779. [Google Scholar]
- Kabir, R.; Jahan, S.; Islam, M.R.; Rahman, N.; Islam, M.R. Discriminant Feature Extraction Using Disease Segmentation for Automatic Leaf Disease Diagnosis. In Proceedings of the International Conference on Computing Advancements, Dhaka, Bangladesh, 10–12 January 2020. [Google Scholar]
- Rajani, S.; Veena, M.N. Medicinal Plants Segmentation Using Thresholding and Edge-Based Techniques. Int. J. Innov. Technol. Explor. Eng. 2019, 8, 24–30. [Google Scholar]
- Khan, M.A.; Akram, T.; Sharif, M.; Javed, K.; Raza, M.; Saba, T. An Automated System for Cucumber Leaf Diseased Spot Detection and Classification Using Improved Saliency Method and Deep Features Selection. Multimed. Tools Appl. 2020, 79, 18627–18656. [Google Scholar] [CrossRef]
- Harrap, M.J.M.; Rands, S.A.; Hempel de Ibarra, N.; Whitney, H.M. The Diversity of Floral Temperature Patterns, and Their Use by Pollinators. eLife 2017, 6, e31262. [Google Scholar] [CrossRef]
- Subramani, K.; Periyasamy, S.; Theagarajan, P. Double Line Clustering-Based Colour Image Segmentation Technique for Plant Disease Detection. Curr. Med. Imaging Rev. 2019, 15, 769–776. [Google Scholar] [CrossRef]
- Senthilkumar, C.; Kamarasan, M. An Optimal Weighted Segmentation with Hough Transform Based Feature Extraction and Classification Model for Citrus Disease. In Proceedings of the 2020 International Conference on Inventive Computation Technologies (ICICT), Coimbatore, India, 26–28 February 2020; pp. 215–220. [Google Scholar]
- Xia, C.; Lee, J.; Li, Y.; Chung, B.; Chon, T. In Situ Detection of Small-Size Insect Pests Sampled on Traps Using Multifractal Analysis. Opt. Eng. 2012, 51, 027001. [Google Scholar] [CrossRef]
- Deepika, K.C.; Ruth, I.; Keerthana, S.; Sathya Bama, B.; Avvailakshmi, S.; Vidhya, A. Robust Plant Recognition Using Graph Cut Based Flower Segmentation and PHOG Based Feature Extraction. In Proceedings of the 2012 International Conference on Machine Vision and Image Processing (MVIP), Tamil Nadu, India, 14–15 December 2012; pp. 44–47. [Google Scholar]
- Glezakos, T.J.; Tsiligiridis, T.A.; Yialouris, C.P. Piecewise Evolutionary Segmentation for Feature Extraction in Time Series Models. Neural Comput. Appl. 2012, 24, 243–257. [Google Scholar] [CrossRef]
- Koenderink, J.J.; van Doorn, A.J. Surface Shape and Curvature Scales. Image Vis. Comput. 1992, 10, 557–565. [Google Scholar] [CrossRef]
- Boissonnat, J.D.; Cazals, F. Smooth Surface Reconstruction via Natural Neighbour Interpolation of Distance Functions. In Proceedings of the Symposium on Computational Geometry (SCG '02), Barcelona, Spain, 5–7 June 2002; pp. 223–232. [Google Scholar] [CrossRef]
- Belongie, S.; Malik, J.; Puzicha, J. Shape Matching and Object Recognition Using Shape Contexts. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 509–522. [Google Scholar] [CrossRef]
- Rusu, R.B.; Marton, Z.C.; Blodow, N.; Beetz, M. Learning Informative Point Classes for the Acquisition of Object Model Maps. In Proceedings of the 10th International Conference on Control Automation Robotics and Vision (ICARCV), Hanoi, Vietnam, 17–20 December 2008. [Google Scholar]
- Rusu, R.B.; Blodow, N.; Marton, Z.C.; Beetz, M. Aligning Point Cloud Views Using Persistent Feature Histograms. In Proceedings of the 21st IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Nice, France, 22–26 September 2008. [Google Scholar]
- Rusu, R.B.; Blodow, N.; Beetz, M. Fast Point Feature Histograms (FPFH) for 3D Registration. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation (ICRA), Kobe, Japan, 12–17 May 2009; pp. 3212–3217. [Google Scholar]
- Johnson, A.E.; Hebert, M. Using Spin Images for Efficient Object Recognition in Cluttered 3D Scenes. IEEE Trans. Pattern Anal. Mach. Intell. 1999, 21, 433–449. [Google Scholar] [CrossRef]
- Benenson, R.; Omran, M.; Hosang, J.; Schiele, B. Volumetric and Multi-View CNNs for Object Classification on 3D Data. arXiv 2016, arXiv:1604.03265. [Google Scholar] [CrossRef]
- Su, H.; Maji, S.; Kalogerakis, E.; Learned-Miller, E. Multi-view Convolutional Neural Networks for 3D Shape Recognition. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 945–953. [Google Scholar] [CrossRef]
- Bruna, J.; Zaremba, W.; Szlam, A.; LeCun, Y. Spectral Networks and Locally Connected Networks on Graphs. arXiv 2013, arXiv:1312.6203. [Google Scholar]
- Leng, L.; Zhong, Z.; Wang, R.; Yang, L. Feature-Based Deep Neural Networks for 3D Data Classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 23–28 June 2014; pp. 507–516. [Google Scholar] [CrossRef]
- Qi, C.R.; Su, H.; Mo, K.; Guibas, L.J. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 652–660. [Google Scholar]
- Qi, C.R.; Yi, L.; Su, H.; Guibas, L.J. PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space. Adv. Neural Inf. Process. Syst. 2017, 30, 1–10. [Google Scholar] [CrossRef]
- Wang, Y.; Sun, Y.; Liu, Z.; Sarma, S.E.; Bronstein, M.M.; Solomon, J.M. Dynamic Graph CNN for Learning on Point Clouds. ACM Trans. Graph. 2019, 38, 146. [Google Scholar] [CrossRef]
- Xu, M.; Ding, R.; Zhao, H.; Qi, X. Paconv: Position Adaptive Convolution with Dynamic Kernel Assembling on Point Clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Virtual, 19–25 June 2021; pp. 3173–3182. [Google Scholar]
- Xiang, T.; Zhang, C.; Song, Y.; Yu, J.; Cai, W. Walk in the Cloud: Learning Curves for Point Clouds Shape Analysis. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, BC, Canada, 11–17 October 2021; pp. 915–924. [Google Scholar]
- Zhao, H.; Jiang, L.; Jia, J.; Torr, P.H.S.; Koltun, V. Point Transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; pp. 16259–16268. [Google Scholar]
- Lai, X.; Liu, J.; Jiang, L.; Wang, L.; Zhao, H.; Liu, S.; Qi, X.; Jia, J. Stratified Transformer for 3D Point Cloud Segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 8500–8509. [Google Scholar]
- Hou, J.; Dai, X.; He, Z.; Dai, A.; Nießner, M. Mask3D: Pre-Training 2D Vision Transformers by Learning Masked 3D Priors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023; pp. 13510–13519. [Google Scholar]
- Mertoğlu, K.; Şalk, Y.; Sarıkaya, S.K.; Turgut, K.; Evrenesoğlu, Y.; Çevikalp, H.; Gerek, Ö.N.; Dutağacı, H.; Rousseau, D. PLANesT-3D: A New Annotated Dataset for Segmentation of 3D Plant Point Clouds. arXiv 2024, arXiv:2407.21150. [Google Scholar]
- Liu, Z.; Tang, H.; Lin, Y.; Han, S. Point-Voxel CNN for Efficient 3D Deep Learning. arXiv 2019, arXiv:1907.03739. [Google Scholar]
- Yan, X.; Zheng, C.; Li, Z.; Wang, S.; Cui, S. PointASNL: Robust Point Clouds Processing Using Nonlocal Neural Networks with Adaptive Sampling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 5589–5598. [Google Scholar]
- Kaul, C.; Pears, N.; Manandhar, S. FatNet: A Feature-Attentive Network for 3D Point Cloud Processing. In Proceedings of the 2020 International Conference on Pattern Recognition (ICPR), Milan, Italy, 10–15 January 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 7211–7218. [Google Scholar]
- Xu, S.; Li, Y.; Zhao, J.; Zhang, B.; Guo, G. POEM: 1-bit Point-Wise Operations Based on Expectation-Maximization for Efficient Point Cloud Processing. arXiv 2021, arXiv:2111.13386. [Google Scholar]
- Rosu, R.A.; Schutt, P.; Quenzel, J.; Behnke, S. LatticeNet: Fast Point Cloud Segmentation Using Permutohedral Lattices. In Proceedings of the IEEE Conference on Robotics and Automation (ICRA), Philadelphia, PA, USA, 23–27 May 2022. [Google Scholar]
- Zhang, J.; Chen, T.; Ding, D.; Ma, Z. G-PCC++: Enhanced Geometry-Based Point Cloud Compression. In Proceedings of the 31st ACM International Conference on Multimedia (MM '23), Ottawa, ON, Canada, 29 October–3 November 2023; pp. 1352–1363. [Google Scholar] [CrossRef]
- Zeng, B.; Liu, B.; Li, H.; Liu, X.; Liu, J.; Chen, D.; Peng, W.; Zhang, B. FNeVR: Neural Volume Rendering for Face Animation. Adv. Neural Inf. Process. Syst. 2022, 35, 22451–22462. [Google Scholar]
- Poux, F.; Ponciano, J.-J. Self-Learning Ontology for Instance Segmentation of 3D Indoor Point Cloud. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, XLIII-B2, 309–316. [CrossRef]
- Van Natijne, A. GeoTiles.nl: Readymade Geodata with a Focus on the Netherlands. 2020. Available online: https://geotiles.nl/ (accessed on 18 March 2024).
- Xiao, Z.; Gao, J.; Lanyu, Z. Voxel Grid Downsampling for 3D Point Cloud Recognition. Modul. Mach. Tool Autom. Manuf. Technol. 2022, 11, 43–47. [Google Scholar]
- Lin, T.-Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal Loss for Dense Object Detection. arXiv 2017, arXiv:1708.02002. [Google Scholar] [CrossRef]
- Chawla, N.; Bowyer, K.; Hall, L.O.; Kegelmeyer, W.P. SMOTE: Synthetic Minority Over-Sampling Technique. arXiv 2002, arXiv:1106.1813. [Google Scholar] [CrossRef]
- Velastegui, R.; Yang, L.; Han, D. The Importance of Color Spaces for Image Classification Using Artificial Neural Networks: A Review. In Proceedings of the International Conference on Computational Science and Its Applications, Cagliari, Italy, 13–16 September 2021; Springer International Publishing: Cham, Switzerland; pp. 70–83. [Google Scholar]
- Al-Mashhadani, Z.; Chandrasekaran, B. Autonomous ripeness detection using image processing for an agricultural robotic system. In Proceedings of the 2020 11th IEEE Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON), New York, NY, USA, 28–31 October 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 743–748. [Google Scholar] [CrossRef]
- Rao, U.S.N. Design of automatic cotton picking robot with machine vision using image processing algorithms. In Proceedings of the 2013 International Conference on Control, Automation, Robotics and Embedded Systems (CARE), Jabalpur, India, 16–18 December 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 1–5. [Google Scholar] [CrossRef]
- Ganesan, P.; Sathish, B.S.; Vasanth, K.; Sivakumar, V.G.; Vadivel, M.; Ravi, C.N. A Comprehensive Review of the Impact of Color Space on Image Segmentation. In Proceedings of the 2019 5th International Conference on Advanced Computing & Communication Systems (ICACCS), Coimbatore, India, 15–16 March 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 962–967. [Google Scholar]
- Dewi, T.; Mulya, Z.; Risma, P.; Oktarina, Y. BLOB Analysis of an Automatic Vision-Guided System for a Fruit Picking and Placing Robot. Int. J. Comput. Vis. Robot. 2021, 11, 315–327. [Google Scholar] [CrossRef]
- Dewi, T.; Rusdianasari, R.; Kusumanto, R.D.; Siproni, S. Image Processing Application on Automatic Fruit Detection for Agriculture Industry. In Proceedings of the 5th FIRST T1 T2 2021 International Conference (FIRST-T1-T2 2021), Palembang, Indonesia, 20–21 October 2021; Atlantis Press: Dordrecht, The Netherlands, 2022; pp. 47–53. [Google Scholar]
- Bulanon, D.M.; Kataoka, T. Fruit Detection System and an End Effector for Robotic Harvesting of Fuji Apples. Agric. Eng. Int. CIGR J. 2010, 12, 203–210. [Google Scholar]
- Fernandes, L.; Shivakumar, B.R. Identification and Sorting of Objects Based on Shape and Colour Using Robotic Arm. In Proceedings of the 2020 Fourth International Conference on Inventive Systems and Control (ICISC), Coimbatore, India, 8–10 January 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 866–871. [Google Scholar]
- Zhang, Y.; Wu, L. Classification of Fruits Using Computer Vision and a Multiclass Support Vector Machine. Sensors 2012, 12, 12489–12505. [Google Scholar] [CrossRef]
- Ge, Y.; Xiong, Y.; Tenorio, G.L.; From, P.J. Fruit Localization and Environment Perception for Strawberry Harvesting Robots. IEEE Access 2019, 7, 147642–147652. [Google Scholar] [CrossRef]
- Kodagali, J.A.; Balaji, S. Computer Vision and Image Analysis Based Techniques for Automatic Characterization of Fruits: A Review. Int. J. Comput. Appl. 2012, 50, 6–12. [Google Scholar]
- Vasconcelos, G.J.Q.; Costa, G.S.R.; Spina, T.V.; Pedrini, H. Low-Cost Robot for Agricultural Image Data Acquisition. Agriculture 2023, 13, 413. [Google Scholar] [CrossRef]
- Barth, R.; IJsselmuiden, J.; Hemming, J.; Van Henten, E.J. Data Synthesis Methods for Semantic Segmentation in Agriculture: A Capsicum Annuum Dataset. Comput. Electron. Agric. 2018, 144, 284–296. [Google Scholar] [CrossRef]
- Jiao, Y.; Luo, R.; Li, Q.; Deng, X.; Yin, X.; Ruan, C.; Jia, W. Detection and Localization of Overlapped Fruits Application in an Apple Harvesting Robot. Electronics 2020, 9, 1023. [Google Scholar] [CrossRef]
- Wang, C.; Wang, H.; Han, Q.; Zhang, Z.; Kong, D.; Zou, X. Strawberry Detection and Ripeness Classification Using YOLOv8+ Model and Image Processing Method. Agriculture 2024, 14, 751. [Google Scholar] [CrossRef]
- Bu, L.; Chen, C.; Hu, G.; Sugirbay, A.; Sun, H.; Chen, J. Design and Evaluation of a Robotic Apple Harvester Using Optimized Picking Patterns. Comput. Electron. Agric. 2022, 198, 107092. [Google Scholar] [CrossRef]
- Miao, Z.; Yu, X.; Li, N.; Zhang, Z.; He, C.; Li, Z.; Deng, C.; Sun, T. Efficient Tomato Harvesting Robot Based on Image Processing and Deep Learning. Precis. Agric. 2023, 24, 254–287. [Google Scholar] [CrossRef]
- Yin, H.; Sun, Q.; Ren, X.; Guo, J.; Yang, Y.; Wei, Y.; Huang, B.; Chai, X.; Zhong, M. Development, Integration, and Field Evaluation of an Autonomous Citrus-Harvesting Robot. J. Field Robot. 2023, 40, 1363–1387. [Google Scholar] [CrossRef]
- Onishi, Y.; Yoshida, T.; Kurita, H.; Fukao, T.; Arihara, H.; Iwai, A. An Automated Fruit Harvesting Robot by Using Deep Learning. Robomech J. 2019, 6, 13. [Google Scholar] [CrossRef]
- Fujinaga, T. Strawberries Recognition and Cutting Point Detection for Fruit Harvesting and Truss Pruning. Precis. Agric. 2024, 25, 1–22. [Google Scholar] [CrossRef]
- Peng, H.; Xue, C.; Shao, Y.; Chen, K.; Xiong, J.; Xie, Z.; Zhang, L. Semantic Segmentation of Litchi Branches Using DeepLabV3+ Model. IEEE Access 2020, 8, 164546–164555. [Google Scholar] [CrossRef]
- Fujinaga, T.; Nakanishi, T. Semantic Segmentation of Strawberry Plants Using DeepLabV3+ for Small Agricultural Robot. In Proceedings of the 2023 IEEE/SICE International Symposium on System Integration (SII), Atlanta, GA, USA, 17–20 January 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1–6. [Google Scholar]
- Giang, T.T.H.; Khai, T.Q.; Im, D.Y.; Ryoo, Y.J. Fast Detection of Tomato Sucker Using Semantic Segmentation Neural Networks Based on RGB-D Images. Sensors 2022, 22, 5140. [Google Scholar] [CrossRef]
- Kalampokas, T.; Tziridis, K.; Nikolaou, A.; Vrochidou, E.; Papakostas, G.A.; Pachidis, T.; Kaburlasos, V.G. Semantic Segmentation of Vineyard Images Using Convolutional Neural Networks. In Proceedings of the 21st EANN (Engineering Applications of Neural Networks) 2020 Conference, Halkidiki, Greece, 5–7 June 2020; Springer International Publishing: Cham, Switzerland, 2020; pp. 292–303. [Google Scholar]
- Lin, G.; Tang, Y.; Zou, X.; Xiong, J.; Li, J. Guava Detection and Pose Estimation Using a Low-Cost RGB-D Sensor in the Field. Sensors 2019, 19, 428. [Google Scholar] [CrossRef]
- Hussain, M.; He, L.; Schupp, J.; Lyons, D.; Heinemann, P. Green Fruit Segmentation and Orientation Estimation for Robotic Green Fruit Thinning of Apples. Comput. Electron. Agric. 2023, 207, 107734. [Google Scholar] [CrossRef]
- Hoppe, H.; De Rose, T.; Duchamp, T.; McDonald, J.; Stuetzle, W. Surface Reconstruction from Unorganized Points. SIGGRAPH Comput. Graph. 1992, 26, 71–78. [Google Scholar] [CrossRef]
- Harandi, N.; Vandenberghe, B.; Vankerschaver, J.; Depuydt, S.; Van Messem, A. How to Make Sense of 3D Representations for Plant Phenotyping: A Compendium of Processing and Analysis Techniques. Plant Methods 2023, 19, 60. [Google Scholar] [CrossRef] [PubMed]
- Um, D. Multiple Intensity Differentiation Based 3D Surface Reconstruction with Photometric Stereo Compensation. IEEE Sens. J. 2014, 14, 1453–1458. [Google Scholar] [CrossRef]
- Beltran, D.; Basañez, L. A Comparison Between Active and Passive 3D Vision Sensors: BumblebeeXB3 and Microsoft Kinect. In Proceedings of the ROBOT2013: First Iberian Robotics Conference, Madrid, Spain, 28–29 November 2014; Springer: Cham, Switzerland, 2014; Volume 252. [Google Scholar] [CrossRef]
- Panjvani, K.; Dinh, A.V.; Wahid, K.A. LiDARPheno: A Low-Cost LiDAR-Based 3D Scanning System for Leaf Morphological Trait Extraction. Front. Plant Sci. 2019, 10, 147. [Google Scholar] [CrossRef] [PubMed]
- Patel, K.; Park, E.-S.; Lee, H.; Priya, G.G.L.; Kim, H.; Joshi, R.; Arief, M.A.A.; Kim, M.S.; Baek, I.; Cho, B.-K. Deep Learning-Based Plant Organ Segmentation and Phenotyping of Sorghum Plants Using LiDAR Point Cloud. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 8492–8507. [Google Scholar] [CrossRef]
- Zhu, Y.; Sun, G.; Ding, G.; Zhou, J.; Wen, M.; Jin, S.; Zhao, Q.; Colmer, J.; Ding, Y.; Ober, E.S.; et al. Large-scale field phenotyping using backpack LiDAR and CropQuant-3D to measure structural variation in wheat. Plant Physiol. 2021, 187, 716–738. [Google Scholar] [CrossRef]
- Han, B.; Li, Y.; Bie, Z.; Peng, C.; Huang, Y.; Xu, S. MIX-NET: Deep Learning-Based Point Cloud Processing Method for Segmentation and Occlusion Leaf Restoration of Seedlings. Plants 2022, 11, 3342. [Google Scholar] [CrossRef]
- Syed, T.N.; Jizhan, L.; Xin, Z.; Shengyi, Z.; Yan, Y.; Mohamed, S.H.A.; Lakhiar, I.A. Seedling-Lump Integrated Non-Destructive Monitoring for Automatic Transplanting with Intel RealSense Depth Camera. Artif. Intell. Agric. 2019, 3, 18–32. [Google Scholar] [CrossRef]
- Eltner, A.; Sofia, G. Structure from Motion Photogrammetric Technique. In Developments in Earth Surface Processes; Elsevier: Amsterdam, The Netherlands, 2020; Volume 23, pp. 1–24. [Google Scholar] [CrossRef]
- Iglhaut, J.; Cabo, C.; Puliti, S.; Piermattei, L.; O’Connor, J.; Rosette, J. Structure from Motion Photogrammetry in Forestry: A Review. Curr. For. Rep. 2019, 5, 155–168. [Google Scholar] [CrossRef]
- Paulus, S.; Dupuis, J.; Mahlein, A.K.; Kuhlmann, H. Surface Feature-Based Classification of Plant Organs from 3D Laser-Scanned Point Clouds for Plant Phenotyping. BMC Bioinform. 2013, 14, 238. [Google Scholar] [CrossRef]
- Rossi, R.; Leolini, C.; Costafreda-Aumedes, S.; Leolini, L.; Bindi, M.; Zaldei, A.; Moriondo, M. Performances Evaluation of a Low-Cost Platform for High-Resolution Plant Phenotyping. Sensors 2020, 20, 3150. [Google Scholar] [CrossRef] [PubMed]
- Rossi, R.; Costafreda-Aumedes, S.; Leolini, L.; Bindi, M.; Moriondo, M. Implementation of an Algorithm for Automated Phenotyping through Plant 3D-Modeling: A Practical Application on the Early Detection of Water Stress. Comput. Electron. Agric. 2022, 197, 106937. [Google Scholar] [CrossRef]
- Gao, T.; Zhu, F.; Paul, P.; Sandhu, J.; Doku, H.A.; Sun, J.; Pan, Y.; Staswick, P.; Walia, H.; Yu, H. Novel 3D Imaging Systems for High-Throughput Phenotyping of Plants. Remote Sens. 2021, 13, 2113. [Google Scholar] [CrossRef]
- Tanabata, T.; Hayashi, A.; Kochi, N.; Isobe, S. Development of a Semi-Automatic 3D Modeling System for Phenotyping Morphological Traits in Plants. In Proceedings of the 2018 IECON—44th Annual Conference of the IEEE Industrial Electronics Society, Washington, DC, USA, 21–23 October 2018; pp. 5439–5444. [Google Scholar] [CrossRef]
- Pongpiyapaiboon, S.; Tanaka, H.; Hashiguchi, M.; Hashiguchi, T.; Hayashi, A.; Tanabata, T.; Akashi, R. Development of a Digital Phenotyping System Using 3D Model Reconstruction for Zoysiagrass. Plant Phenome J. 2023, 6, e20076. [Google Scholar] [CrossRef]
- Gomathi, N.; Rajathi, K.; Mahdal, M.; Elangovan, M. Point Sampling Net: Revolutionizing Instance Segmentation in Point Cloud Data. IEEE Access 2023, 11, 2731–2740. [Google Scholar] [CrossRef]
- Roggiolani, G.; Magistri, F.; Guadagnino, T.; Behley, J.; Stachniss, C. Unsupervised Pre-Training for 3D Leaf Instance Segmentation. IEEE Robot. Autom. Lett. 2023, 8, 2212–2219. [Google Scholar] [CrossRef]
- Li, D.; Shi, G.; Li, J.; Chen, Y.; Zhang, S.; Xiang, S.; Jin, S. PlantNet: A Dual-Function Point Cloud Segmentation Network for Multiple Plant Species. ISPRS J. Photogramm. Remote Sens. 2022, 184, 243–263. [Google Scholar] [CrossRef]
- Masuda, T. Leaf Area Estimation by Semantic Segmentation of Point Cloud of Tomato Plants. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), Montreal, BC, Canada, 11–17 October 2021; pp. 1923–1931. [Google Scholar]
- Guo, R.; Xie, J.; Zhu, J.; Cheng, R.; Zhang, Y.; Zhang, X.; Gong, X.; Zhang, R.; Wang, H.; Meng, F. Improved 3D Point Cloud Segmentation for Accurate Phenotypic Analysis of Cabbage Plants Using Deep Learning and Clustering Algorithms. Comput. Electron. Agric. 2023, 211, 108014. [Google Scholar] [CrossRef]
- Wang, Y.; Liu, Q.; Yang, J.; Ren, G.; Wang, W.; Zhang, W.; Li, F. A Method for Tomato Plant Stem and Leaf Segmentation and Phenotypic Extraction Based on Skeleton Extraction and Supervoxel Clustering. Agronomy 2024, 14, 198. [Google Scholar] [CrossRef]
- Gu, W.; Wen, W.; Wu, S.; Zheng, C.; Lu, X.; Chang, W.; Xiao, P.; Guo, X. 3D Reconstruction of Wheat Plants by Integrating Point Cloud Data and Virtual Design Optimization. Agriculture 2024, 14, 391. [Google Scholar] [CrossRef]
- Imabuchi, T.; Kawabata, K. Semantic and Volumetric 3D Plant Structures Modeling Using Projected Image of 3D Point Cloud. In Proceedings of the 2024 IEEE/SICE International Symposium on System Integration (SII), Ha Long, Vietnam, 9–11 January 2024; pp. 141–146. [Google Scholar]
- Priestnall, G.; Jaafar, J.; Duncan, A. Extracting Urban Features from LiDAR Digital Surface Models. Comput. Environ. Urban Syst. 2000, 24, 65–78. [Google Scholar] [CrossRef]
- Zhao, L.; Um, D.; Nowka, K.; Landivar-Scott, J.L.; Landivar, J.; Bhandari, M. Cotton Yield Prediction Utilizing Unmanned Aerial Vehicles (UAV) and Bayesian Neural Networks. Comput. Electron. Agric. 2024, 226, 109415. [Google Scholar] [CrossRef]
- Available online: https://alexelvisbadillo.weebly.com/gvals-resources.html (accessed on 2 May 2024).
- Schunck, D.; Magistri, F.; Rosu, R.A.; Cornelißen, A.; Chebrolu, N.; Paulus, S.; Léon, J.; Behnke, S.; Stachniss, C.; Kuhlmann, H.; et al. Pheno4D: A Spatio-Temporal Dataset of Maize and Tomato Plant Point Clouds for Phenotyping and Advanced Plant Analysis. PLoS ONE 2021, 16, e0256340. [Google Scholar] [CrossRef] [PubMed]
Year | Authors | Contributions | Limitations |
---|---|---|---|
2020 | [1] | Comprehensive review of remote sensing applications in precision agriculture; highlights the integration of satellite imagery and UAVs for optimizing inputs like water and nutrients. | Focuses on existing technologies rather than proposing new methodologies; limited discussion of future innovations.
2024 | [2] | Overview of advancements in high-throughput phenotyping platforms using remote sensing; emphasizes the integration of diverse sensors like hyperspectral and 3D imaging. | Limited by regional infrastructure availability, particularly in developing regions. |
2014 | [3] | Reviews the state-of-the-art UAV remote sensing technologies for field-based crop phenotyping, emphasizing the ability to measure a wide range of phenotypic traits. | Discusses existing technologies but lacks an exploration of emerging trends and future applications.
2024 | [4] | Explores the transformative impacts of smart sensors, IoT, and AI in modern agricultural practices; discusses the integration of these technologies with remote sensing for enhanced precision. | Primarily theoretical; requires further empirical validation and field tests. |
2024 | [12] | Compares UAV, satellite, and ground-based remote sensing approaches for field-based crop phenotyping; finds UAVs superior in resolution and efficiency for large-scale breeding programs. | Limited to specific physiological traits like the canopy temperature and NDVI; does not cover broader phenotypic traits.
2024 | [13] | The first study using UAV-based remote sensing to model sustainability traits in crops like switchgrass; highlights UAVs’ potential in high-throughput phenotyping. | Focused on a specific crop (switchgrass), which may limit generalizability to other crop types. |
Criteria | Canopy-Level Phenotyping | Plant-Level Phenotyping | Plant-Organ-Level Phenotyping |
---|---|---|---|
Imaging Techniques | Multispectral Hyperspectral LiDAR Thermal SAR Photogrammetry | High-Resolution RGB Multispectral Hyperspectral LiDAR Thermal | 3D Imaging Fluorescence Hyperspectral |
Data Resolution | Medium to low (depends on the altitude and sensor; satellite and UAV systems typically have lower resolution at higher altitudes) | High (due to closer proximity in UAV and controlled environment setups (CESs)) | Very high (detailed 3D reconstruction, organ-specific data acquisition)
Phenotyping Focus | Whole canopy cover, vegetation health, biomass estimation | Individual plant shapes, growth rates, nutrient stress | Leaf area, stem diameter, fruit/flower detection, photosynthetic activity |
Common Sensors Used | RGB cameras, multispectral/hyperspectral sensors, LiDAR, thermal | RGB cameras, LiDAR, thermal, hyperspectral, multispectral | Fluorescence sensors, 3D scanners, high-resolution RGB |
Feature Extraction | NDVI, EVI, 3D canopy structure from LiDAR, thermal stress indices | Detailed 3D plant models, growth tracking, nutrient stress detection | Leaf area, stem diameter, fruit/flower detection, photosynthetic activity (fluorescence) |
Data Analysis Techniques | Spectral analysis, image segmentation, 3D modeling, time series analysis | Advanced segmentation, 3D reconstruction, thermal and spectral analysis | Deep learning for organ-level segmentation, photosynthetic efficiency analysis |
Data Collection Frequency | Periodic (depending on the satellite or UAV revisit time) | Continuous or high frequency in CES and UAV settings | On-demand or continuous for high-resolution monitoring |
Advantages | Large area coverage, whole crop monitoring, temporal analysis | Detailed individual plant monitoring, flexible for research needs | Highly precise phenotyping of organs (leaf, stem, and fruit), detailed trait extraction |
Feature Category | Feature Name | Description | Use in Phenotyping and Crop Modeling | Authors |
---|---|---|---|---|
Spectral Features | Normalized Difference Vegetation Index (NDVI) | Measures the photosynthetic capacity and vegetation health by comparing near-infrared (NIR) and red-light reflectance. | Estimates biomass, crop vigor, and health status. Used in yield prediction and growth monitoring. | [24,25] |
| Green NDVI (GNDVI) | Similar to NDVI, focuses on green light, and assesses the chlorophyll content and leaf area index. | Assesses the chlorophyll concentration, which is an indicator of the nitrogen content and crop stress. | [26,27]
| Enhanced Vegetation Index (EVI) | Corrects for atmospheric and soil background noise, useful in areas with a high biomass. | Enhances the signal in dense vegetation for better biomass and health estimations. | [28]
| Soil-Adjusted Vegetation Index (SAVI) | Adjusts for soil brightness in areas with sparse vegetation. | Improves the vegetation signal in areas where bare soil is exposed. | [29]
| Red Edge NDVI | Uses the red-edge band to detect subtle changes in the chlorophyll content and high biomass conditions. | Early detection of plant stress and monitoring of senescence stages. Strongly correlated with the plant biomass due to its sensitivity to chlorophyll variations. | [30]
| Photochemical Reflectance Index (PRI) | Measures light use efficiency (LUE) and CO₂ uptake using reflectance changes at 531 nm and 570 nm due to xanthophyll cycle activity. | Estimates gross primary productivity (GPP) and photosynthetic performance. Useful for detecting plant stress and quantifying CO₂ fixation under varying environmental conditions. | [31]
| Visible Atmospherically Resistant Index (VARI) | Assesses the physiological status of vegetation using the green, red, and blue bands. | Detects plant stress and is significantly correlated with the grain yield, flowering time, plant height, and anthocyanin concentration. Shows strong utility in maize phenotyping. | [32]
| Normalized Difference Water Index (NDWI) | Measures the water content in vegetation. | Detects drought stress and water retention in crops. | [33]
| Normalized Green–Red Difference Index (NGRDI) | Compares green and red bands to highlight vegetation health. | Effective for monitoring crop health and early stress detection. | [34]
| Enhanced NDVI Index (ENDVI) | Combines the NIR and green bands to enhance vegetation signals. | Useful for precision agriculture and assessing the nitrogen status. | [35]
| Green Ratio Vegetation Index (GRVI) | Ratio of NIR and green band reflectance to assess crop health. | Indicates chlorophyll activity and the nitrogen status. | [36]
| Chlorophyll Index Red Edge (CIRE) | Uses the red edge band to estimate the chlorophyll concentration. | Measures subtle changes in vegetation health and productivity. | [37]
| Log Red | Logarithmic transformation of red reflectance to reduce saturation effects. | Improves sensitivity to changes in vegetation cover. | [27]
| Red, Green Vegetation Index (RG) | Weighted combination of red and green bands to highlight vegetation features. | Identifies areas of crop stress and biomass estimation. | [27]
Textural Features | Gray-Level Co-occurrence Matrix (GLCM) Features | Captures texture via contrast, correlation, energy, and homogeneity. | Useful for detecting crop type variations and stress levels in vegetation. | [38] |
| Entropy | Quantitative measure of randomness in texture; higher values indicate more diverse texture elements. | Differentiates healthy from stressed crops based on the uniformity of the canopy structure. | [39,40] |
| Local Binary Patterns (LBPs) | Describes spatial texture patterns in the canopy. | Useful for differentiating crop species and identifying stress patterns. | [41] |
Structural Features | Canopy Height Model (CHM) | Derived from the digital surface model (DSM) and digital terrain model (DTM); represents plant height. | Biomass estimation, growth rate monitoring, and yield potential prediction. | [42] |
| Canopy Cover | Measures the proportion of ground covered by the canopy using thresholding techniques. | Essential for yield prediction, irrigation management, and monitoring crop density. | [43] |
| Canopy Volume | A 3D representation of the canopy structure. | Biomass estimation and modeling of crop architecture. | [44] |
| Leaf Area Index (LAI) | Measures the leaf area relative to the ground area. | Estimates the photosynthetic capacity, crop growth, and potential yield. | [27] |
Temporal Features | Growth Rate | Measures changes in the canopy height, volume, and spectral indices over time. | Useful for assessing crop growth, phenological changes, and stress responses. | [45] |
| Phenological Metrics | Key transitions such as greening, flowering, and senescence stages; combines structural and spectral features to estimate the above-ground biomass. | Tracks phenological development; critical for yield prediction and crop health monitoring. | [46] |
Environmental Features | Evapotranspiration (ET) | Measures evaporation and plant transpiration, derived from UAV and spectral data. | Used in irrigation management and water balance models. | [47] |
| Soil Moisture Content | Derived from NDWI and thermal imagery. | Estimates the soil water content and crop water stress. | [48] |
Machine Learning Features | Deep Learning-Based Features | Extracts complex patterns from high-dimensional datasets. Uses CNNs, graph-based models, and transformers to learn patterns and generate high-resolution crop imagery. | Improves feature extraction and phenotyping accuracy, and simulates realistic crop growth patterns. | [49,50] |
| GAN-Based Features | Trains generative adversarial networks (GANs) to generate synthetic images mimicking real-world growth patterns. | Creates realistic synthetic images for simulations and data augmentation. | [51] |
| Ordinary Differential Equation (ODE) and Partial Differential Equation (PDE) Methods | Model the dynamics and temporal changes of complex systems; useful for understanding and predicting physiological interactions in crops. | Facilitates the study of dynamic growth patterns, water use efficiency, and stress responses under varying environmental conditions. | [52] |
3D Reconstruction | Point Cloud Density and Distribution | Analyzes 3D point clouds for canopy structure and biomass estimation. | Provides detailed structural information for accurate phenotyping and yield estimation. | [53] |
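To make the spectral feature rows above concrete, the minimal sketch below computes NDVI, GNDVI, and SAVI from co-registered band rasters in Python. The band arrays, the epsilon guard, and the soil-adjustment factor L = 0.5 are illustrative assumptions; production pipelines should use reflectance-calibrated imagery as described in the cited works.

```python
# Minimal sketch: canopy-level spectral indices from band rasters.
# Inputs are assumed to be reflectance-calibrated arrays in [0, 1].
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red); epsilon avoids division by zero."""
    return (nir - red) / (nir + red + 1e-9)

def gndvi(nir, green):
    """GNDVI = (NIR - Green) / (NIR + Green)."""
    return (nir - green) / (nir + green + 1e-9)

def savi(nir, red, L=0.5):
    """SAVI = (1 + L) * (NIR - Red) / (NIR + Red + L); L corrects soil brightness."""
    return (1.0 + L) * (nir - red) / (nir + red + L)

# Synthetic 64 x 64 reflectance rasters stand in for real imagery
rng = np.random.default_rng(0)
nir, red, green = (rng.uniform(0.05, 0.6, (64, 64)) for _ in range(3))
print(ndvi(nir, red).mean(), gndvi(nir, green).mean(), savi(nir, red).mean())
```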
Methods/Techniques | Description | Authors |
---|---|---|
Hyperspectral Imaging | Imaging technique capturing spectral information across wavelengths for a detailed analysis. | [56,57,58] |
NIR (Near-Infrared) Imaging | Uses near-infrared light to penetrate plant tissues, revealing internal structures. | [59] |
Fluorescence Imaging | Exploits natural fluorescence from chlorophyll to assess plant health and segmentation. | [60,61] |
Terrestrial Laser Scanning/Laser Techniques | Utilizes laser scanning, including Terrestrial Laser Scanning (TLS), to create detailed 3D models of plants for precise measurements. | [62]
Microscopic Approach | Involves high-magnification imaging for detailed cellular and tissue analyses. Specialized microscopy focusing on stomata for physiological studies. | [63] |
Point Cloud | 3D representations of plants and their organs are obtained via scanning techniques like structured light, laser scanning, photogrammetry, and multiview stereo. Point cloud data are crucial for capturing precise structural information of plants and their individual organs. Extraction techniques focus on various methods such as shape modeling, statistical feature descriptors, and segmentation algorithms for analyzing morphological traits and growth patterns at the organ and plant levels. | [64,65,66,67,68,69,70,71,72,73] |
For Controlled Environments/Chambers | Enclosed environments designed to maintain stable, regulated conditions for consistent phenotyping and experimentation. Used to control variables such as temperature, humidity, light intensity, and irrigation schedules, ensuring precise environmental monitoring and data consistency. | [74,75] |
Digital Image Processing | Application of image processing techniques to enhance, segment, and classify digital images for plant feature extraction, disease detection, and growth rate analyses. Techniques often include filtering, noise removal, segmentation, and classification for plant health assessments and monitoring. | [76,77,78,79,80] |
Morphological Feature Extraction | Extracts structural features like the shape, size, leaf edges, and structures of plant organs using morphological transformations, edge feature analysis, and key point detection methods. Applied for plant disease detection, species classification, and structural phenotyping. It can also be used to determine the ripeness or maturity level of vegetables. | [81,82,83,84] |
Zoning Feature Extraction | Divides images into zones to extract localized features. | [85] |
Wavelets | Analyzes images at multiple resolutions using wavelet transformations to extract features like edges, textures, and shapes. Applied for plant disease detection, species classification, and leaf identification by capturing fine details and patterns in plant images. | [86,87,88] |
Moment Invariants | Mathematical descriptors that are invariant to image transformations such as scaling and rotation. | [89]
Zernike Moment Invariant (ZMI) | Uses Zernike polynomials for rotation-invariant feature extraction. | [90]
Textural Feature Analysis | Analyzes the texture of images to extract features like smoothness or coarseness. | [91,92,93] |
GLCM Features | Gray-Level Co-occurrence Matrix (GLCM) for a texture analysis. This is widely used for texture classification and segmentation in medical imaging, agriculture, and other fields requiring fine-grained texture analysis by computing contrast, dissimilarity, homogeneity, energy, and correlation of the matrix. | [94,95] |
Histogram of Oriented Gradients (HOG) | Captures edge and gradient structures in images for feature extraction. | [71,96]
Color Feature Extraction | Extracts features based on color information in images. | [97] |
Veins/Leaf Venation Density | Analyzes the venation patterns in leaves for species identification or health assessments. | [98,99] |
Shape Recognition—HSV and OTSU Methods | Uses the Hue, Saturation, Value (HSV) color space and Otsu’s thresholding for shape-based segmentation. | [100] |
CNN (Convolutional Neural Network) | Convolution layers capture local spatial features by applying different filters. These features are then flattened into a feature vector (excluding the fully connected layers), with the VGG16 architecture used as a feature extractor to capture visual patterns from the images. | [101]
| A related model first extracts meaningful features from plant images using ResNet50, processes these features through multiple Dense layers, and then sums the resulting values to predict the leaf count, making it a regression-based leaf counter. | [102]
LSTM (Long Short-Term Memory) | LSTM is used to process time series data, making it suitable for capturing dynamic changes over time. LSTM helps detect subtle early stress indicators that may not be evident in a single image, thereby improving early-stage stress detection. | [101] |
Attention Mechanism | Attention mechanisms are designed to selectively emphasize the most relevant features of an object while suppressing irrelevant or noisy features. Neural networks that focus on important parts of input data improve performance on tasks like classification. | [103] |
Transformers | Advanced deep learning models using self-attention mechanisms, effective in sequence modeling. | [104,105] |
Genetic Programming | Evolutionary algorithm that evolves programs or models to perform specific tasks. | [106,107] |
Particle Swarm Optimization | The Particle Swarm Optimization (PSO) algorithm enhances the accuracy and efficiency of leaf disease classification by optimizing the segmentation process. PSO aids in finding optimal cluster centers, ensuring that the best characteristics of the features are applied for separation of diseased regions. | [108,109] |
Rider Cuckoo Search Algorithm | The Rider Cuckoo Search Algorithm is a hybrid optimization technique integrating the Rider Optimization Algorithm (ROA) and Cuckoo Search (CS) to improve performance. It optimizes the training of Deep Belief Networks (DBNs) by balancing exploration and exploitation, leading to better classification results. | [110] |
Clustering and Segmentation | Groups data points based on similarity, used in the general process of partitioning an image into meaningful regions like segmentation and classification. | [111] |
Discriminant Analysis | Statistical method used to find a combination of features that separates classes of objects. | [112] |
Thresholding and Edge-Based Techniques | These techniques use intensity thresholds and edge detection for segmenting images into meaningful regions. Thresholding converts gray-scale images into binary images based on defined threshold values, while edge detection identifies boundaries between objects using first-order (e.g., Sobel and Canny) and second-order derivatives. These methods are widely used for medicinal plant image segmentation. | [113] |
Improved Saliency Method | Enhances the saliency (prominence) of objects for better segmentation by fusing the Sharif saliency-based (SHSB) method with active contour segmentation, improving the clarity of infected regions in cucumber leaves. This step aids in accurate feature extraction. | [114] |
Thermal Imaging of Flower Temperature Patterns | Captures the floral surface temperature distribution using infrared imaging, revealing contrasting temperature patterns. Analyzes distinct thermal structures within a flower to identify temperature contrasts between petals and reproductive parts. | [115]
Double Line Clustering | A clustering method that segments diseased plant regions by analyzing pairs of lines for more precise image segmentation. It is used to identify and segment diseased leaf areas in crops like tomatoes, grapes, and cucumbers. | [116] |
Hough Transform | Identifies geometric shapes (such as lines, circles, and ellipses) within digital images by mapping the image space into a parameter space. Uses a voting mechanism to extract shape details based on defined parameters, making it highly resilient to noise and incomplete boundaries. This method excels at detecting intricate shapes and patterns in noisy datasets and handles partial or broken edges effectively, ensuring reliable extraction of irregular object shapes. Ideal for shape analyses. | [117] |
Adaptive Gamma Correction | Adjusts the image brightness adaptively to enhance features. | [114,117] |
Multi-Fractal Analysis | Analyzes complex patterns that exhibit fractal properties at multiple scales using fractal dimensions to capture both the local and global characteristics of an object. The key steps include calculating singularities for each image pixel using Hölder exponents and extracting multifractal spectra such as the Hausdorff dimension for global characterization. This method is particularly effective at identifying small objects, such as insect pests, under challenging conditions like variable lighting. It involves box-counting for estimating the fractal dimension and uses techniques such as regional minima and morphological operations to isolate target regions. | [118] |
Graph-Based Methods like Pyramidal Histogram of Oriented Gradients (PHOGs) | Graph Cut segmentation is used to segment flowers from complex backgrounds by treating the image as a graph. PHOGs capture the shape of flowers using histograms of edge orientations at multiple pyramid levels, effectively representing local and global shape features for image matching. | [119] |
Piecewise Evolutionary Segmentation | The method divides time series data into smaller segments using evolutionary algorithms to find the optimal segmentation pattern, reducing dimensionality and retaining important features. This segmentation adapts dynamically using genetic algorithms to enhance classification and regression models by finding the best segmentation pattern for each problem. | [120] |
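Many of the classical 2D approaches listed above pair a color-space conversion with automatic thresholding (e.g., the HSV and Otsu row). The sketch below is a minimal version of that pipeline in OpenCV; the input file name and the choice of the saturation channel are illustrative assumptions rather than settings from the cited studies.

```python
# Minimal sketch: HSV conversion + Otsu thresholding for plant segmentation.
import cv2
import numpy as np

img = cv2.imread("leaf.jpg")                      # hypothetical BGR image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
sat = hsv[:, :, 1]                                # saturation often separates plant from soil

# Otsu automatically picks the threshold that minimizes intra-class variance
_, mask = cv2.threshold(sat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Keep only the largest connected region as the segmented shape
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    largest = max(contours, key=cv2.contourArea)
    clean = np.zeros_like(mask)
    cv2.drawContours(clean, [largest], -1, 255, thickness=cv2.FILLED)
    cv2.imwrite("mask.png", clean)
```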
Technique | Contribution | Limitation | Authors |
---|---|---|---|
Curvature-based | Estimates curvature values at each point for shape analysis and feature extraction | Can be sensitive to noise and high curvature variations | [121] |
Normal-based | Estimates surface normals at each point for shape analysis and feature extraction | Sensitive to noisy data and missing points | [107]
Spin Images | Encodes the spatial distribution of points using 2D histograms for feature description and object recognition | Sensitive to occlusions and requires precise point alignment | [127] |
Voxel-based | Discretizes the 3D space into voxels for efficient processing | May lose accuracy due to discretization | [122] |
Point Feature Histogram (PFH) | Encodes the spatial distribution of points in a local neighborhood for feature description and registration | Computationally expensive and sensitive to noise | [124,125]
Fast Point Feature Histogram (FPFH) | An improvement over the PFH with reduced computational complexity while maintaining an accurate feature description | Limited to small neighborhoods for fast computation | [126]
Shape Context | Uses 2D histograms to encode the spatial distribution, successful in cluttered environments | May require more computational resources for complex point clouds | [123] |
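Several of these handcrafted descriptors are available off the shelf, which makes the table's trade-offs easy to probe in practice. The sketch below estimates per-point normals and computes FPFH descriptors with Open3D; the file name and search radii are illustrative assumptions, not values from the cited works.

```python
# Minimal sketch: normal estimation and FPFH descriptors with Open3D.
import open3d as o3d

pcd = o3d.io.read_point_cloud("plant.ply")        # hypothetical plant scan

# Surface normals from a local neighborhood (sensitive to noise and
# missing points, as noted in the table)
pcd.estimate_normals(
    o3d.geometry.KDTreeSearchParamHybrid(radius=0.02, max_nn=30))

# Fast Point Feature Histograms: one 33-D descriptor per point
fpfh = o3d.pipelines.registration.compute_fpfh_feature(
    pcd, o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=100))
print(fpfh.data.shape)                            # (33, number_of_points)
```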
Technique | Contributions | Limitations | Authors |
---|---|---|---|
Spectral CNNs | Spectral analysis for shape recognition on meshes. Robust to isometric deformations. | Restricted to manifold meshes such as organic objects. Not easily extendable to non-isometric shapes like furniture. | [130]
Feature-based DNNs | Converts 3D data into vectors for classification. Fast and efficient processing. | Limited by the representation power of extracted features. Requires domain-specific feature engineering. | [131] |
Volumetric CNNs | Three-dimensional convolutional neural networks for shape recognition. Real-time object recognition. | Constrained by resolution due to data sparsity and computation costs. Limited to small-scale 3D data. | [128] |
Multiview CNNs | Two-dimensional convolutional neural networks for shape classification and retrieval. Efficient processing of large-scale 3D data. | Limited to shape classification and retrieval tasks. Requires multiple views of the 3D object. | [129] |
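The resolution constraint noted for volumetric CNNs follows directly from the occupancy-grid representation they consume. As a minimal sketch (the 32^3 grid size is an illustrative assumption), the code below voxelizes a point cloud into a binary volume; doubling the resolution multiplies memory by eight, which is exactly the sparsity/computation trade-off described in the table.

```python
# Minimal sketch: binary occupancy grid as a volumetric CNN input.
import numpy as np

def voxelize(points, grid=32):
    """Map an (N, 3) point cloud into a grid x grid x grid occupancy volume."""
    mins, maxs = points.min(axis=0), points.max(axis=0)
    scaled = (points - mins) / (maxs - mins + 1e-9)    # normalize to [0, 1)
    idx = np.minimum((scaled * grid).astype(int), grid - 1)
    volume = np.zeros((grid, grid, grid), dtype=np.float32)
    volume[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0      # mark occupied cells
    return volume

cloud = np.random.rand(2048, 3)                        # synthetic stand-in for a plant scan
print(voxelize(cloud).sum())                           # number of occupied voxels
```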
Technique | Contributions | Limitations | Reference |
---|---|---|---|
PointNet | Novel approach for point-wise feature extraction for 3D point cloud segmentation | Lacks local context information, making it less effective for complex plant structures and dense datasets | [132] |
PointNet++ | Extended PointNet by incorporating local neighborhood features through hierarchical grouping | Struggles with highly complex inter-point dependencies in large-scale datasets | [133] |
Point–voxel CNN | Improves execution efficiency by pooling the advantages of voxels and points | May need additional fine-tuning for optimal performance | [141] |
PointASNL | Uses nonlocal neural networks for robust point cloud processing | Can be computationally intensive and requires extensive memory | [142] |
PAConv | Utilizes a dynamic kernel construction to adapt convolution weights based on the local geometry | High complexity and increased training times | [135] |
DGCNN | Uses a graph-based approach with dynamic EdgeConv operations for local neighborhood relationship modeling | Computationally intensive and sensitive to noise in low-density regions | [134] |
CurveNet | Leverages curve-based features to capture connectivity and spatial context in curved plant structures | Vulnerable to noise, reducing its effectiveness in real-world settings | [136] |
FatNet | Feature-attentive network for 3D point cloud processing | Can be sensitive to overfitting due to its high focus on features | [143] |
POEM | Reduces storage and computing costs with a 1-bit fully connected layer (Bi-FC) | Potential loss of detail due to compression | [144] |
LatticeNet | Novel approach for 3D semantic segmentation from raw point clouds | May require significant computational resources for segmentation | [145] |
Point Transformer | Introduces self-attention for long-range feature aggregation, excelling in complex organ segmentation | The high computational cost limits real-time applications | [137] |
G-PCC++ | KNN-based linear interpolation for geometry restoration and KNN-based Gaussian distance-weighted mapping for attribute enhancement | Computational complexity may limit use in resource-constrained environments | [146]
Stratified Transformer | Window-based transformer with stratified key sampling (dense nearby points, sparse distant points), capturing long-range context for 3D point cloud segmentation | High memory and computational demands on large-scale scenes | [138]
Mask3D | Extends Mask R-CNN to 3D data using sparse convolutions, enabling robust instance segmentation of large-scale datasets | Relies on pre-computed features, increasing the preprocessing time and reducing flexibility | [139] |
FNeVR | Enhances facial details for image rendering via neural volume rendering | Specific to facial applications and might not be generalizable | [147] |
SP-LSCnet | Combines unsupervised clustering and an adaptive network for efficient segmentation using superpoints | Two-stage processing may lead to misclassification in geometrically ambiguous regions | [140] |
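The core idea behind PointNet [132], shared per-point MLPs followed by a symmetric max-pooling so the output is invariant to point ordering, is compact enough to sketch directly. The toy classifier below illustrates only that idea in PyTorch; it omits the input and feature transform networks of the full architecture, and all layer sizes are illustrative.

```python
# Minimal sketch of the PointNet idea: shared MLP + order-invariant max-pool.
import torch
import torch.nn as nn

class TinyPointNet(nn.Module):
    def __init__(self, num_classes=5):
        super().__init__()
        # Conv1d with kernel size 1 acts as an MLP shared across all points
        self.point_mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 256, 1), nn.ReLU())
        self.head = nn.Linear(256, num_classes)

    def forward(self, xyz):
        # xyz: (batch, 3, num_points)
        feats = self.point_mlp(xyz)            # (batch, 256, num_points)
        global_feat = feats.max(dim=2).values  # symmetric, order-invariant pooling
        return self.head(global_feat)          # (batch, num_classes)

logits = TinyPointNet()(torch.randn(4, 3, 1024))
print(logits.shape)                            # torch.Size([4, 5])
```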
Method | Processing Time (ms) | Frame Rate (FPS) | Data Size (MB) | Storage Requirements (MB/min)
---|---|---|---|---|
Conventional (before 2010) | 100–500 | 1–5 | 100–500 | 1000–5000 |
PointNet (2017) | 10–30 | 30–100 | 10–50 | 100–500 |
PointNet++ (2018) | 5–15 | 60–200 | 5–20 | 50–200 |
Recent Neural Network-Based Methods (2020 and later) | 1–5 | 200–500 | 1–10 | 10–50 |
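As a quick consistency check on these metrics, the frame rate is simply the reciprocal of the per-frame processing time, as the short sketch below shows for the PointNet row (the two sample values are its reported processing-time bounds).

```python
# Sanity-check sketch: frame rate as the reciprocal of processing time.
def fps_from_processing_time(ms_per_frame):
    return 1000.0 / ms_per_frame

for label, ms in [("PointNet, fast case", 10), ("PointNet, slow case", 30)]:
    print(label, f"{fps_from_processing_time(ms):.0f} FPS")
# 10 ms -> 100 FPS and 30 ms -> ~33 FPS, consistent with the 30-100 FPS entry
```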
Metric | Buildings | Ground | Vegetation | Water | Unclassified | Overall Accuracy |
---|---|---|---|---|---|---|
Precision (PointNet) | Poor | Good | Good | Poor | Poor | 78% |
Precision (SVM) | Poor | High | High | Poor | Poor | 88% |
Recall (PointNet) | Very low | High | Moderate | Very low | - | - |
Recall (SVM) | Moderate | High | High | - | - | - |
F1 score (PointNet) | Weak | Moderate | Moderate | Weak | - | -
F1 score (SVM) | Weak | High | High | - | - | -
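The SVM baseline behind these figures can be reproduced in outline with scikit-learn. In the sketch below, each point is classified from simple per-point geometric features; the feature choice (height above ground and the surface normal's z-component), the RBF kernel, and the synthetic data are illustrative assumptions rather than the exact experimental setup.

```python
# Minimal sketch: per-point SVM classification of a point cloud.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
X = rng.random((1000, 2))            # synthetic [height, normal_z] per point
y = rng.integers(0, 3, 1000)         # 0 = ground, 1 = vegetation, 2 = building

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X[:800], y[:800])
# Precision, recall, and F1 per class, as reported in the table above
print(classification_report(y[800:], clf.predict(X[800:])))
```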
Technique | Contributions | Limitations | Authors |
---|---|---|---|
Image segmentation | Divides the image into background and foreground regions of interest | | [158]
Edge detection | Sobel edge detection | Slow process that results in a noisy image |
| Prewitt edge detection: faster process | Suitable only for high-contrast, low-noise images |
| Canny edge detection: a multistage, gradient-based algorithm that is relatively insensitive to noise | The multistage algorithm results in a longer processing time |
Blob analysis (binary large object analysis) | The position of the target can be detected | |
Semantic Segmentation | Four different deep learning segmentation methods; semi-automatic annotation process | Requires a large amount of annotated images; exhaustive manual annotation process | [164]
Color Conversion | HSV color space: thresholding is used to determine ripe and turning fruit colors. Hue values for ripe tomatoes are 0–10 with saturation values of 170–256; hue values for turning fruit are 11–20 with saturation values of 150–256. | When ripe and turning fruits are grouped together, the technique can identify the fruit but cannot distinguish between the two ripeness classes; it therefore counts grouped fruits as a single fruit rather than multiple fruits. | [154]
Image Segmentation | Segments the image based on HSV values | |
Deep learning-based 2D image segmentation | Focal bottleneck transformer network (FBoT-Net) for green apple detection | | [178]
Color Conversion | Identifies ripe strawberries based on the hue (HSV format). | | [179]
Single-stage deep learning (YOLOv8) | Improves performance by combining efficient channel attention (ECA) with YOLO, enhancing the identification of ripe strawberries in complex environments (front lighting, backlighting, and occlusion). | Limited context understanding and struggles to detect small objects compared with later versions. |
Color Analysis | The required target has different color values compared with other parts of the plant. | Requires external memory for image processing | [155]
SSD | High processing speed and accuracy | Difficulty detecting fruits at the edge of the frame or farther from the camera | [171]
K-means Segmentation | Faster fruit detection | Larger images require longer running times; occlusions lead to inaccurate results. | [166]
RGB-D Image Segmentation | Real-time fruit detection (138 frames per second) | Slightly noisy results; the neural network structure is complicated, and training takes longer. | [177]
Low-cost sensor | Guava fruits were easier to detect than branches. | Some unsuccessful detections were due to sunlight. | [176]
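The HSV thresholds quoted in the color-conversion row (hue 0–10 with saturation 170–256 for ripe tomatoes; hue 11–20 with saturation 150–256 for turning fruit) map directly onto an OpenCV masking step. The sketch below assumes OpenCV's 8-bit HSV ranges (hue 0–179, saturation and value 0–255, so the upper saturation bound becomes 255) and adds an illustrative lower value bound of 50 to suppress dark pixels; the file name is hypothetical.

```python
# Minimal sketch: ripe vs. turning tomato masks from the HSV thresholds above.
import cv2
import numpy as np

img = cv2.imread("tomato.jpg")                      # hypothetical image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Lower/upper bounds: [hue, saturation, value]; the value bound is an assumption
ripe    = cv2.inRange(hsv, np.array([0, 170, 50]),  np.array([10, 255, 255]))
turning = cv2.inRange(hsv, np.array([11, 150, 50]), np.array([20, 255, 255]))

# Count candidate fruit regions per ripeness class
for name, mask in [("ripe", ripe), ("turning", turning)]:
    n, _ = cv2.connectedComponents(mask)
    print(name, n - 1, "candidate regions")         # minus the background label
```

As the table's limitation states, color thresholds alone cannot separate touching fruits; the two masks above distinguish the color classes, not individual fruit instances.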
Technique | Contributions | Limitations | Authors |
---|---|---|---|
Point Sampling Method | Improves instance segmentation in point clouds for plant growth phenotyping in agriculture | May require large datasets and computational resources | [196] |
RANSAC Extension | Extends RANSAC for complex plant morphologies, achieving high accuracy in measuring plant attributes | May be sensitive to imaging noise and cluttered backgrounds | [5] |
Self-Supervised Pre-Training | Reduces the labeling effort for 3D leaf instance segmentation in agricultural plant phenotyping | May not work as well in highly occluded environments | [197] |
LatticeNet | Data augmentation to enhance plant part segmentation, contributing to the plant architecture reconstruction | May require significant computational resources for semantic segmentation | [145] |
PlantNet | Dual-function deep learning network for semantic and instance segmentation in multiple plant species | May require additional validation for diverse plant species | [198] |
Joint Plant Instance Detection | Method for joint plant instance detection and leaf count estimations in agricultural fields | May have limited scalability to other types of plants or environments | [7] |
Leaf Area Estimation | Estimates the leaf area in tomato plants using RGB-D sensors and point cloud segmentation | Relative error of about 20%, indicating possible accuracy issues | [199] |
Robust RANSAC | Robust method for the direct analysis of plant point cloud data using RANSAC, achieving high accuracy | May not work well in highly noisy or cluttered environments | [5] |
Virtual Design-Based Reconstruction | A 3D reconstruction technique for wheat plants, integrating point cloud data with optimization algorithms | Challenges in precise reconstruction due to the lack of organ templates | [202] |
ASAP-PointNet | Method for analyzing cabbage plant phenotypes using 3D point cloud segmentation | Requires complex preprocessing and training | [200] |
Semantic and Volumetric 3D Modeling | Method for semantic and volumetric 3D modeling from point cloud data for accurate radiation dose distributions | Can be computationally expensive and requires significant memory | [203] |
Tomato Plant Segmentation | Method for segmenting tomato plant stems and leaves, extracting phenotypic parameters | Challenges with canopy adhesion and under-segmentation | [201] |
Sweet Pepper Leaf Area Estimation | Method to estimate the sweet pepper leaf area using semantic 3D point clouds generated from RGB-D images | Issues in fully capturing tall plants and varying point cloud resolutions | [10] |
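Several of the cited point cloud pipelines (e.g., the RANSAC-based methods in [5]) begin by fitting and removing a dominant plane, such as the ground or a pot surface, before organ-level analysis. The sketch below shows such a step using Open3D's built-in RANSAC plane segmentation; the file name and thresholds are illustrative assumptions.

```python
# Minimal sketch: RANSAC plane removal before plant organ segmentation.
import open3d as o3d

pcd = o3d.io.read_point_cloud("plant_scene.ply")    # hypothetical scan

# Fit a plane model (a, b, c, d) with ax + by + cz + d = 0 via RANSAC
plane, inliers = pcd.segment_plane(distance_threshold=0.01,
                                   ransac_n=3,
                                   num_iterations=1000)
ground = pcd.select_by_index(inliers)               # points on the fitted plane
plant = pcd.select_by_index(inliers, invert=True)   # everything else
print("plane:", plane, "| plant points:", len(plant.points))
```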
Aspect | Contribution | Limitation |
---|---|---|
Annotation Process | Efficient segmentation of plant parts (stems, branches, and suckers) using an annotation tool in LabelStudio | Time-consuming annotation process, approximately 2–4 min per image
Model Training | Successful training of a segmentation model with the available dataset | Training was limited to full-plant images, resulting in an F1 confidence score of 0.65
Future Directions | Plan to refine training by focusing on specific plant organs (branches, stems, and suckers) | The current dataset and training approach are insufficient for higher accuracy |