Monitoring Saffron Crops with UAVs
Abstract
1. Introduction
- Autonomous vehicles and robots that can assist or replace manual labor.
- Automated control of inputs such as water and fertilizers to lower costs for farmers and help protect the environment.
- Remote sensing technologies, such as sensors, UAVs, and satellites, that can be used for monitoring and managing soil, water, and other factors of production. These technologies can help identify factors that stress the crops, such as soil moisture, climate conditions, etc.
- Machine learning techniques and big data analytics that can be adopted to analyze the large amount of data collected in order to detect potential threats to plants, such as weeds, animals, or diseases.
- Computerized applications to create precision farming plans, field maps, crop logs, and yield maps. This allows for more precise application of inputs such as pesticides, herbicides, and fertilizers, helping to reduce costs, achieve higher yields, and practice more environmentally friendly agriculture.
2. Related Work
- This study presented, applied, and evaluated a methodology that incorporates all the steps of the field monitoring process: remote data collection with the appropriate equipment, photo processing, application of machine learning algorithms, application of the estimation models, and, finally, evaluation.
- In contrast to previous research, the target of this study was twofold: (a) the first target was to estimate a variety of field attributes (i.e., weeds, flowers, and animal intrusions), aiming to increase profits through the adoption of new technologies by ensuring that flowers are picked at an appropriate time and age, using proper collection material [12]; moreover, animal intrusions into the fields, which cause huge production losses, will be prevented, and weeds will be detected in time. (b) The second target was to examine the applicability of a variety of machine learning methods to farming estimations and to draw conclusions based on their accuracy.
3. Field Study Design Methodology
3.1. Selection of Field Operations to Be Monitored
- The animal intrusions that destroy the crops.
- The weeds that affect the growth of the plant and need to be removed.
- The identification of saffron, the main plant, which needs to be collected.
- Thus, the following research questions emerged for the pilot study performed:
- RQ1: Is it possible to estimate the existence of mammals in saffron crops with the help of UAV imagery?
- RQ2: Is it possible to estimate the existence of weeds in saffron crops with the help of UAV imagery?
- RQ3: Is it possible to estimate the production of saffron crops with the help of UAV imagery?
3.2. Site Selection
3.3. Equipment Selection
3.4. Selection of Photogrammetry Techniques
- Orthomosaic: Orthomosaics, or orthophotos (Figure 3), correct any geometric distortion that is inherent in aerial images. By using a process called orthorectification, a highly detailed map referenced to the real world can be created. Orthorectification removes perspective from each individual image to create consistency across the whole map, while keeping the same level of detail from the original image. The final product is a single mosaic built through edge matching and color balancing.
- Normalized Difference Vegetation Index (NDVI): NDVI (Figure 4) is an indicator that quantifies the vitality of vegetation from UAV-captured data. Live vegetation (where chlorophyll is present) reflects more infrared and green radiation (in the electromagnetic spectrum) than other wavelengths, while absorbing more blue and red radiation, which is why the human eye perceives vegetation as green. NDVI uses the near-infrared and red imaging channels to measure healthy vegetation. In mathematical terms, the index is defined as follows (a computation sketch follows this list):
NDVI = (NIR − RED) / (NIR + RED)
- Ground-truth image: “Ground truth” stands for the objective observation, verifiable by humans, of the state of an object or information that can be considered a fact. The term “ground truth” has recently gained popularity, as it has been adopted by machine learning and deep learning approaches. In this context, we refer to a “ground truth image”—human-generated classifications of image data on which algorithms are trained or evaluated. This image can be created through a process called annotation, where we label the objects on an image by using an appropriate software tool.
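As a minimal illustration of the NDVI formula above, the following Python sketch computes the index from two co-registered single-band rasters. This is not the authors' code: the file names are hypothetical, and the use of rasterio and numpy is an assumption (the study produced its NDVI maps through its photogrammetry toolchain).

```python
import numpy as np
import rasterio  # assumed I/O library; any raster reader would do

# Hypothetical co-registered single-band GeoTIFFs of equal dimensions.
with rasterio.open("field_nir.tif") as src:
    nir = src.read(1).astype("float64")
with rasterio.open("field_red.tif") as src:
    red = src.read(1).astype("float64")

# NDVI = (NIR - RED) / (NIR + RED); guard against zero denominators
# on empty border pixels.
denom = nir + red
ndvi = np.where(denom == 0, 0.0, (nir - red) / denom)

# Values range from -1 to 1; dense, healthy vegetation typically scores high.
print("NDVI range:", ndvi.min(), ndvi.max())
```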
3.5. Application of Machine Learning Techniques
- Pixel-based image analysis: This method classifies images pixel by pixel, using a set of rules to decide whether different pixels can be grouped according to similar characteristics. Code was developed in Python to export the RGB and NDVI values of the pixels from the images, and an Attribute-Relation File Format (ARFF) dataset was created. ARFF files are ASCII text files that describe a list of instances sharing a common set of attributes. The data that served as input were orthophotos in the RGB and NDVI spectra. In addition, a separate image was created with the annotations for each pixel, essentially representing the actual data (ground truth) of the image. The Python PIL library, which is suitable for collecting pixel values from an image file, was used for the analysis of each image (a sketch of this export step follows the list). For the image annotation, we used the Computer Vision Annotation Tool (CVAT) [25], an open-source software package.
- Object-based image analysis (OBIA): OBIA, in contrast to pixel-based analysis, which categorizes each pixel, groups neighboring pixels into common vector objects. This process uses the image segmentation method suggested by Shepherd [26], which divides the pixels into homogeneous parts. These sections/objects are then arranged in classes based on shape, spectrum, texture, and other features. In more detail, the analysis is based on objects that are built as pixel groups (raster clumps). The properties of each clump are stored in a raster attribute table (RAT), and each object is associated with an ID. For the creation, storage, display, and sorting within the RAT, the following software packages were used (a segmentation sketch follows the list):
- o Remote Sensing and GIS software library (RSGISLib): For the image segmentation.
- o GDAL: A library for translating and processing raster and vector geospatial data formats.
- o Raster I/O Simplification (RIOS): For reading, writing, and sorting the created objects.
- o KEA image format: For saving image objects with their attributes.
- o Tuiview: For viewing KEA files.
- Random Forest (RF): RF is a machine learning algorithm for solving regression and classification problems. Its name derives from its use of a large number of decision trees, which minimizes the occurrence of overfitting in any individual tree. The scipy.io library was utilized for reading the ARFF file, and scikit-learn's RandomForestClassifier function was used to apply the RF algorithm to data in pandas format. The number of trees that we used as a parameter for the function was 100 (a training sketch covering both RF and the MLP below follows this list).
- Multilayer Perceptron (MLP): MLP is a kind of artificial neural network (ANN) and specifically belongs to the deep neural network category. Artificial neural networks [27] are an attempt to approximate the human learning process. They essentially mimic biological neural networks by assigning nerve functions to a single element (neuron), which is only capable of summing its inputs and normalizing its output. Neurons are interconnected in arbitrarily complex artificial neural networks and are organized into layers: an input layer (corresponding to features), one or more hidden layers, and an output layer (corresponding to categories). The goal of the learning algorithm is to determine the weights of the connections between neurons (which are used to calculate the weighted sums in each neuron) so as to reduce the classification error rate. A neural network consisting of more than three layers constitutes a deep neural network (DNN). Scikit-learn was used for the MLP algorithm as well, through the MLPClassifier function.
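The pixel-export step described above can be sketched as follows. This is not the authors' code: the file names, the class list, and the assumption that the annotation raster encodes one class index per pixel are illustrative only.

```python
from PIL import Image

CLASSES = ["soil", "weed", "saffron", "mammal"]  # assumed class set

rgb = Image.open("orthophoto_rgb.png").convert("RGB")  # hypothetical inputs
ndvi = Image.open("orthophoto_ndvi.png").convert("L")
truth = Image.open("annotations.png").convert("L")     # one class index per pixel

with open("pixels.arff", "w") as f:
    f.write("@RELATION saffron_pixels\n")
    for name in ("red", "green", "blue", "ndvi"):
        f.write(f"@ATTRIBUTE {name} NUMERIC\n")
    f.write("@ATTRIBUTE class {" + ",".join(CLASSES) + "}\n@DATA\n")
    width, height = rgb.size
    for y in range(height):
        for x in range(width):
            r, g, b = rgb.getpixel((x, y))
            n = ndvi.getpixel((x, y))
            f.write(f"{r},{g},{b},{n},{CLASSES[truth.getpixel((x, y))]}\n")
```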
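For the object-based path, a segmentation sketch under stated assumptions: RSGISLib's Shepherd-segmentation helper has changed names across versions (newer releases expose it as rsgislib.segmentation.shepherdseg.run_shepherd_segmentation), so the call below follows the older segutils interface and should be checked against the installed version. File names and parameter values are illustrative.

```python
# Sketch only: verify function names against the installed RSGISLib version.
from rsgislib.segmentation import segutils
from rsgislib import rastergis

input_img = "orthophoto.kea"   # hypothetical input, converted to KEA via GDAL
clumps_img = "clumps.kea"      # output: one integer object ID per pixel

# Shepherd et al. segmentation: k-means clustering followed by clump
# merging; numClusters and minPxls control how fine the objects are.
segutils.runShepherdSegmentation(input_img, clumps_img,
                                 numClusters=60, minPxls=50)

# Populate the raster attribute table (RAT) so each object carries
# statistics that RIOS can read for classification.
rastergis.populateStats(clumps_img, True, True)
```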
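Training the two classifiers on the exported ARFF data can then be sketched with scipy and scikit-learn, matching the libraries named above. The 100-tree setting comes from the study; the MLP hidden-layer sizes are not reported here and are therefore assumed.

```python
import pandas as pd
from scipy.io import arff
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

# Load the ARFF dataset produced by the pixel-export step.
data, _meta = arff.loadarff("pixels.arff")
df = pd.DataFrame(data)
df["class"] = df["class"].str.decode("utf-8")  # nominal values arrive as bytes

X = df[["red", "green", "blue", "ndvi"]]
y = df["class"]

rf = RandomForestClassifier(n_estimators=100)  # 100 trees, as in the study
rf.fit(X, y)

# Hidden-layer sizes are assumed; scaling the features first would
# usually help an MLP converge.
mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)
mlp.fit(X, y)
```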
3.6. Estimations
3.7. Evaluation
- 10-Fold Cross-Validation: Cross-validation [28] is a method of estimating the performance of a trained model. It enhances the reliability of the results, helping to draw generalized, safe conclusions about the behavior of the classifier on a data set. During classification, the data set is divided into a training set and an evaluation (test) set. The general idea is that the data set is divided into equal parts and an iterative process follows in which, each time, one part is used for evaluation and the rest for training, in a circular order.
- Confusion matrix: The confusion matrix [29] is a key tool for evaluating the methods used in this study, as it gives an overall picture of the classification results. It is a table that compares the model's predictions against the ground-truth labels: each row represents the instances of a predicted class, and each column represents the instances of a real class. Its dimensions are c × c, where c is the number of categories, and it is used in the control phase of the model. After completing the matrix, we used the three evaluation metrics described below (computed in the sketch after this list):
- o Precision: The ratio of the data correctly assigned to a category to the total number of data assigned to that category (correctly or incorrectly): Precision = True Positives / Predicted Positives
- o Recall: The ratio of the data correctly assigned to a category to the data that actually belong to that category: Recall = True Positives / Actual Positives
- o Accuracy: The ratio of all correctly classified data, whether belonging to a category or not, to the total number of data: Accuracy = (True Positives + True Negatives) / All
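A sketch tying the two evaluation tools together, continuing the hypothetical rf model and X, y data from Section 3.5: cross_val_predict yields one out-of-fold prediction per instance across the 10 folds, from which the confusion matrix and the three metrics follow. Note that scikit-learn's convention places actual classes on the rows and predicted classes on the columns, the transpose of the layout described above; the macro averaging choice is likewise an assumption.

```python
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import (confusion_matrix, accuracy_score,
                             precision_score, recall_score)

# Out-of-fold predictions: each pixel is predicted by a model that never
# saw it during training, implementing the 10-fold scheme described above.
y_pred = cross_val_predict(rf, X, y, cv=10)

# Rows = actual classes, columns = predicted classes (scikit-learn order).
print(confusion_matrix(y, y_pred))

print("accuracy :", accuracy_score(y, y_pred))
print("precision:", precision_score(y, y_pred, average="macro"))
print("recall   :", recall_score(y, y_pred, average="macro"))
```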
4. Results
4.1. Pixel-Based Image Analysis
4.2. Object-Based Image Analysis
5. Discussion
5.1. Interpretation of Results
- The estimation accuracy of both the pixel-based and object-based methods was promising, with the pixel-based method presenting higher overall accuracy. The average accuracy for the Random Forest (RF) algorithm was 100%, and for the Multilayer Perceptron it was 95%. The object-based analysis, despite its weaknesses, also presented encouraging accuracy (RF at 80% and MLP at 76%).
- It is important to note that, in order to evaluate the machine learning algorithms, we need to use performance metrics beyond accuracy, such as recall. This matters because our dataset is imbalanced: the soil category outweighs the other categories (saffron, weed, and mammal). The primary measure for evaluating the algorithms is therefore the recall percentage for each category. The recall rates were at a medium level for both ML algorithms.
- Regarding the object-based analysis method, it is important to mention that it presented relatively low percentages in recognizing each class separately.
- Notably, the pixel-based method gave much better results on higher flights than the object-based method. In this regard, we estimate that the pixel-based method, although time-consuming, has the potential to improve the detection of weeds, saffron, and various types of diseases.
- The results of this study are encouraging. It is possible to estimate saffron production, detect weeds, and identify the presence of mammals through images collected by UAV. In particular, by applying these ML models to real-time data, it is possible to detect, and even predict from a single image, the exact position of weeds to be removed, the amount of production, and the total resources needed to grow and protect a saffron cultivation.
5.2. Limitations of the Study
5.3. Future Directions
- Further investigate the object-based methodology applied to this type of cultivation, so as to apply the segmentation algorithm more effectively and recognize the smallest objects in the image. In this direction, it is possible to collect much more information based on the morphology of the ground, or to investigate a more suitable parameterization of the segmentation algorithm. Additionally, future researchers can experiment with a fusion of the pixel-based and object-based methods, which might provide satisfactory results.
- Perform additional studies in saffron-crop monitoring that will take into consideration the findings of the present study in order to achieve improved estimation accuracy. Researchers can change the parameters of the monitoring process and experiment in different saffron fields, use different types of sensors to make more detailed annotations, etc.
- Compare data from UAV flights at different heights, so as to verify the accuracy of the two models and arrive at the optimum height for saffron cultivation. In this study, we experimented with three flight heights: (a) 400 m, (b) 120 m, and (c) 12 m. The first height (400 m) cannot provide useful information in the case of saffron cultivations, since the flower is very close to the ground and the image resolution is too low. The second height is ideal for performing long-distance flights that can also provide images suitable as input to the estimation models, while the third height (12 m) is ideal for acquiring images for training the estimation models. In the last case, the low height gives us access to high-resolution images that can be precisely annotated by the experts (a ground-sampling-distance sketch follows this list).
- Adopt the presented monitoring process, as the average accuracy of the produced estimation models is promising overall. Since the recall rates for animal intrusion detection were not high, cultivators are encouraged to collect more spectral information about the cultivation to determine the presence of mammals, their extent, and the amount of damage they cause. This information can include thermal images, vegetation indices for moisture, and NIR (near-infrared) or RE (red-edge) spectra.
- Give more attention to the annotation of the images, so as to avoid classifying the different types of field attributes into wrong classes. Domain experts need to carefully interpret the images and feed the related knowledge into the annotation tools that are used to train the estimation models.
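To make the flight-height discussion concrete, ground sampling distance (GSD), the ground width covered by one pixel, can be estimated with the standard photogrammetric relation below. The camera parameters used are hypothetical placeholders, not those of the study's sensors.

```python
def ground_sampling_distance(height_m: float, sensor_width_mm: float,
                             focal_length_mm: float, image_width_px: int) -> float:
    """Ground width covered by one pixel, in centimetres per pixel."""
    return (sensor_width_mm * height_m * 100) / (focal_length_mm * image_width_px)

# Hypothetical camera: 13.2 mm sensor width, 8.8 mm focal length, 5472 px wide.
for h in (12, 120, 400):
    gsd = ground_sampling_distance(h, 13.2, 8.8, 5472)
    print(f"{h:>3} m flight -> GSD ~ {gsd:.1f} cm/px")
```

Under these assumed parameters, the 400 m flight yields a per-pixel footprint roughly thirty times coarser than the 12 m flight, which is consistent with the observation that high-altitude images of ground-hugging saffron flowers carry little usable detail.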
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
1. Sobayo, R.; Wu, H.-H.; Ray, R.; Qian, L. Integration of convolutional neural network and thermal images into soil moisture estimation. In Proceedings of the 2018 1st International Conference on Data Intelligence and Security (ICDIS), South Padre Island, TX, USA, 8–10 April 2018; pp. 207–210.
2. Kumpumäki, T.; Linna, P.; Lipping, T. Crop lodging analysis from UAS orthophoto mosaic, Sentinel-2 image and crop yield monitor data. In Proceedings of the IGARSS 2018—2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 23–27 July 2018; pp. 7723–7726.
3. Tewes, A.; Schellberg, J. Towards Remote Estimation of Radiation Use Efficiency in Maize Using UAV-Based Low-Cost Camera Imagery. Agronomy 2018, 8, 16.
4. Mancini, A.; Frontoni, E.; Zingaretti, P. Improving variable rate treatments by integrating aerial and ground remotely sensed data. In Proceedings of the 2018 International Conference on Unmanned Aircraft Systems (ICUAS), Dallas, TX, USA, 12–15 June 2018; pp. 856–863.
5. Pascuzzi, S.; Anifantis, A.S.; Cimino, V.; Santoro, F. Unmanned aerial vehicle used for remote sensing on an Apulian farm in southern Italy. Eng. Rural. Dev. 2018, 17, 149–154.
6. Milas, A.S.; Romanko, M.; Reil, P.; Abeysinghe, T.; Marambe, A. The importance of leaf area index in mapping chlorophyll content of corn under different agricultural treatments using UAV images. Int. J. Remote Sens. 2018, 39, 5415–5431.
7. Lussem, U.; Bolten, A.; Gnyp, M.L.; Jasper, J.; Bareth, G. Evaluation of RGB-based vegetation indices from UAV imagery to estimate forage yield in grassland. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, 42, 1215–1219.
8. Cai, G.; Dias, J.; Seneviratne, L. A Survey of Small-Scale Unmanned Aerial Vehicles: Recent Advances and Future Development Trends. Unmanned Syst. 2014, 2, 175–199.
9. Tsouros, D.C.; Smyrlis, P.N.; Tsipouras, M.G.; Tsalikakis, D.G.; Giannakeas, N.; Tzallas, A.T.; Manousou, P. Automated collagen proportional area extraction in liver biopsy images using a novel classification via clustering algorithm. In Proceedings of the 2017 IEEE 30th International Symposium on Computer-Based Medical Systems (CBMS), Thessaloniki, Greece, 22–24 June 2017; pp. 30–34.
10. Wang, G.; Lan, Y.; Qi, H.; Chen, P.; Hewitt, A.J.; Han, Y.; Yubin, L. Field evaluation of an unmanned aerial vehicle (UAV) sprayer: Effect of spray volume on deposition and the control of pests and disease in wheat. Pest Manag. Sci. 2019, 75, 1546–1555.
11. Gresta, F.; Lombardo, G.M.; Siracusa, L.; Ruberto, G. Effect of mother corm dimension and sowing time on stigma yield, daughter corms and qualitative aspects of saffron (Crocus sativus L.) in a Mediterranean environment. J. Sci. Food Agric. 2008, 88, 1144–1150.
12. Triantafyllou, A.; Sarigiannidis, P.; Bibi, S. Precision Agriculture: A Remote Sensing Monitoring System Architecture. Information 2019, 10, 348.
13. Rasooli, M.W.; Bhushan, B.; Kumar, N. Applicability of wireless sensor networks & IoT in saffron & wheat crops: A smart agriculture perspective. Int. J. Sci. Technol. Res. 2020, 9, 2456–2461.
14. Duan, K.; Vrieling, A.; Kaveh, H.; Darvishzadeh, R. Mapping saffron fields and their ages with Sentinel-2 time series in north-east Iran. Int. J. Appl. Earth Obs. Geoinf. 2021, 102, 102398.
15. Bidgoli, R.D.; Koohbanani, H. Area estimation of saffron cultivation using satellite images and time difference method (case study: Fazl Village in Nishabur County of Iran). Environ. Resour. Res. 2020, 8, 121–128.
16. Triantafyllou, A.; Sarigiannidis, P.; Bibi, S.; Vakouftsi, F.; Vassilis, P. Modelling deployment costs of precision agriculture monitoring systems. In Proceedings of the 2020 16th International Conference on Distributed Computing in Sensor Systems (DCOSS), Los Angeles, CA, USA, 25–27 May 2020; pp. 252–259.
17. Kiropoulos, K.; Bibi, S.; Vakouftsi, F.; Pantzios, V. Precision Agriculture Investment Return Calculation Tool. In Proceedings of the 2021 17th International Conference on Distributed Computing in Sensor Systems (DCOSS), Los Angeles, CA, USA, 14–16 July 2021; pp. 267–271.
18. Tsouros, D.C.; Terzi, A.; Bibi, S.; Vakouftsi, F.; Pantzios, V. Towards a Fully Open-Source System for Monitoring of Crops with UAVs in Precision Agriculture. In Proceedings of the 24th Pan-Hellenic Conference on Informatics, Athens, Greece, 20–22 November 2020; pp. 322–326.
19. Rossi, G.; Tanteri, L.; Tofani, V.; Vannocci, P.; Moretti, S.; Casagli, N. Multitemporal UAV surveys for landslide mapping and characterization. Landslides 2018, 15, 1045–1052.
20. What Is Photogrammetry? GIS Geography, 29 October 2021. Available online: https://gisgeography.com/what-is-photogrammetry/ (accessed on 31 March 2022).
21. Moysiadis, V.; Sarigiannidis, P.; Vitsas, V.; Khelifi, A. Smart Farming in Europe. Comput. Sci. Rev. 2021, 39, 100345.
22. Kakamoukas, G.; Sarigiannidis, P.; Maropoulos, A.; Lagkas, T.; Zaralis, K.; Karaiskou, C. Towards Climate Smart Farming—A Reference Architecture for Integrated Farming Systems. Telecom 2021, 2, 52–74.
23. Radoglou-Grammatikis, P.; Sarigiannidis, P.; Lagkas, T.; Moscholios, I. A compilation of UAV applications for precision agriculture. Comput. Netw. 2020, 172, 107148.
24. Lytos, A.; Lagkas, T.; Sarigiannidis, P.; Zervakis, M.; Livanos, G. Towards smart farming: Systems, frameworks and exploitation of multiple sources. Comput. Netw. 2020, 172, 107147.
25. openvinotoolkit/cvat: Powerful and Efficient Computer Vision Annotation Tool (CVAT). GitHub. Available online: https://github.com/openvinotoolkit/cvat (accessed on 31 March 2022).
26. Clewley, D.; Bunting, P.J.; Shepherd, J.; Gillingham, S.; Flood, N.; Dymond, J.; Lucas, R.; Armston, J.; Moghaddam, M. A Python-Based Open Source System for Geographic Object-Based Image Analysis (GEOBIA) Utilizing Raster Attribute Tables. Remote Sens. 2014, 6, 6111–6135.
27. Craven, M.; Shavlik, J. Using Neural Networks for Data Mining. Future Gener. Comput. Syst. Spec. Issue Data Min. 1997, 13, 211–229.
28. Refaeilzadeh, P.; Tang, L.; Liu, H. Cross-validation. Encycl. Database Syst. 2009, 5, 532–538.
29. Fawcett, T. An Introduction to ROC analysis. Pattern Recogn. Lett. 2006, 27, 861–874.
30. Tang, Z.; Wang, H.; Li, X.; Li, X.; Cai, W.; Han, C. An Object-Based Approach for Mapping Crop Coverage Using Multiscale Weighted and Machine Learning Methods. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 1700–1713.
31. Blaschke, T.; Hay, G.J.; Kelly, M.; Lang, S.; Hofmann, P.; Addink, E.; Feitosa, R.Q.; van der Meer, F.; van der Werff, H.; van Coillie, F.; et al. Geographic Object-Based Image Analysis—Towards a new paradigm. ISPRS J. Photogramm. Remote Sens. 2014, 87, 180–191.
32. Duro, D.C.; Franklin, S.E.; Dubé, M.G. A comparison of pixel-based and object-based image analysis with selected machine learning algorithms for the classification of agricultural landscapes using SPOT-5 HRG imagery. Remote Sens. Environ. 2012, 118, 259–272.
33. Blaschke, T. Object based image analysis for remote sensing. ISPRS J. Photogramm. Remote Sens. 2010, 65, 2–16.
Table: Monitored fields, their coordinates, and the number of flights and pictures collected.

Fields | Coordinates | Number of Flights | Number of Pictures |
---|---|---|---|
Field1 | 40.23398743, 21.85294092 | 5 | 529 |
Field2 | 40.24951173826714, 21.794159027924273 | 1 | 64 |
Field3 | 40.227257236926, 21.817700600381 | 16 | 315 |
Field4 | 40.23466, 21.84936 | 13 | 365 |
Field5 | 40.247819240092, 21.864312059633 | 14 | 290 |
Field6 | 40.171333751771, 21.870423160292 | 13 | 300 |
Table: Specifications of the two UAVs used.

Specification | Sensefly eBee SQ | DJI Phantom 4 RTK |
---|---|---|
Type | fixed-wing | quadcopter |
Weight | 1.1 kg | 1.39 kg |
Max speed | 110 km/h | 58 km/h |
Flight control software | eMotion Ag | DJI Pilot |
Image processing software | WebODM | WebODM |
Sensors | Parrot Sequoia+: Multispectral, RGB | CMOS: RGB |
Maximum flight time | 55 min | 30 min |
Range | 3 km | 7 km |
Table: Pixel-based analysis, overall results.

Model | Accuracy | Precision | Recall |
---|---|---|---|
RF | 100% | 97% | 74% |
MLP | 95% | 47% | 44% |
Table: Pixel-based analysis, results per class.

Estimation Model | Class | Accuracy | Precision | Recall |
---|---|---|---|---|
RF | Animal intrusion | 100% | 99% | 68% |
RF | Weed | 100% | 99% | 73% |
RF | Saffron flowers | 100% | 92% | 80% |
MLP | Animal intrusion | 97% | 50% | 37% |
MLP | Weed | 94% | 35% | 52% |
MLP | Saffron flowers | 94% | 55% | 42% |
Table: Pixel-based analysis, estimated vs. actual land coverage per class.

Model | Soil | Weed | Saffron | Mammal |
---|---|---|---|---|
RF | 83.6% | 0.4% | 8% | 8% |
MLP | 91.65% | 0.2% | 8% | 0.15% |
Real Data | 83.6% | 0.4% | 8% | 8% |
Table: Object-based analysis, overall results.

Model | Accuracy | Precision | Recall |
---|---|---|---|
RF | 80% | 61% | 65% |
MLP | 76% | 56% | 57% |
Table: Object-based analysis, results per class.

Estimation Model | Class | Accuracy | Precision | Recall |
---|---|---|---|---|
RF | Animal intrusion | 75% | 31% | 45% |
RF | Weed | 85% | 70% | 81% |
RF | Saffron flowers | 80% | 60% | 68% |
MLP | Animal intrusion | 71% | 55% | 52% |
MLP | Weed | 82% | 52% | 55% |
MLP | Saffron flowers | 75% | 62% | 65% |
Table: Object-based analysis, estimated vs. actual land coverage per class.

Model | Soil | Weed | Saffron | Mammal |
---|---|---|---|---|
RF | 88.5% | 0.16% | 6.7% | 4.6% |
MLP | 98.3% | 0.05% | 1.5% | 0.15% |
Real Data | 83.6% | 0.4% | 8% | 8% |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).