
Synergy of UAV Imagery and Artificial Intelligence for Agriculture

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing in Agriculture and Vegetation".

Deadline for manuscript submissions: closed (1 February 2024) | Viewed by 17707

Special Issue Editor


Guest Editor
INSA Centre Val de Loire, PRISME, EA 4229, F18020 Bourges, France
Interests: machine learning; computer vision; image processing; pattern recognition; remote sensing; application in agriculture

Special Issue Information

Dear Colleagues,

The growing environmental and socioeconomic challenges facing agriculture call for innovation in agricultural systems and the adoption of new paradigms. Recent advances in optical sensors and UAV imaging technologies have encouraged their use in agriculture, while artificial intelligence (AI) has emerged as a powerful driver of the digital transformation of agriculture. A growing body of literature recognizes the importance of drone imagery and AI in addressing modern challenges in agricultural systems, including sustainable agriculture, the agri-food chain, production, and water management. This Special Issue aims to disseminate new ideas and solutions demonstrating the potential of UAVs and AI to address these challenges.

Dr. Adel Hafiane
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning
  • remote sensing
  • image processing
  • data fusion
  • crop monitoring
  • anomaly detection
  • crop diseases
  • decision tools
  • smart agriculture
  • sustainable agriculture

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (7 papers)


Research

23 pages, 26378 KiB  
Article
Pretrained Deep Learning Networks and Multispectral Imagery Enhance Maize LCC, FVC, and Maturity Estimation
by Jingyu Hu, Hao Feng, Qilei Wang, Jianing Shen, Jian Wang, Yang Liu, Haikuan Feng, Hao Yang, Wei Guo, Hongbo Qiao, Qinglin Niu and Jibo Yue
Remote Sens. 2024, 16(5), 784; https://doi.org/10.3390/rs16050784 - 24 Feb 2024
Cited by 6 | Viewed by 1796
Abstract
Crop leaf chlorophyll content (LCC) and fractional vegetation cover (FVC) are crucial indicators for assessing crop health, growth development, and maturity. In contrast to the traditional manual collection of crop trait parameters, unmanned aerial vehicle (UAV) technology rapidly generates LCC and FVC maps for breeding materials, facilitating prompt assessments of maturity information. This study addresses the following research questions: (1) Can image features based on pretrained deep learning networks and ensemble learning enhance the remote sensing estimation of LCC and FVC? (2) Can the proposed adaptive normal maturity detection (ANMD) algorithm effectively monitor maize maturity based on LCC and FVC maps? We conducted the following tasks: (1) Seven phases (tassel initiation to maturity) of maize canopy orthoimages were collected using UAVs, together with corresponding ground-truth data for LCC and six phases of FVC. (2) Three features, namely vegetation indices (VI), texture features (TF) based on the Gray Level Co-occurrence Matrix, and deep features (DF), were evaluated for LCC and FVC estimation. Moreover, the potential of four single machine learning models and three ensemble models for LCC and FVC estimation was evaluated. (3) The estimated LCC and FVC were combined with the proposed ANMD to monitor maize maturity. The research findings indicate that (1) image features extracted from pretrained deep learning networks more accurately describe crop canopy structure information, effectively eliminating saturation effects and enhancing LCC and FVC estimation accuracy. (2) Ensemble models outperform single machine learning models in estimating LCC and FVC, providing greater precision. Remarkably, the stacking + DF strategy achieved optimal performance in estimating LCC (coefficient of determination (R2): 0.930; root mean square error (RMSE): 3.974; mean absolute error (MAE): 3.096); and FVC (R2: 0.716; RMSE: 0.057; and MAE: 0.044). 
(3) The proposed ANMD algorithm combined with LCC and FVC maps can be used to effectively monitor maize maturity. Establishing the maturity threshold for LCC based on the wax ripening period (P5) and successfully applying it to the wax ripening-mature period (P5–P7) achieved high monitoring accuracy (overall accuracy (OA): 0.9625–0.9875; user’s accuracy (UA): 0.9583–0.9933; and producer’s accuracy (PA): 0.9634–1). Similarly, utilizing the ANMD algorithm with FVC also attained elevated monitoring accuracy during P5–P7 (OA: 0.9125–0.9750; UA: 0.878–0.9778; and PA: 0.9362–0.9934). This study offers robust support for future agricultural production and breeding and provides valuable insights for the further exploration of crop monitoring technologies and methodologies. Full article
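The stacking strategy highlighted in the abstract can be sketched with scikit-learn's StackingRegressor, which fits a meta-learner on the cross-validated predictions of several base regressors. The feature vectors, targets, and choice of base learners below are illustrative assumptions, not the authors' data or exact models.

```python
# Hypothetical sketch of a stacking ensemble for a regression target such as
# LCC; the "deep features" here are synthetic stand-ins, not real imagery.
import numpy as np
from sklearn.ensemble import StackingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))                   # stand-in deep-feature vectors
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=200)  # stand-in LCC

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(n_estimators=50, random_state=0)),
                ("ridge", Ridge())],
    final_estimator=Ridge(),   # meta-learner fitted on base-model predictions
)
stack.fit(X_tr, y_tr)
print(round(r2_score(y_te, stack.predict(X_te)), 3))
```

The meta-learner sees out-of-fold predictions from each base model, which is what lets the ensemble exceed any single learner.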
(This article belongs to the Special Issue Synergy of UAV Imagery and Artificial Intelligence for Agriculture)

25 pages, 6221 KiB  
Article
Better Inversion of Wheat Canopy SPAD Values before Heading Stage Using Spectral and Texture Indices Based on UAV Multispectral Imagery
by Quan Yin, Yuting Zhang, Weilong Li, Jianjun Wang, Weiling Wang, Irshad Ahmad, Guisheng Zhou and Zhongyang Huo
Remote Sens. 2023, 15(20), 4935; https://doi.org/10.3390/rs15204935 - 12 Oct 2023
Cited by 5 | Viewed by 1574
Abstract
In China’s second-largest wheat-producing region, the mid-lower Yangtze River area, cold stress impacts winter wheat production during the pre-heading growth stage. Previous research focused on specific growth stages, lacking a comprehensive approach. This study utilizes Unmanned Aerial Vehicle (UAV) multispectral imagery to monitor Soil-Plant Analysis Development (SPAD) values throughout the pre-heading stage, assessing crop stress resilience. Vegetation Indices (VIs) and Texture Indices (TIs) are extracted from UAV imagery. Recursive Feature Elimination (RFE) is applied to VIs, TIs, and fused variables (VIs + TIs), and six machine learning algorithms are employed for SPAD value estimation. The fused VIs and TIs model, based on Long Short-Term Memory (LSTM), achieves the highest accuracy (R2 = 0.8576, RMSE = 2.9352, RRMSE = 0.0644, RPD = 2.6677), demonstrating robust generalization across wheat varieties and nitrogen management practices. This research aids in mitigating winter wheat frost risks and increasing yields. Full article
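The recursive feature elimination (RFE) step applied above to the VIs, TIs, and fused variables can be illustrated with scikit-learn's RFE wrapper, which repeatedly drops the least important variables according to a fitted estimator. The data and estimator below are assumptions for illustration, not the study's actual variables.

```python
# Illustrative RFE sketch: keep the 5 strongest of 20 candidate variables,
# ranked by a random forest's feature importances (synthetic data).
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE
import numpy as np

X, y = make_regression(n_samples=150, n_features=20, n_informative=5,
                       random_state=1)
rfe = RFE(RandomForestRegressor(n_estimators=50, random_state=1),
          n_features_to_select=5)
rfe.fit(X, y)
print(np.flatnonzero(rfe.support_))   # indices of the retained variables
```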

20 pages, 18500 KiB  
Article
Mapping Soybean Maturity and Biochemical Traits Using UAV-Based Hyperspectral Images
by Lizhi Wang, Rui Gao, Changchun Li, Jian Wang, Yang Liu, Jingyu Hu, Bing Li, Hongbo Qiao, Haikuan Feng and Jibo Yue
Remote Sens. 2023, 15(19), 4807; https://doi.org/10.3390/rs15194807 - 3 Oct 2023
Cited by 6 | Viewed by 2264
Abstract
Soybeans are rich in high-quality protein and raw materials for producing hundreds of chemical products. Consequently, soybean cultivation has gained widespread prevalence across diverse geographic regions. Soybean breeding necessitates the development of early-, standard-, and late-maturing cultivars to accommodate cultivation at various latitudes, thereby optimizing the utilization of solar radiation. In the practical process of determining the maturity of soybean breeding materials within the breeding field, the ripeness is assessed based on three critical criteria: pod moisture content, leaf color, and the degree of leaf shedding. These parameters reflect the crown structure, physicochemical parameters, and reproductive organ changes in soybeans during the maturation process. Therefore, methods for analyzing soybean maturity at the breeding plot scale should match the standards of agricultural experts to the maximum possible extent. This study presents a hyperspectral remote sensing approach for monitoring soybean maturity. We collected five periods of unmanned aerial vehicle (UAV)-based soybean canopy hyperspectral digital orthophoto maps (DOMs) and ground-level measurements of leaf chlorophyll content (LCC), flavonoids (Flav), and the nitrogen balance index (NBI) from a breeding farm. This study explores the following aspects: (1) the correlations between soybean LCC, NBI, Flav, and maturity; (2) the estimation of soybean LCC, NBI, and Flav using Gaussian process regression (GPR), partial least squares regression (PLSR), and random forest (RF) regression techniques; and (3) the application of threshold-based methods in conjunction with normalized difference vegetation index (NDVI)+LCC and NDVI+NBI for soybean maturity monitoring. The results of this study indicate the following: (1) Soybean LCC, NBI, and Flav are associated with maturity. 
LCC increases from the beginning bloom period (P1) to the beginning seed period (P3) and sharply decreases during the beginning maturity period (P4). Flav continues to increase from P1 to P4. NBI remains relatively consistent from P1 to P3 and then drops rapidly during the P4 stage. (2) The GPR, PLSR, and RF methodologies yield comparable accuracy in estimating soybean LCC (coefficient of determination (R2): 0.737–0.832, root mean square error (RMSE): 3.35–4.202 Dualex readings), Flav (R2: 0.321–0.461, RMSE: 0.13–0.145 Dualex readings), and NBI (R2: 0.758–0.797, RMSE: 2.922–3.229 Dualex readings). (3) The combination of the threshold method with NDVI < 0.55 and NBI < 8.2 achieves the highest classification accuracy (accuracy = 0.934). Further experiments should explore the relationships of crop NDVI, the Chlorophyll Index, LCC, Flav, and NBI with crop maturity for different crops and ecological areas. Full article
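The threshold rule reported above (NDVI < 0.55 combined with NBI < 8.2 marking a plot as mature) reduces to a simple boolean mask over per-plot statistics. The values below are invented for illustration; only the two thresholds come from the abstract.

```python
# Minimal sketch of the NDVI+NBI threshold rule for maturity monitoring;
# the per-plot readings are made up, the thresholds are from the abstract.
import numpy as np

ndvi = np.array([0.72, 0.50, 0.40, 0.61])   # per-plot mean NDVI
nbi  = np.array([9.5, 7.0, 6.1, 8.9])       # per-plot Dualex NBI readings
mature = (ndvi < 0.55) & (nbi < 8.2)        # both conditions must hold
print(mature)                               # one boolean per breeding plot
```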

18 pages, 10622 KiB  
Article
Precision Detection of Dense Litchi Fruit in UAV Images Based on Improved YOLOv5 Model
by Zhangjun Xiong, Lele Wang, Yingjie Zhao and Yubin Lan
Remote Sens. 2023, 15(16), 4017; https://doi.org/10.3390/rs15164017 - 14 Aug 2023
Cited by 14 | Viewed by 2418
Abstract
The utilization of unmanned aerial vehicles (UAVs) for the precise and convenient detection of litchi fruits, in order to estimate yields and perform statistical analysis, holds significant value in the complex and variable litchi orchard environment. Currently, litchi yield estimation relies predominantly on manual rough counts, which often result in discrepancies between the estimated values and the actual production figures. This study proposes a large-scene and high-density litchi fruit recognition method based on the improved You Only Look Once version 5 (YOLOv5) model. The main objective is to enhance the accuracy and efficiency of yield estimation in natural orchards. First, the PANet in the original YOLOv5 model is replaced with the improved Bi-directional Feature Pyramid Network (BiFPN) to enhance the model’s cross-scale feature fusion. Second, the P2 feature layer is fused into the BiFPN to enhance the learning capability of the model for high-resolution features. After that, the Normalized Gaussian Wasserstein Distance (NWD) metric is introduced into the regression loss function to enhance the learning ability of the model for litchi tiny targets. Finally, the Slicing Aided Hyper Inference (SAHI) is used to enhance the detection of tiny targets without increasing the model’s parameters or computational memory. The experimental results show that the overall AP value of the improved YOLOv5 model has been effectively increased by 22%, compared to the original YOLOv5 model’s AP value of 50.6%. Specifically, the APs value for detecting small targets has increased from 27.8% to 57.3%. The model size is only 3.6% larger than the original YOLOv5 model. Through ablation and comparative experiments, our method has successfully improved accuracy without compromising the model size and inference speed. Therefore, the proposed method in this paper holds practical applicability for detecting litchi fruits in orchards. 
It can serve as a valuable tool for providing guidance and suggestions for litchi yield estimation and subsequent harvesting processes. Future research can continue to optimize small-target detection and extend the approach to small-target tracking in dense scenarios, which is of great significance for litchi yield estimation. Full article
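The Normalized Gaussian Wasserstein Distance (NWD) mentioned above models each box as a 2-D Gaussian and converts the Wasserstein distance between the two Gaussians into a bounded similarity, which stays informative for tiny boxes where IoU collapses to zero. The sketch below follows the standard NWD formulation; the constant C is dataset-dependent, and the value used here is an arbitrary placeholder, not the paper's setting.

```python
# Sketch of NWD between two boxes given as (cx, cy, w, h): each box is
# modeled as N([cx, cy], diag(w^2/4, h^2/4)), and the squared 2-Wasserstein
# distance between the Gaussians has a closed form.
import numpy as np

def nwd(box1, box2, C=12.8):   # C is a free normalizing constant (assumed)
    cx1, cy1, w1, h1 = box1
    cx2, cy2, w2, h2 = box2
    w2_sq = ((cx1 - cx2) ** 2 + (cy1 - cy2) ** 2
             + ((w1 - w2) / 2) ** 2 + ((h1 - h2) / 2) ** 2)
    return np.exp(-np.sqrt(w2_sq) / C)   # map distance to (0, 1] similarity

# Two nearby 4x4-pixel boxes still get a high, smoothly varying similarity,
# whereas their IoU would already be heavily penalized by the 1-pixel shift.
print(round(nwd((10, 10, 4, 4), (11, 10, 4, 4)), 3))
```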

24 pages, 4925 KiB  
Article
Estimation of Winter Wheat SPAD Values Based on UAV Multispectral Remote Sensing
by Quan Yin, Yuting Zhang, Weilong Li, Jianjun Wang, Weiling Wang, Irshad Ahmad, Guisheng Zhou and Zhongyang Huo
Remote Sens. 2023, 15(14), 3595; https://doi.org/10.3390/rs15143595 - 18 Jul 2023
Cited by 24 | Viewed by 2926
Abstract
Unmanned aerial vehicle (UAV) multispectral imagery has been applied in the remote sensing of wheat SPAD (Soil and Plant Analyzer Development) values. However, existing research has yet to consider the influence of different growth stages and UAV flight altitudes on the accuracy of SPAD estimation. This study aims to optimize UAV flight strategies and incorporate multiple feature selection techniques and machine learning algorithms to enhance the accuracy of the SPAD value estimation of different wheat varieties across growth stages. This study set two flight altitudes (20 and 40 m). Multispectral images were collected for four winter wheat varieties during the green-up and jointing stages. Three feature selection methods (Pearson, recursive feature elimination (RFE), and correlation-based feature selection (CFS)) and four machine learning regression models (elastic net, random forest (RF), backpropagation neural network (BPNN), and extreme gradient boosting (XGBoost)) were combined to construct SPAD value estimation models for individual growth stages as well as across growth stages. The CFS-RF (40 m) model achieved satisfactory results (green-up stage: R2 = 0.7270, RPD = 2.0672, RMSE = 1.1835, RRMSE = 0.0259; jointing stage: R2 = 0.8092, RPD = 2.3698, RMSE = 2.3650, RRMSE = 0.0487). For cross-growth stage modeling, the optimal prediction results for SPAD values were achieved at a flight altitude of 40 m using the Pearson-XGBoost model (R2 = 0.8069, RPD = 2.3135, RMSE = 2.0911, RRMSE = 0.0442). These results demonstrate that the flight altitude of UAVs significantly impacts the estimation accuracy, and the flight altitude of 40 m (with a spatial resolution of 2.12 cm) achieves better SPAD value estimation than that of 20 m (with a spatial resolution of 1.06 cm). This study also showed that the optimal combination of feature selection methods and machine learning algorithms can more accurately estimate winter wheat SPAD values. 
In addition, this study includes multiple winter wheat varieties, enhancing the generalizability of the research results and facilitating future real-time and rapid monitoring of winter wheat growth. Full article
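Of the three selection methods compared above, the Pearson approach is the simplest: rank candidate variables by the absolute value of their correlation with SPAD and keep the strongest ones. The sketch below uses synthetic data in which two of ten variables actually drive the target; it is an illustration of the screening idea, not the study's pipeline.

```python
# Illustrative Pearson-based feature screening: rank 10 candidate indices
# by |r| with a synthetic SPAD target and keep the top 2.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 10))           # candidate spectral/texture variables
spad = 2.0 * X[:, 3] - 1.5 * X[:, 7] + rng.normal(scale=0.2, size=100)

r = np.array([np.corrcoef(X[:, j], spad)[0, 1] for j in range(X.shape[1])])
keep = np.argsort(-np.abs(r))[:2]        # indices of the top-2 correlations
print(sorted(keep.tolist()))             # → [3, 7]
```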

21 pages, 12641 KiB  
Article
CTFuseNet: A Multi-Scale CNN-Transformer Feature Fused Network for Crop Type Segmentation on UAV Remote Sensing Imagery
by Jianjian Xiang, Jia Liu, Du Chen, Qi Xiong and Chongjiu Deng
Remote Sens. 2023, 15(4), 1151; https://doi.org/10.3390/rs15041151 - 20 Feb 2023
Cited by 9 | Viewed by 3344
Abstract
Timely and accurate acquisition of crop type information is significant for irrigation scheduling, yield estimation, harvesting arrangement, etc. The unmanned aerial vehicle (UAV) has emerged as an effective way to obtain high resolution remote sensing images for crop type mapping. Convolutional neural network (CNN)-based methods have been widely used to predict crop types according to UAV remote sensing imagery, which has excellent local feature extraction capabilities. However, its receptive field limits the capture of global contextual information. To solve this issue, this study introduced the self-attention-based transformer that obtained long-term feature dependencies of remote sensing imagery as supplementary to local details for accurate crop-type segmentation in UAV remote sensing imagery and proposed an end-to-end CNN–transformer feature-fused network (CTFuseNet). The proposed CTFuseNet first provided a parallel structure of CNN and transformer branches in the encoder to extract both local and global semantic features from the imagery. A new feature-fusion module was designed to flexibly aggregate the multi-scale global and local features from the two branches. Finally, the FPNHead of feature pyramid network served as the decoder for the improved adaptation to the multi-scale fused features and output the crop-type segmentation results. Our comprehensive experiments indicated that the proposed CTFuseNet achieved a higher crop-type-segmentation accuracy, with a mean intersection over union of 85.33% and a pixel accuracy of 92.46% on the benchmark remote sensing dataset and outperformed the state-of-the-art networks, including U-Net, PSPNet, DeepLabV3+, DANet, OCRNet, SETR, and SegFormer. Therefore, the proposed CTFuseNet was beneficial for crop-type segmentation, revealing the advantage of fusing the features found by the CNN and the transformer. 
Further work is needed to improve the accuracy and efficiency of this approach and to assess the model's transferability. Full article
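The key ingredient the transformer branch contributes over the CNN branch is self-attention, in which every spatial token attends to every other token, giving a global receptive field in a single layer. The toy sketch below shows only that mechanism on a flattened feature map; it is not CTFuseNet, and the token/channel sizes are arbitrary.

```python
# Toy single-head self-attention over a flattened 4x4 feature map with 8
# channels: each output token is a weighted mix of ALL input tokens.
import numpy as np

def self_attention(X):                       # X: (tokens, dim)
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)            # pairwise token similarities
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)        # row-wise softmax
    return w @ X                             # global mixing in one step

tokens = np.random.default_rng(3).normal(size=(16, 8))  # 16 tokens = 4x4 map
out = self_attention(tokens)
print(out.shape)
```

A convolution with a 3x3 kernel would need many stacked layers to connect opposite corners of the map; attention connects them immediately, which is the "global contextual information" the abstract refers to.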

19 pages, 4275 KiB  
Article
Generative-Model-Based Data Labeling for Deep Network Regression: Application to Seed Maturity Estimation from UAV Multispectral Images
by Eric Dericquebourg, Adel Hafiane and Raphael Canals
Remote Sens. 2022, 14(20), 5238; https://doi.org/10.3390/rs14205238 - 20 Oct 2022
Cited by 4 | Viewed by 1927
Abstract
Field seed maturity monitoring is essential to optimize the farming process and guarantee yield quality through high germination. Remote sensing of parsley fields through UAV multispectral imagery allows uniform scanning and better capture of crop information, in comparison to traditional laboratory analysis of limited field samples. Moreover, such samples represent only localized sub-sections of the crop field and are time consuming to process. The limited availability of seed sample maturity data is a drawback for applying deep learning methods, which have shown tremendous potential in estimating agronomic parameters, especially maturity, as they require large labeled datasets. In this paper, we propose a parametric and non-parametric-based weak labeling approach to overcome the lack of maturity labels and render possible maturity estimation by deep network regression to assist growers in harvest decision-making. We present the data acquisition protocol and the performance evaluation of the generative models and neural network architectures. Convolutional and recurrent neural networks were trained on the generated labels and evaluated on maturity ground truth labels to assess the maturity quantification quality. The results showed improvement by the semi-supervised approaches over the generative models, with a root-mean-squared error of 0.0770 for the long short-term memory network trained on kernel-density-estimation-generated labels. Generative-model-based data labeling can unlock new possibilities for remote sensing fields where data collection is complex, and in our use case, it provides better-performing models for parsley maturity estimation based on UAV multispectral imagery. Full article
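The kernel-density-estimation labeling idea mentioned above can be sketched as follows: fit a non-parametric density to the few available maturity measurements and derive pseudo-labels from the fitted density. This 1-D synthetic example is a hedged illustration of that idea, not the authors' pipeline, and the maturity scale and mode-based labeling rule are assumptions.

```python
# Hedged sketch of KDE-based weak labeling: fit a density to scarce maturity
# measurements, then take its mode as a pseudo-label for unlabeled samples.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)
measured = rng.normal(loc=0.6, scale=0.05, size=30)   # scarce ground truth
kde = gaussian_kde(measured)                          # non-parametric density

candidates = np.linspace(0.0, 1.0, 101)               # maturity grid [0, 1]
pseudo_label = candidates[np.argmax(kde(candidates))] # density mode
print(round(float(pseudo_label), 2))
```

A network can then be trained by regression on such generated labels and evaluated against the true measurements, which is the semi-supervised setup the abstract evaluates.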
