Article

Wheat Yield Prediction Using Machine Learning Method Based on UAV Remote Sensing Data

1. State Key Laboratory of Aridland Crop Science, College of Agronomy, Gansu Agricultural University, Lanzhou 730070, China
2. Institute of Crop Sciences, Chinese Academy of Agricultural Sciences, Beijing 100081, China
3. College of Land Science and Technology, China Agricultural University, Beijing 100193, China
4. Biotechnology Department, Xinjiang Agricultural Vocational Technical College, Changji 831100, China
* Authors to whom correspondence should be addressed.
Drones 2024, 8(7), 284; https://doi.org/10.3390/drones8070284
Submission received: 3 May 2024 / Revised: 19 June 2024 / Accepted: 20 June 2024 / Published: 24 June 2024

Abstract:
Accurate forecasting of crop yields holds paramount importance in guiding decision-making processes related to breeding efforts. Despite significant advancements in crop yield forecasting, existing methods often struggle with integrating diverse sensor data and achieving high prediction accuracy under varying environmental conditions. This study focused on the application of multi-sensor data fusion and machine learning algorithms based on unmanned aerial vehicles (UAVs) to wheat yield prediction. Five machine learning (ML) algorithms, namely random forest (RF), partial least squares (PLS), ridge regression (RR), k-nearest neighbor (KNN) and extreme gradient boosting decision tree (XGboost), were utilized for multi-sensor data fusion, together with three ensemble methods: the second-level ensemble methods (stacking and feature-weighted) and the third-level ensemble method (simple average). A total of 270 wheat hybrids were used as planting materials under full and limited irrigation treatments. A cost-effective multi-sensor UAV platform, equipped with red–green–blue (RGB), multispectral (MS) and thermal infrared (TIR) sensors, was utilized to gather remote sensing data. The results revealed that the XGboost algorithm exhibited outstanding performance in multi-sensor data fusion, with the RGB + MS + Texture + TIR combination demonstrating the highest fusion performance (R2 = 0.660, RMSE = 0.754). Compared with the single ML models, all three ensemble methods significantly enhanced the accuracy of wheat yield prediction. Notably, the third-layer simple average ensemble method demonstrated superior performance (R2 = 0.733, RMSE = 0.668 t ha−1), significantly outperforming both the second-layer stacking (R2 = 0.668, RMSE = 0.673 t ha−1) and feature-weighted (R2 = 0.667, RMSE = 0.674 t ha−1) ensemble methods.
This finding highlighted the third-layer ensemble method’s ability to enhance predictive capabilities and refined the accuracy of wheat yield prediction through simple average ensemble learning, offering a novel perspective for crop yield prediction and breeding selection.

1. Introduction

Wheat stands as one of the most vital crops globally, with approximately 35–40% of the world’s population relying on it as a primary food source. It contributes approximately 21% of food energy and 20% of protein intake. Given the backdrop of population growth and climate change, the early and accurate prediction of wheat yield holds utmost importance for safeguarding national food security and maintaining people’s living standards [1,2]. Conventionally, the yield prediction method has primarily been dependent on field observation and investigation, which are not only time-consuming and laborious processes but also susceptible to subjective biases, and can even result in crop damage [3]. In recent years, remote sensing technology has gained widespread application in the domain of agricultural monitoring. This technology enables the effective acquisition of canopy spectral data from aerial sources, thereby facilitating the prediction of crop yields [4,5]. Furthermore, unmanned aerial vehicle (UAV)-based remote sensing technology has witnessed rapid development, owing to its distinctive advantages of flexibility and high resolution [6].
The vegetation index (VI) derived from UAV images has demonstrated its effectiveness in predicting crop yields. Spectral, structural, thermal infrared (TIR), and texture features extracted from UAV-collected datasets through sensors can be utilized to assess various plant traits and structures [7]. For instance, low-altitude UAVs were employed to capture RGB imaging data of potato canopies at two distinct growth stages, to predict yields [8]. The use of a multispectral (MS) UAV platform for swift monitoring of the normalized vegetation index (NDVI) during the wheat filling stage exhibited a strong correlation with wheat grain yield [9]. Texture information extracted from UAV images can effectively reflect the spatial variations in pixel intensity, thereby emphasizing the structural and geometric characteristics of the plant canopy [10]. The potential of UAV TIR imaging technology for assessing crop water stress and predicting wheat kernel yield in different wheat varieties has also been thoroughly validated [11]. However, the majority of studies solely rely on data from a single sensor to predict crop yields, overlooking the advantages of combining multiple sensors. For example, by combining the features derived from MS, RGB, and TIR imaging, the accuracy of soybean yield prediction can be significantly improved [7]. The combination of canopy TIR information with spectral and structural characteristics can improve the robustness of crop yield prediction across diverse climatic conditions and developmental stages [12]. In particular, the application of machine learning (ML) techniques to the analysis of multi-sensor data collected by UAVs can significantly enhance the accuracy of crop yield predictions [13]. 
On this basis, to fully harness the potential of ML algorithms, machine learning techniques have been combined with the VIs extracted from sensor spectral imagery to build yield prediction models that provide strong support for precision agriculture practices [14,15].
At present, a variety of machine learning methods have been applied to yield prediction, such as random forest (RF) [16], partial least squares (PLS) [17], ridge regression (RR) [18], k-nearest neighbor (KNN) [19] and extreme gradient boosting decision tree (XGboost) [20]. However, predictions by the same model may vary significantly across different crops and environments, primarily due to the quality of the data, the representativeness of the model, and the dependencies between input and target variables within the collected dataset [21]. If the data are biased or the chosen model overfits its dataset, the model will fail to perform accurately [22]. Ensemble learning, a current research hotspot, has been proposed to address these challenges; its objective is to integrate data fusion, data modeling, and data mining into a cohesive framework [23]. The ensemble learning paradigm known as stacked regression linearly combines various predictors to enhance prediction accuracy [24,25]. The feature-weighted ensemble method assigns weights according to the correlation of features, estimating how strongly each feature correlates with the model output [26,27]. In this study, we employed a feature-weighted ensemble learning approach that assigns weights to the training dataset generated by the primary learners, based on the prediction accuracy of each individual learner. Utilizing these weighted data, the meta-learner was then trained to enhance the overall model's learning efficiency. To further optimize model performance, we introduced a novel ensemble method in the third layer, the simple average ensemble method, which averages the test-set predictions of the stacking and feature-weighted ensemble methods and compares them with the actual measured values, thereby realizing third-layer ensemble learning.
This study utilized remote sensing data collected by UAVs 21 days after wheat flowering to predict wheat yield. The main objectives were (1) to evaluate the data based on UAV-derived RGB, MS, Texture, and TIR, along with multi-sensor data fusion, for multidimensional feature analysis to understand the relationship between crop growth status and yield, and to verify the applicability of various classic machine learning algorithms in agriculture, and (2) to apply stacking, feature-weighted, and simple average ensemble methods to enhance the accuracy of wheat yield prediction, providing new technical approaches for the implementation of precision agriculture.

2. Materials and Methods

2.1. Experiment Location and Design

Two hundred and seventy recombinant inbred lines (RILs) from the cross Zhongmai 578/Jimai 22 were planted at the research site of the Chinese Academy of Agricultural Sciences (35°18′0″ N, 113°52′0″ E) in Xinxiang, Henan Province, China, during the 2021–2022 growing season (Figure 1). The experimental design was a randomized complete block with three replicates. The protocol included two distinct irrigation treatments: full and limited irrigation. Both treatments received two irrigations during the seedling and overwintering stages; the full irrigation treatment received additional flood irrigation at the green-up/jointing and early grain-filling stages. Each plot covered 3.6 m2 (1.2 m × 3 m) and contained six rows spaced 0.20 m apart, resulting in a total of 1620 plots across the study area (Figure 1). The planting density was maintained at 270 plants/m2, and agricultural management was performed according to local conditions. After maturity, the harvest was conducted with a combine harvester, and the seeds were weighed after drying to a moisture content below 12.5%.

2.2. Multi-Sensor Image Acquisition and Processing Based on UAV

Data acquisition for all traits was performed with a UAV platform M210 (SZ DJI Technology Co., Shenzhen, China). The RGB and TIR sensors were integrated in a single camera (Zenmuse XT2, SZ DJI Technology Co., Shenzhen, China), with resolutions of 4000 × 3000 and 640 × 512 pixels, respectively. The MS sensor (RedEdge-MX camera, MicaSense, Seattle, WA, USA) captured images at a uniform resolution (1280 × 960 pixels) in five bands, blue, green, red, red edge and near infrared (NIR), centered at 475 nm, 560 nm, 668 nm, 717 nm and 842 nm, respectively. The aerial surveys were carried out at 21 days post-anthesis, given the proven high accuracy of yield predictions during this period [13]. All flight tasks were carried out between 10:00 and 14:00 under clear skies, using DJI Pilot 2.3.15 software with the following route parameters: forward and side overlap of 90% and 85%, respectively, and a flight altitude of 30 m.
In this study, the Pix4D Mapper Pro 4.5.6 software (Pix4D, Lausanne, Switzerland) was used to perform radiometric correction and image stitching on RGB, TIR and MS images obtained by UAV, and the visible, TIR orthophoto image and five-band orthophoto reflectance map were obtained. The obtained images with spectral reflectance were imported into ArcGIS 10.8.1 (Environmental Systems Research Institute, Inc., Redlands, CA, USA) software for image cropping; each cell was selected as the area of interest, and the features were extracted to calculate the different VIs used in this study. The detailed process is shown in Figure 2. To minimize the noise impact on the images and enhance the efficiency of subsequent processing steps, it was necessary to exclude non-target areas from the acquired MS images. The Pix4D Mapper Pro 4.5.6 software was utilized to perform image stitching, shading correction, and digital number (DN) processing on the filtered MS data, ultimately converting them into a TIFF image format with spectral reflectivity. Radiation calibration was conducted prior to and following each flight using a dedicated calibration plate. Subsequently, the TIR data were calibrated based on the blackbody reference to determine the temperature corresponding to each pixel value in the TIR imagery.

2.3. Extraction of Vegetation and Texture Index

As a metric for evaluating physiological parameters of crops, VIs could effectively reflect the real-time growth level of crops [28]. The selected 10 color indices and 11 MS VIs provided a robust assessment of the vegetation’s physiological condition, ranging from greenness and biomass to senescence and pigment content, as shown in Table 1.
In addition to spectral information, texture features, as another important type of remote sensing information, are less susceptible to external environmental factors. They reflect the grayscale properties of an image and their spatial relationships, thereby improving the inversion accuracy of single spectral information sources that may suffer from saturation, and enhancing the potential for inverting physicochemical parameters to a certain extent [29]. In ENVI 5.3, the widely utilized gray-level co-occurrence matrix (GLCM) was used to extract 40 texture features from the RGB-based R, G and B bands and the MS-based red-edge and NIR bands, including parameters such as angular second moment, contrast, correlation, sum average and sum variance. The region of interest was then delimited for the texture feature images of each band in ArcGIS 10.8.1 (Figure 2).
Principal component analysis (PCA) is a data mining technique in multivariate statistics. It converts high-dimensional data into low-dimensional data through dimensionality reduction, while preserving the majority of the information within the data without compromising its integrity [30]. Through principal component analysis, we concluded that while using 8 PCAs could capture most of the variance, it did not significantly increase the accuracy of our model. Considering the trade-off between retaining texture information and achieving computational efficiency with PCA, we transformed the original 40 texture features into 3 new principal components. These principal components were linear combinations of the original features and encapsulated the most significant variance in the dataset. By using PCA, we effectively condensed the information from the 40 texture features into a more manageable and interpretable form, without losing the essential characteristics of the texture. This approach not only simplified our analysis but also enhanced our ability to identify and interpret the most meaningful texture features that contribute to the physiological assessment of wheat. Consequently, these three principal components could be regarded as representative of the most significant texture features within the dataset (Figure 2).
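The reduction of the 40 GLCM texture features to 3 principal components can be sketched with scikit-learn. This is a minimal illustration on randomly generated placeholder data (1620 plots × 40 features, matching the study's dimensions), not the actual texture measurements:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
texture = rng.normal(size=(1620, 40))  # 1620 plots x 40 GLCM texture features (placeholder)

# Standardize the features, then project onto the first 3 principal components
scaled = StandardScaler().fit_transform(texture)
pca = PCA(n_components=3)
components = pca.fit_transform(scaled)

print(components.shape)                     # (1620, 3)
print(pca.explained_variance_ratio_.sum())  # fraction of variance retained
```

On the real texture matrix, `explained_variance_ratio_` would reproduce the per-component variance contributions reported in Table 2.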
Table 1. Vegetation index formula for UAV images.
Sensor | Spectral Index | Equation | Reference
RGB | Red Green Blue Vegetation Index | RGBVI = (G² − B × R)/(G² + B × R) | [31]
RGB | Plant Pigment Ratio | PPR = (G − B)/(G + B) | [32]
RGB | Green Leaf Algorithm | GLA = (2 × G − R − B)/(2 × G + R + B) | [33]
RGB | Excess Green Index | ExG = 2 × G − R − B | [34]
RGB | Color Index of Vegetation Extraction | CIVE = 0.441 × R − 0.881 × G + 0.3856 × B + 18.78745 | [35]
RGB | Visible Atmospherically Resistant Index | VARI = (G − R)/(G + R − B) | [36]
RGB | Kawashima Index | IKAW = (R − B)/(R + B) | [37]
RGB | Woebbecke Index | WI = (G − B)/(R − G) | [34]
RGB | Green Blue Ratio Index | GBRI = G/B | [38]
RGB | Red Blue Ratio Index | RBRI = R/B | [38]
MS | Green-NDVI | GNDVI = (NIR − G)/(NIR + G) | [39]
MS | MERIS Terrestrial Chlorophyll Index | MTCI = (NIR − R)/(RE − R) | [40]
MS | Normalized Difference Vegetation Index | NDVI = (NIR − R)/(NIR + R) | [36]
MS | Ratio Vegetation Index | RVI1 = NIR/R | [41]
MS | Ratio Vegetation Index | RVI2 = NIR/G | [42]
MS | Modified Simple Ratio Index | MSRI = (NIR/R − 1)/(NIR/R + 1)^0.5 | [43]
MS | Re-normalized Difference Vegetation Index | RDVI = (NIR − R)/(NIR + R)^0.5 | [44]
MS | Structure Insensitive Pigment Index | SIPI = (NIR − B)/(NIR + B) | [45]
MS | Color Index | CI = NIR/G − 1 | [46]
MS | Generalized Soil-adjusted Vegetation Index | GOSAVI = (NIR − G)/(NIR + G + 0.16) | [47]
MS | Plant Senescence Reflectance Index | PSRI = (R − B)/NIR | [48]
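Several of the indices in Table 1 follow directly from per-plot mean band reflectances. A short NumPy-style sketch (the reflectance values below are placeholders, not measured data):

```python
def ndvi(nir, r):
    """Normalized Difference Vegetation Index: (NIR - R)/(NIR + R), Table 1."""
    return (nir - r) / (nir + r)

def vari(g, r, b):
    """Visible Atmospherically Resistant Index: (G - R)/(G + R - B), Table 1."""
    return (g - r) / (g + r - b)

def exg(g, r, b):
    """Excess Green Index: 2G - R - B, Table 1."""
    return 2 * g - r - b

# Placeholder per-plot mean band reflectances
r, g, b, nir = 0.08, 0.12, 0.05, 0.45
print(round(ndvi(nir, r), 3))  # 0.698
print(round(vari(g, r, b), 3))  # 0.267
```

In practice these functions would be applied to the per-plot mean reflectances extracted from the orthomosaics in ArcGIS.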

2.4. Ensemble Learning Framework

This study employed five ML algorithms, RF, PLS, RR, KNN, and XGboost, as base learners to explore the accuracy of yield prediction through ensemble learning and the integration of multi-sensor data.
In ML, each algorithm possesses its distinct strengths. Ensemble learning achieves superior generalization performance by harnessing the combined advantages of various machine learning algorithms [23]. This study proposed three ensemble learning methods in total. The first method was stacking regression, which is a heterogeneous ensemble learning model first introduced by WOLPERT in 1992 [49], capable of enhancing the accuracy of yield prediction [50]. The objective of this method was to integrate the predictive strengths of five fundamental models. Initially, the training dataset was partitioned into an 80% training subset and a 20% testing subset. Each base model was then trained independently on the training subset, utilizing a 10-fold cross-validation approach, and their respective predictions were generated for the testing subset. Subsequently, these prediction results were employed as input features for the meta-model. RR served as the regression algorithm for the meta-model, tasked with learning to effectively integrate the learning algorithms of the various basic models in order to generate a final ensemble prediction. Throughout the training process, cross-validation techniques were employed to meticulously fine-tune the hyperparameters of the meta-model, with the ultimate goal of bolstering its generalization capabilities. Upon completion of the training phase, the refined stacking model was then utilized to predict outcomes for the test set, subsequently enabling a thorough evaluation of the model’s overall performance (Figure 3).
The second approach was feature-weighted ensemble learning. By assigning weights to each base learner that reflected their predictive power, it significantly enhanced the model’s accuracy and generalizability [51]. Each base model underwent training on the training set, and the coefficient of determination (R2) for each base model was computed using the testing set. Subsequently, the R2 values served as the foundation for allocating weights (Figure 3).
The third approach proposed in this study was simple average ensemble learning, characterized by its ease of implementation and enhanced predictive stability. It took the average of the predictive results obtained from stacking regression and feature-weighted ensemble methods on the test set, thereby enhancing accuracy and generalizability while reducing the risk of overfitting. Then, the R2 score was computed between the averaged predictions and the true values of the testing set (Figure 3).
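The second-layer feature-weighted average and the third-layer simple average can be sketched as follows. The data are synthetic, only three base learners are shown for brevity, and the plain Ridge prediction stands in for the stacking output, so this is an illustration of the weighting and averaging logic rather than the full pipeline:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor

X, y = make_regression(n_samples=300, n_features=20, noise=10.0, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)

# Second layer (feature-weighted): weight each base learner by its test-set R2
models = [RandomForestRegressor(random_state=1), Ridge(), KNeighborsRegressor()]
preds, scores = [], []
for m in models:
    m.fit(X_tr, y_tr)
    p = np.asarray(m.predict(X_te)).ravel()
    preds.append(p)
    scores.append(r2_score(y_te, p))

weights = np.array(scores) / np.sum(scores)  # normalized so weights sum to 1
weighted_pred = np.average(np.vstack(preds), axis=0, weights=weights)

# Third layer: simple average of the two second-layer outputs
stacking_pred = preds[1]  # placeholder for the stacking ensemble's prediction
final_pred = (stacking_pred + weighted_pred) / 2.0
print(round(r2_score(y_te, final_pred), 3))
```

The final `r2_score` against the measured test-set values corresponds to the third-layer evaluation described above.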

2.5. Model Performance Evaluation

In this study, R2, root-mean-square error (RMSE) and normalized root-mean-square error (NRMSE) were selected as the indexes to evaluate the prediction accuracy of the base learners [52]. The formulas are as follows:
$$R^2 = 1 - \frac{\sum_{i=1}^{n} (\hat{y}_i - y_i)^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2}$$

$$\mathrm{RMSE} = \sqrt{\frac{\sum_{i=1}^{n} (\hat{y}_i - y_i)^2}{n}}$$

$$\mathrm{NRMSE} = \frac{\mathrm{RMSE}}{\bar{y}} \times 100\%$$

where $y_i$ and $\hat{y}_i$ are the measured and predicted values of wheat yield, respectively, $\bar{y}$ is the mean measured yield, and $n$ is the sample size.
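The three metrics translate directly into code; the yield values below are placeholders chosen for illustration:

```python
import numpy as np

def r2(y, y_hat):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return 1 - np.sum((y_hat - y) ** 2) / np.sum((y - y.mean()) ** 2)

def rmse(y, y_hat):
    """Root-mean-square error."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return np.sqrt(np.mean((y_hat - y) ** 2))

def nrmse(y, y_hat):
    """RMSE normalized by the mean measured value, in percent."""
    return rmse(y, y_hat) / np.mean(y) * 100

y_obs = [6.2, 7.1, 5.8, 6.9]   # measured yields, t ha-1 (placeholder)
y_pred = [6.0, 7.3, 6.1, 6.6]  # predicted yields (placeholder)
print(round(rmse(y_obs, y_pred), 3))  # 0.255
```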
The weight allocation formula [26] is as follows:
$$w_l = \frac{E_{R,l}}{\sum_{h=1}^{T} E_{R,h}}$$

where $w_l$ is the weight of the $l$-th primary learner ($l = 1, 2, \ldots, T$), $T$ is the number of primary learners, and $E_{R,l}$ and $E_{R,h}$ are the R2 values of the $l$-th and $h$-th primary learners, respectively.
This formula transforms the R2 scores of each base model into weights and ensures that the sum of all weights equals 1. Thus, the stronger predictive performance of each base model is assigned a higher weight, leading to a larger proportion in the ensemble prediction.

3. Results

3.1. Principal Component Analysis of Texture Features

In analyzing the initial value, variance contribution rate and cumulative variance contribution rate of the texture feature principal components (Table 2), we observed that the initial eigenvalues of the first, second, and third principal components exceeded 1, specifically 19.72, 11.13 and 3.09, respectively. The variance contribution rates were 49.30%, 27.80% and 7.70%, respectively, and the cumulative variance contribution rate amounted to 84.90%. This indicated that the first three principal components were capable of retaining 84.90% of the information from the original data. Consequently, the first three components were extracted as the principal components for the comprehensive evaluation of texture features.
Figure 4 displays the loadings of the principal component analysis for the 40 texture features. The variance contributions of the first (PC1), second (PC2), and third (PC3) principal components were represented on the X-, Y- and Z-axes, respectively. It was evident that the larger the absolute value of a variable's coefficient on a particular principal component, the greater its contribution to that component.

3.2. Correlation Analysis of CI, VI, Texture Features and TIR with Wheat Yield

The Pearson's correlation coefficient (r) analysis of the 10 CIs, 11 VIs, 3 texture principal components and the thermal infrared index with wheat yield is shown in Figure 5. The absolute correlation between the CIs and wheat yield ranged from r = 0.13 to r = 0.72. Among these, the highest correlation was observed with VARI (r = 0.72), while the lowest correlations were with PPR and GBRI (r = 0.13). Seven indices, IKAW, ExG, RGBVI, GLA, CIVE, RBRI and VARI, exhibited correlations of 0.6 and above (r ≥ 0.60). The absolute correlation between the VIs and wheat yield consistently approached 0.70, with RDVI and GOSAVI showing the highest correlation (r = 0.70) and MTCI the lowest (r = 0.68). The texture features were assessed primarily through their principal components; among them, the absolute correlation of PC1 with wheat yield was the highest (r = 0.69), whereas the remaining components exhibited lower correlations. Notably, TIR also demonstrated a relatively high correlation (r = 0.68).
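Per-index Pearson correlations of this kind can be computed directly with NumPy. The arrays below are simulated stand-ins for the plot-level yields and one index, not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(2)
yield_t = rng.normal(6.5, 0.8, size=200)  # plot yields, t ha-1 (simulated)
# A vegetation index constructed to correlate with yield (simulated)
index = 0.6 * (yield_t - yield_t.mean()) + rng.normal(0, 0.5, size=200)

# Pearson's r between the index and yield, as visualized in Figure 5
r = np.corrcoef(index, yield_t)[0, 1]
print(round(abs(r), 2))
```

Repeating this over the 25 feature columns yields the correlation matrix summarized in Figure 5.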

3.3. Wheat Yield Prediction for Optimal Sensor

In this study, five regression algorithms (RF, PLS, RR, KNN and XGboost) were employed, alongside three ensemble learning algorithms, to forecast wheat yield. These predictions were based on features extracted from three distinct types of sensors (RGB, MS and TIR) and their various combinations, as depicted in Table 3 and Figure 6. Summing over the eight algorithms, the highest total R2 values among the single data sources, two-source fusions, three-source fusions and the four-source fusion were observed for Texture (R2 = 4.773), Texture + TIR (R2 = 4.934), RGB + Texture + TIR (R2 = 5.153) and RGB + MS + Texture + TIR (R2 = 5.238), respectively. Additionally, the total prediction error of the RGB + MS + Texture + TIR data fusion model was also the lowest (RMSE = 5.546 t ha−1, NRMSE = 55.733%). Therefore, the RGB + MS + Texture + TIR data fusion yielded the most accurate predictions for wheat yield, surpassing single, dual and triple data source fusion: it achieved a higher overall R2 value, by 9.74–33.48%, 6.17–19.61% and 1.64–8.88%, respectively, and a lower total RMSE, by 7.53–17.72%, 5.12–16.07% and 3.23–6.97%, respectively; the total NRMSE was similarly reduced by 7.54–17.73%, 5.13–16.06% and 3.31–6.98%. In conclusion, the RGB + MS + Texture + TIR data fusion emerged as the most precise in estimating wheat yield. To further analyze the contribution of each sensor, compared with the corresponding three-source fusions, adding the RGB, MS, Texture and TIR data improved the R2 by 2.13–7.57%, 0.72–2.76%, 6.33–11.05% and 3.90–7.15%, respectively. Among them, the Texture features contributed the most to the accuracy of wheat yield prediction.

3.4. Optimal Machine Learning Algorithm for Wheat Yield Prediction

Based on the results above, the fusion data of RGB + MS + Texture + TIR demonstrated high accuracy in predicting wheat yield. Among the five base models, the RR model performed the best when using RGB data (R2 = 0.517) and TIR data (R2 = 0.490) as single data sources. Conversely, PLS exhibited the highest predictive value for MS data (R2 = 0.534), while XGboost showed the highest predictive value for Texture data (R2 = 0.593). After the fusion of multi-sensor data, the prediction accuracy of most machine learning models was notably enhanced. The findings indicated that XGboost emerged as the top-performing predictive machine learning model, achieving an R2 value of 0.660 (Table 3). The analysis results for the models on different data combinations are depicted in Figure 6. The R2 value of XGboost was observed to be 0.011, 0.014, 0.0053, and 0.044 higher than RF, PLS, RR, and KNN, respectively. Furthermore, the XGboost model exhibited smaller errors in terms of RMSE and NRMSE. Specifically, its RMSE was lower than that of the other four models by 0.010, 0.013, 0.005, and 0.040, respectively, while the NRMSE was lower by 0.104, 0.131, 0.053, and 0.399, respectively. These findings further confirm the superiority of XGboost in wheat yield prediction, followed by RR.
Compared with the basic model, three ensemble methods were used in this study, including two second-layer ensemble methods (stacking and feature-weighted methods) and one third-layer ensemble method (simple average method). The analysis results are shown in Table 3. All three ensemble methods demonstrated higher model prediction accuracy compared to the single ML model. When compared to the single ML model that performed best on single sensor data, stacking, feature-weighted and simple average ensemble learning increased the R2 values of the single sensor by 1.53–2.16%, 0.50–2.67% and 14.33–21.26%, respectively. Additionally, RMSE was reduced by 0.81–1.48%, 0.33–1.55% and 1.10–1.65%, respectively, while NRMSE was reduced by 0.83–1.51%, 0.37–1.54% and 1.08–1.66%, respectively.
Compared with the single ML models exhibiting the best performance in the optimal combination of multi-source data fusion (RGB + MS + Texture + TIR), the prediction accuracy of the three ensemble learning methods was also superior, surpassing each single model by 1.23%, 1.07% and 11.01%, respectively. Additionally, the RMSE was reduced by 1.19%, 1.03% and 1.97%, respectively, while NRMSE decreased by 1.20%, 1.04% and 1.68%, respectively. These results confirmed the effectiveness of the ensemble learning model.
Additionally, Figure 7 illustrates that the R2 of the simple average ensemble model shows a significant difference (p < 0.05) compared to the other models, being notably higher than that of the stacking ensemble model and the feature-weighted ensemble model, with increases of 1.121 and 1.157, respectively. Moreover, both RMSE and NRMSE were significantly lower in the simple average ensemble model compared to the other two ensemble models (p < 0.05). By comparing the correlation and linear fit between the predicted yield and measured yield of different ensemble methods under the optimal combination of RGB + MS + Texture + TIR (Figure 8), it was observed that there was a significant positive correlation between the yield predictions of the simple average ensemble method and the actual yields, demonstrating good fit and reliability. Therefore, it can be inferred that the simple average ensemble model was more accurate for wheat yield prediction.

4. Discussion

4.1. Prediction of Wheat Yield from Single Sensor Data and Multi-Sensor Fusion Data

In this study, through the analysis of the single sensor prediction results, it was found that the wheat yield prediction accuracy ranked as follows: Texture > MS > RGB > TIR. Among them, texture features exhibited superior performance in wheat yield prediction accuracy, with R2 values ranging from 0.539 to 0.593. This has been consistently demonstrated in studies across various sites and crops. The utilization of PCA in maize yield prediction effectively reduced the standard deviation of the prediction performance, thereby enhancing the accuracy of yield forecasts [53].
In Vietnam, the rice yield prediction model utilizing PCA-ML exhibited an average improvement of 18.5–45.0% compared to using ML alone. This outcome fully underscores the reliability and effectiveness of the combined model [54]. This indicates that the method combining PCA and ML effectively handles redundant data in multi-channel texture features, consequently leading to a significant enhancement in the accuracy of yield prediction.
The wheat yield prediction results from MS data were superior to those from RGB data, primarily due to their capability to capture spectral information across multiple bands from visible light to near infrared. Particularly, the near-infrared band provides the opportunity to accurately calculate VIs such as NDVI, which in turn can be utilized to better assess wheat yield. Furthermore, the stability of MS cameras across varying lighting conditions minimizes the influence of environmental fluctuations on prediction accuracy, ensuring the provision of reliable data for yield prediction [55,56]. The performance of TIR information extracted by TIR sensors was not satisfactory, with R2 values ranging from 0.434 to 0.490. This finding aligns with the results reported by Luz and Elarab [57,58]. The possible explanation for this could be that canopy heat information is intricately linked to factors such as leaf water content, pigment concentration and canopy structural characteristics. If these factors are not appropriately controlled or corrected for during data processing, they can significantly impact the accuracy of yield predictions [7,59].
Multi-sensor fusion (RGB + MS + Texture + TIR) demonstrated clear advantages over single sensor prediction. By harnessing the capabilities of multiple sensors and integrating data from different sources, it provided a more comprehensive overview of crop growth information, thereby enhancing forecast accuracy [13].
However, it also poses challenges in terms of data processing and algorithm optimization. Future research efforts should focus on streamlining the fusion process and enhancing algorithm efficiency to achieve more reliable wheat yield prediction.

4.2. Application of Basic Model in Wheat Yield Prediction

Five basic models were employed for wheat yield forecasting. XGboost, as a novel ML algorithm, has demonstrated superior predictive capabilities compared to other models, such as RF [60]. RF has been favored by many researchers due to its capability of removing redundant information from spectral data and achieving higher inversion accuracy through a smaller set of spectral characteristic variables [60,61]. Indeed, the XGboost model exhibited exceptional performance in the wheat yield prediction task. This was primarily attributed to its innovative algorithm design and optimization strategy, which effectively minimized overfitting and reduced computational demands. Consequently, the model’s generalization ability was significantly enhanced, leading to more accurate predictions [62]. This research result has been corroborated by Li et al., who confirmed that the XGboost model outperforms other models in soybean yield prediction when utilizing the same input data [63]. Furthermore, in the prediction of winter wheat yield, the XGboost model not only marginally exceeded the RF model in terms of prediction accuracy but also demonstrated significant superiority in computational efficiency in most scenarios. Notably, it requires less time, making it a more efficient and practical choice for yield prediction [64]. These results underscore the advantages of XGboost in processing large-scale agricultural data, particularly in situations where swift and efficient output predictions are imperative. The model’s superior performance in terms of both accuracy and computational efficiency demonstrates its potential as a valuable tool for agricultural yield forecasting.
The PLS model exhibited the poorest performance in wheat yield prediction under both single-sensor and multi-sensor data fusion scenarios. Although PLS can address multicollinearity among independent variables, the trained model tends to overfit as the number of latent variables increases. This overfitting degrades performance on new test data and limits the accuracy and reliability of PLS for yield prediction tasks [65,66].
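The latent-variable mechanism behind this overfitting risk can be seen in a minimal PLS1 (NIPALS deflation) sketch: training error keeps falling as components are added, so components beyond the true latent dimension fit noise. The data, dimensions, and component counts below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 120, 40  # more predictors than is comfortable for ordinary regression
latent = rng.normal(size=(n, 3))
X = latent @ rng.normal(size=(3, p)) + 0.1 * rng.normal(size=(n, p))
y = latent[:, 0] + 0.2 * rng.normal(size=n)

def pls1_fit(X, y, n_comp):
    """PLS1 via the NIPALS deflation scheme; returns regression coefficients."""
    Xc, yc = X - X.mean(0), y - y.mean()
    W, P, Q = [], [], []
    for _ in range(n_comp):
        w = Xc.T @ yc              # weight: direction of max covariance with y
        w /= np.linalg.norm(w)
        t = Xc @ w                 # score vector
        tt = t @ t
        p_load = Xc.T @ t / tt     # X loading
        q = (yc @ t) / tt          # y loading
        Xc = Xc - np.outer(t, p_load)  # deflate X
        yc = yc - q * t                # deflate y
        W.append(w); P.append(p_load); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    B = W @ np.linalg.solve(P.T @ W, Q)  # standard PLS1 coefficient formula
    return B, X.mean(0), y.mean()

def pls1_train_rmse(n_comp):
    B, xm, ym = pls1_fit(X, y, n_comp)
    pred = (X - xm) @ B + ym
    return float(np.sqrt(((y - pred) ** 2).mean()))

# Training error is non-increasing in the number of latent variables; on
# held-out data the error would eventually rise again - the overfitting
# discussed above.
errs = [pls1_train_rmse(k) for k in (1, 3, 10, 20)]
print(errs)
```

In practice the component count is chosen by cross-validation rather than by training fit.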

4.3. Performance of Ensemble Learning in Wheat Yield Prediction

Despite recent advances in ML methods and their successful application across many fields, purely data-driven ML still has fundamental limitations. The accuracy and uncertainty of ML predictions depend heavily on the quality of the data, the representativeness of the chosen model, and the dependencies between the input and target variables in the collected dataset [67]. Data containing high levels of noise, erroneous information, outliers, bias, and incompleteness can substantially diminish a model's predictive capability [21]. For this reason, this study incorporated three ensemble methods: stacking, feature-weighted, and simple average. Compared with any single model, the ensemble models achieved higher precision, in line with previous research [13,68]. At the second layer, the R2 values of the stacking ensemble were closely comparable to those of the feature-weighted ensemble. The primary advantage of stacking lies in its ability to learn and exploit the complementarities among diverse base learners, thereby improving prediction accuracy [69]. However, because the performance of the primary learners varies, large output errors from some of them can inject significant error features into the training of the meta-learner, degrading the accuracy of the whole model [70]. The feature-weighted ensemble, by contrast, corrects the prediction error of each primary learner; this partially compensates for poorly performing individual models and generates a dataset more conducive to learner training [67].
Therefore, when the correlation among features varies within the data, it is prudent to select an ensemble method tailored to the specific characteristics of the dataset [71]. In summary, the prediction accuracies of the stacking and feature-weighted methods were comparable, likely because each approach offers distinct advantages. Notably, the third-layer simple average ensemble achieved the highest R2. This superior performance may stem from its ability to integrate predictions from diverse methods, mitigating issues such as model disparities, variations in sample distribution, and inaccuracies in feature weights, and ultimately improving prediction accuracy.
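The three-level scheme discussed above can be sketched end-to-end: out-of-fold base predictions feed both a ridge meta-learner (stacking, StRR) and an inverse-error weighting (feature-weighted, En_FW), and the two second-level outputs are then simply averaged (En_Mean). The sketch below uses closed-form ridge models on random feature subsets of synthetic data as stand-ins for the study's five base learners; all names, data, and hyperparameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the fused UAV feature set (hypothetical data).
n, p = 300, 12
X = rng.normal(size=(n, p))
y = X[:, :4].sum(axis=1) + 0.5 * rng.normal(size=n)
Xtr, Xte, ytr, yte = X[:200], X[200:], y[:200], y[200:]

def ridge_fit(X, y, lam=1.0):
    # Closed-form ridge regression: (X'X + lam*I)^-1 X'y
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def kfold_oof(X, y, lam=1.0, k=5):
    # Out-of-fold predictions so the meta-learner never sees leaked targets.
    oof, idx = np.empty_like(y), np.arange(len(y))
    for f in np.array_split(idx, k):
        tr = np.setdiff1d(idx, f)
        oof[f] = X[f] @ ridge_fit(X[tr], y[tr], lam)
    return oof

# Level 1: diverse "base learners" = ridge models on different feature subsets.
subsets = [slice(0, 6), slice(3, 9), slice(6, 12)]
oof = np.column_stack([kfold_oof(Xtr[:, s], ytr) for s in subsets])
test_base = np.column_stack(
    [Xte[:, s] @ ridge_fit(Xtr[:, s], ytr) for s in subsets])

# Level 2a: stacking with a ridge meta-learner (StRR in the paper's notation).
w_meta = ridge_fit(oof, ytr)
stack_pred = test_base @ w_meta

# Level 2b: feature-weighted ensemble - weights inverse to each base
# learner's out-of-fold RMSE, normalised to sum to one.
base_rmse = np.sqrt(((oof - ytr[:, None]) ** 2).mean(axis=0))
w_fw = (1.0 / base_rmse) / (1.0 / base_rmse).sum()
fw_pred = test_base @ w_fw

# Level 3: simple average of the two second-level predictions (En_Mean).
final_pred = 0.5 * (stack_pred + fw_pred)

def rmse_of(pred):
    return float(np.sqrt(((pred - yte) ** 2).mean()))

print(rmse_of(stack_pred), rmse_of(fw_pred), rmse_of(final_pred))
```

The averaging step costs nothing extra at inference time, which is part of its appeal when the two second-level ensembles are close in accuracy.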
With the rise of deep learning, particularly Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), models with exceptional capabilities for handling large-scale, high-dimensional datasets have become available [72]. In agricultural yield forecasting, deep learning models can overcome constraints inherent in conventional methodologies by integrating diverse datasets, including satellite imagery, climatic data, and soil conditions, into a holistic analysis for forecasting crop production [73,74,75].

5. Conclusions

This study delved into the capabilities of UAV multi-sensor data fusion and machine learning algorithms for wheat yield prediction. Three ensemble learning methods (stacking, feature-weighted, and simple average) were proposed to improve the performance of the prediction model, and the results demonstrated that they enhanced the accuracy of wheat yield prediction. By synthesizing the strengths of different learners, the ensemble methods mitigated the risk of overfitting associated with individual models and thereby strengthened generalization ability. The introduction of the simple average as the third-layer ensemble represented a novel concept in wheat yield prediction: it evaluated and improved forecasting performance in a more robust and comprehensive manner while enhancing the model's adaptability and flexibility to data variations without sacrificing predictive accuracy. These ensemble learning methods are expected to become valuable tools for assessing the yield potential of diverse wheat genotypes, providing a scientific basis and crucial decision support for the development of superior wheat varieties with higher yield potential.

Author Contributions

S.Y. wrote the manuscript; L.L. designed the study, and both analyzed the data; Y.X. and Y.M. directed the trial and provided the main idea; L.L. and M.Y. participated in data collection; S.F. helped with image processing; Z.T. provided comments and suggestions to improve the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the National Science and Technology Major Program (2022ZD015703), the National Natural Science Foundation of China (32372196), and the Beijing Joint Research Program for Germplasm Innovation and New Variety Breeding (G20220628002).

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding authors.

Conflicts of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Sun, C.; Dong, Z.; Zhao, L.; Ren, Y.; Zhang, N.; Chen, F. The Wheat 660K SNP array demonstrates great potential for marker-assisted selection in polyploid wheat. Plant Biotechnol. J. 2020, 18, 1354–1360. [Google Scholar] [CrossRef]
  2. Zhou, X.; Zheng, H.B.; Xu, X.Q.; He, J.Y.; Ge, X.K.; Yao, X.; Cheng, T.; Zhu, Y.; Cao, W.X.; Tian, Y.C. Predicting grain yield in rice using multi-temporal vegetation indices from UAV-based multispectral and digital imagery. ISPRS J. Photogramm. Remote Sens. 2017, 130, 246–255. [Google Scholar] [CrossRef]
  3. Bian, C.; Shi, H.; Wu, S.; Zhang, K.; Wei, M.; Zhao, Y.; Sun, Y.; Zhuang, H.; Zhang, X.; Chen, S. Prediction of Field-Scale Wheat Yield Using Machine Learning Method and Multi-Spectral UAV Data. Remote Sens. 2022, 14, 1474. [Google Scholar] [CrossRef]
  4. Xu, W.; Chen, P.; Zhan, Y.; Chen, S.; Zhang, L.; Lan, Y. Cotton yield estimation model based on machine learning using time-series UAV remote sensing data. Int. J. Appl. Earth Obs. Geoinf. 2021, 104, 102511. [Google Scholar] [CrossRef]
  5. Weiss, M.; Jacob, F.; Duveiller, G. Remote sensing for agricultural applications: A meta-review. Remote Sens. Environ. 2020, 236, 111402. [Google Scholar] [CrossRef]
  6. Yao, H.; Qin, R.; Chen, X. Unmanned aerial vehicle for remote sensing applications—A review. Remote Sens. 2019, 11, 1443. [Google Scholar] [CrossRef]
  7. Maimaitijiang, M.; Sagan, V.; Sidike, P.; Hartling, S.; Fritschi, F.B. Soybean yield prediction from UAV using multimodal data fusion and deep learning. Remote Sens. Environ. 2020, 237, 111599. [Google Scholar] [CrossRef]
  8. Li, B.; Xu, X.; Zhang, L.; Han, J.; Jin, L. Above-ground biomass estimation and yield prediction in potato by using UAV-based RGB and hyperspectral imaging. ISPRS J. Photogramm. Remote Sens. 2020, 162, 161–172. [Google Scholar] [CrossRef]
  9. Hassan, M.A.; Yang, M.; Rasheed, A.; Yang, G.; Reynolds, M.; Xia, X.; Xiao, Y.; He, Z. A rapid monitoring of NDVI across the wheat growth cycle for grain yield prediction using a multi-spectral UAV platform. Plant Sci. 2019, 282, 95–103. [Google Scholar] [CrossRef] [PubMed]
  10. Concepcion, R.S., II; Lauguico, S.C.; Alejandrino, J.D.; Dadios, E.P.; Sybingco, E. Lettuce Canopy Area Measurement Using Static Supervised Neural Networks Based on Numerical Image Textural Feature Analysis of Haralick and Gray Level Co-Occurrence Matrix. AGRIVITA J. Agric. Sci. 2020, 42, 472–486. [Google Scholar] [CrossRef]
  11. Das, S.; Christopher, J.; Apan, A.; Choudhury, M.R.; Chapman, S.; Menzies, N.W.; Dang, Y.P. UAV-thermal imaging: A robust technology to evaluate in-field crop water stress and yield variation of wheat genotypes. In Proceedings of the 2020 IEEE India Geoscience and Remote Sensing Symposium (InGARSS), Ahmedabad, India, 1–4 December 2020; pp. 138–141. [Google Scholar] [CrossRef]
  12. Gadhwal, M.; Sharda, A.; Sangha, H.S.; Van der Merwe, D. Spatial corn canopy temperature extraction: How focal length and sUAS flying altitude influence thermal infrared sensing accuracy. Comput. Electron. Agric. 2023, 209, 107812. [Google Scholar] [CrossRef]
  13. Fei, S.; Hassan, M.A.; Xiao, Y.; Su, X.; Chen, Z.; Cheng, Q.; Duan, F.; Chen, R.; Ma, Y. UAV-based multi-sensor data fusion and machine learning algorithm for yield prediction in wheat. Precis. Agric. 2022, 24, 187–212. [Google Scholar] [CrossRef] [PubMed]
  14. Ramos, A.P.M.; Osco, L.P.; Furuya, D.E.G.; Gonçalves, W.N.; Pistori, H. A random forest ranking approach to predict yield in maize with UAV-based vegetation spectral indices. Comput. Electron. Agric. 2020, 178, 105791. [Google Scholar] [CrossRef]
  15. Liakos, K.G.; Busato, P.; Moshou, D.; Pearson, S.; Bochtis, D. Machine Learning in Agriculture: A Review. Sensors 2018, 18, 2674. [Google Scholar] [CrossRef] [PubMed]
  16. Han, J.; Zhang, Z.; Cao, J.; Luo, Y.; Zhang, L.; Li, Z.; Zhang, J. Prediction of Winter Wheat Yield Based on Multi-Source Data and Machine Learning in China. Remote Sens. 2020, 12, 236. [Google Scholar] [CrossRef]
  17. Maimaitijiang, M.; Ghulam, A.; Sidike, P.; Hartling, S.; Maimaitiyiming, M.; Peterson, K.; Shavers, E.; Fishman, J.; Peterson, J.; Kadam, S.; et al. Unmanned aerial system (UAS)-based phenotyping of soybean using multi-sensor data fusion and extreme learning machine. ISPRS J. Photogramm. Remote Sens. 2017, 134, 43–58. [Google Scholar] [CrossRef]
  18. Ahmed, A.A.M.; Sharma, E.; Jui, S.J.J.; Deo, R.C.; Nguyen-Huy, T.; Ali, M. Kernel ridge regression hybrid method for wheat yield prediction with satellite-derived predictors. Remote Sens. 2022, 14, 1136. [Google Scholar] [CrossRef]
  19. Feng, L.; Zhang, Z.; Ma, Y.; Du, Q.; Williams, P.; Drewry, J. Alfalfa yield prediction using UAV-based hyperspectral imagery and ensemble learning. Remote Sens. 2020, 12, 2028. [Google Scholar] [CrossRef]
  20. Sarijaloo, F.B.; Porta, M.; Taslimi, B.; Pardalos, P.M. Yield performance estimation of corn hybrids using machine learning algorithms. Artif. Intell. Agric. 2021, 5, 82–89. [Google Scholar] [CrossRef]
  21. Chlingaryan, A.; Sukkarieh, S.; Whelan, B. Machine learning approaches for crop yield prediction and nitrogen status estimation in precision agriculture: A review. Comput. Electron. Agric. 2018, 151, 61–69. [Google Scholar] [CrossRef]
  22. Van der Laan, M.J.; Polley, E.C.; Hubbard, A.E. Super learner. Stat. Appl. Genet. Mol. Biol. 2007, 6. [Google Scholar] [CrossRef] [PubMed]
  23. Dong, X.; Zhiwen, Y.U.; Cao, W.; Shi, Y.; Qianli, M.A. A survey on ensemble learning. Front. Comput. Sci. 2020, 14, 241–258. [Google Scholar] [CrossRef]
  24. Zhang, Y.; Liu, J.; Shen, W. A review of ensemble learning algorithms used in remote sensing applications. Appl. Sci. 2022, 12, 8654. [Google Scholar] [CrossRef]
  25. Zhang, W.; Ren, H.; Jiang, Q.; Zhang, K. Exploring Feature Extraction and ELM in Malware Detection for Android Devices. In International Symposium on Neural Networks; Springer: Cham, Switzerland, 2015; pp. 489–498. [Google Scholar] [CrossRef]
  26. Niño-Adan, I.; Manjarres, D.; Landa-Torres, I.; Portillo, E. Feature weighting methods: A review. Expert Syst. Appl. 2021, 184, 115424. [Google Scholar] [CrossRef]
  27. Liu, Z.; Wen, T.; Sun, W.; Zhang, Q. Semi-supervised self-training feature weighted clustering decision tree and random forest. IEEE Access 2020, 8, 128337–128348. [Google Scholar] [CrossRef]
  28. Xue, J.; Su, B. Significant remote sensing vegetation indices: A review of developments and applications. J. Sens. 2017, 1, 1–17. [Google Scholar] [CrossRef]
  29. Humeau-Heurtier, A. Texture feature extraction methods: A survey. IEEE Access 2019, 7, 8975–9000. [Google Scholar] [CrossRef]
  30. Abdi, H.; Williams, L.J. Principal component analysis. Wiley Interdiscip. Rev. Comput. Stat. 2010, 2, 433–459. [Google Scholar] [CrossRef]
  31. Bendig, J.; Yu, K.; Aasen, H.; Bolten, A.; Bennertz, S.; Broscheit, J.; Gnyp, M.; Bareth, G. Combining UAV-based plant height from crop surface models, visible, and near infrared vegetation indices for biomass monitoring in barley. Int. J. Appl. Earth Obs. Geoinf. 2015, 39, 79–87. [Google Scholar] [CrossRef]
  32. Peñuelas, J.; Gamon, J.A.; Fredeen, A.L.; Merino, J.; Field, C.B. Reflectance indices associated with physiological changes in nitrogen-and water-limited sunflower leaves. Remote Sens. Environ. 1994, 48, 135–146. [Google Scholar] [CrossRef]
  33. Louhaichi, M.; Borman, M.M.; Johnson, D.E. Spatially located platform and aerial photography for documentation of grazing impacts on wheat. Geocarto Int. 2001, 16, 65–70. [Google Scholar] [CrossRef]
  34. Woebbecke, D.M.; Meyer, G.E.; Von Bargen, K.; Mortensen, D.A. Color indices for weed identification under various soil, residue, and lighting conditions. Trans. ASAE 1995, 38, 259–269. [Google Scholar] [CrossRef]
  35. Guijarro, M.; Pajares, G.; Riomoros, I.; Herrera, P.J.; Burgos-Artizzu, X.P.; Ribeiro, A. Automatic segmentation of relevant textures in agricultural images. Comput. Electron. Agric. 2011, 75, 75–83. [Google Scholar] [CrossRef]
  36. Gitelson, A.A.; Kaufman, Y.J.; Stark, R.; Rundquist, D. Novel algorithms for remote estimation of vegetation fraction. Remote Sens. Environ. 2002, 80, 76–87. [Google Scholar] [CrossRef]
  37. Kawashima, S.; Nakatani, M. An algorithm for estimating chlorophyll content in leaves using a video camera. Ann. Bot. 1998, 81, 49–54. [Google Scholar] [CrossRef]
  38. Sellaro, R.; Crepy, M.; Trupkin, S.A.; Karayekov, E.; Buchovsky, A.S.; Rossi, C.; Casal, J.J. Cryptochrome as a sensor of the blue/green ratio of natural radiation in Arabidopsis. Plant Physiol. 2010, 154, 401–409. [Google Scholar] [CrossRef] [PubMed]
  39. Gitelson, A.A.; Merzlyak, M.N. Signature analysis of leaf reflectance spectra: Algorithm development for remote sensing of chlorophyll. J. Plant Physiol. 1996, 148, 494–500. [Google Scholar] [CrossRef]
  40. Zhang, S.; Liu, L. The potential of the MERIS Terrestrial Chlorophyll Index for crop yield prediction. Remote Sens. Lett. 2014, 5, 733–742. [Google Scholar] [CrossRef]
  41. Pinter, P.J., Jr.; Hatfield, J.L.; Schepers, J.S.; Barnes, E.M.; Moran, M.S.; Daughtry, C.S.; Upchurch, D.R. Remote sensing for crop management. Photogramm. Eng. Remote Sens. 2003, 69, 647–664. [Google Scholar] [CrossRef]
  42. Xue, L.; Cao, W.; Luo, W.; Dai, T.; Zhu, Y. Monitoring leaf nitrogen status in rice with canopy spectral reflectance. Agron. J. 2004, 96, 135–142. [Google Scholar] [CrossRef]
  43. Chen, J.M. Evaluation of vegetation indices and a modified simple ratio for boreal applications. Can. J. Remote Sens. 1996, 22, 229–242. [Google Scholar] [CrossRef]
  44. Roujean, J.L.; Breon, F.M. Estimating PAR absorbed by vegetation from bidirectional reflectance measurements. Remote Sens. Environ. 1995, 51, 375–384. [Google Scholar] [CrossRef]
  45. Peñuelas, J.; Filella, I.; Gamon, J.A. Assessment of photosynthetic radiation-use efficiency with spectral reflectance. New Phytol. 1995, 131, 291–296. [Google Scholar] [CrossRef]
  46. Gitelson, A.A.; Viña, A.; Arkebauer, T.J.; Rundquist, D.C.; Keydan, G.; Leavitt, B. Remote estimation of leaf area index and green leaf biomass in maize canopies. Geophys. Res. Lett. 2003, 30, 52. [Google Scholar] [CrossRef]
  47. Gilabert, M.A.; González-Piqueras, J.; Garcıa-Haro, F.J.; Meliá, J. A generalized soil-adjusted vegetation index. Remote Sens. Environ. 2002, 82, 303–310. [Google Scholar] [CrossRef]
  48. Merzlyak, M.N.; Gitelson, A.A.; Chivkunova, O.B.; Rakitin, V.Y. Non-destructive optical detection of pigment changes during leaf senescence and fruit ripening. Physiol. Plant. 1999, 106, 135–141. [Google Scholar] [CrossRef]
  49. Quinlan, J.R. Learning with continuous classes. In Proceedings of the 5th Australian Joint Conference on Artificial Intelligence, Hobart, Australia, 16–18 November 1992; Volume 92, pp. 343–348. [Google Scholar] [CrossRef]
  50. Fei, S.; Hassan, M.A.; He, Z.; Chen, Z.; Shu, M.; Wang, J.; Li, C.; Xiao, Y. Assessment of ensemble learning to predict wheat grain yield based on UAV-multispectral reflectance. Remote Sens. 2021, 13, 2338. [Google Scholar] [CrossRef]
  51. Janneh, L.L.; Zhang, Y.; Cui, Z.; Yang, Y. Multi-level feature re-weighted fusion for the semantic segmentation of crops and weeds. J. King Saud Univ.-Comput. Inf. Sci. 2023, 35, 101545. [Google Scholar] [CrossRef]
  52. Yu, D.; Zha, Y.; Shi, L.; Jin, X.; Hu, S.; Yang, Q.; Huang, K.; Zeng, W.Z. Improvement of sugarcane yield estimation by assimilating UAV-derived plant height observations. Eur. J. Agron. 2020, 121, 126159. [Google Scholar] [CrossRef]
  53. Croci, M.; Impollonia, G.; Meroni, M.; Amaducci, S. Dynamic maize yield predictions using machine learning on multi-source data. Remote Sens. 2022, 15, 100. [Google Scholar] [CrossRef]
  54. Pham, H.T.; Awange, J.; Kuhn, M.; Nguyen, B.V.; Bui, L.K. Enhancing Crop Yield Prediction Utilizing Machine Learning on Satellite-Based Vegetation Health Indices. Sensors 2022, 22, 719. [Google Scholar] [CrossRef] [PubMed]
  55. Soria, X.; Sappa, A.D.; Akbarinia, A. Multispectral single-sensor RGB-NIR imaging: New challenges and opportunities. In Proceedings of the 2017 Seventh International Conference on Image Processing Theory, Tools and Applications (IPTA), Montreal, QC, Canada, 28 November–1 December 2017; pp. 1–6. [Google Scholar] [CrossRef]
  56. Cao, X.; Liu, Y.; Yu, R.; Han, D.; Su, B. A comparison of UAV RGB and multispectral imaging in phenotyping for stay green of wheat population. Remote Sens. 2021, 13, 5173. [Google Scholar] [CrossRef]
  57. Luz, B.R.D.; Crowley, J.K. Identification of plant species by using high spatial and spectral resolution thermal infrared (8.0–13.5μm) imagery. Remote Sens. Environ. 2010, 114, 404–413. [Google Scholar] [CrossRef]
  58. Elarab, M.; Ticlavilca, A.M.; Torres-Rua, A.F.; Maslova, I.; Mckee, M. Estimating chlorophyll with thermal and broadband multispectral high resolution imagery from an unmanned aerial system using relevance vector machines for precision agriculture. Int. J. Appl. Earth Obs. Geoinf. 2015, 43, 32–42. [Google Scholar] [CrossRef]
  59. Luz, B.R.D.; Crowley, J.K. Spectral reflectance and emissivity features of broad leaf plants: Prospects for remote sensing in the thermal infrared (8.0–14.0 μm). Remote Sens. Environ. 2007, 109, 393–405. [Google Scholar] [CrossRef]
  60. Bolón-Canedo, V.; Alonso-Betanzos, A. Ensembles for feature selection: A review and future trends. Inf. Fusion 2019, 52, 1–12. [Google Scholar] [CrossRef]
  61. Huang, L.; Liu, Y.; Huang, W.; Dong, Y.; Ma, H.; Wu, K.; Guo, A. Combining random forest and XGBoost methods in detecting early and mid-term winter wheat stripe rust using canopy level hyperspectral measurements. Agriculture 2022, 12, 74. [Google Scholar] [CrossRef]
  62. Nagaraju, A.; Mohandas, R. Multifactor Analysis to Predict Best Crop using XgBoost Algorithm. In Proceedings of the 2021 5th International Conference on Trends in Electronics and Informatics (ICOEI), Tirunelveli, India, 3–5 June 2021; pp. 155–163. [Google Scholar] [CrossRef]
  63. Li, Y.; Zeng, H.; Zhang, M.; Wu, B.; Zhao, Y.; Yao, X.; Cheng, T.; Qin, X.; Wu, F. A county-level soybean yield prediction framework coupled with XGBoost and multidimensional feature engineering. Int. J. Appl. Earth Obs. Geoinf. 2023, 118, 103269. [Google Scholar] [CrossRef]
  64. Joshi, A.; Pradhan, B.; Chakraborty, S.; Behera, M.D. Winter wheat yield prediction in the conterminous United States using solar-induced chlorophyll fluorescence data and XGBoost and random forest algorithm. Ecol. Inform. 2023, 77, 102194. [Google Scholar] [CrossRef]
  65. Aguate, F.M.; Trachsel, S.; Pérez, L.G.; Burgueño, J.; Crossa, J.; Balzarini, M.; de los Campos, G. Use of hyperspectral image data outperforms vegetation indices in prediction of maize yield. Crop Sci. 2017, 57, 2517–2524. [Google Scholar] [CrossRef]
  66. Zeng, W.Z.; Xu, C.; Zhao, G.; Wu, J.W.; Huang, J. Estimation of sunflower seed yield using partial least squares regression and artificial neural network models. Pedosphere 2018, 28, 764–774. [Google Scholar] [CrossRef]
  67. Wei, P.; Lu, Z.; Song, J. Variable importance analysis: A comprehensive review. Reliab. Eng. Syst. Saf. 2015, 142, 399–432. [Google Scholar] [CrossRef]
  68. Ji, Y.; Liu, R.; Xiao, Y.; Cui, Y.; Chen, Z.; Zong, X.; Yang, T. Faba bean above-ground biomass and bean yield estimation based on consumer-grade unmanned aerial vehicle RGB images and ensemble learning. Precis. Agric. 2023, 24, 1439–1460. [Google Scholar] [CrossRef]
  69. Li, C.; Wang, Y.; Ma, C.; Chen, W.; Li, Y.; Li, J.; Ding, F. Improvement of wheat grain yield prediction model performance based on stacking technique. Appl. Sci. 2021, 11, 12164. [Google Scholar] [CrossRef]
  70. Pavlyshenko, B. Using stacking approaches for machine learning models. In Proceedings of the 2018 IEEE Second International Conference on Data Stream Mining & Processing (DSMP), Lviv, Ukraine, 21–25 August 2018; pp. 255–258. [Google Scholar] [CrossRef]
  71. Anh, V.P.; Minh, L.N.; Lam, T.B. Feature weighting and svm parameters optimization based on genetic algorithms for classification problems. Appl. Intell. 2017, 46, 455–469. [Google Scholar] [CrossRef]
  72. Zhang, Q.; Liu, Y.; Gong, C.; Chen, Y.; Yu, H. Applications of deep learning for dense scenes analysis in agriculture: A review. Sensors 2020, 20, 1520. [Google Scholar] [CrossRef] [PubMed]
  73. Shook, J.; Gangopadhyay, T.; Wu, L.; Ganapathysubramanian, B.; Sarkar, S.; Singh, A.K. Crop yield prediction integrating genotype and weather variables using deep learning. PLoS ONE 2021, 16, e0252402. [Google Scholar] [CrossRef] [PubMed]
  74. Oikonomidis, A.; Catal, C.; Kassahun, A. Deep learning for crop yield prediction: A systematic literature review. N. Z. J. Crop Hortic. Sci. 2023, 51, 1–26. [Google Scholar] [CrossRef]
  75. Van Klompenburg, T.; Kassahun, A.; Catal, C. Crop yield prediction using machine learning: A systematic literature review. Comput. Electron. Agric. 2020, 177, 105709. [Google Scholar] [CrossRef]
Figure 1. (a) Experimental design. Note: T1, full irrigation; T2, limited irrigation. (b) Image acquisition.
Figure 2. Processing of UAV-based data.
Figure 3. Modeling construction and assessment. Note: MS, multi-spectral features; TIR, thermal infrared features; RF, random forest; PLS, partial least squares; RR, ridge regression; KNN, k-nearest neighbor; XGboost, extreme gradient boosting decision tree.
Figure 4. Principal component analysis loading plots for different texture features.
Figure 5. Pearson’s correlation coefficient (r) between CI, VI, texture features, TIR and wheat yield. (a) CIs: Color feature indices; (b) VIs: Vegetation indices; (c) texture features and TIR.
Figure 6. Comparison of the prediction accuracies of models for different sensors and their combinations. (a) R2; (b) RMSE; (c) NRMSE.
Figure 7. Comparison of the prediction accuracies of different ML algorithms. Note: The same letter indicates that there is no significant difference between the two groups of data, while different letters indicate significant differences (p < 0.05).
Figure 8. Comparison of ensemble learning prediction and measured yields.
Table 2. Initial eigenvalues, contribution rates of variance and cumulative contribution rates of variance of texture feature principal components.

| Principal Component | Eigenvalue | Variance Contribution Ratio (%) | Cumulative Variance Contribution Ratio (%) |
|---|---|---|---|
| 1 | 19.72 | 49.30 | 49.30 |
| 2 | 11.13 | 27.80 | 77.10 |
| 3 | 3.09 | 7.70 | 84.90 |
| 4 | 1.93 | 4.80 | 89.70 |
| 5 | 1.54 | 3.80 | 93.50 |
| 6 | 0.74 | 1.90 | 95.40 |
| 7 | 0.66 | 1.70 | 97.00 |
| 8 | 0.38 | 0.90 | 98.00 |
| 9 | 0.28 | 0.70 | 98.70 |
| 10 | 0.22 | 0.60 | 99.20 |
| 11 | 0.15 | 0.40 | 99.60 |
| 12 | 0.06 | 0.10 | 100.00 |
Table 3. Test accuracy statistics of different models for wheat yield prediction.

| Sensor | Metric | RF | PLS | RR | KNN | XGboost | StRR | En_FW | En_Mean |
|---|---|---|---|---|---|---|---|---|---|
| RGB | R2 | 0.492 | 0.501 | 0.517 | 0.465 | 0.514 | 0.525 | 0.524 | 0.612 |
| | RMSE (t ha−1) | 0.848 | 0.841 | 0.827 | 0.871 | 0.830 | 0.820 | 0.821 | 0.818 |
| | NRMSE (%) | 8.520 | 8.449 | 8.310 | 8.750 | 8.339 | 8.241 | 8.247 | 8.172 |
| MS | R2 | 0.513 | 0.534 | 0.534 | 0.507 | 0.528 | 0.542 | 0.548 | 0.625 |
| | RMSE (t ha−1) | 0.853 | 0.834 | 0.834 | 0.858 | 0.839 | 0.827 | 0.821 | 0.822 |
| | NRMSE (%) | 8.565 | 8.378 | 8.383 | 8.619 | 8.433 | 8.304 | 8.249 | 8.243 |
| Texture | R2 | 0.579 | 0.592 | 0.592 | 0.539 | 0.593 | 0.605 | 0.596 | 0.678 |
| | RMSE (t ha−1) | 0.758 | 0.746 | 0.746 | 0.793 | 0.745 | 0.734 | 0.743 | 0.733 |
| | NRMSE (%) | 7.617 | 7.498 | 7.498 | 7.963 | 7.487 | 7.374 | 7.459 | 7.384 |
| TIR | R2 | 0.434 | 0.490 | 0.490 | 0.439 | 0.482 | 0.500 | 0.495 | 0.594 |
| | RMSE (t ha−1) | 0.879 | 0.834 | 0.834 | 0.875 | 0.840 | 0.826 | 0.830 | 0.823 |
| | NRMSE (%) | 8.825 | 8.382 | 8.382 | 8.791 | 8.443 | 8.295 | 8.335 | 8.292 |
| RGB + MS | R2 | 0.540 | 0.506 | 0.545 | 0.503 | 0.537 | 0.561 | 0.552 | 0.636 |
| | RMSE (t ha−1) | 0.825 | 0.854 | 0.820 | 0.857 | 0.827 | 0.806 | 0.814 | 0.805 |
| | NRMSE (%) | 8.285 | 8.580 | 8.241 | 8.611 | 8.307 | 8.096 | 8.173 | 8.107 |
| RGB + Texture | R2 | 0.604 | 0.577 | 0.577 | 0.569 | 0.605 | 0.619 | 0.614 | 0.687 |
| | RMSE (t ha−1) | 0.747 | 0.772 | 0.772 | 0.779 | 0.746 | 0.733 | 0.737 | 0.733 |
| | NRMSE (%) | 7.506 | 7.754 | 7.758 | 7.828 | 7.491 | 7.360 | 7.407 | 7.314 |
| RGB + TIR | R2 | 0.554 | 0.557 | 0.560 | 0.548 | 0.561 | 0.575 | 0.580 | 0.657 |
| | RMSE (t ha−1) | 0.780 | 0.777 | 0.775 | 0.785 | 0.774 | 0.762 | 0.757 | 0.756 |
| | NRMSE (%) | 7.839 | 7.806 | 7.786 | 7.889 | 7.772 | 7.650 | 7.602 | 7.620 |
| MS + Texture | R2 | 0.598 | 0.604 | 0.601 | 0.551 | 0.617 | 0.623 | 0.619 | 0.694 |
| | RMSE (t ha−1) | 0.741 | 0.735 | 0.738 | 0.782 | 0.723 | 0.718 | 0.721 | 0.714 |
| | NRMSE (%) | 7.443 | 7.389 | 7.410 | 7.859 | 7.263 | 7.208 | 7.246 | 7.198 |
| MS + TIR | R2 | 0.569 | 0.561 | 0.563 | 0.536 | 0.566 | 0.581 | 0.571 | 0.656 |
| | RMSE (t ha−1) | 0.772 | 0.780 | 0.778 | 0.801 | 0.775 | 0.762 | 0.770 | 0.763 |
| | NRMSE (%) | 7.760 | 7.833 | 7.811 | 8.049 | 7.789 | 7.654 | 7.739 | 7.660 |
| Texture + TIR | R2 | 0.607 | 0.607 | 0.607 | 0.555 | 0.614 | 0.628 | 0.620 | 0.697 |
| | RMSE (t ha−1) | 0.732 | 0.732 | 0.733 | 0.780 | 0.726 | 0.713 | 0.720 | 0.710 |
| | NRMSE (%) | 7.357 | 7.358 | 7.359 | 7.831 | 7.290 | 7.161 | 7.235 | 7.157 |
| RGB + MS + Texture | R2 | 0.615 | 0.590 | 0.614 | 0.577 | 0.613 | 0.639 | 0.627 | 0.702 |
| | RMSE (t ha−1) | 0.736 | 0.760 | 0.738 | 0.772 | 0.739 | 0.713 | 0.725 | 0.716 |
| | NRMSE (%) | 7.396 | 7.638 | 7.412 | 7.755 | 7.421 | 7.163 | 7.281 | 7.146 |
| RGB + MS + TIR | R2 | 0.588 | 0.582 | 0.602 | 0.547 | 0.591 | 0.603 | 0.612 | 0.686 |
| | RMSE (t ha−1) | 0.750 | 0.755 | 0.737 | 0.786 | 0.747 | 0.736 | 0.728 | 0.723 |
| | NRMSE (%) | 7.532 | 7.589 | 7.405 | 7.897 | 7.508 | 7.389 | 7.310 | 7.287 |
| RGB + Texture + TIR | R2 | 0.636 | 0.614 | 0.620 | 0.589 | 0.647 | 0.652 | 0.655 | 0.717 |
| | RMSE (t ha−1) | 0.718 | 0.739 | 0.733 | 0.738 | 0.707 | 0.702 | 0.698 | 0.696 |
| | NRMSE (%) | 7.210 | 7.424 | 7.367 | 7.415 | 7.098 | 7.051 | 7.014 | 7.061 |
| MS + Texture + TIR | R2 | 0.627 | 0.616 | 0.620 | 0.568 | 0.641 | 0.643 | 0.645 | 0.711 |
| | RMSE (t ha−1) | 0.720 | 0.730 | 0.726 | 0.774 | 0.706 | 0.704 | 0.702 | 0.699 |
| | NRMSE (%) | 7.234 | 7.336 | 7.296 | 7.777 | 7.090 | 7.072 | 7.049 | 7.046 |
| RGB + MS + Texture + TIR | R2 | 0.640 | 0.631 | 0.649 | 0.615 | 0.660 | 0.668 | 0.667 | 0.733 |
| | RMSE (t ha−1) | 0.701 | 0.709 | 0.692 | 0.748 | 0.681 | 0.673 | 0.674 | 0.668 |
| | NRMSE (%) | 7.038 | 7.127 | 6.949 | 7.519 | 6.842 | 6.760 | 6.771 | 6.727 |
MS, multi-spectral features; TIR, thermal infrared features; RF, random forest; PLS, partial least squares; RR, ridge regression; KNN, k-nearest neighbor; XGboost, extreme gradient boosting decision tree; StRR, stacking ensemble using ridge regression as a secondary learner; En_FW, feature-weighted ensemble as a secondary learner; En_Mean, simple mean ensemble as a tertiary learner. RF, PLS, RR, KNN and XGboost are the base learners.