Article

A Machine-Learning Model Based on the Fusion of Spectral and Textural Features from UAV Multi-Sensors to Analyse the Total Nitrogen Content in Winter Wheat

1 Institute of Farmland Irrigation, Chinese Academy of Agricultural Sciences, Xinxiang 453002, China
2 College of Land Science and Technology, China Agricultural University, Beijing 100193, China
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(8), 2152; https://doi.org/10.3390/rs15082152
Submission received: 7 February 2023 / Revised: 1 April 2023 / Accepted: 18 April 2023 / Published: 19 April 2023
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)

Abstract

Timely and accurate monitoring of the nitrogen levels in winter wheat can reveal its nutritional status and facilitate informed field management decisions. Machine learning methods can improve total nitrogen content (TNC) prediction accuracy by fusing spectral and texture features from UAV-based image data. This study used four machine learning models, namely Gaussian Process Regression (GPR), Random Forest Regression (RFR), Ridge Regression (RR), and Elastic Network Regression (ENR), to fuse data and the stacking ensemble learning method to predict TNC during the winter wheat heading period. Thirty wheat varieties were grown under three nitrogen treatments to evaluate the predictive ability of multi-sensor (RGB and multispectral) spectral and texture features. Results showed that adding texture features improved the accuracy of TNC prediction models constructed based on spectral features, with higher accuracy observed with more features input into the model. The GPR, RFR, RR, and ENR models yielded coefficient of determination (R2) values ranging from 0.382 to 0.697 for TNC prediction accuracy. Among these models, the ensemble learning approach produced the best TNC prediction performance (R2 = 0.726, RMSE = 3.203 mg·g−1, MSE = 10.259 mg·g−1, RPD = 1.867, RPIQ = 2.827). Our findings suggest that accurate TNC prediction based on UAV multi-sensor spectral and texture features can be achieved through data fusion and ensemble learning, offering a high-throughput phenotyping approach valuable for future precision agriculture research.

1. Introduction

Nitrogen is a critical nutrient for crop growth, influencing development, yield, and quality [1]. TNC is a primary indicator of a crop’s nitrogen status [2]. Monitoring TNC can provide valuable insights into a crop’s nutritional status and facilitate effective field management practices. Traditional nitrogen content determination methods involve destructive sampling, which is a time-consuming and labour-intensive process requiring significant resources [3]. While non-destructive methods such as chlorophyll meters have emerged to estimate nitrogen content, they do not fully reflect the plant’s overall condition.
Quantitative remote sensing plays an important role in many fields [4,5,6]. With the development of UAV remote sensing technology, several studies in agriculture have used UAV-mounted sensors to monitor soil and crop TNC in a high-throughput, non-destructive manner [7,8,9]. For example, Lopez-Calderon et al. [8] demonstrated the effectiveness of estimating the total nitrogen content of forage maize from UAV multispectral imagery, and Liu et al. [10] demonstrated the inversion of winter wheat leaf nitrogen content from UAV RGB imagery. The commonly used RGB sensor provides information in three high-resolution bands, while the multispectral sensor has five bands carrying more sensitive spectral information [11]. These two sensors are widely used in agriculture because of their small size, low cost, simple data processing, and easy disassembly and installation for portability [12,13]. The red, near-infrared, and thermal infrared bands have been found to perform well for crop monitoring with UAV remote sensing techniques [14]. However, the resulting spectral information and vegetation indices may perform poorly under the influence of soil background and large canopy biomass [15]. Therefore, combinations of spectral features with varying sensitivities are needed to achieve highly accurate predictions. Furthermore, most studies have used only a single sensor to demonstrate the effectiveness of UAV remote sensing in predicting nitrogen content; only a limited number have explored the fusion of multi-source UAV sensor data to determine total crop nitrogen content.
Texture information is an essential complement to remotely sensed imagery that helps identify important features of objects or regions in an image, and it is commonly used in image classification [16,17]. Different nitrogen treatments affect crop growth, producing differences in plant height, structure, leaf size, and colour, which ultimately change the texture features of spectral imagery [18,19]. Texture features have been used in vegetation identification and classification [20], nitrogen inversion [21], and condition detection [22]. However, most previous studies have analysed only RGB texture features; few have comprehensively assessed nitrogen content using both RGB and multispectral texture features.
In recent years, the use of machine learning methods to automatically detect patterns in data and make predictions about unknown data has become increasingly common in data-intensive fields [23]. These algorithms can effectively solve multivariate, non-linear agricultural problems with good results [24]. For example, Li et al. [25] demonstrated that a hyperspectral inversion model based on the random forest algorithm was interpretable and generalisable, required few samples, did not overfit, and achieved high accuracy (validation area test accuracy R2 = 0.73) for estimating rice canopy nitrogen content. Berger et al. [26] used machine learning regression to estimate crop nitrogen content and found that a Gaussian process regression model accurately simulated aboveground nitrogen. Zhang et al. [27] demonstrated the applicability of ridge regression in the field by introducing ridge regression analysis into spectral detection methods for crop nitrogen nutrient monitoring. Mahajan et al. [28] demonstrated the feasibility of ENR for estimating the nutrient status of mango leaves through machine learning modelling. These studies provide ample evidence that RFR, GPR, RR, and ENR achieve good accuracy in agricultural monitoring. Compared with individual machine learning models, ensemble learning models offer better accuracy. Stacking regression, one such model, combines multiple weak learners into a more comprehensive model that performs well on datasets of different sample sizes [29]; it improves accuracy by combining multiple individual learners and capturing their best features [30]. The variety and adequacy of the individual learners ensure that the information they contribute is complementary, which is crucial to obtaining correct results [31]. Stacking regression methods are widely used in agriculture, for example to estimate chlorophyll content in potatoes [32], to evaluate nitrogen content in citrus leaves [33], and to estimate alfalfa yield [30], in each case achieving accuracies higher than those of individual machine learning models. In particular, for predicting the spatial distribution of soil organic carbon (SOC) content in different climatic regions, the stacking method outperformed single machine learning models [34]. This method can effectively compensate for the deficiencies of the base learning models. Although the interpretability of stacking may decrease when many base learners are used, the gain in prediction accuracy is significant, so further research on the performance of stacking models is needed. To date, there have been no studies predicting winter wheat nitrogen content using ensemble learning methods that stack spectral and texture features from multiple sources.
In summary, this study aims to achieve two main objectives: (1) to assess the efficacy of spectral and texture features obtained from multi-source sensors mounted on UAVs, through data fusion, for predicting TNC during the heading stage of winter wheat, and (2) to develop ensemble learning models that can enhance the accuracy of TNC prediction compared to individual machine learning models.

2. Materials and Methods

2.1. Experimental Area and Design

This study was conducted at Qiliying Comprehensive Experimental Base of the Chinese Academy of Agricultural Sciences, located in Xinxiang City, Henan Province (113°45’38”E, 35°8’10”N) (Figure 1), from 2020 to 2021. The study site has a temperate continental monsoon climate. Figure 2 illustrates the variations in average daily temperature, rainfall, and radiation during the winter wheat growing season. As depicted in the figure, the highest temperature and radiation intensity are observed in May, whereas the highest rainfall occurs in March. The lowest temperature is recorded in January, and the radiation intensity is lowest in November but increases as the winter wheat grows. Rainfall is mainly concentrated in November and March, followed by December, April, and May.
The trial area consisted of 180 plots with three nitrogen treatments, N1 (300 kg·hm−2), N2 (180 kg·hm−2), and N3 (60 kg·hm−2), applied at two fertility stages: the jointing stage and the heading stage (Table 1). The total fertiliser application for each plot was divided proportionally into three parts: two applied at the jointing stage and one at the heading stage. Each nitrogen treatment comprised 60 plots arranged in rows 20 cm apart; each plot was 3 m long and 1.5 m wide, giving an area of 4.5 m². To ensure objectivity, thirty wheat varieties were selected for this trial, with two replications per treatment. Pesticide, fertiliser, and irrigation amounts followed local field-management practice standards. The TNC data were obtained from wheat samples collected at the heading stage on 23 April 2021. Six representative plants of uniform growth were taken from each plot and cut with scissors so that only the aboveground parts remained, yielding 180 wheat samples. The samples were dried at 80 °C and weighed, then ground and sieved; TNC was determined by digestion with concentrated sulphuric acid and hydrogen peroxide followed by analysis with a Kjeldahl nitrogen analyser.

2.2. Acquisition and Processing of Spectral Data

This experiment used an M210 UAV (DJI Technology Co., Ltd., Shenzhen, China) carrying a Red-Edge MX multispectral sensor and a Phantom 4 Pro UAV (DJI Technology Co., Ltd., Shenzhen, China) carrying an RGB sensor to collect the UAV multi-sensor image data (Figure 3).
The DJI M210 is a quadcopter with a maximum take-off weight of 6.14 kg, an average flight endurance of about 30 min, and a maximum horizontal flight speed of 18 m/s. It carries a Red-Edge MX multispectral sensor with five channels: red, green, blue, NIR, and red edge, with centre wavelengths of 668 nm, 560 nm, 475 nm, 840 nm, and 717 nm and bandwidths of 10 nm, 20 nm, 20 nm, 40 nm, and 10 nm, respectively. Each channel has a resolution of 1280 × 960 and a field of view of 47.2°. To convert the multispectral sensor's DN values into reflectance, a calibration plate was imaged before and after each mission for use in post-processing. The DJI Phantom 4 Pro is a quadcopter with a maximum take-off weight of 1.38 kg, a maximum horizontal flight speed of 20 m/s, a maximum ascent speed of 6 m/s, and a flight endurance of approximately 30 min. It carries an RGB sensor with a resolution of 3000 × 4000 and a 94° field-of-view lens. Both UAV missions took place on 23 April 2021 between 11:00 and 14:00 under clear, cloudless conditions to minimise shadows. The missions were flown at an altitude of 30 m with a heading overlap of 85% and a side overlap of 80%. The locations of the set ground control points (GCPs) were recorded with millimetre-accuracy GNSS (Global Navigation Satellite System) for later geo-correction. Images were captured in timed-interval photo mode with the sensors pointed vertically at the ground.

2.3. Pre-Processing of UAV Images

In this study, the UAV multispectral and RGB images acquired separately during the same heading period were imported into Pix4DMapper Pro software (Pix4D SA, Switzerland) and aligned using a feature-point matching algorithm. First, a sparse point cloud of the flight area was generated from the UAV images and position data. Second, a spatial grid was created from the sparse point cloud, and the spatial coordinates of the ground control points (GCPs) were added. Third, a sparse point cloud with precise positions was generated, and the surface geometry and spatial texture of the flight area were reconstructed. Finally, high-definition digital orthophotos (DOMs) and digital surface models (DSMs) of the flight area were generated and exported as TIFF images. For the RGB images, histogram equalisation was used to enhance contrast and brightness, and noise removal and sharpening were applied to enhance detail and clarity. For the multispectral images, radiometric calibration images with known reflectance were used to correct the imagery and convert DN values to reflectance, and illumination differences between bands were corrected to improve data quality. ArcMap 10.5 (Environmental Systems Research Institute, Inc., Redlands, CA, USA) was used to divide the digital orthophotos into 180 plot areas via shapefiles with unique IDs, so that spectral information could be obtained separately for each ID area. The raster calculator in ArcMap 10.5 was used to compute the spectral indices from the RGB and multispectral band information, and the vegetation index of each cell was extracted according to the ID areas. To minimise edge effects, the shapefile omitted the image edge areas; the cropped images were then imported into ENVI 5.3 (Exelis Visual Information Solutions, Inc., Boulder, CO, USA) for texture-feature extraction. The mean of all feature pixel values extracted for each ID was used as the corresponding feature value.
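The study performed the per-plot extraction in ArcMap and ENVI; purely as an illustration, the sketch below reproduces the per-plot mean-reflectance step with open-source Python tools. The file names, band count, and the "id" field are hypothetical assumptions, not the authors' data layout.

```python
# Hypothetical sketch: mean reflectance of each band within every plot polygon.
import geopandas as gpd
from rasterstats import zonal_stats

plots = gpd.read_file("plots_180.shp")        # 180 plot polygons with an "id" field

for band in range(1, 6):                      # five multispectral bands
    stats = zonal_stats(plots, "multispectral_dom.tif",
                        band=band, stats=["mean"])
    plots[f"band_{band}_mean"] = [s["mean"] for s in stats]

plots.drop(columns="geometry").to_csv("plot_reflectance.csv", index=False)
```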

2.4. Spectral and Textural Features

Texture information was extracted from the multispectral and RGB images using the widely used grey-level co-occurrence matrix (GLCM), based on the wavelength information of the spectral images. ENVI 5.3 software was used to extract eight texture measures for both image types: mean (ME), variance (VA), homogeneity (HO), contrast (CO), dissimilarity (DI), entropy (EN), second moment (SE), and correlation (COR). Additionally, 21 TNC-sensitive vegetation indices were calculated from the spectral reflectance of the multispectral images. For the digital images, the three channels were labelled R, G, and B, and their average DN values were normalised to obtain the variables r, g, and b; six more TNC-sensitive vegetation indices were calculated from these three variables. Table 2 shows the spectral and textural features of the RGB images, while Table 3 presents the corresponding features for the multispectral images.
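As a minimal sketch of the GLCM extraction described above, the following Python code computes the eight texture measures for a single image band with scikit-image. The quantisation to 32 grey levels and the single offset (distance 1, angle 0°) are assumptions, since the paper does not report its GLCM parameters.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(band, levels=32):
    """Eight GLCM texture measures for one image band (illustrative helper)."""
    # Quantise the band to `levels` grey levels, as graycomatrix needs integers.
    edges = np.linspace(band.min(), band.max(), levels)
    q = (np.digitize(band, edges) - 1).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0],
                        levels=levels, symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]                 # normalised co-occurrence matrix
    i, _ = np.indices(p.shape)
    mu = float((i * p).sum())            # GLCM mean (ME)
    feats = {
        "ME": mu,
        "VA": float(((i - mu) ** 2 * p).sum()),              # variance
        "EN": float(-(p[p > 0] * np.log(p[p > 0])).sum()),   # entropy
    }
    for key, prop in [("HO", "homogeneity"), ("CO", "contrast"),
                      ("DI", "dissimilarity"), ("SE", "ASM"),
                      ("COR", "correlation")]:
        feats[key] = float(graycoprops(glcm, prop)[0, 0])
    return feats
```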

2.5. Model Framework

To improve the TNC prediction accuracy of ensemble models based on multi-source sensor data, this study proposes a stacking-based approach involving two steps. First, four individual machine learning TNC prediction models were trained separately on the multi-source sensor data: Gaussian Process Regression (GPR), Random Forest Regression (RFR), Ridge Regression (RR), and Elastic Network Regression (ENR). Second, the predictions of these models were stacked by an RR learner. These four models have demonstrated their applicability in many studies, can be used for TNC prediction, and provide complementary information for outcome prediction, which is essential for constructing ensemble machine learning models. The four individual models are briefly described below. GPR is a supervised learning method that estimates the parameters of a regression model from sample data; it can theoretically approximate any continuous function on a compact space and has been used to solve a variety of engineering problems [61]. RFR is a machine learning model comprising multiple decision trees that simulates the relationship between dependent and independent variables via decision rules; it can handle large numbers of input variables, assess variable importance, produce high accuracy, balance errors, and mine data quickly [62]. RR is a biased-estimation regression method that yields more realistic and better-fitting results at the expense of some information loss and reduced in-sample accuracy [63]. ENR combines ridge and lasso regression; it is an iterative method that retains the regularisation properties of ridge regression, produces reasonable solutions, and avoids crossing solution paths [64]. In this study, the grid search method was used to optimise the hyperparameters: the best model performance was found by exhaustively searching a set of predefined hyperparameter combinations.
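To make this set-up concrete, the sketch below instantiates the four base learners in scikit-learn and tunes each by grid search. The hyperparameter grids are illustrative assumptions rather than the study's values, and X_train and y_train stand for the fused feature matrix and the measured TNC vector.

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.linear_model import ElasticNet, Ridge
from sklearn.model_selection import GridSearchCV

# X_train, y_train: fused feature matrix and measured TNC (assumed defined).
search_spaces = {
    "GPR": (GaussianProcessRegressor(kernel=RBF() + WhiteKernel()),
            {"alpha": [1e-10, 1e-2]}),
    "RFR": (RandomForestRegressor(random_state=0),
            {"n_estimators": [100, 300], "max_depth": [None, 10]}),
    "RR":  (Ridge(), {"alpha": [0.1, 1.0, 10.0]}),
    "ENR": (ElasticNet(max_iter=10000),
            {"alpha": [0.01, 0.1, 1.0], "l1_ratio": [0.2, 0.5, 0.8]}),
}

best = {}
for name, (model, grid) in search_spaces.items():
    # Exhaustive search over the predefined grid, scored by R2 under 5-fold CV.
    gs = GridSearchCV(model, grid, cv=5, scoring="r2").fit(X_train, y_train)
    best[name] = gs.best_estimator_
```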
The stacking regression model is an ensemble learning approach that learns different data features to improve prediction results [31]. Figure 4 depicts the construction of the stacking ensemble learning model. This study employed 5-fold cross-validation, partitioning the dataset into five random, equal parts; this partitioning was repeated 80 times and applied identically to the different input feature sets. In each fold, one part served as the validation set while the remaining four parts formed the training set; this was repeated five times so that all the data were used for both training and validation. After the base machine learning models produced predictions on the initial dataset, the five sets of validation predictions corresponding to the five training sets were stacked vertically to form the test-set prediction matrix, which was then used as the input for the secondary machine learning model. The validation-set predictions were averaged to obtain the prediction accuracy of each base model. The RR model served as the ensemble learner that blends the predictive power of the base models, and it too was trained with 5-fold cross-validation: five validation results were obtained from the test-set prediction matrix, and their mean gave the final prediction accuracy. Dividing the dataset multiple times under the 5-fold cross-validation scheme facilitates interpretation of the prediction accuracy of different models and improves the reliability of the predictions.
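Continuing the sketch above, a stacking ensemble with an RR meta-learner and the 80 repetitions of 5-fold cross-validation described in Section 2.6 could be assembled as follows; this is again an assumption-laden illustration, not the authors' code, with X and y denoting the full feature matrix and TNC vector.

```python
from sklearn.ensemble import StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import RepeatedKFold, cross_val_score

stack = StackingRegressor(
    estimators=list(best.items()),   # the four tuned base learners from above
    final_estimator=Ridge(),         # RR meta-learner blends their predictions
    cv=5,                            # out-of-fold predictions feed the meta-learner
)

# 80 random 5-fold partitions (Section 2.6) give 400 scores; their mean is
# reported as the model accuracy.
cv80 = RepeatedKFold(n_splits=5, n_repeats=80, random_state=0)
scores = cross_val_score(stack, X, y, cv=cv80, scoring="r2")
print(f"Mean R2 over 400 validation folds: {scores.mean():.3f}")
```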

2.6. Parameters for Model Accuracy Evaluation

This study divided the initial dataset into training and validation sets 80 times, using a 5-fold cross-validation approach to train the models. Four hundred test results were thus obtained, and their mean values were used as the model accuracy evaluation parameters: the coefficient of determination (R2), root mean square error (RMSE), mean square error (MSE), ratio of performance to deviation (RPD), and ratio of performance to interquartile distance (RPIQ). The larger the R2, RPD, and RPIQ and the smaller the RMSE, the better the predictive ability of the model. The equations for these five accuracy assessment parameters are as follows:
$$ R^2 = 1 - \frac{\sum_{i=1}^{N} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{N} (y_i - \bar{y})^2} $$

$$ RMSE = \sqrt{\frac{\sum_{i=1}^{N} (y_i - \hat{y}_i)^2}{N}} $$

$$ MSE = \frac{\sum_{i=1}^{N} (y_i - \hat{y}_i)^2}{N} $$

$$ RPD = \frac{SD}{RMSE} $$

$$ RPIQ = \frac{Q_3 - Q_1}{RMSE} $$

where $y_i$ is the observed value, $\hat{y}_i$ is the predicted value, $\bar{y}$ is the mean of the measured values, $N$ is the sample size, $SD$ is the standard deviation of the measured values of the prediction set, $Q_3$ is the third quartile, and $Q_1$ is the first quartile of the measured values [65].
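For reference, a small helper that computes the five metrics exactly as defined above might look like this; y_true and y_pred are arrays of observed and predicted TNC values.

```python
import numpy as np

def accuracy_metrics(y_true, y_pred):
    """R2, RMSE, MSE, RPD, and RPIQ as defined in Section 2.6."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    residuals = y_true - y_pred
    mse = float(np.mean(residuals ** 2))
    rmse = float(np.sqrt(mse))
    ss_res = float(np.sum(residuals ** 2))
    ss_tot = float(np.sum((y_true - y_true.mean()) ** 2))
    q1, q3 = np.percentile(y_true, [25, 75])
    return {
        "R2": 1 - ss_res / ss_tot,
        "RMSE": rmse,
        "MSE": mse,
        "RPD": float(np.std(y_true, ddof=1)) / rmse,  # SD of measured values / RMSE
        "RPIQ": (q3 - q1) / rmse,                     # interquartile range / RMSE
    }
```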
In this study, the importance of each base learning model in the ensemble model was calculated using an importance function. Each base model was assigned a percentage representing its contribution to the explanatory power of the ensemble learning model; the higher the percentage, the greater the contribution.
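The paper does not specify how the importance function is implemented; one plausible sketch, assuming importance is taken as the normalised absolute coefficients of the fitted RR meta-learner, is shown below.

```python
import numpy as np

stack.fit(X, y)                                  # the StackingRegressor from above
coefs = np.abs(stack.final_estimator_.coef_)     # one coefficient per base model
importance = 100 * coefs / coefs.sum()           # contribution as a percentage
for (name, _), pct in zip(stack.estimators, importance):
    print(f"{name}: {pct:.1f}%")
```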

3. Results

3.1. Sampling Statistics

Table 4 shows the TNC values for all test plots and for the plots under each of the three nitrogen treatments. The mean TNC across all sampled plots was 20.07 mg·g−1. The mean TNC differed among the three nitrogen treatments: the N1 treatment had a significantly higher mean TNC (23.66 mg·g−1) than the N2 and N3 treatments, while the N3 treatment had the lowest (15.28 mg·g−1). The range, standard deviation (SD), quantile statistics, and coefficient of variation (CV) for all plots and for each N treatment showed significant differences in TNC among the N treatments and good separation of the data.

3.2. Analysis of TNC Prediction Accuracy

This study employed four individual machine-learning methods and one ensemble machine-learning method to predict TNC from RGB and multispectral images of the winter wheat heading stage; the prediction results are presented in Table 5. Among the individual models, GPR performed best when RGB spectral indices were used as input features (R2 = 0.493, RMSE = 4.273 mg·g−1, MSE = 18.259 mg·g−1, RPD = 1.386, RPIQ = 2.083), and GPR also performed best when multispectral spectral features were used as input variables (R2 = 0.541, RMSE = 4.013 mg·g−1, MSE = 16.104 mg·g−1, RPD = 1.468, RPIQ = 2.194).
To explore whether texture information improves prediction accuracy, we added the texture features of the RGB and multispectral images, respectively. As seen from Table 5, adding RGB texture features to the input variables improved the accuracy of all four individual models, with the most significant improvement for the RFR model, whose R2 rose from 0.382 to 0.531. Adding multispectral texture features likewise improved all four models, with RFR performing best and its R2 rising from 0.465 to 0.65.
When three of the four feature sets were combined as input variables, all four individual models performed best with the combination of multispectral spectral and texture features plus RGB texture features. The RFR and ENR models had the largest R2 (0.675), and the RFR model had the smallest RMSE and MSE (3.404 mg·g−1 and 11.587 mg·g−1, respectively), indicating that RFR performed best. When the spectral and texture features of both RGB and multispectral images were used together as input variables, all four individual models reached their highest accuracy, and RR was the best TNC prediction model, with an R2 of 0.7, RMSE of 3.352 mg·g−1, MSE of 11.236 mg·g−1, RPD of 1.822, and RPIQ of 2.724.
Furthermore, the predictions of the individual models were combined into an ensemble model using the Stacking (RR) method. As shown in Figure 5, the ensemble models were more accurate than the four individual models constructed with the same input features. Among the ensemble models, the one based on the spectral and texture features of both RGB and multispectral images had the highest accuracy (R2 = 0.726, RMSE = 3.203 mg·g−1, MSE = 10.259 mg·g−1, RPD = 1.867, RPIQ = 2.827) and was the best TNC prediction model. Figure 6 shows the importance of the four individual models within the ensemble RR model for each of the seven input feature sets. The RFR results carried the highest weight in all seven ensemble models, indicating that individual models with high accuracy matter most when building ensemble models and highlighting the close relationship between the performance of individual and ensemble models.

3.3. Analysis of TNC Observations and Predictions

Figure 7 displays the observed and predicted values of the best TNC prediction model constructed for each of the seven input feature sets. The R2 value was 0.511 when RGB spectral features were used as input features, improving to 0.562 with the addition of RGB texture features. Similarly, the R2 value was 0.551 when multispectral spectral features were used, improving to 0.672 with the addition of multispectral texture features. When three of the four RGB and multispectral spectral and texture feature sets were combined as model inputs, the TNC prediction model based on multispectral spectral and texture features plus RGB texture features had the highest R2 (0.71), while the model based on RGB spectral and texture features plus multispectral spectral features had the smallest (0.597). Notably, the TNC prediction model based on the spectral and textural features of both RGB and multispectral images had the largest R2 overall. Finally, the R2 values of the TNC prediction models based on three-feature combinations were higher than those based on a single feature set or a combination of two.
This paper presents a comparative analysis of the accuracy of the TNC prediction models constructed from the different feature sets, finding that the ensemble machine learning model (RR) based on the spectral and texture features of both RGB and multispectral images achieved the best TNC prediction accuracy. This model was therefore used to generate the predicted TNC distribution shown in Figure 8. The t-test analysis of TNC between the nitrogen treatments is presented in Table 6; all p-values were less than 0.001, indicating significant differences in TNC between the three N treatments, in the order N1 > N2 > N3. The predicted TNC distribution showed that the TNC of the N1 treatment ranged from 15 to 31 mg·g−1. The measured TNC results confirmed that the N1 treatment had the highest TNC, ranging between 15 and 32 mg·g−1, followed by the N2 and N3 treatments. These results are consistent with the predicted TNC distribution of the ensemble machine learning model (RR) and demonstrate that the model can be used for winter wheat TNC estimation.
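As an illustration of the treatment comparison in Table 6, a two-sample t-test between two treatments could be run as follows; Welch's variant and the array names tnc_n1 and tnc_n2 are assumptions, since the paper does not report the exact test configuration.

```python
from scipy.stats import ttest_ind

# tnc_n1, tnc_n2: plot-level TNC values (mg·g−1) under two treatments (assumed).
t_stat, p_value = ttest_ind(tnc_n1, tnc_n2, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4g}")    # p < 0.001: significant difference
```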

4. Discussion

4.1. Analysis Based on Multi-Source Spectral Features and Texture Features

In this study, four individual machine learning models (GPR, RFR, RR, and ENR) and a stacking (RR) ensemble learning model were constructed to predict TNC from UAV RGB and multispectral image data. The results showed that prediction models built on multispectral spectral features were more accurate than those built on RGB spectral features, and likewise models built on multispectral spectral and texture features were more accurate than those built on RGB spectral and texture features. This is because multispectral images have five bands, including the near-infrared and red-edge bands that RGB images lack [66], and thus provide richer information and higher accuracy, consistent with Furukawa's findings [67].
Texture features were also added to the spectral features in this study, and the accuracy of every model improved with their addition, consistent with Wang's findings [68]. Both RGB and multispectral spectral features, when combined with their corresponding texture features, outperformed TNC prediction models built on spectral features alone. The improvement was greater when multispectral texture features were added to multispectral spectral features than when RGB texture features were added to RGB spectral features, likely because the multispectral texture features extracted here from the grey-level co-occurrence matrix comprise texture information in five bands and therefore contain more information sensitive to TNC. Additionally, Zhang and Liu [18,69] found that texture features are unsuitable as standalone remote sensing variables for constructing prediction models, so texture features were combined with spectral features in this study.
This study also used combinations of three of the four RGB and multispectral spectral and texture feature sets as inputs to the prediction models, and found that models built on three-feature combinations were more accurate than models built on a single feature set or on two of them, consistent with Fei's findings [31]. The model based on multispectral spectral and texture features plus RGB texture features had the highest accuracy, and the model based on RGB spectral and texture features plus multispectral texture features was more accurate than the one based on RGB spectral and texture features plus multispectral spectral features, indicating that texture features can improve model accuracy more than spectral features [70]. The five models constructed from all four RGB and multispectral spectral and texture feature sets showed the highest accuracy of all, demonstrating that fusing data from multiple sensors can produce higher prediction accuracy because the data acquired by different sensors contribute to TNC prediction in unique and complementary ways [71].

4.2. Potential for Ensemble Learning Models

The study used four individual machine learning models to predict TNC [72]. However, calibrating these models requires large sample sizes, and with small samples the results obtained by machine learning models are often inconsistent [73]. To address this issue, this study used the bootstrap method [74], taking the mean of 400 model prediction results as the accuracy of each machine learning model. This not only mitigated the small sample size but also improved the generalisation ability of the models.
Because a single machine learning model cannot effectively exploit multiple data sources [31], this study constructed a Stacking (RR) ensemble learning model by combining four individual learners. The results showed that, for the seven input feature sets selected in this study, the ensemble models outperformed the individual models, with the highest R2 improving to 0.726 and the lowest RMSE reaching 3.203 mg·g−1. To construct ensemble models, individual models must be selected for adequacy and diversity [75]. The four individual models chosen here had good predictive power and low similarity to one another, so their information complemented each other effectively during the ensemble [76]. The ensemble learning model can collect the advantages of several individual models, compensate for their limitations and shortcomings, and produce results with better robustness and generalisation ability in regression prediction [77]. By combining different base models, an ensemble can reduce the risk of overfitting of any single model, avoid the impact of the curse of dimensionality, and improve overall generalisation [33]. Each base model can be trained as an independent model, or as models of the same type with different parameters or data subsets [65].
After evaluating the contributions of the four base models (GPR, RFR, RR, and ENR) within the ensemble, RFR was found to contribute the most across all input feature sets. RFR is one of the most commonly used and important machine learning methods in current research; it can exploit various input features to obtain optimal and balanced results and has demonstrated good predictive performance in several studies [78,79], further supporting the base-model selection here. While linear regression models are often used as secondary learners in stacking models and have shown good accuracy in most studies, Bayesian model averaging, decision-level fusion, and other secondary learners have also proven effective in various domains [80,81]. In this study, RR as the secondary learner likewise performed well.

4.3. Implications and Reflections

In this study, we constructed a TNC prediction model from canopy spectral image data acquired by UAV-mounted RGB and multispectral sensors at the heading stage of winter wheat. The model achieved reliable prediction accuracy, but there is still room for improvement. To enhance TNC prediction accuracy, future studies could fuse data from additional sources such as thermal infrared and hyperspectral sensors [31,82]. Additionally, this study used TNC data only for winter wheat at the heading stage, so including trial data from other fertility stages would test the stability and practicality of the proposed method. Furthermore, the stacking (RR) ensemble learning model built here from four individual learners significantly improved prediction accuracy; the next steps should be (1) adopting multiple ensemble learning methods to predict TNC and comparing their accuracy and applicability, and (2) adding other well-performing individual learners as base models of the ensemble. Finally, this study was limited to 30 winter wheat varieties from the Yellow and Huaihe River wheat regions, which imposes certain regional and varietal limitations; including winter wheat varieties from other areas would further demonstrate the applicability of the method to different regions and materials.

5. Conclusions

In this study, the spectral and textural features of RGB and multispectral data were fused as input features using UAV remote sensing technology. The study investigated the prediction accuracy of GPR, RFR, RR, and ENR models with different input features, as well as the potential for an ensemble of the four base models in predicting winter wheat TNC. The results indicated that the fusion of multi-source spectral and texture features, along with the ensemble learning method, improved the prediction accuracy of winter wheat TNC. The method successfully estimated the TNC of winter wheat at the heading stage under different nitrogen treatments, providing a basis for future evaluations of winter wheat TNC. Future studies should consider conducting trials at different fertility stages and in various growth environments to improve the stability and practicality of the method.

Author Contributions

Conceptualisation: Z.C., X.Z., and S.F.; trial management and data collection and analysis: Z.L., Q.C., S.F., and Z.C.; writing under supervision of Z.C., and X.Z.; editing: Z.L., Z.C., S.F., X.Z., and Q.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Central Public-interest Scientific Institution Basal Research Fund (Nos. IFI2023-01 and IFI2022-23), the Key Projects of China National Tobacco Corporation Shandong Province (KN281), the Intelligent Irrigation Water and Fertilizer Digital Decision System and Regulation Equipment project (2022YFD1900404), and the Key Grant Technology Project of Henan and Xinxiang (221100110700 and ZD2020009).

Data Availability Statement

The datasets used in this study are available from the corresponding author on reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest regarding this study.

References

  1. Song, Y.; Wang, J. Soybean canopy nitrogen monitoring and prediction using ground based multispectral remote sensors. In Proceedings of the 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016; pp. 6389–6392. [Google Scholar]
  2. Tian, Z.; Zhang, Y.; Zhang, H.; Li, Z.; Li, M.; Wu, J.; Liu, K. Winter wheat and soil total nitrogen integrated monitoring based on canopy hyperspectral feature selection and fusion. Comput. Electron. Agric. 2022, 201, 107285. [Google Scholar] [CrossRef]
  3. Zhang, J.; Wei, Q.; Xiong, S.; Shi, L.; Ma, X.; Du, P.; Guo, J. A spectral parameter for the estimation of soil total nitrogen and nitrate nitrogen of winter wheat growth period. Soil Use Manag. 2021, 37, 698–711. [Google Scholar] [CrossRef]
  4. Guo, B.; Zang, W.; Luo, W.; Yang, X. Detection model of soil salinization information in the Yellow River Delta based on feature space models with typical surface parameters derived from Landsat8 OLI image. Geomat. Nat. Hazards Risk 2020, 11, 288–300. [Google Scholar] [CrossRef]
  5. Liu, Y.; Guo, B.; Lu, M.; Zang, W.; Yu, T.; Chen, D. Quantitative distinction of the relative actions of climate change and human activities on vegetation evolution in the Yellow River Basin of China during 1981–2019. J. Arid Land 2022, 15, 91–108. [Google Scholar] [CrossRef]
  6. Chen, S.T.; Guo, B.; Zhang, R.; Zang, W.Q.; Wei, C.X.; Wu, H.W.; Yang, X.; Zhen, X.Y.; Li, X.; Zhang, D.F.; et al. Quantitatively determine the dominant driving factors of the spatial-temporal changes of vegetation NPP in the Hengduan Mountain area during 2000–2015. J. Mt. Sci. 2021, 18, 427–445. [Google Scholar] [CrossRef]
  7. Cheng, H.; Wang, J.; Du, Y. Combining multivariate method and spectral variable selection for soil total nitrogen estimation by Vis-NIR spectroscopy. Arch. Agron. Soil Sci. 2021, 67, 1665–1678. [Google Scholar] [CrossRef]
  8. Lopez-Calderon, M.J.; Estrada-Avalos, J.; Rodriguez-Moreno, V.M.; Mauricio-Ruvalcaba, J.E.; Martinez-Sifuentes, A.R.; Delgado-Ramirez, G.; Miguel-Valle, E. Estimation of Total Nitrogen Content in Forage Maize (Zea mays L.) Using Spectral Indices: Analysis by Random Forest. Agriculture 2020, 10, 451. [Google Scholar] [CrossRef]
  9. Qiu, Z.; Xiang, H.; Ma, F.; Du, C. Qualifications of Rice Growth Indicators Optimized at Different Growth Stages Using Unmanned Aerial Vehicle Digital Imagery. Remote Sens. 2020, 12, 3228. [Google Scholar] [CrossRef]
  10. Liu, S.; Yang, G.; Jing, H.; Feng, H.; Li, H.; Chen, P.; Yang, W. Retrieval of winter wheat nitrogen content based on UAV digital image. Trans. Chin. Soc. Agric. Eng. 2019, 35, 75–85. [Google Scholar]
  11. Possoch, M.; Bieker, S.; Hoffmeister, D.; Bolten, A.; Schellberg, J.; Bareth, G. Multi-temporal crop surface models combined with the RGB vegetation index from UAV-based images for forage monitoring in grassland. In Proceedings of the XXIII ISPRS Congress, Prague, Czech Republic, 12–19 July 2016; pp. 991–998. [Google Scholar]
  12. Lebourgeois, V.; Begue, A.; Labbe, S.; Houles, M.; Martine, J.F. A light-weight multi-spectral aerial imaging system for nitrogen crop monitoring. Precis. Agric. 2012, 13, 525–541. [Google Scholar] [CrossRef]
  13. Osco, L.P.; Marques Ramos, A.P.; Saito Moriya, E.A.; de Souza, M.; Marcato Junior, J.; Matsubara, E.T.; Imai, N.N.; Creste, J.E. Improvement of leaf nitrogen content inference in Valencia-orange trees applying spectral analysis algorithms in UAV mounted-sensor images. Int. J. Appl. Earth Obs. Geoinf. 2019, 83, 101907. [Google Scholar] [CrossRef]
  14. Zheng, H.; Cheng, T.; Li, D.; Zhou, X.; Yao, X.; Tian, Y.; Cao, W.; Zhu, Y. Evaluation of RGB, Color-Infrared and Multispectral Images Acquired from Unmanned Aerial Systems for the Estimation of Nitrogen Accumulation in Rice. Remote Sens. 2018, 10, 824. [Google Scholar] [CrossRef]
  15. Zhang, J.; Liu, X.; Liang, Y.; Cao, Q.; Tian, Y.; Zhu, Y.; Cao, W.; Liu, X. Using a Portable Active Sensor to Monitor Growth Parameters and Predict Grain Yield of Winter Wheat. Sensors 2019, 19, 1108. [Google Scholar] [CrossRef]
  16. Laliberte, A.S.; Rango, A. Texture and Scale in Object-Based Analysis of Subdecimeter Resolution Unmanned Aerial Vehicle (UAV) Imagery. IEEE Trans. Geosci. Remote Sens. 2009, 47, 761–770. [Google Scholar] [CrossRef]
  17. Murray, H.; Lucieer, A.; Williams, R. Texture-based classification of sub-Antarctic vegetation communities on Heard Island. Int. J. Appl. Earth Obs. Geoinf. 2010, 12, 138–149. [Google Scholar] [CrossRef]
  18. Zhang, J.; Qiu, X.; Wu, Y.; Zhu, Y.; Cao, Q.; Liu, X.; Cao, W. Combining texture, color, and vegetation indices from fixed-wing UAS imagery to estimate wheat growth parameters using multivariate regression methods. Comput. Electron. Agric. 2021, 185, 106138. [Google Scholar] [CrossRef]
  19. Fu, Z.; Yu, S.; Zhang, J.; Xi, H.; Gao, Y.; Lu, R.; Zheng, H.; Zhu, Y.; Cao, W.; Liu, X. Combining UAV multispectral imagery and ecological factors to estimate leaf nitrogen and grain protein content of wheat. Eur. J. Agron. 2022, 132, 126405. [Google Scholar] [CrossRef]
  20. Geng, R.; Fu, B.; Jin, S.; Cai, J.; Geng, W.; Lou, P. Object-based Karst wetland vegetation classification using UAV images. Bull. Surv. Mapp. 2020, 13–18. [Google Scholar] [CrossRef]
  21. Jia, D.; Chen, P. Effect of Low-altitude UAV Image Resolution on Inversion of Winter Wheat Nitrogen Concentration. Trans. Chin. Soc. Agric. Mach. 2020, 51, 164–169. [Google Scholar]
  22. Zhang, H.; Huang, L.; Huang, W.; Dong, Y.; Weng, S.; Zhao, J.; Ma, H.; Liu, L. Detection of wheat Fusarium head blight using UAV-based spectral and image feature fusion. Front. Plant Sci. 2022, 13, 1004427. [Google Scholar] [CrossRef]
  23. Zha, H.; Miao, Y.; Wang, T.; Li, Y.; Zhang, J.; Sun, W.; Feng, Z.; Kusnierek, K. Improving Unmanned Aerial Vehicle Remote Sensing-Based Rice Nitrogen Nutrition Index Prediction with Machine Learning. Remote Sens. 2020, 12, 215. [Google Scholar] [CrossRef]
  24. Liakos, K.G.; Busato, P.; Moshou, D.; Pearson, S.; Bochtis, D. Machine Learning in Agriculture: A Review. Sensors 2018, 18, 2674. [Google Scholar] [CrossRef] [PubMed]
  25. Li, X.; Liu, X.; Liu, M.; Wu, L. Random forest algorithm and regional applications of spectral inversion model for estimating canopy nitrogen concentration in rice. J. Remote Sens. 2014, 18, 923–945. [Google Scholar]
  26. Berger, K.; Verrelst, J.; Feret, J.; Hank, T.; Wocher, M.; Mauser, W.; Camps-Valls, G. Retrieval of aboveground crop nitrogen content with a hybrid machine learning method. Int. J. Appl. Earth Obs. Geoinf. 2020, 92, 102174. [Google Scholar] [CrossRef] [PubMed]
  27. Zhang, X.; Su, C.; Duan, M. Experiment on sweet pepper nitrogen detection based on near-infrared reflectivity spectral ridge regression. J. Drain. Irrig. Mach. Eng. 2019, 37, 86–92. [Google Scholar]
  28. Mahajan, G.R.; Das, B.; Murgaokar, D.; Herrmann, I.; Berger, K.; Sahoo, R.N.; Patel, K.; Desai, A.; Morajkar, S.; Kulkarni, R.M. Monitoring the Foliar Nutrients Status of Mango Using Spectroscopy-Based Spectral Indices and PLSR-Combined Machine Learning Models. Remote Sens. 2021, 13, 641. [Google Scholar] [CrossRef]
  29. Comito, C.; Pizzuti, C. Artificial intelligence for forecasting and diagnosing COVID-19 pandemic: A focused review. Artif. Intell. Med. 2022, 128, 102286. [Google Scholar] [CrossRef]
  30. Feng, L.; Zhang, Z.; Ma, Y.; Du, Q.; Williams, P.; Drewry, J.; Luck, B. Alfalfa Yield Prediction Using UAV-Based Hyperspectral Imagery and Ensemble Learning. Remote Sens. 2020, 12, 2028. [Google Scholar] [CrossRef]
  31. Fei, S.; Hassan, M.A.; Xiao, Y.; Su, X.; Chen, Z.; Cheng, Q.; Duan, F.; Chen, R.; Ma, Y. UAV-based multi-sensor data fusion and machine learning algorithm for yield prediction in wheat. Precis. Agric. 2022, 24, 187–212. [Google Scholar] [CrossRef]
  32. Yang, H.; Hu, Y.; Zheng, Z.; Qiao, Y.; Zhang, K.; Guo, T.; Chen, J. Estimation of Potato Chlorophyll Content from UAV Multispectral Images with Stacking Ensemble Algorithm. Agronomy 2022, 12, 2318. [Google Scholar] [CrossRef]
  33. Wu, T.; Li, Y.; Ge, Y.; Liu, L.; Xi, S.; Ren, M.; Yuan, X.; Zhuang, C. Estimation of nitrogen contents in citrus leaves using Stacking ensemble learning. Trans. Chin. Soc. Agric. Eng. 2021, 37, 163–171. [Google Scholar] [CrossRef]
  34. Taghizadeh-Mehrjardi, R.; Schmidt, K.; Chakan, A.A.; Rentschler, T.; Scholten, T. Improving the Spatial Prediction of Soil Organic Carbon Content in Two Contrasting Climatic Regions by Stacking Machine Learning Models and Rescanning Covariate Space. Remote Sens. 2020, 12, 1095. [Google Scholar] [CrossRef]
  35. Gitelson, A.A.; Kaufman, Y.J.; Stark, R.; Rundquist, D. Novel algorithms for remote estimation of vegetation fraction. Remote Sens. Environ. 2002, 80, 76–87. [Google Scholar] [CrossRef]
  36. Louhaichi, M.; Borman, M.M.; Johnson, D.E. Spatially Located Platform and Aerial Photography for Documentation of Grazing Impacts on Wheat. Geocarto Int. 2001, 16, 56–70. [Google Scholar] [CrossRef]
  37. Tucker, C.J. Red and photographic infrared linear combinations for monitoring vegetation. Remote Sens. Environ. 1979, 8, 127–150. [Google Scholar] [CrossRef]
  38. Meyer, G.E.; Neto, J.C. Verification of color vegetation indices for automated crop imaging applications. Comput. Electron. Agric. 2008, 63, 282–293. [Google Scholar] [CrossRef]
  39. Woebbecke, D.M.; Meyer, G.E.; Vonbargen, K.; Mortensen, D.A. Color indexes for weed identification under various soil, residue, and lighting conditions. Trans. ASAE 1995, 38, 259–269. [Google Scholar] [CrossRef]
  40. Niu, Q.; Feng, H.; Yang, G.; Li, C.; Yang, H.; Xu, B.; Zhao, Y. Monitoring plant height and leaf area index of maize breeding material based on UAV digital images. Trans. Chin. Soc. Agric. Eng. 2018, 34, 73–82. [Google Scholar]
  41. Haralick, R.M.; Shanmugam, K.S. Combined spectral and spatial processing of ERTS imagery data. Remote Sens. Environ. 1974, 3, 3–13. [Google Scholar] [CrossRef]
  42. Datt, B.; McVicar, T.R.; Van Niel, T.G.; Jupp, D.; Pearlman, J.S. Preprocessing EO-1 Hyperion hyperspectral data to support the application of agricultural indexes. IEEE Trans. Geosci. Remote Sens. 2003, 41, 1246–1259. [Google Scholar] [CrossRef]
  43. Kumar, S.; Gautam, G.; Saha, S.K. Hyperspectral remote sensing data derived spectral indices in characterizing salt-affected soils: A case study of Indo-Gangetic plains of India. Environ. Earth Sci. 2015, 73, 3299–3308. [Google Scholar] [CrossRef]
  44. El-Shikha, D.M.; Waller, P.; Hunsaker, D.; Clarke, T.; Barnes, E. Ground-based remote sensing for assessing water and nitrogen status of broccoli. Agric. Water Manag. 2007, 92, 183–193. [Google Scholar] [CrossRef]
  45. Hunt, E.R., Jr.; Daughtry, C.S.T.; Eitel, J.U.H.; Long, D.S. Remote Sensing Leaf Chlorophyll Content Using a Visible Band Index. Agron. J. 2011, 103, 1090–1099. [Google Scholar] [CrossRef]
  46. Tucker, C.J.; Elgin, J.H., Jr.; McMurtrey, J.E.I.; Fan, C.J. Monitoring corn and soybean crop development with hand-held radiometer spectral data. Remote Sens. Environ. 1979, 8, 237–248. [Google Scholar] [CrossRef]
  47. Underwood, E.; Ustin, S.; DiPietro, D. Mapping nonnative plants using hyperspectral imagery. Remote Sens. Environ. 2003, 86, 150–161. [Google Scholar] [CrossRef]
  48. Wang, F.; Huang, J.; Tang, Y.; Wang, X. New Vegetation Index and Its Application in Estimating Leaf Area Index of Rice. Rice Sci. 2007, 14, 195–203. [Google Scholar] [CrossRef]
  49. Ehammer, A.; Fritsch, S.; Conrad, C.; Lamers, J.; Dech, S. Statistical derivation of fPAR and LAI for irrigated cotton and rice in arid Uzbekistan by combining multi-temporal RapidEye data and ground measurements. In Remote Sensing for Agriculture, Ecosystems, and Hydrology XII, Toulouse, France; SPIE: Bellingham, WA, USA, 2010; Volume 7824, p. 9. [Google Scholar]
  50. Ren, H.; Zhou, G.; Zhang, F. Using negative soil adjustment factor in soil-adjusted vegetation index (SAVI) for aboveground living biomass estimation in arid grasslands. Remote Sens. Environ. 2018, 209, 439–445. [Google Scholar] [CrossRef]
  51. Cao, Q.; Miao, Y.; Feng, G.; Gao, X.; Li, F.; Liu, B.; Yue, S.; Cheng, S.; Ustin, S.L.; Khosla, R. Active canopy sensing of winter wheat nitrogen status: An evaluation of two sensor systems. Comput. Electron. Agric. 2015, 112, 54–67. [Google Scholar] [CrossRef]
  52. Hu, H.; Zheng, K.; Zhang, X.; Lu, Y.; Zhang, H. Nitrogen Status Determination of Rice by Leaf Chlorophyll Fluorescence and Reflectance Properties. Sens. Lett. 2011, 9, 1207–1211. [Google Scholar] [CrossRef]
  53. Zhou, J.; Yungbluth, D.; Vong, C.N.; Scaboo, A.; Zhou, J. Estimation of the Maturity Date of Soybean Breeding Lines Using UAV-Based Multispectral Imagery. Remote Sens. 2019, 11, 2075. [Google Scholar] [CrossRef]
  54. Cao, Q.; Miao, Y.; Wang, H.; Huang, S.; Cheng, S.; Khosla, R.; Jiang, R. Non-destructive estimation of rice plant nitrogen status with Crop Circle multispectral active canopy sensor. Field Crop Res. 2013, 154, 133–144. [Google Scholar] [CrossRef]
  55. Sripada, R.P.; Heiniger, R.W.; White, J.G.; Meijer, A.D. Aerial color infrared photography for determining early in-season nitrogen requirements in corn. Agron. J. 2006, 98, 968–977. [Google Scholar] [CrossRef]
  56. Roujean, J.L.; Breon, F.M. Estimating par absorbed by vegetation from bidirectional reflectance measurements. Remote Sens. Environ. 1995, 51, 375–384. [Google Scholar] [CrossRef]
  57. Ren, H.; Zhou, G. Determination of green aboveground biomass in desert steppe using litter-soil-adjusted vegetation index. Eur. J. Remote Sens. 2014, 47, 611–625. [Google Scholar] [CrossRef]
  58. Rondeaux, G.; Steven, M.; Baret, F. Optimization of soil-adjusted vegetation indices. Remote Sens. Environ. 1996, 55, 95–107. [Google Scholar] [CrossRef]
  59. Dash, J.; Curran, P.J. The MERIS terrestrial chlorophyll index. Int. J. Remote Sens. 2004, 25, 5403–5413. [Google Scholar] [CrossRef]
  60. Gong, P.; Pu, R.L.; Biging, G.S.; Larrieu, M.R. Estimation of forest leaf area index using vegetation indices derived from Hyperion hyperspectral data. IEEE Trans. Geosci. Remote Sens. 2003, 41, 1355–1362. [Google Scholar] [CrossRef]
  61. Karbasi, M.; Jamei, M.; Ahmadianfar, I.; Asadi, A. Toward the accurate estimation of elliptical side orifice discharge coefficient applying two rigorous kernel-based data-intelligence paradigms. Sci. Rep. 2021, 11, 19784. [Google Scholar] [CrossRef]
  62. Huang, N.; Wang, L.; Song, X.; Black, T.A.; Jassal, R.S.; Myneni, R.B.; Wu, C.; Wang, L.; Song, W.; Ji, D.; et al. Spatial and temporal variations in global soil respiration and their relationships with climate and land cover. Sci. Adv. 2020, 6, eabb8508. [Google Scholar] [CrossRef]
  63. Nakagome, S.; Trieu, P.L.; He, Y.; Ravindran, A.S.; Contreras-Vidal, J.L. An empirical comparison of neural networks and machine learning algorithms for EEG gait decoding. Sci. Rep. 2020, 10, 4372. [Google Scholar] [CrossRef]
  64. Galie, F.; Rospleszcz, S.; Keeser, D.; Beller, E.; Illigens, B.; Lorbeer, R.; Grosu, S.; Selder, S.; Auweter, S.; Schlett, C.L.; et al. Machine-learning based exploration of determinants of gray matter volume in the KORA-MRI study. Sci. Rep. 2020, 10, 8363. [Google Scholar] [CrossRef] [PubMed]
  65. Li, Z.; Chen, Z.; Cheng, Q.; Duan, F.; Sui, R.; Huang, X.; Xu, H. UAV-Based Hyperspectral and Ensemble Machine Learning for Predicting Yield in Winter Wheat. Agronomy 2022, 12, 202. [Google Scholar] [CrossRef]
  66. Coburn, C.A.; Smith, A.M.; Logie, G.S.; Kennedy, P. Radiometric and spectral comparison of inexpensive camera systems used for remote sensing. Int. J. Remote Sens. 2018, 39, 4869–4890. [Google Scholar] [CrossRef]
  67. Furukawa, F.; Laneng, L.A.; Ando, H.; Yoshimura, N.; Kaneko, M.; Morimoto, J. Comparison of RGB and Multispectral Unmanned Aerial Vehicle for Monitoring Vegetation Coverage Changes on a Landslide Area. Drones 2021, 5, 97. [Google Scholar] [CrossRef]
  68. Wang, F.; Yi, Q.; Hu, J.; Xie, L.; Yao, X.; Xu, T.; Zheng, J. Combining spectral and textural information in UAV hyperspectral images to estimate rice grain yield. Int. J. Appl. Earth Obs. Geoinf. 2021, 102, 102397. [Google Scholar] [CrossRef]
  69. Liu, Y.; Feng, H.; Yue, J.; Li, Z.; Yang, G.; Song, X.; Yang, X.; Zhao, Y. Remote-sensing estimation of potato above-ground biomass based on spectral and spatial features extracted from high-definition digital camera images. Comput. Electron. Agric. 2022, 198, 107089. [Google Scholar] [CrossRef]
  70. AlSuwaidi, A.; Grieve, B.; Yin, H. Combining spectral and texture features in hyperspectral image analysis for plant monitoring. Meas. Sci. Technol. 2018, 29, 104001. [Google Scholar] [CrossRef]
  71. Xu, X.; Fan, L.; Li, Z.; Meng, Y.; Feng, H.; Yang, H.; Xu, B. Estimating Leaf Nitrogen Content in Corn Based on Information Fusion of Multiple-Sensor Imagery from UAV. Remote Sens. 2021, 13, 340. [Google Scholar] [CrossRef]
  72. Peterson, K.T.; Sagan, V.; Sidike, P.; Hasenmueller, E.A.; Sloan, J.J.; Knouft, J.H. Machine Learning-Based Ensemble Prediction of Water-Quality Variables Using Feature-Level and Decision-Level Fusion with Proximal Remote Sensing. Photogramm. Eng. Remote Sens. 2019, 85, 269–280. [Google Scholar] [CrossRef]
  73. Wang, J.; Shi, T.; Yu, D.; Teng, D.; Ge, X.; Zhang, Z.; Yang, X.; Wang, H.; Wu, G. Ensemble machine-learning-based framework for estimating total nitrogen concentration in water using drone-borne hyperspectral imagery of emergent plants: A case study in an arid oasis, NW China. Environ. Pollut. 2020, 266, 115412. [Google Scholar] [CrossRef]
  74. Rossel, R.A.V.; Webster, R.; Bui, E.N.; Baldock, J.A. Baseline map of organic carbon in Australian soil to support national carbon accounting and monitoring under climate change. Glob. Chang. Biol. 2014, 20, 2953–2970. [Google Scholar] [CrossRef] [PubMed]
  75. Frame, J.; Merrilees, D.W. The effect of tractor wheel passes on herbage production from diploid and tetraploid ryegrass swards. Grass Forage Sci. 1996, 51, 13–20. [Google Scholar] [CrossRef]
  76. Zhou, K.; Yang, Y.; Qiao, Y.; Xiang, T. Domain Adaptive Ensemble Learning. IEEE Trans. Image Process. 2021, 30, 8008–8018. [Google Scholar] [CrossRef] [PubMed]
  77. Fu, B.; Sun, J.; Wang, Y.; Yang, W.; He, H.; Liu, L.; Huang, L.; Fan, D.; Gao, E. Evaluation of LAI Estimation of Mangrove Communities Using DLR and ELR Algorithms With UAV, Hyperspectral, and SAR Images. Front. Mar. Sci. 2022, 9, 944454. [Google Scholar] [CrossRef]
  78. Osco, L.P.; Marques Ramos, A.P.; Faita Pinheiro, M.M.; Saito Moriya, E.A.; Imai, N.N.; Estrabis, N.; Ianczyk, F.; de Araujo, F.F.; Liesenberg, V.; de Castro Jorge, L.A.; et al. A Machine Learning Framework to Predict Nutrient Content in Valencia-Orange Leaf Hyperspectral Measurements. Remote Sens. 2020, 12, 906. [Google Scholar] [CrossRef]
  79. Osco, L.P.; Marcato Junior, J.; Marques Ramos, A.P.; Garcia Furuya, D.E.; Santana, D.C.; Ribeiro Teodoro, L.P.; Goncalves, W.N.; Rojo Baio, F.H.; Pistori, H.; Da Silva Junior, C.A.; et al. Leaf Nitrogen Concentration and Plant Height Prediction for Maize Using UAV-Based Multispectral Imagery and Machine Learning Techniques. Remote Sens. 2020, 12, 3237. [Google Scholar] [CrossRef]
  80. Wang, L.; Zhu, Z.; Sassoubre, L.; Yu, G.; Wang, Y. Improving the robustness of beach water quality modeling using an ensemble machine learning approach. Sci. Total Environ. 2020, 765, 142760. [Google Scholar] [CrossRef]
  81. Yin, J.; Medellín-Azuara, J.; Escriva-Bou, A.; Liu, Z. Bayesian machine learning ensemble approach to quantify model uncertainty in predicting groundwater storage change. Sci. Total Environ. 2021, 769, 144715. [Google Scholar]
  82. Singh, H.; Roy, A.; Setia, R.K.; Pateriya, B. Estimation of nitrogen content in wheat from proximal hyperspectral data using machine learning and explainable artificial intelligence (XAI) approach. Model. Earth Syst. Environ. 2022, 8, 2505–2511. [Google Scholar] [CrossRef]
Figure 1. Test area and plots.
Figure 2. Meteorological data for the whole wheat crop period 2020–2021. (a) Average daily temperature, (b) average daily radiation, and (c) average daily precipitation.
Figure 3. UAV platforms and sensors used for data collection. (a) DJI M210; (b) DJI Phantom 4 Pro; (c) RedEdge-MX multispectral sensor; (d) RGB sensor.
Figure 4. Ensemble learning model framework. GPR: Gaussian process regression; RFR: random forest regression; RR: ridge regression; ENR: elastic network regression; p: predictions from the individual models.
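For readers who wish to reproduce the stacking framework of Figure 4, the following is a minimal sketch assuming scikit-learn; the hyperparameters and the five-fold cross-validation are illustrative assumptions, not the settings used in this study.

```python
# A minimal sketch of the Figure 4 stacking framework, assuming scikit-learn;
# hyperparameters and cv=5 are placeholders, not the paper's settings.
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.linear_model import ElasticNet, Ridge

base_learners = [
    ("GPR", GaussianProcessRegressor()),                          # Gaussian process regression
    ("RFR", RandomForestRegressor(n_estimators=200, random_state=0)),
    ("RR", Ridge(alpha=1.0)),                                     # ridge regression
    ("ENR", ElasticNet(alpha=0.1, l1_ratio=0.5)),                 # elastic network regression
]

# Ridge regression serves as the secondary learner that combines the
# cross-validated predictions p of the four base models.
stack = StackingRegressor(estimators=base_learners, final_estimator=Ridge(), cv=5)
# stack.fit(X_train, y_train); tnc_pred = stack.predict(X_test)
```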
Figure 5. Statistical distribution of the prediction accuracy of the individual machine learning models and the ensemble learning model constructed from the spectral and texture features of UAV RGB and multispectral imagery. RT: RGB texture features; MS: multispectral spectral features; MT: multispectral texture features; ENR: elastic network regression; RFR: random forest regression; GPR: Gaussian process regression; RR: ridge regression; Stacking (RR): stacking regression with ridge regression as the secondary learner.
Figure 6. Importance of the individual machine learning models within the stacking ensemble for each of the seven input feature sets. RT: RGB texture features; MS: multispectral spectral features; MT: multispectral texture features; GPR: Gaussian process regression; RFR: random forest regression; RR: ridge regression; ENR: elastic network regression.
Figure 7. Observed versus predicted values for the best TNC prediction model built from each of the seven input feature sets. RT: RGB texture features; MS: multispectral spectral features; MT: multispectral texture features.
Figure 8. Map of TNC predicted by the optimal model.
Table 1. Information about the fertiliser.

| Treatment | Jointing Stage (kg·hm−2) | Heading Stage (kg·hm−2) | Fertiliser Type |
|---|---|---|---|
| N1 | 200 | 100 | Urea |
| N2 | 120 | 60 | Urea |
| N3 | 40 | 20 | Urea |
Table 2. Spectral and textural features of the RGB sensor.

| Data Type | Feature | Formula | Source |
|---|---|---|---|
| RGB | r | r = R/(R + G + B) | / |
| RGB | g | g = G/(R + G + B) | / |
| RGB | b | b = B/(R + G + B) | / |
| RGB | Visible atmospherically resistant index | VARI = (g − r)/(g + r − b) | [35] |
| RGB | Ground-level image index | GLA = (2 × g − r − b)/(2 × g + r + b) | [36] |
| RGB | Green-red vegetation index | GRVI = (g − r)/(g + r) | [37] |
| RGB | Excess red index | EXR = 2 × g − r − b | [38] |
| RGB | Normalised difference index | NDI = (r − g)/(r + g + 0.01) | [39] |
| RGB | g/r | g/r | [40] |
| RGB | r/b | r/b | [40] |
| RGB | Grey-level co-occurrence matrix | ME, HO, DI, EN, SE, VA, CO, COR | [41] |

/: empirical visible vegetation index; ME: mean; HO: homogeneity; DI: dissimilarity; EN: entropy; SE: second moment; VA: variance; CO: contrast; COR: correlation.
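As an illustration of how the Table 2 spectral features can be derived from an RGB orthomosaic, a minimal NumPy sketch follows; the epsilon guard against division by zero and the (H, W, 3) array layout are our assumptions, not details taken from the paper.

```python
import numpy as np

def rgb_features(img):
    """Visible-band features of Table 2 from an (H, W, 3) float array of R, G, B values."""
    R, G, B = img[..., 0], img[..., 1], img[..., 2]
    eps = 1e-9                                    # division-by-zero guard (our assumption)
    total = R + G + B + eps
    r, g, b = R / total, G / total, B / total     # normalised channels
    vari = (g - r) / (g + r - b + eps)            # visible atmospherically resistant index
    gla = (2 * g - r - b) / (2 * g + r + b + eps) # ground-level image index
    grvi = (g - r) / (g + r + eps)                # green-red vegetation index
    ndi = (r - g) / (r + g + 0.01)                # normalised difference index
    return r, g, b, vari, gla, grvi, ndi
```

The multispectral indices of Table 3 follow the same per-pixel band-arithmetic pattern.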
Table 3. Spectral and textural features of the multispectral sensor.

| Data Type | Feature | Formula | Source |
|---|---|---|---|
| MS | Chlorophyll vegetation index | CVI = (NIR × R)/G² | [42] |
| MS | Colouration index | CI = (R − B)/R | [43] |
| MS | Canopy chlorophyll content index | CCCI = [(NIR − RE)/(NIR + RE)]/[(NIR − R)/(NIR + R)] | [44] |
| MS | Chlorophyll index red-edge | CIRE = (NIR/RE) − 1 | [45] |
| MS | Green difference vegetation index | GDVI = NIR − G | [46] |
| MS | Normalised difference vegetation index | NDVI = (NIR − R)/(NIR + R) | [47] |
| MS | Green NDVI | GNDVI = (NIR − G)/(NIR + G) | [48] |
| MS | Normalised difference red-edge | NDRE = (NIR − RE)/(NIR + RE) | [49] |
| MS | Green soil-adjusted vegetation index | GSAVI = 1.5 × (NIR − G)/(NIR + G + 0.5) | [50] |
| MS | Green optimised soil-adjusted vegetation index | GOSAVI = (NIR − G)/(NIR + G + 0.16) | [51] |
| MS | Nitrogen reflectance index | NRI = (G − R)/(G + R) | [52] |
| MS | Green ratio vegetation index | GRVI = NIR/G | [53] |
| MS | Normalised red-edge index | NREI = RE/(NIR + RE + G) | [54] |
| MS | Normalised NIR index | NNI = NIR/(NIR + RE + G) | [55] |
| MS | Modified normalised difference index | MNDI = (NIR − RE)/(NIR − G) | [54] |
| MS | Difference vegetation index | DVI = NIR − R | [37] |
| MS | Renormalised difference vegetation index | RDVI = (NIR − R)/√(NIR + R) | [56] |
| MS | Soil-adjusted vegetation index | SAVI = 1.5 × (NIR − R)/(NIR + R + 0.5) | [57] |
| MS | Optimised SAVI | OSAVI = 1.16 × (NIR − R)/(NIR + R + 0.16) | [58] |
| MS | MERIS terrestrial chlorophyll index | MTCI = (NIR − RE)/(RE − R) | [59] |
| MS | Modified non-linear index | MNLI = 1.5 × (NIR² − R)/(NIR² + R + 0.5) | [60] |
| MS | Grey-level co-occurrence matrix | ME, HO, DI, EN, SE, VA, CO, COR | [41] |

MS: multispectral; ME: mean; HO: homogeneity; DI: dissimilarity; EN: entropy; SE: second moment; VA: variance; CO: contrast; COR: correlation.
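The GLCM texture features (ME, HO, DI, EN, SE, VA, CO, COR) listed in Tables 2 and 3 can be computed per band with scikit-image. The sketch below assumes a single pixel offset of 1, one angle, and 32 grey levels; none of these parameters are specified in the tables, so they are illustrative choices only.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(band, levels=32):
    """GLCM texture features of Tables 2 and 3 for a 2-D band (illustrative parameters)."""
    band = np.asarray(band, dtype=float)
    # Quantise the band to `levels` grey levels before building the co-occurrence matrix.
    q = np.digitize(band, np.linspace(band.min(), band.max(), levels)) - 1
    glcm = graycomatrix(q.astype(np.uint8), distances=[1], angles=[0],
                        levels=levels, symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]              # normalised co-occurrence probabilities
    i = np.arange(levels)
    me = (i[:, None] * p).sum()       # GLCM mean
    return {
        "ME": me,
        "HO": graycoprops(glcm, "homogeneity")[0, 0],
        "DI": graycoprops(glcm, "dissimilarity")[0, 0],
        "EN": -(p[p > 0] * np.log2(p[p > 0])).sum(),      # entropy
        "SE": graycoprops(glcm, "ASM")[0, 0],             # second (angular) moment
        "VA": (((i[:, None] - me) ** 2) * p).sum(),       # variance
        "CO": graycoprops(glcm, "contrast")[0, 0],
        "COR": graycoprops(glcm, "correlation")[0, 0],
    }
```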
Table 4. Statistics on the characteristics of TNC in samples from each plot at the heading stage (mg·g−1).

| Category | Observations | Min | Max | Mean | SD | Q25 | Q50 | Q75 | CV |
|---|---|---|---|---|---|---|---|---|---|
| All datasets | 180 | 8.26 | 31.63 | 20.07 | 5.70 | 16.12 | 19.28 | 24.74 | 0.28 |
| N1 dataset | 60 | 15.33 | 31.63 | 23.66 | 4.38 | 20.44 | 24.50 | 26.97 | 0.19 |
| N2 dataset | 60 | 12.20 | 30.72 | 21.28 | 5.00 | 17.40 | 20.68 | 25.47 | 0.23 |
| N3 dataset | 60 | 8.26 | 26.34 | 15.28 | 4.07 | 11.66 | 15.56 | 18.01 | 0.27 |

SD: standard deviation; Q25: lower quartile; Q50: median; Q75: upper quartile; CV: coefficient of variation.
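The Table 4 statistics can be reproduced for any vector of plot-level TNC values; a short NumPy sketch follows, where the use of the sample (ddof = 1) standard deviation is our assumption.

```python
import numpy as np

def tnc_summary(tnc):
    """Descriptive statistics of Table 4 for a 1-D array of TNC values (mg·g−1)."""
    tnc = np.asarray(tnc, dtype=float)
    q25, q50, q75 = np.percentile(tnc, [25, 50, 75])
    sd = tnc.std(ddof=1)                 # sample standard deviation (our assumption)
    return {
        "Observations": tnc.size,
        "Min": tnc.min(), "Max": tnc.max(), "Mean": tnc.mean(),
        "SD": sd, "Q25": q25, "Q50": q50, "Q75": q75,
        "CV": sd / tnc.mean(),           # coefficient of variation = SD / mean
    }
```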
Table 5. TNC prediction accuracy based on different machine learning methods.

| Sensor Type | Feature Type | Metric | GPR | RFR | RR | ENR | Stacking (RR) |
|---|---|---|---|---|---|---|---|
| RGB | Spectral | R² | 0.493 | 0.382 | 0.481 | 0.479 | 0.511 |
| | | RMSE (mg·g−1) | 4.273 | 4.591 | 4.303 | 4.401 | 4.216 |
| | | MSE ((mg·g−1)²) | 18.259 | 21.077 | 18.516 | 19.369 | 17.775 |
| | | RPD | 1.386 | 1.279 | 1.374 | 1.342 | 1.384 |
| | | RPIQ | 2.083 | 1.962 | 2.069 | 2.026 | 2.125 |
| MS | Spectral | R² | 0.541 | 0.465 | 0.515 | 0.505 | 0.551 |
| | | RMSE (mg·g−1) | 4.013 | 4.205 | 4.149 | 4.174 | 3.978 |
| | | MSE ((mg·g−1)²) | 16.104 | 17.682 | 17.214 | 17.422 | 15.824 |
| | | RPD | 1.468 | 1.373 | 1.420 | 1.405 | 1.468 |
| | | RPIQ | 2.194 | 2.068 | 2.113 | 2.104 | 2.198 |
| RGB + RGB | Spectral + textural | R² | 0.494 | 0.531 | 0.509 | 0.507 | 0.562 |
| | | RMSE (mg·g−1) | 4.179 | 3.955 | 4.138 | 4.158 | 3.947 |
| | | MSE ((mg·g−1)²) | 17.464 | 15.642 | 17.123 | 17.289 | 15.579 |
| | | RPD | 1.401 | 1.466 | 1.413 | 1.395 | 1.469 |
| | | RPIQ | 2.156 | 2.262 | 2.178 | 2.165 | 2.280 |
| MS + MS | Spectral + textural | R² | 0.625 | 0.650 | 0.630 | 0.625 | 0.672 |
| | | RMSE (mg·g−1) | 3.610 | 3.536 | 3.584 | 3.608 | 3.415 |
| | | MSE ((mg·g−1)²) | 13.032 | 12.503 | 12.845 | 13.018 | 11.662 |
| | | RPD | 1.641 | 1.686 | 1.657 | 1.645 | 1.738 |
| | | RPIQ | 2.483 | 2.543 | 2.490 | 2.478 | 2.625 |
| RGB + MS + RGB | Spectral + spectral + textural | R² | 0.570 | 0.554 | 0.554 | 0.546 | 0.597 |
| | | RMSE (mg·g−1) | 3.936 | 3.881 | 3.942 | 3.991 | 3.788 |
| | | MSE ((mg·g−1)²) | 15.492 | 15.062 | 15.539 | 15.928 | 14.349 |
| | | RPD | 1.504 | 1.504 | 1.484 | 1.468 | 1.540 |
| | | RPIQ | 2.256 | 2.278 | 2.239 | 2.224 | 2.337 |
| RGB + RGB + MS | Spectral + textural + textural | R² | 0.651 | 0.651 | 0.671 | 0.662 | 0.686 |
| | | RMSE (mg·g−1) | 3.599 | 3.553 | 3.468 | 3.495 | 3.386 |
| | | MSE ((mg·g−1)²) | 12.953 | 12.624 | 12.027 | 12.215 | 11.465 |
| | | RPD | 1.689 | 1.680 | 1.742 | 1.719 | 1.765 |
| | | RPIQ | 2.508 | 2.494 | 2.576 | 2.544 | 2.628 |
| RGB + MS + MS | Spectral + spectral + textural | R² | 0.659 | 0.675 | 0.668 | 0.664 | 0.699 |
| | | RMSE (mg·g−1) | 3.487 | 3.466 | 3.433 | 3.438 | 3.300 |
| | | MSE ((mg·g−1)²) | 12.159 | 12.013 | 11.785 | 11.820 | 10.890 |
| | | RPD | 1.720 | 1.714 | 1.745 | 1.734 | 1.802 |
| | | RPIQ | 2.568 | 2.534 | 2.579 | 2.562 | 2.668 |
| MS + RGB + MS | Spectral + textural + textural | R² | 0.666 | 0.675 | 0.671 | 0.675 | 0.710 |
| | | RMSE (mg·g−1) | 3.504 | 3.404 | 3.435 | 3.416 | 3.257 |
| | | MSE ((mg·g−1)²) | 12.278 | 11.587 | 11.799 | 11.669 | 10.608 |
| | | RPD | 1.719 | 1.713 | 1.738 | 1.734 | 1.802 |
| | | RPIQ | 2.605 | 2.637 | 2.639 | 2.643 | 2.746 |
| RGB + MS + RGB + MS | Spectral + spectral + textural + textural | R² | 0.670 | 0.697 | 0.700 | 0.692 | 0.726 |
| | | RMSE (mg·g−1) | 3.456 | 3.365 | 3.352 | 3.362 | 3.203 |
| | | MSE ((mg·g−1)²) | 11.944 | 11.323 | 11.236 | 11.303 | 10.259 |
| | | RPD | 1.735 | 1.769 | 1.822 | 1.798 | 1.867 |
| | | RPIQ | 2.647 | 2.731 | 2.724 | 2.708 | 2.827 |
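The accuracy metrics of Table 5 can be computed as sketched below; RPD is taken as SD(observed)/RMSE and RPIQ as IQR(observed)/RMSE, the usual chemometric definitions, which we assume the study follows.

```python
import numpy as np

def regression_metrics(y_obs, y_pred):
    """R², RMSE, MSE, RPD and RPIQ as reported in Table 5 (units follow y)."""
    y_obs = np.asarray(y_obs, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = ((y_obs - y_pred) ** 2).sum()
    ss_tot = ((y_obs - y_obs.mean()) ** 2).sum()
    mse = ((y_obs - y_pred) ** 2).mean()
    rmse = np.sqrt(mse)
    q25, q75 = np.percentile(y_obs, [25, 75])
    return {
        "R2": 1 - ss_res / ss_tot,
        "RMSE": rmse,
        "MSE": mse,
        "RPD": y_obs.std(ddof=1) / rmse,   # ratio of performance to deviation
        "RPIQ": (q75 - q25) / rmse,        # ratio of performance to interquartile distance
    }
```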
Table 6. t-test under different nitrogen treatments.

| Comparison | t | p-Value |
|---|---|---|
| N1 vs. N2 | 3.847 | 0.000 |
| N1 vs. N3 | 9.416 | 0.000 |
| N2 vs. N3 | 5.654 | 0.000 |
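The pairwise comparisons in Table 6 correspond to two-sample t-tests between nitrogen treatments; a minimal SciPy sketch follows. Whether pooled or Welch variances were used is not stated, so the `equal_var=True` setting is our assumption, and the example variable names are hypothetical.

```python
from scipy import stats

def compare_treatments(tnc_a, tnc_b):
    """Two-sample t-test between the TNC samples of two treatments, as in Table 6."""
    t, p = stats.ttest_ind(tnc_a, tnc_b, equal_var=True)  # pooled-variance Student's t
    return t, p

# e.g. t, p = compare_treatments(tnc_n1, tnc_n2)  # hypothetical arrays of plot-level TNC
```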