Article

Machine Learning for Precise Rice Variety Classification in Tropical Environments Using UAV-Based Multispectral Sensing

1 Department of Forest Resources Conservation and Ecotourism, Faculty of Forestry and Environment, IPB University, Bogor 16680, Indonesia
2 Japan Society for Promotion of Science (JSPS) Ronpaku Fellow, Tokyo 102-0083, Japan
3 Environmental Research Center, IPB University, Bogor 16680, Indonesia
4 Department of Agronomy and Horticulture, Faculty of Agriculture, IPB University, Bogor 16680, Indonesia
5 Graduate School of Agriculture, Kyoto University, Kyoto 606-8502, Japan
6 Program of Agrotechnology, Faculty of Agriculture, Universitas Singaperbangsa Karawang, Karawang 41361, Indonesia
7 Center for Environmental Remote Sensing (CEReS), Chiba University, Chiba 263-8522, Japan
* Author to whom correspondence should be addressed.
AgriEngineering 2023, 5(4), 2000-2019; https://doi.org/10.3390/agriengineering5040123
Submission received: 19 September 2023 / Revised: 22 October 2023 / Accepted: 26 October 2023 / Published: 1 November 2023
(This article belongs to the Special Issue Remote Sensing-Based Machine Learning Applications in Agriculture)

Abstract:
An efficient assessment of rice varieties in tropical regions is crucial for selecting cultivars suited to unique environmental conditions. This study explores machine learning algorithms that leverage multispectral sensor data from UAVs to evaluate rice varieties. It focuses on three paddy rice types at different ages (six, nine, and twelve weeks after planting), analyzing data from four spectral bands and vegetation indices using various algorithms for classification. The results show that the neural network (NN) algorithm is superior, achieving an area under the curve value of 0.804. The twelfth week post-planting yielded the most accurate results, with green reflectance as the dominant predictor, surpassing traditional vegetation indices. This study demonstrates the rapid and effective classification of rice varieties using UAV-based multispectral sensors and NN algorithms to enhance agricultural practices and global food security.

1. Introduction

Variety classification plays a pivotal role in assessing yield and optimizing rice-field cultivation. Different rice varieties possess distinctive characteristics, including growth habits [1,2,3], nutrient requirements [4,5,6], disease resistance [7,8,9,10], and tolerance to environmental stressors, such as drought, heat, and pests [11,12]. Identifying the best-performing varieties in each region allows farmers to enhance their yields and increase profitability. Accurate variety classification leads to more precise yield estimation. Additionally, diverse rice varieties require specific management practices, including fertilization and irrigation. Tailoring these practices to the particular needs of each variety, made possible by accurate variety prediction, enhances crop health and boosts yields. Variety classification is also crucial for predicting plant capabilities for handling threats such as pests and diseases [13]. Moreover, yield and pest/disease management are critical considerations in agricultural insurance assessments, underscoring the significance of variety classifications in the initial stages of such evaluations. For this purpose, researchers have made considerable efforts to develop remote sensing technology, including the utilization of drones or unmanned aerial vehicles (UAVs), which have made significant strides in agriculture [14,15,16,17].
Supported by advances in remote sensing technology, UAVs have developed rapidly over the last decade. They offer high spatial resolution for agriculture and flexible, low-cost monitoring, especially for frequent or scheduled observations [18,19,20,21,22,23,24]. In line with this, the range of sensors available for UAVs is also increasing [25,26]. The multispectral sensor is one payload developed for UAVs, imitating the multispectral imagers carried on satellite platforms, such as the Operational Land Imager (OLI) on Landsat 8 or the MultiSpectral Instrument (MSI) on Sentinel-2, but with finer spatial resolution. Multispectral sensors mounted on UAVs are used in agriculture primarily to calculate vegetation indices, such as the traditional normalized difference vegetation index (NDVI), to assess plant condition. The NDVI derived from a multispectral sensor on a UAV is accurate, showing high coefficients of determination against direct measurements [27]. The leaf chlorophyll index (LCI) is another vegetation index used to estimate leaf chlorophyll content, an indicator of photosynthetic activity. The LCI relies on reflectance values at red-edge and near-infrared wavelengths, which are sensitive to the chlorophyll content of plant leaves [28,29,30,31]. Research indicates that the LCI can offer more accurate estimates of chlorophyll content than other indices such as the NDVI and the chlorophyll vegetation index (CVI) [27,28,29,30,31].
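For reference, the NDVI is conventionally defined from the red and NIR reflectances, while the LCI is commonly written using the red-edge band. The expressions below are standard textbook forms; the exact formulas adopted in this study are those listed later in Table 2, and the LCI formulation in particular varies across the literature cited above, so the second expression should be read as one common variant rather than a definitive statement:

\[
\mathrm{NDVI} = \frac{\rho_{\mathrm{NIR}} - \rho_{\mathrm{Red}}}{\rho_{\mathrm{NIR}} + \rho_{\mathrm{Red}}},
\qquad
\mathrm{LCI} = \frac{\rho_{\mathrm{NIR}} - \rho_{\mathrm{RedEdge}}}{\rho_{\mathrm{NIR}} + \rho_{\mathrm{Red}}}
\]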
Existing studies have employed approaches such as extreme learning machine (ELM), support vector machine (SVM), and random forest (RF) algorithms [32,33] to accurately classify paddy growth stages and map the spatial distribution of paddy rice fields. These studies collectively suggest that the SVM and RF algorithms are suitable for classifying different paddy varieties using remote sensing data. However, the classification of paddy rice varieties using UAV data has not been extensively explored. Although machine learning techniques have been extensively studied for various topics, such as yield estimation, their application for rice variety classification using multispectral data from UAVs is still limited. Wang et al. [34] investigated hyperspectral imaging (HSI) to distinguish rice varieties and assess their quality, successfully generating a classification map that visualized distinct rice varieties. However, it is essential to note that the analysis was limited to the rice grains. Another study by Darvishsefat et al. [35] evaluated the spectral reflectance of rice varieties using hyperspectral remote sensing, demonstrating the potential for accurately mapping the cultivated areas of rice varieties based on hyperspectral remotely sensed data. Nevertheless, when considering the advantages of multispectral sensors, such as better performance, cost-effectiveness, and improved spatial coverage compared to hyperspectral sensors [36,37,38], the reduced complexity and data processing requirements of multispectral sensors outweigh the benefits of the higher spectral resolution offered by hyperspectral data. Hyperspectral remote sensing, while capturing a more detailed spectral signature, requires sophisticated data processing techniques and considerable computational resources. Consequently, there is an opportunity for further research on integrating machine learning algorithms with the multispectral data collected by UAVs for rice variety classification.
In this study, we employed a multispectral sensor attached to a quadcopter UAV and explored machine learning approaches for classifying paddy varieties. We evaluated several machine learning algorithms, including a neural network (NN), decision tree, SVM, RF, naïve Bayes, and logistic regression. Additionally, we examined the effectiveness of ensemble algorithms, such as AdaBoost, gradient boosting, and a combination of high-performing algorithms as stacked learners. The performance of these algorithms was assessed by classifying three rice varieties, INPARI-32, INPARI-33, and INPARI-43 [39,40,41], which are high-yield rice varieties developed by the Indonesian Agency for Agricultural Research and Development (IAARD) in collaboration with the International Rice Research Institute (IRRI), at three different growth stages: six, nine, and twelve weeks after planting (WAP). This study aimed to determine the most effective algorithm for accurately classifying rice varieties, considering features such as reflectance from multispectral bands and vegetation indices. By enhancing automated systems for rice variety classification, this study contributes to improved agricultural practices. It offers a reliable method for farmers, researchers, and stakeholders to efficiently identify and manage specific rice varieties at the optimal growth stage.
The structure of this article is as follows. The Introduction provides an overview of the research problem, its significance, and our research objectives. In the Methods section, we detail the methods and techniques employed in our study, including a description of the study location and instruments, data collection procedures, experimental setup, data analysis methods, feature selection, accuracy assessment procedures, and feature contribution analysis. We present the key findings of our research in the Results section and thoroughly discuss them in the Discussion section. Finally, we conclude our study and propose potential areas for improvement in the Conclusions and Future Work section.

2. Methods

This study involved a series of well-defined steps, beginning with the acquisition of field data and culminating in the preparation of these data for use as a dataset in machine learning algorithms. The overall progression of these steps is visually represented in Figure 1, which provides an overview of the workflow of the study. Each part of the general flow is explained in the following section.

2.1. Location and Instruments

This study was conducted at the Indonesia Center for Rice Research in Subang, West Java Province, Indonesia (Figure 2). Two quadcopter drones were used: a DJI Mavic 2 Pro and a DJI Inspire 1, both produced by DJI (SZ DJI Technology Co., Ltd., Shenzhen, Guangdong, China). The former was used to generate a detailed map of the study area, while a Parrot Sequoia multispectral sensor was mounted as a payload on the latter (Figure 3a).
Parrot Sequoia is equipped with a 4-band multispectral sensor with 1.2 megapixels (1280 × 960). The four bands were green (550 ± 40 nm), red (660 ± 40 nm), red edge (735 ± 10 nm), and near-infrared (NIR) (790 ± 40 nm) with horizontal and vertical fields of view (FOV) of 62° and 49°, respectively. An attached sunshine sensor compensates for the dynamic changes in solar illumination. This sensor makes it possible to obtain the reflectance under all illumination conditions [42]. The sensor was calibrated using a calibration panel to normalize the images and correct for atmospheric effects.
The UAV was operated at an altitude of 75 m above the takeoff point to capture the study area, as shown in Figure 3b. This flight altitude satisfied safety precautions, considering the presence of structures and trees near the study plots, while ensuring a detailed ground sampling distance. We used Pix4D Capture (Pix4D S.A., Prilly, Switzerland) version 4.13.0 as the mission planning software and configured the side and front overlap fractions to 80%. Using the “fast mode” in Pix4D Capture, the UAV captured the images while translating at a constant velocity of approximately 6 m/s. With a horizontal FOV of 62°, the width of the captured area was approximately 90 m and the ground sampling distance was 8.2 cm/pixel (Figure 3b). Because this resolution was sufficient for the purpose of this study, we did not further assess the effects of different flight altitudes. The observations took place from 09:00 to 11:00 on 9 November, 30 November, and 28 December 2022 under clear-sky conditions.

2.2. Experimental Plot Design

In this study, 24 experimental plots were observed, each featuring one of three high-yield rice varieties: INPARI-32, INPARI-33, and INPARI-43. These three varieties share similar physical characteristics, with the primary distinguishing feature being height. INPARI-32 was the tallest, measuring approximately 97 cm, followed by INPARI-33 at approximately 93 cm and INPARI-43 at approximately 83 cm.
Each variety was represented by eight replicate plots. These plots were distributed across two locations: Unit 1 and Unit 2 (Figure 4). Plots in Unit 1 measured 12 m by 4 m, while plots in Unit 2 measured 9.5 m by 4.2 m. In the present analysis, no distinction was made between these two locations. Twenty random points were created in each plot. To ensure that the points fell on the observed plants and not at the plot edges, a 0.5 m buffer was maintained from each border when generating the random points. This equated to 20 random points multiplied by 24 plots, resulting in 480 samples (Table 1). From these samples, we extracted all multispectral bands and indices associated with each point. We observed the plants at three growth stages: six, nine, and twelve WAP.
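As an illustration of this sampling scheme, the sketch below draws 20 random points inside a rectangular plot while keeping a 0.5 m buffer from every border. The plot dimensions, random seed, and function name are illustrative and are not taken from the study's actual processing scripts.

```python
import numpy as np

def sample_plot_points(width_m, length_m, n_points=20, buffer_m=0.5, seed=0):
    """Draw random (x, y) positions inside a plot, excluding a border buffer."""
    rng = np.random.default_rng(seed)
    xs = rng.uniform(buffer_m, width_m - buffer_m, n_points)
    ys = rng.uniform(buffer_m, length_m - buffer_m, n_points)
    return np.column_stack([xs, ys])

# Example: a Unit 1 plot of 12 m x 4 m yields 20 sample points;
# repeating this for 24 plots gives the 480 samples listed in Table 1.
points = sample_plot_points(width_m=12.0, length_m=4.0)
print(points.shape)  # (20, 2)
```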

2.3. Airborne Data Processing

In addition to the multispectral bands, we generated vegetation indices (VIs) from the acquired data. These indices, derived from mathematical formulas combining different spectral bands, offer further insight into the health, vigor, and physiological characteristics of rice plants. Table 2 summarizes the formulas used to generate the vegetation indices. The raw data acquired from the airborne platform in the field were processed using Pix4D Mapper software version 4.8.4 to obtain orthomosaic data for each multispectral band. Subsequently, six vegetation indices, NDVI, LCI, GNDVI (green NDVI), SAVI (soil-adjusted vegetation index), OSAVI (optimized SAVI), and LAI (leaf area index), were derived from the orthomosaic multispectral data. To ensure precise alignment, we performed a geometric correction using the high-resolution orthomosaic map obtained from the DJI Mavic 2 Pro as the reference; both the individual multispectral bands and the vegetation indices were orthorectified against this map. Several points within the field, such as the corners of the blocks, served as ground control points (GCPs).
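A minimal sketch of the index computation is shown below. It uses widely cited formulations for the six indices; the LCI and LAI expressions in particular vary across the literature, so the forms here are assumptions standing in for the exact formulas listed in Table 2, and the input arrays would in practice come from the orthorectified per-band reflectance rasters.

```python
import numpy as np

def vegetation_indices(green, red, red_edge, nir, eps=1e-9):
    """Compute common vegetation indices from reflectance arrays (values in 0-1)."""
    ndvi  = (nir - red) / (nir + red + eps)
    gndvi = (nir - green) / (nir + green + eps)
    lci   = (nir - red_edge) / (nir + red + eps)          # one common LCI formulation
    savi  = 1.5 * (nir - red) / (nir + red + 0.5 + eps)   # soil-adjusted VI with L = 0.5
    osavi = (nir - red) / (nir + red + 0.16 + eps)        # optimized SAVI
    lai   = -np.log((0.69 - savi) / 0.59) / 0.91          # empirical SAVI-based LAI (assumed form)
    return {"NDVI": ndvi, "GNDVI": gndvi, "LCI": lci,
            "SAVI": savi, "OSAVI": osavi, "LAI": lai}

# Example with synthetic reflectance values for a single canopy pixel
bands = dict(green=np.array([0.08]), red=np.array([0.05]),
             red_edge=np.array([0.30]), nir=np.array([0.45]))
print({k: float(v[0]) for k, v in vegetation_indices(**bands).items()})
```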

2.4. Feature Selection

We utilized a range of scoring methods to assess the relationship between the features (multispectral bands and vegetation indices) and the target variable (variety: INPARI-32, INPARI-33, or INPARI-43). These methods encompassed internal scorers such as information gain, chi-squared, and linear regression. By measuring the association between each feature and the target variable with the appropriate scorers and models, we assigned a score to each feature to determine its significance and predictive capability. At each growth stage, we ranked the features and selected the top five for that stage.
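The ranking step can be approximated with scikit-learn's univariate scorers, as sketched below. The scorers mirror the ones named above (mutual information as a proxy for information gain, chi-squared, and an ANOVA F-score standing in for the regression-based scorer), but this is an approximation of the Orange workflow rather than its exact implementation, and the column names in the usage comment are assumptions.

```python
import pandas as pd
from sklearn.feature_selection import chi2, f_classif, mutual_info_classif
from sklearn.preprocessing import MinMaxScaler

def rank_features(X: pd.DataFrame, y, top_k=5):
    """Score each feature with several univariate criteria and average the ranks."""
    X_pos = MinMaxScaler().fit_transform(X)  # chi2 requires non-negative inputs
    scores = pd.DataFrame({
        "info_gain": mutual_info_classif(X, y, random_state=0),
        "chi2": chi2(X_pos, y)[0],
        "anova_f": f_classif(X, y)[0],
    }, index=X.columns)
    mean_rank = scores.rank(ascending=False).mean(axis=1).sort_values()
    return mean_rank.head(top_k)  # top-k features, best (lowest) average rank first

# Usage sketch: X holds one column per band/index, y holds the variety labels, e.g.
# X = samples[["Green", "Red", "RedEdge", "NIR", "NDVI", "LCI", "GNDVI", "SAVI", "OSAVI", "LAI"]]
# print(rank_features(X, samples["variety"]))
```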

2.5. Classification Algorithms

We conducted an extensive evaluation of machine learning classification algorithms to assess their performance in classifying rice varieties. The six individual algorithms tested were NN, decision tree, SVM, RF, naïve Bayes, and logistic regression. In addition to these individual algorithms, we also explored the performance of ensemble algorithms, which combine multiple base classifiers to improve classification performance; specifically, we examined AdaBoost, gradient boosting, and a stack of high-performing algorithms. Ensemble algorithms have been shown to enhance classification accuracy by combining the strengths of different models and mitigating their weaknesses. Table 3 lists all classification algorithms and their respective references.
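An equivalent model zoo can be sketched in scikit-learn as below. The study itself ran these algorithms in Orange Data Mining with default settings (Section 2.6), so the specific classes, hyperparameters, and the choice of base learners inside the stacked model are stand-ins rather than the exact configuration listed in Table 3.

```python
from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Six individual classifiers, kept at library defaults
individual = {
    "NN": MLPClassifier(max_iter=1000, random_state=0),
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(probability=True, random_state=0),
    "RF": RandomForestClassifier(random_state=0),
    "Naive Bayes": GaussianNB(),
    "Logistic regression": LogisticRegression(max_iter=1000),
}

# Ensemble classifiers; the stacked learner combines assumed high-performing base models
ensembles = {
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "Gradient boosting": GradientBoostingClassifier(random_state=0),
    "Stack": StackingClassifier(
        estimators=[("nn", individual["NN"]), ("rf", individual["RF"]),
                    ("svm", individual["SVM"])],
        final_estimator=LogisticRegression(max_iter=1000)),
}

models = {**individual, **ensembles}  # nine algorithms in total
```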

2.6. Accuracy Assessment

K-fold cross-validation is a standard method for evaluating machine learning models. In the present analysis, we divided the dataset into ten subsets, trained each model on nine of them, and tested it on the remaining one. This process was repeated ten times, with each subset serving once as the validation set. The procedure provides a robust estimate of model performance, helps to detect overfitting, and makes efficient use of the available data. We assessed accuracy using five standard metrics: area under the curve (AUC), classification accuracy (CA), F1, precision, and recall. Together, these metrics provide a comprehensive measure of how effectively each algorithm classifies the rice varieties.
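The same evaluation can be expressed with scikit-learn, as sketched below: stratified 10-fold splits scored with the five metrics above, macro-averaged for the three-class problem. This mirrors, but is not identical to, the evaluation performed in Orange; the models dictionary is the illustrative one from the previous sketch.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_validate

def evaluate(model, X, y):
    """10-fold cross-validation reporting AUC, CA, F1, precision, and recall."""
    scoring = {
        "AUC": "roc_auc_ovr",          # one-vs-rest AUC for the three varieties
        "CA": "accuracy",
        "F1": "f1_macro",
        "Precision": "precision_macro",
        "Recall": "recall_macro",
    }
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    res = cross_validate(model, X, y, cv=cv, scoring=scoring)
    return {name: float(np.mean(res[f"test_{name}"])) for name in scoring}

# Usage sketch (X: selected features, y: variety labels):
# for name, model in models.items():
#     print(name, evaluate(model, X, y))
```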
All classifications, as well as the accuracy assessment, were conducted using Orange Data Mining [58] software version 3.34. This open-source software is used for data mining and machine learning. Developed by the Bioinformatics Lab at the University of Ljubljana, Slovenia and a community of developers and researchers worldwide, it allows users to build workflows and models using drag-and-drop components. In this study, we used the default parameters set in the software, without parameter tuning.

2.7. Feature Contribution Analysis

During the analysis, the performance of the classification model was tested with and without individual features (each of the input variables, namely, the four multispectral bands and six vegetation indices) to observe how each feature affected classification accuracy. The contribution of each feature was then measured using the permutation feature importance technique. This method reveals which features are most influential in making predictions and supports feature selection and model interpretation. The basic idea behind permutation feature importance is to measure the extent to which a model’s performance (e.g., accuracy, F1 score, or another relevant metric) decreases when the values of a particular feature are randomly shuffled while the other features are kept constant. To investigate how each feature contributes to the classification of a specific class, we used the SHAP library [59], which estimates the degree to which each feature influences the output of the model.
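A compact sketch of both analyses is given below: permutation importance records the drop in AUC when one feature is shuffled, and a model-agnostic SHAP explainer attributes the predicted class probabilities to individual features. The choice of KernelExplainer and the sample sizes are assumptions made for illustration, not the exact configuration used with the SHAP library in this study.

```python
import shap
from sklearn.inspection import permutation_importance

def feature_analysis(fitted_model, X_test, y_test, feature_names):
    """Permutation importance (AUC drop) and SHAP values for a fitted classifier."""
    perm = permutation_importance(
        fitted_model, X_test, y_test,
        scoring="roc_auc_ovr", n_repeats=10, random_state=0)
    for name, drop in zip(feature_names, perm.importances_mean):
        print(f"{name}: mean AUC decrease when shuffled = {drop:.3f}")

    # Model-agnostic SHAP attribution; a small background sample keeps it tractable
    background = shap.sample(X_test, 50, random_state=0)
    explainer = shap.KernelExplainer(fitted_model.predict_proba, background)
    shap_values = explainer.shap_values(X_test[:100])  # per-class attributions
    return perm, shap_values
```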

3. Results

3.1. Spectral Characteristics of Observed Plants

We observed dynamic changes in the spectral and vegetation index values from six to twelve WAP. Figure 5 shows these changes in spectral reflectance and vegetation indices for all three varieties. The spectral values of plants tend to change during the growing stages [45,60] because of the physiological and morphological changes that occur as the plant develops. Under normal conditions, the paddy spectra change from six to nine WAP because nine WAP corresponds to the peak of the vegetative stage of paddy growth [61]. The NIR reflectance and the vegetation indices (NDVI, LCI, GNDVI, SAVI, and OSAVI) usually decrease from nine to twelve WAP because the latter falls in the generative stage, when the plants are ready to be harvested [62]. Similar results were observed in the present study: the spectral reflectance in the red-edge and NIR bands, as well as the vegetation indices, increased from six to nine WAP and decreased from nine to twelve WAP.

3.2. Feature Selection

Table 4 summarizes our feature selection analysis, listing the specific features selected at each growth stage along with their associated attributes. Notably, the Green feature was the only variable that appeared consistently across all growth-stage datasets.
After identifying the features for each growth stage, we assessed the intraclass variability of these features. Intraclass variability refers to the degree of variation within each class or category of rice varieties and is evaluated as the standard deviation. This analysis provides insights into the consistency and reliability of the features for differentiating between rice varieties. A lower standard deviation suggests that the values of the feature within a specific growth stage are more closely clustered around the mean, whereas a higher standard deviation indicates more noticeable variability within the class. By quantifying the intraclass variability, we assessed the discriminative power of the features and their ability to accurately distinguish between different rice varieties. Lower intraclass variability implies that the feature is more consistent and reliable in capturing the distinctive characteristics of each variety, thereby enhancing the precision of the classification process. From this analysis, we found that the Green feature exhibited the lowest standard deviation among the growth stages (Figure 6).
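This per-class spread can be computed directly from the sample table, as sketched below: the extracted values are grouped by growth stage and variety, and the standard deviation of each selected feature is reported. The column names are assumptions matching the dataset layout described in Section 2.2, and the feature list in the usage comment is only an example.

```python
import pandas as pd

def intraclass_variability(samples: pd.DataFrame, features):
    """Standard deviation of each feature within every (growth stage, variety) class."""
    # Assumed columns: "WAP" (6, 9, or 12), "variety" (INPARI-32/33/43), plus feature columns
    sd = samples.groupby(["WAP", "variety"])[list(features)].std()
    # Average over varieties to compare growth stages, as summarized in Figure 6
    return sd.groupby(level="WAP").mean()

# Usage sketch with an example set of selected features:
# print(intraclass_variability(samples, ["Green", "LCI", "NDVI", "GNDVI", "LAI"]))
```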

3.3. Varieties Classification

We then examined whether multiple features could be used together to determine the rice varieties. We assessed nine classification algorithms using the four multispectral bands and six vegetation indices as features and examined how classification performance varied among the algorithms.
Table 5 presents the average performance of various machine learning algorithms for classifying paddy rice varieties using multiple variables as features. Upon analyzing the results, it became apparent that the performance of the algorithms varied based on the growth stage of the plants. For example, the best algorithm at six WAP was the NN; at nine WAP, it was the stack algorithm; and at twelve WAP, it was the NN. Among these, the NN algorithm demonstrated the highest performance in terms of CA, F1, precision, and recall, specifically at twelve WAP.
We further assessed the classification performance of the machine learning algorithms for each rice variety, as shown in Table 6. We found that for six and nine WAP, the stack algorithm exhibited commendable performance in terms of CA when applied to the INPARI-32 variety. Notably, this algorithm achieved a CA of 0.834 and 0.816 for six and nine WAP, respectively, effectively distinguishing INPARI-32 from the other varieties. However, the results differed slightly when considering the results for twelve WAP. During this period, the algorithms demonstrated a more satisfactory ability to differentiate INPARI-43 from other varieties. Specifically, NN yielded the highest CA of 0.841, closely followed by the stack algorithm, with a CA of 0.831.
The performance of each algorithm is reflected in the classification of the paddy varieties. As shown in Figure 7, NN performed well in classifying the three varieties: INPARI-32, INPARI-33, and INPARI-43. Approximately 60% of the training samples for INPARI-32 were correctly classified. The remaining 40% were classified incorrectly as INPARI-33 and INPARI-43. Logistic regression provided the lowest accuracy. This algorithm failed to classify the rice varieties correctly.

3.4. Features Contribution

Based on these results, the NN algorithm applied at twelve WAP emerged as the most effective approach for determining rice varieties in this study. Subsequently, we analyzed the contribution of each feature to identifying the varieties. Using permutation feature importance, we found that among the features selected at twelve WAP, the Green feature made the largest contribution, surpassing vegetation indices such as the LCI, GNDVI, NDVI, and LAI. The significance of the Green feature is evident in Figure 8, where it is ranked as the most essential feature. Notably, the absence of this band resulted in a decrease of approximately 0.24 in the AUC of the classification. A decrease in the AUC indicates a reduction in the model’s ability to distinguish between classes; since AUC values range from 0 to 1, a decrease of 0.24 is substantial, suggesting that the Green feature has a considerable impact on the model’s performance.
We also analyzed the contribution of each feature to the classification of each rice variety. The results indicated that the Green feature demonstrated the highest predictive power in classifying the INPARI-32 variety, as illustrated in Figure 9a. Specifically, a lower value (reflectance) of Green during the twelfth WAP exhibited a greater contribution towards identifying INPARI-32. By contrast, a higher LCI value was observed to have a stronger association with the classification of this variety.
To detect INPARI-33, Green was also found to have a higher contribution, with higher values contributing more to the classification of this rice variety, as illustrated in Figure 9b. Additionally, Green was the feature with the highest contribution in classifying INPARI-43, followed by the two vegetation indices (LCI and NDVI), as shown in Figure 9c.
In this study, the three varieties exhibited distinct Green reflectance characteristics at twelve WAP, as shown by the normal distribution curves in Figure 10. INPARI-32 can be readily identified by its low green reflectance, whereas high green reflectance can be used to identify INPARI-43. Moreover, the map in Figure 11 provides a visual representation of Green reflectance across the experimental plots of the three varieties, which can be distinguished visually.

4. Discussion

Every remote sensing technology has its advantages and disadvantages, and UAVs used as the primary tool for monitoring rice varieties, as reported in this study, are no exception. The most notable limitation is the limited capacity of UAVs to cover extensive areas. Their restricted spatial coverage may require additional flight missions, creating logistical complexities when dealing with large areas. Satellite remote sensing can address this challenge because it offers broad spatial coverage. Moreover, some satellite imagery already has a very high spatial resolution; GeoEye-1, for example, operated by Maxar Technologies (formerly DigitalGlobe) as part of the WorldView constellation, which consists of four WorldView satellites and GeoEye-1, has a spatial resolution of 41 cm/pixel, which approaches that of a UAV. In terms of temporal resolution, which is related to the revisit time, it offers a 4.6-day revisit cycle, which is relatively frequent. However, a major drawback of high-resolution satellite imagery is its high cost. Hence, UAVs remain superior in terms of spatial, spectral, and temporal resolution, even though they have limited spatial coverage. Furthermore, UAV development has now entered the era of fixed-wing platforms, which offer more efficient and broader coverage.
This study investigated the application of machine learning to classify different paddy rice varieties using UAV data. Our analysis involved exploring various algorithms to identify the most effective approach tailored to this task. We also aimed to identify the most suitable growth stage of rice plants for reliable variety classification. Our findings revealed that machine learning algorithms perform differently depending on the growth stage of the plant, with twelve WAP found to be the optimal growth stage. Overall, the NN emerged as the most effective algorithm in this study.
Several studies have indicated that the growth stage of paddy rice plants affects transpiration, photosynthesis, and the response to soil water potential, as well as the impact of nitrogen application on yield [63,64]. In this study, we examined rice images captured at six, nine, and twelve WAP. During these stages, paddy rice undergoes significant physiological and morphological changes, which can affect its spectral reflectance. Our study revealed that twelve WAP is the optimum growth stage for rice variety classification, with the NN as the best machine learning algorithm. This is due to several factors, such as morphological differences, increased canopy development, stable feature representation, and reduced intraclass variability. By the twelfth WAP, paddy rice plants have reached a more advanced growth stage, and the morphological differences among varieties, including variations in leaf shape, size, color, and overall plant structure, are more pronounced. These characteristics are reflected in the spectral reflectance captured by the UAV sensor, and machine learning algorithms can leverage these distinctive features to classify the varieties more accurately. It is also worth noting that the intraclass variability at the twelfth week after planting was the lowest, as shown in Figure 6. Such a reduction in intraclass variability plays a crucial role in improving classification accuracy, as it minimizes the overlap and confusion between different plant varieties. By reducing the variation within each class and enhancing the separability between classes, machine learning algorithms can achieve higher accuracy and reliability in the classification process. This observation is supported by previous studies, which highlighted the positive impact of reduced intraclass variability on classification performance. Hao et al. [65] reduced intraclass variability through instance-level embedding adaptation, significantly improving the classification accuracy of few-shot learning tasks. Hence, the combination of leveraging distinctive features and reduced intraclass variability at twelve WAP contributes to the improved accuracy of machine learning algorithms in classifying plant varieties.
In line with the findings of this study, previous research by Tan et al. [66] also supported the superiority of the NN algorithm. They demonstrated that NNs outperformed maximum likelihood in land cover classification using Landsat multispectral data. Similarly, Etheridge et al. [67] conducted a classification study based on overall error rate. They found that the probabilistic NN exhibited the highest level of reliability, followed by the backpropagation and categorical learning networks. These findings highlight the consistent success of NN-based approaches for different classification tasks.
Other studies have explored the effectiveness of NNs in various agricultural applications. For example, Senan et al. [68], Bouguettaya et al. [69,70,71], and Ramesh and Vydeki [72] utilized deep learning techniques based on a convolutional neural network (CNN) for paddy leaf disease classification. Muthukumaran et al. [73], Amaratunga et al. [74], and Abdullah et al. [75] performed paddy yield prediction and forecasting using artificial NNs (ANNs). Although these studies did not specifically investigate the classification of rice varieties, the successful utilization of NNs in related agricultural tasks further emphasizes the efficiency of these algorithms.
NNs are well suited to classification because of their efficiency, reliability, and accuracy, which in this study surpassed those of the mainstream methods. Despite longer training times, the precise outcomes outweigh this drawback. In our study, the NN required approximately 3 s of training, whereas the other individual algorithms required less than 0.2 s, so the NN's extended training time was still shorter than that of some ensemble algorithms. The increased training time should nevertheless be considered when applying this algorithm. The NN's ability to handle multi-class tasks and adapt through backpropagation sets it apart, excelling in the intricate classification of paddy rice varieties. Despite the potential drawback of training time, the accuracy and adaptability of the NN make it the preferred choice for paddy rice classification.
In this study, we explored the predictive power of various features for classifying different rice varieties. Interestingly, we observed that different features exhibited varying degrees of effectiveness for this classification task. Among the features examined, Green emerged as the most influential and impactful in accurately discerning rice varieties. Remarkably, this feature surpassed well-established vegetation indices, such as the LCI, GNDVI, NDVI, and LAI, in its ability to accurately classify the rice varieties treated in the present study. This finding highlights the significance of this feature and its potential to capture the essential characteristics that differentiate rice varieties.
Our results shed light on the importance of considering a diverse range of features when undertaking variety classification. Although vegetation indices have traditionally been relied upon for this purpose, our results emphasize that alternative features, such as the Green feature, can offer superior performance and contribute significantly to the accurate classification of rice varieties.
The selection of Green reflectance as the most important feature in determining rice variety can be attributed to several factors. First, Green reflectance captures vital information about vegetation health and vigor, as it represents the amount of light reflected by the plants in the relevant wavelength range. This feature is particularly meaningful for rice varieties, as their growth and development rely heavily on chlorophyll content, which affects their overall health and productivity. Higher Green reflectance values indicate healthier and more vigorous vegetation, which is indicative of specific rice varieties.
Second, the Green reflectance feature may possess unique spectral characteristics that are particularly discriminative for differentiating rice varieties. It can capture subtle differences in leaf structure, pigmentation, or physiological traits among the varieties, which are not adequately captured by other vegetation indices such as the LCI, GNDVI, NDVI, or LAI. These indices often focus on specific wavelengths or combinations of wavelengths, whereas Green reflectance provides a more comprehensive representation of the overall greenness of vegetation. Fu et al. [76] suggested that leaf greenness could potentially function as a convenient indicator for identifying genotypes with elevated photosynthetic capabilities.
In addition to its importance as a predictor, the Green reflectance parameter exhibits noteworthy intraclass variability. As mentioned earlier, the variability of a feature within each class provides valuable insight into its discriminative power. In the context of our study, Green reflectance demonstrated a significantly lower standard deviation across all growth stages when compared to other features. Figure 6 visually represents the distribution of Green reflectance and highlights its narrower variability. This reduced variability suggests that Green reflectance possesses distinct and consistent spectral characteristics that enable it to effectively differentiate among different rice varieties. The ability to discern subtle variations in Green reflectance across growth stages makes it a reliable feature for classification. This finding emphasizes the importance of the “green” reflectance as a significant contributor to the classification process.
Additionally, the high contribution of the Green reflectance feature could also be influenced by the specific growth stage of the rice plants during twelve WAP. Different growth stages exhibit varying spectral signatures, and during this stage, the Green reflectance feature may have stronger discriminatory power in distinguishing rice varieties. It is essential to consider the growth dynamics of rice plants and the corresponding physiological changes that occur at different stages when interpreting the significance of these features. The visualization provided in Figure 8 further supports the importance of the Green reflectance feature, demonstrating its prominent position as an essential predictor. The potential decrease in the AUC of the classification in the absence of this feature highlights its critical role in achieving accurate and reliable classification results for rice varieties.
These findings indicate that different features may be important for classifying different rice varieties. This information can be valuable for designing more effective machine learning algorithms for classifying rice varieties based on remote sensing data. For example, algorithms that weigh certain features more heavily for certain varieties may be able to achieve higher accuracy in classifying these varieties. Overall, this analysis provides important insights into the relationship between different features and classification of different rice varieties, which can inform future research in this area.
This study achieved its highest classification accuracy (0.644) using data from the twelfth WAP, which is significant within the context of this research methodology. As there are no prior studies available for comparison, this outcome is promising and underscores the novelty of our approach, suggesting potential enhancements through further research. Future investigations should explore alternative vegetation indices in conjunction with multispectral bands to improve classification. Deep learning algorithms, renowned for their ability to automatically extract relevant features, hold promise for this task owing to advancements in hardware and software technologies and the abundance of available data. Additionally, evaluating various paddy rice varieties can bolster the model’s capabilities and enhance the robustness of the research for reliable analysis of the study area.

5. Conclusions and Future Work

In conclusion, this study demonstrated the effectiveness of machine learning algorithms, particularly neural networks, in classifying different paddy rice varieties from the data taken by UAV sensors. The twelfth WAP period was identified as the optimal growth stage for accurate variety classification, considering the significant morphological differences and reduced intraclass variability. The Green reflectance parameter emerged as the most influential predictor, surpassing traditional vegetation indices, owing to its ability to capture essential information about vegetation health and discriminate spectral characteristics for this specific growth stage of rice plants. The combination of leveraging distinctive features and reduced intraclass variability contributed to the improved accuracy of the machine learning algorithms in classifying the rice varieties in this study.
Future research could explore alternative vegetation indices and deep learning algorithms to further enhance the classification outcomes. In the future, evaluating additional paddy rice varieties would strengthen the model’s capabilities and expand its applicability.

Author Contributions

Conceptualization, A.K.W., L.B.P., A.J. and C.H.; methodology, A.K.W.; validation, A.K.W., L.B.P. and C.H.; formal analysis, A.K.W.; data acquisition, A.K.W., A.A.S. and M.B.R.K.; writing—original draft preparation, A.K.W.; writing—review and editing, L.B.P., A.J., C.H. and H.K.; visualization, A.K.W.; supervision, L.B.P., A.J., C.H. and H.K.; project administration, A.K.W., A.J. and C.H. All authors have read and agreed to the published version of the manuscript.

Funding

The data collection for this research was funded by the Ministry of Education, Research, and Technology of Indonesia through the Prioritas Riset Nasional (PRN) scheme, based on reference number 2987/E4/AK.04/2021. This work was also supported by the JSPS RONPAKU (Dissertation Ph.D.) Program.

Data Availability Statement

The data used to support the findings of this study may be released upon application to the corresponding author.

Acknowledgments

We extend our heartfelt appreciation to Balai Besar Penelitian Padi (Indonesia Center for Rice Research) Subang, West Java for generously providing the necessary research facilities that greatly facilitated this study. We are sincerely grateful for their support and collaboration throughout the study. We express our sincere gratitude to the anonymous reviewers for their invaluable feedback and constructive comments, which have contributed significantly to the improvement of this manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hussain, S.; Fujii, T.; McGoey, S.; Yamada, M.; Ramzan, M.; Akmal, M. Evaluation of Different Rice Varieties for Growth and Yield Characteristics. J. Anim. Plant Sci. 2014, 24, 1504–1510. [Google Scholar]
  2. Karthikraja, M.; Sudhakar, P.; Ramesh, S.; Kumar, B.S. Evaluation of Rice Varieties for Growth and Yield Performance in Aerobic Cultivation. Int. J. Plant Soil Sci. 2022, 34, 532–538. [Google Scholar] [CrossRef]
  3. Hutapea, P.P.; Ginting, J.; Rahmawati, N. Growth And Production Of Several Rice Varieties with The Biochar from Different Sources of Materials. AGRITEPA J. Ilmu dan Teknol. Pertan. 2022, 9, 247–258. [Google Scholar] [CrossRef]
  4. Nagabovanalli Basavarajappa, P.; Shruthi, P.; Lingappa, M.; G, K.G.; Goudra Mahadevappa, S. Nutrient Requirement and Use Efficiency of Rice (Oryza sativa L.) as Influenced by Graded Levels of Customized Fertilizer. J. Plant Nutr. 2021, 44, 2897–2911. [Google Scholar] [CrossRef]
  5. Sun, T.; Yang, X.; Tang, S.; Han, K.; He, P.; Wu, L. Genotypic Variation in Nutrient Uptake Requirements of Rice Using the QUEFTS Model. Agronomy 2021, 11, 26. [Google Scholar] [CrossRef]
  6. Vu, D.H.; Stuerz, S.; Asch, F. Nutrient Uptake and Assimilation under Varying Day and Night Root Zone Temperatures in Lowland Rice. J. Plant Nutr. Soil Sci. 2020, 183, 602–614. [Google Scholar] [CrossRef]
  7. Astuti, L.P. Susceptibility of Four Rice Types to Sitophilus Oryzae Linnaeus (Coleoptera: Curculionidae). Agrivita 2019, 41, 277–283. [Google Scholar] [CrossRef]
  8. Syahri; Somantri, R.U. The Use of Improved Varieties Resistant to Pests and Diseases to Increase National Rice Production. J. Litbang Pert. 2016, 35, 25–36. [Google Scholar]
  9. Santoso, A.A.; Kartikawati, R.; Mellyga WP, D.; Supraptomo, E.; Fikra, M. Productivity of Four Rice Varieties and Pest Diseases with the Application of Environment Friendly Agriculture Technology in Jaken, Pati, Central Java. Agric 2022, 34, 35–44. [Google Scholar] [CrossRef]
  10. Rasheed, S.; Venkatesh, P.; Singh, D.R.; Renjini, V.R.; Jha, G.K.; Sharma, D.K. Who Cultivates Traditional Paddy Varieties and Why? Findings from Kerala, India. Curr. Sci. 2021, 121, 1188–1193. [Google Scholar] [CrossRef]
  11. Haridasan, A.; Thomas, J.; Raj, E.D. Deep Learning System for Paddy Plant Disease Detection and Classification. Environ. Monit. Assess. 2023, 195, 120. [Google Scholar] [CrossRef] [PubMed]
  12. Al Viandari, N.; Wihardjaka, A.; Pulunggono, H.B. Suwardi Sustainable Development Strategies of Rainfed Paddy Fields in Central Java, Indonesia: A Review. Caraka Tani J. Sustain. Agric. 2022, 37, 275–288. [Google Scholar] [CrossRef]
  13. Latif, M.S.; Kazmi, R.; Khan, N.; Majeed, R.; Ikram, S.; Ali-Shahid, M. Pest Prediction in Rice Using IoT and Feed forward Neural Network. KSII Trans. Internet Inf. Syst. 2022, 161, 133–153. [Google Scholar] [CrossRef]
  14. Karthikeyan, L.; Chawla, I.; Mishra, A.K. A Review of Remote Sensing Applications in Agriculture for Food Security: Crop Growth and Yield, Irrigation, and Crop Losses. J. Hydrol. 2020, 586, 124905. [Google Scholar] [CrossRef]
  15. Sishodia, R.P.; Ray, R.L.; Singh, S.K. Applications of Remote Sensing in Precision Agriculture: A Review. Remote Sens. 2020, 12, 3136. [Google Scholar] [CrossRef]
  16. Mutanga, O.; Dube, T.; Galal, O. Remote Sensing of Crop Health for Food Security in Africa: Potentials and Constraints. Remote Sens. Appl. Soc. Environ. 2017, 8, 231–239. [Google Scholar] [CrossRef]
  17. Usha, K.; Singh, B. Potential Applications of Remote Sensing in Horticulture—A Review. Sci. Hortic. 2013, 153, 71–83. [Google Scholar] [CrossRef]
  18. Ecke, S.; Dempewolf, J.; Frey, J.; Schwaller, A.; Endres, E.; Klemmt, H.J.; Tiede, D.; Seifert, T. UAV-Based Forest Health Monitoring: A Systematic Review. Remote Sens. 2022, 14, 3205. [Google Scholar] [CrossRef]
  19. Laporte-Fauret, Q.; Marieu, V.; Castelle, B.; Michalet, R.; Bujan, S.; Rosebery, D. Low-Cost UAV for High-Resolution and Large-Scale Coastal Dune Change Monitoring Using Photogrammetry. J. Mar. Sci. Eng. 2019, 7, 63. [Google Scholar] [CrossRef]
  20. Bareth, G.; Aasen, H.; Bendig, J.; Gnyp, M.L.; Bolten, A.; Jung, A.; Michels, R.; Soukkamäki, J. Low-Weight and UAV-Based Hyperspectral Full-Frame Cameras for Monitoring Crops: Spectral Comparison with Portable Spectroradiometer Measurements. Photogramm. Fernerkund. Geoinf. 2015, 103, 69–79. [Google Scholar] [CrossRef]
  21. Al-Rawabdeh, A.; He, F.; Moussa, A.; El-Sheimy, N.; Habib, A. Using an Unmanned Aerial Vehicle-Based Digital Imaging System to Derive a 3D Point Cloud for Landslide Scarp Recognition. Remote Sens. 2016, 8, 95. [Google Scholar] [CrossRef]
  22. Linli, L.; Ying, L.; Xuyang, Z.; Yongdong, S.; Xiaoyang, C. Application of Unmanned Aerial Vehicle in Surface Soil Characterization and Geological Disaster Monitoring in Mining Areas. Meitiandizhi Yu Kantan/Coal Geol. Explor. 2021, 49, 25. [Google Scholar] [CrossRef]
  23. Gómez, C.; Goodbody, T.R.H.; Coops, N.C.; Álvarez-Taboada, F.; Sanz-Ablanedo, E. Forest Ecosystem Monitoring Using Unmanned Aerial Systems. In Unmanned Aerial Remote Sensing; CRC Press: Boca Raton, FL, USA, 2020. [Google Scholar]
  24. Liu, J.; Chen, P.; Xu, X. Estimating Wheat Coverage Using Multispectral Images Collected by Unmanned Aerial Vehicles and a New Sensor. In Proceedings of the 2018 7th International Conference on Agro-Geoinformatics, Agro-Geoinformatics 2018, Hangzhou, China, 6–9 August 2018. [Google Scholar]
  25. Pilarska, M.; Ostrowski, W.; Bakuła, K.; Górski, K.; Kurczyński, Z. The Potential of Light Laser Scanners Developed for Unmanned Aerial Vehicles—The Review and Accuracy. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2016, XLII-2/W2, 87–95. [Google Scholar] [CrossRef]
  26. Niu, H. A UAV Resolution and Waveband Aware Path Planning for Onion Irrigation Treatments Inference. In Proceedings of the 2019 International Conference on Unmanned Aircraft Systems (ICUAS), Atlanta, GA, USA, 11–14 June 2019; pp. 808–812. [Google Scholar] [CrossRef]
  27. Fawcett, D.; Panigada, C.; Tagliabue, G.; Boschetti, M.; Celesti, M.; Evdokimov, A.; Biriukova, K.; Colombo, R.; Miglietta, F.; Rascher, U.; et al. Multi-Scale Evaluation of Drone-Based Multispectral Surface Reflectance and Vegetation Indices in Operational Conditions. Remote Sens. 2020, 12, 514. [Google Scholar] [CrossRef]
  28. Gitelson, A.A.; Chivkunova, O.B.; Merzlyak, M.N. Nondestructive Estimation of Anthocyanins and Chlorophylls in Anthocyanic Leaves. Am. J. Bot. 2009, 96, 1861–1868. [Google Scholar] [CrossRef] [PubMed]
  29. Peng, Y.; Nguy-Robertson, A.; Arkebauer, T.; Gitelson, A.A. Assessment of Canopy Chlorophyll Content Retrieval in Maize and Soybean: Implications of Hysteresis on the Development of Generic Algorithms. Remote Sens. 2017, 9, 226. [Google Scholar] [CrossRef]
  30. Widjaja Putra, B.T.; Soni, P. Evaluating NIR-Red and NIR-Red Edge External Filters with Digital Cameras for Assessing Vegetation Indices under Different Illumination. Infrared Phys. Technol. 2017, 81, 148–156. [Google Scholar] [CrossRef]
  31. Boiarskii, B. Comparison of NDVI and NDRE Indices to Detect Differences in Vegetation and Chlorophyll Content. J. Mech. Contin. Math. Sci. 2019, spl1, 20–29. [Google Scholar] [CrossRef]
  32. Onojeghuo, A.O.; Blackburn, G.A.; Wang, Q.; Atkinson, P.M.; Kindred, D.; Miao, Y. Mapping Paddy Rice Fields by Applying Machine Learning Algorithms to Multi-Temporal Sentinel-1A and Landsat Data. Int. J. Remote Sens. 2017, 39, 1042–1067. [Google Scholar] [CrossRef]
  33. Zheng, B.; Myint, S.W.; Thenkabail, P.S.; Aggarwal, R.M. A Support Vector Machine to Identify Irrigated Crop Types Using Time-Series Landsat NDVI Data. Int. J. Appl. Earth Obs. Geoinf. 2015, 34, 103–112. [Google Scholar] [CrossRef]
  34. Wang, L.; Liu, D.; Pu, H.; Sun, D.-W.; Gao, W.; Xiong, Z. Use of Hyperspectral Imaging to Discriminate the Variety and Quality of Rice. Food Anal. Methods 2015, 8, 515–523. [Google Scholar] [CrossRef]
  35. Darvishsefat, A.A.; Abbasi, M.; Schaepman, M.E. Evaluation of Spectral Reflectance of Seven Iranian Rice Varieties Canopies. J. Agric. Sci. Technol. 2011, 13, 1091–1104. [Google Scholar]
  36. Karaçali, B.; Snyder, W. Automatic Target Detection Using Multispectral Imaging. In Proceedings of the Applied Imagery Pattern Recognition Workshop, Washington, DC, USA, 16–18 October 2002; Volume 2002. [Google Scholar]
  37. Zhou, X.; Marani, M.; Albertson, J.D.; Silvestri, S. Hyperspectral and Multispectral Retrieval of Suspended Sediment in Shallow Coastal Waters Using Semi-Analytical and Empirical Methods. Remote Sens. 2017, 9, 393. [Google Scholar] [CrossRef]
  38. Badzmierowski, M.J.; McCall, D.S.; Evanylo, G. Using Hyperspectral and Multispectral Indices to Detect Water Stress for an Urban Turfgrass System. Agronomy 2019, 9, 439. [Google Scholar] [CrossRef]
  39. Slameto; Fariroh, I.; Rusdiana, R.Y.; Kriswanto, B. Nitrogen Fertilizer Reduction on Way Apo Buru and Inpari 33 Rice Varieties. Indones. J. Agron. 2022, 50, 132–138. [Google Scholar] [CrossRef]
  40. Ghazali, M.F.; Wikantika, K.; Aryantha, I.N.P.; Maulani, R.R.; Yayusman, L.F.; Sumantri, D.I. Integration of Spectral Measurement and UAV for Paddy Leaves Chlorophyll Content Estimation. Sci. Agric. Bohem. 2020, 51, 86–97. [Google Scholar] [CrossRef]
  41. Agustian, A.; Aldillah, R.; Nurjati, E.; Yaumidin, U.K.; Muslim, C.; Ariningsih, E.; Rachmawati, R.R. Analysis of the Utilization of Rice Seeds of Improved Variety (Inpari 32) in Indramayu District, West Java. IOP Conf. Ser. Earth Environ. Sci. 2022, 1114, 012098. [Google Scholar] [CrossRef]
  42. Olsson, P.O.; Vivekar, A.; Adler, K.; Garcia Millan, V.E.; Koc, A.; Alamrani, M.; Eklundh, L. Radiometric Correction of Multispectral Uas Images: Evaluating the Accuracy of the Parrot Sequoia Camera and Sunshine Sensor. Remote Sens. 2021, 13, 577. [Google Scholar] [CrossRef]
  43. Li, F.; Miao, Y.; Feng, G.; Yuan, F.; Yue, S.; Gao, X.; Liu, Y.; Liu, B.; Ustin, S.L.; Chen, X. Improving Estimation of Summer Maize Nitrogen Status with Red Edge-Based Spectral Vegetation Indices. F. Crop. Res. 2014, 157, 111–123. [Google Scholar] [CrossRef]
  44. Huete, A.; Didan, K.; Miura, T.; Rodriguez, E.P.; Gao, X.; Ferreira, L.G. Overview of the Radiometric and Biophysical Performance of the MODIS Vegetation Indices. Remote Sens. Environ. 2002, 83, 195–213. [Google Scholar] [CrossRef]
  45. Gitelson, A.A.; Kaufman, Y.J.; Merzlyak, M.N. Use of a Green Channel in Remote Sensing of Global Vegetation from EOS- MODIS. Remote Sens. Environ. 1996, 58, 289–298. [Google Scholar] [CrossRef]
  46. Huete, A.R. A Soil-Adjusted Vegetation Index (SAVI). Remote Sens. Environ. 1988, 25, 295–309. [Google Scholar] [CrossRef]
  47. Wan, L.; Cen, H.; Zhu, J.; Li, Y.; Zhu, Y.; Sun, D.; Weng, H.; He, Y. Combining UAV-Based Vegetation Indices, Canopy Height and Canopy Coverage to Improve Rice Yield Prediction under Different Nitrogen Levels. In Proceedings of the 2019 ASABE Annual International Meeting, Boston, MA, USA, 7–10 July 2019. [Google Scholar]
  48. Carlson, T.N.; Ripley, D.A. On the Relation between NDVI, Fractional Vegetation Cover, and Leaf Area Index. Remote Sens. Environ. 1997, 62, 241–252. [Google Scholar] [CrossRef]
  49. Freund, Y.; Schapire, R.E. Experiments with a New Boosting Algorithm. In ICML’96: Proceedings of the Thirteenth International Conference on Machine Learning, Bari, Italy, 3–6 July 1996; Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, 1996. [Google Scholar]
  50. Friedman, J.H. Greedy Function Approximation: A Gradient Boosting Machine. Ann. Stat. 2001, 29, 1189–1232. [Google Scholar] [CrossRef]
  51. Cox, D.R. The Regression Analysis of Binary Sequences. J. R. Stat. Soc. Ser. B 1958, 20, 215–232. [Google Scholar] [CrossRef]
  52. Lian, B.; Kartal, Y.; Lewis, F.L.; Mikulski, D.G.; Hudas, G.R.; Wan, Y.; Davoudi, A. Anomaly Detection and Correction of Optimizing Autonomous Systems with Inverse Reinforcement Learning. IEEE Trans. Cybern. 2022, 53, 4555–4566. [Google Scholar] [CrossRef]
  53. Rosenblatt, F. The Perceptron: A Perceiving and Recognizing Automation; Report; Cornell Aeronautical Laboratory: New York, NY, USA, 1957. [Google Scholar]
  54. Breiman, L. Random Forests—Random Features. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  55. Wolpert, D.H. Stacked Generalization. Neural Netw. 1992, 5, 241–259. [Google Scholar] [CrossRef]
  56. Vapnik, V.N. The Nature of Statistical Learning Theory; Springer Science & Business Media: Berlin/Heidelberg, Germany, 1995. [Google Scholar]
  57. Quinlan, J.R. Induction of Decision Trees. Mach. Learn. 1986, 1, 81–106. [Google Scholar] [CrossRef]
  58. Demšar, J.; Curk, T.; Erjavec, A.; Gorup, Č.; Hočevar, T.; Milutinovič, M.; Možina, M.; Polajnar, M.; Toplak, M.; Starič, A.; et al. Orange: Data Mining Toolbox in Python. J. Mach. Learn. Res. 2013, 14, 2349–2353. [Google Scholar]
  59. Lundberg, S.M.; Lee, S.-I. A Unified Approach to Interpreting Model Predictions. In Advances in Neural Information Processing Systems; Guyon, I., von Luxburg, U., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R., Eds.; Curran Associates, Inc.: Boston, MA, USA, 2017; Volume 30. [Google Scholar]
  60. Huete, A.R. 11—Remote sensing for environmental monitoring. In Environmental Monitoring and Characterization; Artiola, J.F., Pepper, I.L., Brusseau, M.L., Eds.; Academic Press: Burlington, ON, Canada, 2004; pp. 183–206. ISBN 978-0-12-064477-3. [Google Scholar]
  61. Domiri, D.D. The New Method for Detecting Early Planting and Bare Land Condition in Paddy Field by Using Vegetation-Bare-Water Index. In Proceedings of the 2nd International Conference of Indonesian Society for Remote Sensing, Yogyakarta, Indonesia, 17–19 October 2016. [Google Scholar]
  62. Wijayanto, A.K.; Prasetyo, L.B.; Setiawan, Y. Spectral Pattern of Paddy as Response to Drought Condition: An Experimental Study. J. Pengelolaan Sumberd. Alam dan Lingkung 2021, 11, 83–92. [Google Scholar] [CrossRef]
  63. Choi, W.Y.; Park, H.-K.; Kang, S.; Kim, S.S.; Choi, S.-Y. Effects Water Stress on Physiological Traits at Various Growth-stages of Rice. Korean J. Crop Sci. 1999, 44, 282–287. [Google Scholar]
  64. Sui, B.; Feng, X.; Tian, G.; Hu, X.; Shen, Q.; Guo, S. Optimizing Nitrogen Supply Increases Rice Yield and Nitrogen Use Efficiency by Regulating Yield Formation Factors. F. Crop. Res. 2013, 150, 99–107. [Google Scholar] [CrossRef]
  65. Hao, F.; Cheng, J.; Wang, L.; Cao, J. Instance-Level Embedding Adaptation for Few-Shot Learning. IEEE Access 2019, 7, 100501–100511. [Google Scholar] [CrossRef]
  66. Tan, K.C.; Lim, H.S.; Jafri, M.Z.M. Comparison of Neural Network and Maximum Likelihood Classifiers for Land Cover Classification Using Landsat Multispectral Data. In Proceedings of the 2011 IEEE Conference on Open Systems, ICOS 2011, Langkawi, Malaysia, 25–28 September 2011. [Google Scholar]
  67. Etheridge, H.L.; Sriram, R.S.; Hsu, H.Y.K. A Comparison of Selected Artificial Neural Networks That Help Auditors Evaluate Client Financial Viability. Decis. Sci. 2000, 31, 531–550. [Google Scholar] [CrossRef]
  68. Senan, N.; Aamir, M.; Ibrahim, R.; Taujuddin, N.S.A.M.; Muda, W.H.N.W. An Efficient Convolutional Neural Network for Paddy Leaf Disease and Pest Classification. Int. J. Adv. Comput. Sci. Appl. 2020, 11, 116–122. [Google Scholar] [CrossRef]
  69. Bouguettaya, A.; Zarzour, H.; Kechida, A.; Taberkit, A.M. Recent Advances on UAV and Deep Learning for Early Crop Diseases Identification: A Short Review. In Proceedings of the 2021 International Conference on Information Technology (ICIT), Amman, Jordan, 14–15 July 2021; pp. 334–339. [Google Scholar] [CrossRef]
  70. Bouguettaya, A.; Zarzour, H.; Kechida, A.; Taberkit, A.M. A Survey on Deep Learning-Based Identification of Plant and Crop Diseases from UAV-Based Aerial Images. Cluster Comput. 2023, 26, 1297–1317. [Google Scholar] [CrossRef]
  71. Bouguettaya, A.; Zarzour, H.; Kechida, A.; Taberkit, A.M. Deep Learning Techniques to Classify Agricultural Crops through UAV Imagery: A Review. Neural Comput. Appl. 2022, 34, 9511–9536. [Google Scholar] [CrossRef]
  72. Ramesh, S.; Vydeki, D. Recognition and Classification of Paddy Leaf Diseases Using Optimized Deep Neural Network with Jaya Algorithm. Inf. Process. Agric. 2020, 7, 249–260. [Google Scholar] [CrossRef]
  73. Muthukumaran, S.; Geetha, P.; Ramaraj, E. Multi-Objective Optimization with Artificial Neural Network Based Robust Paddy Yield Prediction Model. Intell. Autom. Soft Comput. 2023, 35, 215–230. [Google Scholar] [CrossRef]
  74. Amaratunga, V.; Wickramasinghe, L.; Perera, A.; Jayasinghe, J.; Rathnayake, U.; Zhou, J.G. Artificial Neural Network to Estimate the Paddy Yield Prediction Using Climatic Data. Math. Probl. Eng. 2020, 2020, 1–11. [Google Scholar] [CrossRef]
  75. Abdullah, S.N.S.; Shabri, A.; Samsudin, R. Use of Empirical Mode Decomposition in Improving Neural Network Forecasting of Paddy Price. MATEMATIKA Malays. J. Ind. Appl. Math. 2019, 35, 53–64. [Google Scholar] [CrossRef]
  76. Fu, X.; Zhou, L.; Huang, J.; Mo, W.; Zhang, J.; Li, J.; Wang, H.; Huang, X. Relating Photosynthetic Performance to Leaf Greenness in Litchi: A Comparison among Genotypes. Sci. Hortic. 2013, 152, 16–25. [Google Scholar] [CrossRef]
Figure 1. General flow of the study.
Figure 2. Map depicting study location at Subang (red dot), West Java Province, Indonesia (red rectangle in the overview map).
Figure 3. DJI Inspire 1 with Parrot Sequoia attached (a) and illustration of flight planning (b).
Figure 4. Experimental plots used in this study. There were 24 plots at two locations (Unit 1 and Unit 2), with 12 plots at each location and four plots for each variety at a single location.
Figure 5. Dynamic changes in average spectral reflectance and VIs of the observed plants across growth stages, from six WAP to twelve WAP.
Figure 6. Standard deviation (SD) of the top five selected features for each growth stage, highlighting twelve WAP as the period with the narrowest distribution.
Figure 7. Box plots depicting the performance of the nine algorithms in classifying the three paddy rice varieties (blue bar: INPARI-32; red bar: INPARI-33; green bar: INPARI-43) using the data for twelve WAP: (a) NN, (b) SVM, (c) RF, (d) Naïve Bayes, (e) AdaBoost, (f) decision tree, (g) stack algorithm, (h) gradient boosting, and (i) logistic regression.
Figure 8. Permutation feature importance of the five selected features at twelve WAP, showing Green reflectance as the most important feature.
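Permutation importance of this kind can be reproduced with standard tooling. The sketch below is illustrative only: it uses scikit-learn with synthetic placeholder data in place of the field measurements, the feature names follow Table 4, and the model settings are assumptions rather than the configuration used in the study.

```python
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic placeholder data standing in for the five selected twelve-WAP features.
feature_names = ["Green", "NDVI", "LCI", "GNDVI", "LAI"]
X, y = make_classification(n_samples=480, n_features=5, n_informative=4,
                           n_redundant=1, n_classes=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

model = MLPClassifier(max_iter=2000).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=30, random_state=0)
for name, mean, std in zip(feature_names, result.importances_mean, result.importances_std):
    print(f"{name}: {mean:.3f} +/- {std:.3f}")
```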
Figure 9. Features contributing to the detection of INPARI-32 (a), INPARI-33 (b), and INPARI-43 (c) using a neural network.
Figure 10. Normal distributions and the corresponding probability chart showing that the varieties are distinguishable by Green reflectance for the twelve WAP data.
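The separation shown in Figure 10 can be summarized by fitting a normal distribution to the Green reflectance of each variety. The snippet below is a minimal sketch using randomly generated placeholder values rather than measured reflectance; only the ordering, with INPARI-43 highest, mirrors the reported pattern.

```python
import numpy as np
from scipy import stats

# Synthetic placeholder Green reflectance samples per variety (not measured data).
rng = np.random.default_rng(0)
green = {
    "INPARI-32": rng.normal(0.05, 0.005, 160),
    "INPARI-33": rng.normal(0.06, 0.005, 160),
    "INPARI-43": rng.normal(0.08, 0.006, 160),
}

for variety, values in green.items():
    mu, sigma = stats.norm.fit(values)  # maximum-likelihood normal fit per variety
    print(f"{variety}: mean = {mu:.4f}, SD = {sigma:.4f}")
```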
Figure 11. At twelve WAP, the three varieties could be visually distinguished using Green reflectance; in particular, INPARI-43, which has high Green reflectance, dominated the field.
Table 1. Sampling points.
Unit | Varieties | Number of Repetitions | Sample Points | Total
1 | INPARI-32 | 4 | 20 | 80
1 | INPARI-33 | 4 | 20 | 80
1 | INPARI-43 | 4 | 20 | 80
2 | INPARI-32 | 4 | 20 | 80
2 | INPARI-33 | 4 | 20 | 80
2 | INPARI-43 | 4 | 20 | 80
Total | | | | 480
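Each variety was thus sampled at 20 points in each of its four replicate plots (4 × 20 = 80 points per variety per unit), and with three varieties in each of the two units the design yields 2 × 3 × 80 = 480 sample points in total.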
Table 2. Formulas for generating vegetation indices used as features.
VI Name | Formula | Reference
NDVI | NDVI = (NIR − Red)/(NIR + Red) | [43,44]
LCI | LCI = (NIR − RedEdge)/(NIR + RedEdge) | [43]
GNDVI | GNDVI = (NIR − Green)/(NIR + Green) | [45]
SAVI | SAVI = ((NIR − Red)/(NIR + Red + 0.5)) × 1.5 | [46]
OSAVI | OSAVI = ((NIR − Red)/(NIR + Red + 0.16)) × (1 + 0.16) | [47]
LAI | LAI = [ln(NIR/Red)/(1.4 × (NIR/Red))] − 1 | [48]
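The formulas in Table 2 map directly onto array operations over the reflectance bands. The following is a minimal sketch assuming the four Sequoia bands are already available as NumPy reflectance arrays; the variable names and the small eps guard against division by zero are illustrative additions, not part of the published formulas.

```python
import numpy as np

def vegetation_indices(nir, red, green, red_edge):
    """Compute the six vegetation indices of Table 2 from reflectance arrays.

    All inputs are NumPy arrays of reflectance values with the same shape.
    """
    eps = 1e-9  # numerical guard for bare-soil or water pixels
    ndvi = (nir - red) / (nir + red + eps)
    lci = (nir - red_edge) / (nir + red_edge + eps)
    gndvi = (nir - green) / (nir + green + eps)
    savi = ((nir - red) / (nir + red + 0.5)) * 1.5
    osavi = ((nir - red) / (nir + red + 0.16)) * (1 + 0.16)
    ratio = nir / (red + eps)
    lai = np.log(ratio) / (1.4 * ratio) - 1
    return {"NDVI": ndvi, "LCI": lci, "GNDVI": gndvi,
            "SAVI": savi, "OSAVI": osavi, "LAI": lai}
```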
Table 3. Classification algorithms used in this study.
Algorithm | Reference
AdaBoost | [49]
Gradient boosting | [50]
Logistic regression | [51]
Naïve Bayes | [52]
NN | [53]
RF | [54]
Stack algorithm | [55]
SVM | [56]
Decision tree | [57]
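The nine algorithms in Table 3 can be benchmarked side by side with off-the-shelf implementations. The sketch below uses scikit-learn on synthetic placeholder data standing in for the UAV-derived features; the library choice, hyperparameters, stacking composition, and cross-validation setup are assumptions, not the configuration reported in the study.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic placeholder data standing in for the band/VI features and variety labels.
X, y = make_classification(n_samples=480, n_features=10, n_informative=5,
                           n_classes=3, random_state=0)

models = {
    "AdaBoost": AdaBoostClassifier(),
    "Gradient boosting": GradientBoostingClassifier(),
    "Logistic regression": LogisticRegression(max_iter=1000),
    "Naive Bayes": GaussianNB(),
    "NN": MLPClassifier(max_iter=2000),
    "RF": RandomForestClassifier(),
    "Stack algorithm": StackingClassifier(
        estimators=[("rf", RandomForestClassifier()),
                    ("svm", SVC(probability=True))],
        final_estimator=LogisticRegression(max_iter=1000)),
    "SVM": SVC(probability=True),
    "Decision tree": DecisionTreeClassifier(),
}

scoring = ["roc_auc_ovr", "accuracy", "f1_macro", "precision_macro", "recall_macro"]
for name, model in models.items():
    cv = cross_validate(model, X, y, cv=5, scoring=scoring)
    print(name, {m: round(cv[f"test_{m}"].mean(), 3) for m in scoring})
```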
Table 4. Summary of feature selection (O = remain, X = dropped).
Feature | Six WAP | Nine WAP | Twelve WAP
NIR | O | O | X
Green | O | O | O
Red | O | X | X
Red edge | O | O | X
NDVI | X | X | O
LCI | X | X | O
GNDVI | X | X | O
SAVI | X | O | X
OSAVI | O | O | X
LAI | X | X | O
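For reference, the selection summarized in Table 4 can be encoded as a simple lookup so that the per-stage feature subsets are applied consistently downstream. The sketch below (pandas, with hypothetical column names matching the table) only transcribes the table; it does not reproduce the selection procedure itself.

```python
import pandas as pd

# Features retained (O) at each growth stage, transcribed from Table 4.
SELECTED_FEATURES = {
    6:  ["NIR", "Green", "Red", "Red edge", "OSAVI"],
    9:  ["NIR", "Green", "Red edge", "SAVI", "OSAVI"],
    12: ["Green", "NDVI", "LCI", "GNDVI", "LAI"],
}

def subset_features(samples: pd.DataFrame, wap: int) -> pd.DataFrame:
    """Keep only the columns retained by feature selection at the given WAP."""
    return samples[SELECTED_FEATURES[wap]]
```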
Table 5. Average performance of the nine machine learning algorithms over all classes. A single asterisk (*) indicates the highest value within the same growth stage and a double asterisk (**) indicates the highest value across all growth stages.
Growth Stage (WAP) | Algorithm | AUC 1 | CA 2 | F1 3 | Precision | Recall
6 | AdaBoost | 0.648 | 0.530 | 0.535 | 0.545 | 0.530
6 | Gradient boosting | 0.792 * | 0.602 | 0.598 | 0.596 | 0.602
6 | Logistic regression | 0.634 | 0.458 | 0.518 | 0.417 | 0.458
6 | Naïve Bayes | 0.722 | 0.417 | 0.500 | 0.499 | 0.517
6 | NN | 0.790 | 0.618 * | 0.612 * | 0.608 * | 0.618 *
6 | RF | 0.762 | 0.571 | 0.569 | 0.568 | 0.571
6 | Stack algorithm | 0.786 | 0.608 | 0.597 | 0.592 | 0.608
6 | SVM | 0.554 | 0.357 | 0.344 | 0.346 | 0.357
6 | Decision tree | 0.662 | 0.533 | 0.531 | 0.533 | 0.533
9 | AdaBoost | 0.590 | 0.454 | 0.455 | 0.458 | 0.454
9 | Gradient boosting | 0.696 | 0.508 | 0.503 | 0.500 | 0.508
9 | Logistic regression | 0.595 | 0.457 | 0.405 | 0.417 | 0.457
9 | Naïve Bayes | 0.700 | 0.517 | 0.509 | 0.508 | 0.517
9 | NN | 0.750 * | 0.590 | 0.580 | 0.579 | 0.590
9 | RF | 0.704 | 0.549 | 0.545 | 0.542 | 0.549
9 | Stack algorithm | 0.743 | 0.600 * | 0.581 * | 0.586 * | 0.600 *
9 | SVM | 0.649 | 0.429 | 0.412 | 0.423 | 0.429
9 | Decision tree | 0.682 | 0.571 | 0.567 | 0.567 | 0.571
12 | AdaBoost | 0.662 | 0.550 | 0.550 | 0.550 | 0.550
12 | Gradient boosting | 0.802 | 0.625 | 0.622 | 0.621 | 0.625
12 | Logistic regression | 0.548 | 0.372 | 0.363 | 0.358 | 0.372
12 | Naïve Bayes | 0.717 | 0.506 | 0.494 | 0.490 | 0.506
12 | NN | 0.804 ** | 0.644 ** | 0.642 ** | 0.642 ** | 0.644 **
12 | RF | 0.788 | 0.628 | 0.626 | 0.626 | 0.628
12 | Stack algorithm | 0.800 | 0.641 | 0.636 | 0.635 | 0.637
12 | SVM | 0.646 | 0.359 | 0.368 | 0.387 | 0.359
12 | Decision tree | 0.666 | 0.528 | 0.532 | 0.537 | 0.528
1 AUC: area under the curve; 2 CA: classification accuracy; 3 F1: F1 score.
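The five columns of Table 5 correspond to standard multi-class metrics. A minimal sketch of how they can be computed with scikit-learn is given below; the one-vs-rest AUC and macro averaging are assumptions about the averaging scheme, which is not restated in this section.

```python
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)

def performance_summary(y_true, y_pred, y_proba, classes):
    """AUC, CA, F1, precision, and recall as tabulated in Table 5.

    y_proba holds per-class probabilities with columns ordered as in `classes`.
    """
    return {
        "AUC": roc_auc_score(y_true, y_proba, multi_class="ovr", labels=classes),
        "CA": accuracy_score(y_true, y_pred),
        "F1": f1_score(y_true, y_pred, average="macro"),
        "Precision": precision_score(y_true, y_pred, average="macro"),
        "Recall": recall_score(y_true, y_pred, average="macro"),
    }
```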
Table 6. Performance of the nine machine learning algorithms for classifying each rice variety in terms of classification accuracy. A single asterisk (*) indicates the highest value within the same variety and the same growth stage, while a double asterisk (**) indicates the highest value across all varieties in the same growth stage.
Growth Stage (WAP) | Algorithm | INPARI-32 | INPARI-33 | INPARI-43
6 | AdaBoost | 0.781 | 0.671 | 0.608
6 | Gradient boosting | 0.824 | 0.740 | 0.639
6 | Logistic regression | 0.655 | 0.652 | 0.608
6 | Naïve Bayes | 0.743 | 0.665 | 0.627
6 | NN | 0.828 | 0.759 | 0.649 *
6 | RF | 0.815 | 0.708 | 0.618
6 | Stack algorithm | 0.834 ** | 0.743 * | 0.639
6 | SVM | 0.542 | 0.624 | 0.549
6 | Decision tree | 0.777 | 0.680 | 0.608
9 | AdaBoost | 0.705 | 0.625 | 0.578
9 | Gradient boosting | 0.765 | 0.654 | 0.597
9 | Logistic regression | 0.679 | 0.603 | 0.632
9 | Naïve Bayes | 0.721 | 0.689 | 0.625
9 | NN | 0.797 | 0.721 * | 0.663
9 | RF | 0.781 | 0.683 | 0.635
9 | Stack algorithm | 0.816 ** | 0.714 | 0.670
9 | SVM | 0.651 | 0.641 | 0.565
9 | Decision tree | 0.778 | 0.686 | 0.679 *
12 | AdaBoost | 0.703 | 0.647 | 0.750
12 | Gradient boosting | 0.728 | 0.694 * | 0.828
12 | Logistic regression | 0.609 | 0.491 | 0.644
12 | Naïve Bayes | 0.694 | 0.597 | 0.722
12 | NN | 0.753 * | 0.694 * | 0.841 **
12 | RF | 0.750 | 0.684 | 0.828
12 | Stack algorithm | 0.750 | 0.688 | 0.831
12 | SVM | 0.500 | 0.497 | 0.722
12 | Decision tree | 0.684 | 0.597 | 0.775
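Table 6 reports accuracy separately for each variety. One plausible reading (an assumption, since the exact per-variety definition is described earlier in the paper and not restated here) is a one-vs-rest accuracy derived from the confusion matrix, sketched below.

```python
from sklearn.metrics import confusion_matrix

def per_variety_accuracy(y_true, y_pred, classes):
    """One-vs-rest accuracy for each variety, derived from the confusion matrix."""
    cm = confusion_matrix(y_true, y_pred, labels=classes)
    total = cm.sum()
    scores = {}
    for i, variety in enumerate(classes):
        tp = cm[i, i]                  # correctly detected as this variety
        fn = cm[i, :].sum() - tp       # missed instances of this variety
        fp = cm[:, i].sum() - tp       # other varieties labeled as this one
        tn = total - tp - fn - fp      # everything else, correctly excluded
        scores[variety] = (tp + tn) / total
    return scores
```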
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
