Article

Classifying Stand Compositions in Clover Grass Based on High-Resolution Multispectral UAV Images

1
Remote Sensing Group, Institute for Computer Science, Osnabrück University, 49074 Osnabrück, Germany
2
Faculty of Agricultural Sciences and Landscape Architecture, Osnabrück University of Applied Sciences, 49090 Osnabrück, Germany
*
Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(14), 2684; https://doi.org/10.3390/rs16142684
Submission received: 28 June 2024 / Revised: 18 July 2024 / Accepted: 19 July 2024 / Published: 22 July 2024
(This article belongs to the Special Issue Crops and Vegetation Monitoring with Remote/Proximal Sensing II)

Abstract

In organic farming, clover is an important basis for green manure in crop rotation systems due to its nitrogen-fixing effect. However, clover is often sown in mixtures with grass to achieve a yield-increasing effect. In order to determine the quantity and distribution of clover and its influence on the subsequent crops, clover plants must be identified at the individual plant level and spatially differentiated from grass plants. In practice, this is usually done by visual estimation or extensive field sampling. High-resolution unmanned aerial vehicles (UAVs) offer a more efficient alternative. In the present study, clover and grass plants were classified based on spectral information from high-resolution UAV multispectral images and texture features using a random forest classifier. Three different observation dates were analyzed in order to depict the phenological development of the clover and grass distributions. To reduce data redundancy and processing time, relevant texture features were selected based on a wrapper analysis and combined with the original bands. Including these texture features, a significant improvement in classification accuracy of up to 8% was achieved compared to a classification based on the original bands only. Depending on the phenological stage observed, this resulted in overall accuracies between 86% and 91%. Consequently, high-resolution UAV imagery allows for precise management recommendations in precision agriculture, with site-specific fertilization measures.

Graphical Abstract

1. Introduction

Increasing nitrate pollution in the soil prompted the European Commission (EC) to adopt a nitrate directive to reduce the use of mineral fertilizers [1]. Taking into account the growing demand for food as a result of global population growth, this poses major challenges for agriculture. In the recent past, rising energy costs due to global political conflicts have also led to an increase in fertilizer prices [2]. Alternative, sustainable methods for stabilizing and increasing yields are needed to ensure long-term food security. Legumes such as red or white clover can provide an ecological solution. Primarily used as a forage crop, clover is often cultivated in combination with grass species as temporary grassland in a crop rotation. There, clover serves as a natural fertilizer for subsequent crops, as it can fix atmospheric nitrogen (N) [3,4]. Furthermore, it helps to increase biomass production by suppressing weeds and reducing soil degradation [5]. Clover is usually sown in mixtures with grass in order to achieve higher yield stability and improved feed quality compared to pure grass or clover stands [6,7]. Within a crop rotation, grasses can contribute to better rooting of the soil, while clover makes atmospheric nitrogen available to the grass plants [8,9]. Here, the amount of N fixation correlates strongly with legume biomass [10] and can be further increased by competition with non-legumes such as grasses [11]. Due to variable soil conditions, the spatial distribution of clover and grass plants can vary, which may result in differences in N availability in the soil [12].
Information on the spatial distribution of clover in clover grass mixtures can be used to derive the expected forage quality and the potential nitrogen input [4]. Conventional fertilization measures in temporary grasslands are usually based on a visual assessment of the crop structure. Although more reliable information is provided by in situ data acquisition, this can be time-consuming and costly [13]. In contrast, spatial data offer a more efficient and cost-effective way to determine the clover grass distribution for site-specific fertilization. The fertilizer requirement can be adapted to the determined quantity and distribution of the clover in order to achieve improved biomass production and at the same time reduce N leaching from the soil [14].
Image-based estimations of plant compositions in clover grass mixtures have been the subject of several studies. In particular, different methodologies for the classification of clover and grass from ground-based imagery have been developed. This includes the pixel-based analysis of high-resolution image data to identify clover and grass leaves [15], as well as the estimation of clover and grass proportions in comparison with field-measured dry biomass [16,17]. These studies focused on the analysis of stand compositions from ground-based imagery with a particularly high spatial resolution.
Site-specific management recommendations for fertilization measures require information at the field scale. This is especially important in temporary grassland with small-scale heterogeneity [18,19]. Unmanned aerial vehicle (UAV)-based camera systems can be used to generate high-resolution image data over a wide area [20]. They are, therefore, suitable for monitoring agricultural fields [21,22]. UAV-based imagery thus enables a spatially differentiated analysis of temporary grassland. Monitoring of clover grass stands using UAV imagery has already been discussed in various studies, with the main focus on determining stand biomass [23,24,25]. However, UAV imagery has also been used to differentiate crop cover types. Abduleil et al. [26] showed the suitability of UAV-based RGB data for estimating the amount of clover: they captured images using a fixed-wing UAV and quantified the composition of the clover grass mixture using different image classification algorithms.
The random forest (RF) classifier has proven effective in several studies for identifying clover stands. To monitor compositions of arable landscapes on a larger scale, it was used to classify various crop types (including clover and grass) [27]. Hahn et al. [28] discussed RF for differentiating between clover, grass, and weeds based on high-resolution imagery acquired in a simulated UAV setup. Here, a pixel-based approach was compared with object-based image analysis (OBIA). When classifying crop cover types, accuracies between 77% and 98% were achieved. They also demonstrated the added value of texture information, as texture features take into account not only individual pixels but also their surroundings and can, thus, emphasize spatial patterns [29]. Grüner et al. [30,31] demonstrated the added value of texture features in quantifying biomass in clover grass mixtures. They used texture features based on the Haralick texture extraction from the Orfeo Toolbox (OTB) [32].
Texture features must be calculated band by band. The resulting high data dimensionality can lead to overfitting and to high computational and memory costs when scaling the method to larger areas. Feature selection approaches can be used to reduce datasets to the most important features. To date, no studies are known that use UAV-based spectral and textural information filtered by feature selection for clover grass classification. Therefore, this approach is the subject of the present work. Furthermore, comparable studies have focused on a single observation date only. In order to evaluate the phenological development of stand composition, this methodology is applied at three different observation dates. In this work, clover and grass plants were distinguished at the individual plant level in high-resolution UAV multispectral (MS) image data using an RF classifier. The classifier was trained on a practice area. Texture parameters were preselected via feature selection based on a wrapper analysis approach and integrated into the classification. Finally, classification results from three recording dates were used to evaluate stand development.
This results in three main objectives: (I) to examine the suitability of high-resolution UAV image data for differentiating clover and grass plants at the single-plant level; (II) to analyze whether texture parameters contribute to an increase in classification accuracy, evaluating both a stage-specific and a cross-stage selection in order to assess the applicability of selected texture features independently of the phenological stage; and (III) to investigate whether trends in the spatial development of stand composition can be derived from the classification of UAV image data.

2. Materials and Methods

2.1. Study Area

This study was carried out on the organically farmed field “Kiesschacht” (52°19′N, 8°09′E) near Osnabrück (Germany). In order to reduce the processing effort for field data sampling, the test field boundaries were limited to a smaller part (0.74 ha) of the original field. In 2021, the area was cultivated with a clover grass mixture as part of a crop rotation. Figure 1 shows an orthophoto and the experimental setup. The clover grass mixture at “Kiesschacht” was sown on 20 August 2020 with a seed mixture of 22% red clover (Trifolium pratense), 11% alsike clover (Trifolium hybridum), 11% white clover (Trifolium repens), and 56% perennial ryegrass (Lolium perenne). The observed growing period was between 16 July 2021 (second mowing) and 22 September 2021 (third mowing).
Climatic changes during the observation period have to be taken into account when analyzing phenological stages. Weather changes can influence crop development, as clover and grass plants react differently to drought. During the field campaign, climatic conditions were characterized by a long dry phase (Figure 2). The total precipitation of approx. 46 mm in August was lower than the long-term monthly average between 1990 and 2020 (83 mm). The average daily temperature was 17.1 °C, which was comparable to the long-term average for August (18.0 °C).

2.2. Data Collection

UAV-based MS images were captured using a DJI Phantom Multispectral. The attached camera records the spectral properties of the surface in the blue (450 nm ± 16 nm), green (560 nm ± 16 nm), red (650 nm ± 16 nm), red edge (720 nm ± 16 nm), and near-infrared (840 nm ± 26 nm) wavelengths. A flight altitude of 12 m provided a spatial resolution of 6 mm per pixel and was maintained across all observation dates. To ensure low-loss alignment of the individual images at this resolution, a front and side overlap of 85% was defined for all flights, resulting in 1040 to 1070 images per flight. The images were recorded under variable conditions, with sunny and cloudy days alternating irregularly. In order to analyze the development of the clover grass mixtures and possible changes in their spatial composition over time, image data were recorded at three dates between the second and third mowing (Table 1). An interval of about two weeks was chosen between the recordings in order to observe different stages of plant development and their influence on the analysis of the UAV image data.
Field data were collected on each observation date at 48 sampling areas (0.5 m × 0.5 m) evenly distributed across the study area. High-resolution ground-based images were taken of each sampling area to reference the UAV-based training and validation data sampling. Dividing the sampling areas into training areas (see Figure 1, red squares) and validation areas (see Figure 1, blue squares) ensured spatially independent sampling. To capture the spectral properties and variations of the crop in both datasets, sampling areas for training and validation were spatially alternated. The location of every corner of each sampling area was recorded using a GNSS receiver (Stonex GNSS S9III).

2.3. Data Processing

To analyze the spatial distribution of clover and grass plants from MS UAV data at different phenological stages, a workflow was developed to (partially) automate raw image processing and analysis (Figure 3). UAV images were radiometrically and geometrically corrected using Agisoft Metashape (Version 1.7.2). Illumination differences between the observation dates were recorded via the sunlight sensor of the Phantom Multispectral and normalized during raw image processing. The single images were radiometrically corrected and aligned into a point cloud at a high accuracy level. Based on the resulting sparse cloud, a dense cloud was calculated at a high-quality level using aggressive depth filtering. Finally, a terrain model and an orthophoto were generated from the dense cloud. Geographical positioning was carried out using eight reference panels distributed evenly over the area. The center point of every panel was measured with the GNSS receiver, and the resulting point coordinates were used as ground control points (GCPs) for georeferencing. The average deviation between the image and reference coordinates was between 3 cm and 6 cm.

2.4. Training and Validation Data

Training and validation data were selected spatially independently from the associated sampling areas for each stage individually. High-resolution ground-based images served as a reference for sampling. When selecting training and validation data, a distinction was made between three types of coverage. Perennial ryegrass was assigned to the class “Grass”. As the different clover species from the seed mixture hardly differ from each other in visual appearance and functionality, they were grouped into the class “Clover”. Changes in phenological appearance due to the beginning of flowering (stage 2) and actual flowering (stage 3) were also included in the “Clover” class at the corresponding observation dates. Field components that could not be assigned to these surface types became part of the class “Others”, which contains open soil and crop residues.
In order to obtain comparable classification results, the number of training and validation samples was standardized across all stages. For each stage, 1000 pixels per class were randomly drawn from the training data. Validation was conducted by counting correctly identified pixels for each class in every classification image. For validation, 2500 pixels per class were randomly selected in each of five repeats without replacement to ensure comparability of the classification results. Overall accuracy (OA) was calculated as a measure of quality in each repeat. To additionally evaluate class-wise classification results, the F1 score was determined from the validation samples. These accuracy measures served as the benchmark for evaluating stage-independent applicability.
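The repeated validation described above can be sketched as follows. This is a simplified illustration, not the study's actual code: the label arrays are synthetic, the exact repeat scheme in the study may differ, and all variable names are our own. Per-class F1 scores and OA are computed with scikit-learn's standard metrics.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

# Hypothetical flattened label rasters: y_true holds reference labels,
# y_pred the classified labels (0 = Clover, 1 = Grass, 2 = Others).
rng = np.random.default_rng(42)
y_true = np.repeat([0, 1, 2], 20000)
y_pred = y_true.copy()
flip = rng.choice(y_true.size, size=6000, replace=False)   # simulate errors
y_pred[flip] = rng.integers(0, 3, size=flip.size)

n_per_class, n_repeats = 2500, 5
oas, f1s = [], []
for rep in range(n_repeats):
    # draw n_per_class validation pixels per class without replacement
    idx = np.concatenate([
        rng.choice(np.flatnonzero(y_true == c), size=n_per_class, replace=False)
        for c in (0, 1, 2)
    ])
    oas.append(accuracy_score(y_true[idx], y_pred[idx]))
    f1s.append(f1_score(y_true[idx], y_pred[idx], average=None))  # per-class F1

mean_oa = np.mean(oas)          # overall accuracy, averaged over repeats
mean_f1 = np.mean(f1s, axis=0)  # one F1 score per class
```

Averaging over several independent draws, as here, yields the interval-style accuracy estimates reported in Section 3.2.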

2.5. Texture Feature Extraction

Textural characteristics were considered in addition to the pure spectral information of the original bands to optimize the identification of clover and grass plants. Textures can be interpreted as spatial gray-level variabilities of images [34]. They are therefore particularly suitable for reproducing structures and heterogeneities of vegetation [35]. Haralick et al. [36] developed a gray-level co-occurrence matrix (GLCM) consisting of 14 features to derive textures from image data. Referring to Grüner et al. [30,31], the eight features—energy, entropy, correlation, inverse difference moment (IDM), inertia, cluster shade, cluster prominence, and Haralick correlation—were used in the present work as well. Texture features were calculated in QuantumGIS (Version 3.24.1) with a window size of 2 × 2 and 32 histogram bins.

2.6. Random Forest Classifier

An RF classifier was trained to differentiate between “Clover”, “Grass”, and “Others”. The RF is based on a defined number of decision trees arranged in an ensemble. To train the classifier, samples are randomly drawn from the training dataset for each decision tree [37]. Based on this, an individual classification result is obtained for each tree. Finally, the overall classification result is determined from all trees by majority vote [38]. While single decision trees tend to overfit the training data with increasing depth [39], this can be prevented by deriving the classification decision from an ensemble of several uncorrelated trees within the forest. Due to its design, the RF is, therefore, considered robust against overfitting [40,41]. In addition, it is particularly suitable for high-dimensional data (e.g., multisource or hyperspectral) and needs less computation time than comparable machine learning approaches [42,43]. In this work, Python (Version 3.7) was used to implement an RF classifier with the software library scikit-learn [44]. The classifier parameters were kept at their default settings, since the RF is generally regarded as an algorithm that achieves good results with these settings [45].
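A minimal scikit-learn sketch of this setup, using default parameters as in the study; the training data below are synthetic stand-ins for the 1000 sampled pixels per class and five spectral bands.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training data: rows are pixels, columns are band values;
# labels are 0 = "Clover", 1 = "Grass", 2 = "Others".
rng = np.random.default_rng(0)
y_train = np.repeat([0, 1, 2], 1000)                    # 1000 pixels per class
X_train = rng.random((3000, 5)) + 0.5 * y_train[:, None]  # weak class signal

# Default settings: 100 trees, Gini impurity, majority vote over the ensemble
rf = RandomForestClassifier(random_state=0)
rf.fit(X_train, y_train)

# Predict class labels for new pixels
X_new = rng.random((10, 5))
pred = rf.predict(X_new)
```

For a full scene, the raster would be reshaped to a (pixels × bands) array, classified, and reshaped back to the image grid.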

2.7. Wrapper Analysis

To determine the texture features essential for classification, a feature selection based on a wrapper analysis was carried out before classification. This approach avoids the need to calculate all texture bands individually for each spectral band, as well as data redundancy from highly correlated bands. In remote sensing, wrapper analyses are mainly used for the iterative selection of features from high-dimensional data [46]. They are, therefore, primarily suitable for hyperspectral imagery [47] and for MS data when the dataset includes higher-dimensional by-products derived from the original bands [48]. This approach makes it possible to estimate the importance of different bands for classification accuracy.
For this purpose, an RF classifier was parameterized as described in Section 2.6 and used to compare input bands by their impact on classification. The training dataset was split into 50% training and 50% testing. For validation, the out-of-bag (OOB) score was calculated from the test data. The OOB score describes the mean estimation error of all decision trees of an RF and is calculated using a subset of the data that is not part of the training (“out-of-bag”) [49]. The wrapper designed in the present work is shown in Figure 4. In the first iteration, all bands were used individually to train the RF and then compared based on their OOB scores. The band with the highest score was selected and, in the following iteration, combined with each of the remaining bands individually to determine the second most important band. This procedure was repeated until the last iteration step. The number of steps depends on the number of single bands examined in the wrapper analysis. Finally, the development of the OOB scores indicates whether the accuracy level stagnates after a certain iteration.
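The greedy forward-selection loop described above can be sketched as follows. This is a simplified illustration with synthetic data: it uses scikit-learn's built-in OOB estimate (`oob_score=True`) rather than the study's separate 50/50 split, and all function and variable names are our own.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def forward_select(X, y, n_select):
    """Greedy forward selection ranking features by RF out-of-bag score."""
    remaining = list(range(X.shape[1]))
    selected, scores = [], []
    for _ in range(n_select):
        best_feat, best_oob = None, -np.inf
        for f in remaining:
            # train an RF on the already selected features plus candidate f
            rf = RandomForestClassifier(oob_score=True, random_state=0)
            rf.fit(X[:, selected + [f]], y)
            if rf.oob_score_ > best_oob:
                best_feat, best_oob = f, rf.oob_score_
        selected.append(best_feat)
        remaining.remove(best_feat)
        scores.append(best_oob)
    return selected, scores

# Hypothetical data: 300 pixels, 5 spectral bands, 3 classes
rng = np.random.default_rng(0)
y = np.repeat([0, 1, 2], 100)
X = rng.random((300, 5))
X[:, 4] += 0.8 * y          # make one band (e.g., NIR) informative
order, oob = forward_select(X, y, n_select=3)
```

Plotting `oob` over the iterations reveals the saturation effect used in the study to decide how many bands to keep.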
Wrappers were used in two steps for feature selection at each phenological stage individually: (1) Since the original bands served as the basis for the texture calculation, feature selection was first carried out using the spectral bands. The number of bands selected was based on the observed saturation point in the OOB score development. All eight texture features were then calculated for these selected bands only. (2) With the resulting features, a second wrapper analysis was performed to finally determine the texture information essential for classification. For each phenological stage, all calculated features of all bands selected in step (1) were equally included in a single wrapper analysis. To determine their added value compared to the pure spectral information, all five original bands were used as the basis in each iteration and a single texture feature was added during each step according to the principle described in Figure 4. The first iteration starts with six bands (five original bands plus a single texture feature derived from one of the selected original bands).

2.8. Classification Setup

Three different datasets were generated for each phenological stage. These are referred to below as “Original bands (OB)”, “Texture bands: stage-adapted (TBSA)”, and “Texture bands: stage-independent (TBSI)”. First, RF classification was carried out based on all original bands only (OB). Secondly, the most relevant texture features were selected based on the wrapper analysis depending on the OOB scores in each phenological stage individually and were combined stagewise with all corresponding original bands (TBSA). Thirdly, it should be tested whether a set of texture features could be identified that would improve the classification accuracy, regardless of the observed phenological stage. For this purpose, the most relevant texture features that occurred most frequently across all stages were selected and combined with the original bands (TBSI). Training and validation samples were used identically for all datasets within a stage.
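Assembling such datasets amounts to stacking the selected rasters into one multi-band array. The sketch below uses synthetic rasters; the texture-band names follow the three cross-stage features reported in Section 3.1, but sizes and values are placeholders.

```python
import numpy as np

# Hypothetical rasters: five original bands plus texture bands, each (H, W)
H, W = 64, 64
rng = np.random.default_rng(2)
original = {b: rng.random((H, W)) for b in
            ["blue", "green", "red", "rededge", "nir"]}
textures = {t: rng.random((H, W)) for t in
            ["nir_cluster_prominence", "nir_haralick_corr",
             "red_haralick_corr"]}

# OB: original bands only; TBSI: original bands plus cross-stage features
ob = np.stack(list(original.values()), axis=-1)            # (H, W, 5)
tbsi = np.stack(list(original.values()) +
                list(textures.values()), axis=-1)          # (H, W, 8)

# Flatten to (pixels, features) for the classifier
X_ob = ob.reshape(-1, ob.shape[-1])
X_tbsi = tbsi.reshape(-1, tbsi.shape[-1])
```

A TBSA stack would be built the same way, swapping in the six stage-specific texture bands for the stage in question.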

3. Results

3.1. Wrapper Analysis

The wrapper analysis of the original bands showed an increase in the OOB score after the first two iterations, independent of the stage (Figure 5). From the third iteration onward, a saturation effect was observed in the development of the maximum OOB score in stages 1 and 2. This led to the conclusion that only the bands with the highest OOB score determined in the first two iterations should form the basis for the texture analysis.
In stage 3, this effect occurred from the fourth iteration onward. In order to ensure comparability of the classification results, the selection of bands is nevertheless limited to the first two iterations in this stage. It should also be noted that the first wrapper analysis is only used to determine suitable bands as the basis for texture feature calculation and that all of the original bands are still taken into account in the classification.
In all three phenological stages, the NIR band produced the highest OOB score after the first iteration (Table 2). The same was observed for the red band in the second iteration. A combination of NIR and red contained the essential information for clover grass differentiation. These bands were, therefore, used for the texture feature computation in all three phenological stages.
A feature selection from the resulting texture features was then carried out individually for each phenological stage, which led to three further wrapper analyses. In two of the three wrapper analyses, the calculated maximum OOB scores again indicated a saturation effect from the sixth iteration at the latest (Figure 6). The only exception was noticeable during the wrapper analysis of the features in stage 3. A further slight increase was observed here up to the seventh iteration. Nevertheless, only the six most important texture features were included in the classification to maintain comparability between the datasets.
In contrast to the wrapper analysis of the original bands, the feature selection of the six most important texture features was not unique. Therefore, datasets were first generated individually and adapted to the phenological stages under consideration (TBSA). For each TBSA dataset, the texture features with the highest OOB score from the first six iterations were selected (Table 3) and combined with the original bands. Here, a dominance of NIR-based features was noticeable in the first iteration. Beyond this, no significant differences in the OOB score were observed between red- and NIR-based features. Each TBSA dataset, therefore, consists of 11 features (five spectral bands plus six texture features).
Although the results of the wrapper analysis from the texture features differed fundamentally between the stages, individual features appeared particularly frequently in the first three iterations. This provides the opportunity to test uniform selected texture features across all phenological stages (TBSI). NIR-based cluster prominence achieved the highest OOB score in all three stages in the first iteration. In the two following iterations, the highest scores were also achieved across all stages with the same features (red- and NIR-based Haralick correlation). This trend was no longer confirmed in the following iterations, as different features achieved the highest OOB score between the observed stages. These can therefore not be generalized for a cross-stage application. Furthermore, from iteration four onward, the gap between the highest (colored) and lowest (gray) OOB score decreased compared to iteration three. This suggests that essential information for the classification may already be represented by the first three features. Therefore, the TBSI datasets for each phenological stage were created by combining the original bands with three features: red- and NIR-based Haralick correlation, and NIR-based cluster prominence.

3.2. Classification Results

Validation of the classification results revealed differences in accuracy between the datasets based on the original bands and the texture-optimized datasets (Figure 7). Mean OAs of classifications based on the original bands varied between 82.5% (stage 1), 82.8% (stage 2), and 85.7% (stage 3).
By adding the stage-adapted texture features, the OAs were significantly increased in all observed stages. The added value of the texture features was most clearly recognizable in the first stage. Here, the classification accuracy increased to 90.7% (+8.2% compared to the original bands) on average, while 86.8% (+4.0%) and 88.9% (+3.4%) were achieved in stages 2 and 3, respectively. The differences between the TBSA and TBSI dataset-based classifications were minor: in stage 1, the overall accuracy decreased to 90.2% (−0.4%), in stage 2 to 86.4% (−0.4%), and in stage 3 to 88.7% (−0.2%).
Analyzing the F1 scores, differences in classification accuracies between the individual classes could be recognized independently of the underlying dataset (Table 4). With the texture-optimized datasets, a consistently higher F1 score was achieved for each class. Samples of the “Others” class were generally identified correctly with very high accuracy. At least 90% of the pixels in this class were correctly assigned across all phenological stages.
The added value of the texture features was particularly evident for the “Clover” and “Grass” classes. In stage 1, the F1 score for the identification of clover increased from 77.8% (OB) to 89.4% (TBSA). Grass pixels were correctly assigned based on the original bands with an accuracy of 77.4%. Adding the texture features, the F1 score was increased to 87.9%. Similar trends were observed in stages 2 and 3. Significant improvements between OB and TBSA were measurable for clover (stage 2: +6.7%; stage 3: +5.6%) and grass (stage 2: +5.0%; stage 3: +4.8%). F1 scores between the classifications based on the TBSA and TBSI datasets did not differ noticeably from each other in any of the classes in any of the stages. Deviations in the F1 score varied between 0.0% and 0.7%. The interval limits of the F1 scores revealed significant differences between the classification results based on the original bands and the datasets extended by the texture features. In contrast, no significant differences were found when comparing the datasets extended with stage-adapted (TBSA) and stage-independent (TBSI) texture features.
The results of the classification maps (Figure 8) showed a heterogeneous distribution of clover and grass plants typical of clover grass meadows. In the first phenological stage, the dominance of grasses (44%) and bare soil (39%) was noticeable, especially in the southwestern and northeastern parts of the study area. An increase in the proportion of clover could be observed between the first and second stages (+27%), while the proportion of grass decreased slightly (−1%). The increase in clover proportion was mainly associated with a decrease in bare soil and crop residues (−26%) and was distributed very heterogeneously across the entire study area. Increases in the proportion of clover were most clearly recognizable in the eastern, northeastern, and southwestern parts of the area. Between the second and third stages, a lower growth in clover proportion was observed (+9%). This was particularly evident in the northeastern part of the study area. The degree of soil cover continued to decrease (−5%) as well as the grass proportion (−4%).

4. Discussion

4.1. Objective I: Single Plant Clover Grass Classification

In the present study, an RF classifier was used to distinguish between clover and grass plants on an organically managed practice area. A low flight altitude was chosen to enable high-resolution images in order to distinguish plant species with a high level of detail. Despite the lower spatial resolution compared to ground-based image acquisition methods, the spectral information from the UAV-based MS camera system proved to be an efficient basis for clover and grass classification. In the present study, OAs between 82% and 91% were achieved, depending on the underlying dataset. Although a direct comparison with the above-mentioned studies is challenging (e.g., due to different numbers of test samples, sampling designs, field plot sizes, and image resolutions), these results are in line with the pixel-based approaches developed for detecting clover in clover grass [26,28]. However, these studies were limited to single observation dates and evaluated the features used in the classification only after classification.
Therefore, in the present study, an approach using spectral and textural information was developed and tested in different stages of phenology. In a pre-classification step, texture features that significantly improve the classification accuracy were determined by a wrapper analysis. With this step, data redundancy and processing time can be reduced, which seems particularly interesting for operational applications. Moreover, RF is considered to be more computationally efficient in complex classification approaches compared to other machine learning algorithms such as support vector machines [50] or deep learning methods [43]. Since RF classification usually requires only a few training samples, the sampling effort is also significantly lower than with conventional deep learning approaches [51,52].

4.2. Objective II: Texture Features and Feature Selection

Texture features have proven to be very useful for monitoring clover grass stands, but usually have to be calculated band by band. With increasing pixel resolution, field size, and number of bands, this can lead to increasing demands on storage and computing capacity. Furthermore, as high correlations between the original bands can cause data redundancies, feature selection is required for data reduction. This can additionally contribute to the robustness of the approach against overfitting and thus improve its transferability [53,54].
A two-stage feature selection based on a wrapper analyses approach was implemented in this study. During the first feature selection process, it was shown that not all spectral bands are necessary for texture feature calculation to achieve high accuracies. The wrapper analyses of the original bands revealed the NIR band and the red band as the most important features in the first and second iterations, respectively. This could be observed regardless of the phenological stage. NIR bands are sensitive to chlorophyll content and are, therefore, particularly suitable in dense vegetation stands [55]. Since the red band also has a comparatively low spectral correlation to the NIR band [56], a combination of both bands provides high information content for differentiating soil and plant as well as plant species. They, therefore, serve as a suitable basis for calculating texture features to identify clover and grass.
The second feature selection revealed that the most important texture features for classification vary considerably with the phenological stage. However, as some features were selected particularly frequently in the first iterations, trends could be identified. In particular, the Haralick correlation and cluster prominence are well suited to distinguish between clover, grass, and soil, regardless of the phenological stage observed. The Haralick correlation measures the correlation between neighboring pixels with similar gray values; high values indicate a strong correlation and, therefore, high homogeneity [36]. Cluster prominence describes the asymmetry between pixel values; high values are associated with lower gray value symmetry and, therefore, higher variation [57]. Clover plants are more susceptible to variable lighting than grass plants due to their leaf mass and size. The resulting differences in homogeneity and gray value variation between samples with pixels of clover and grass plants were also observed in the present study.
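To make these two measures concrete, the following sketch computes the Haralick correlation and cluster prominence from a gray-level co-occurrence matrix (GLCM) of a single band patch, following the standard definitions [36,57]. The quantization to 32 gray levels and the horizontal pixel offset are illustrative choices, not the exact settings used in this study:

```python
import numpy as np

def glcm_features(patch, levels=32):
    """Haralick correlation and cluster prominence from a symmetric,
    normalized co-occurrence matrix of horizontally adjacent pixels."""
    # Quantize the band values to `levels` gray levels.
    edges = np.linspace(patch.min(), patch.max() + 1e-9, levels + 1)
    q = np.digitize(patch, edges) - 1
    # Build the symmetric GLCM for the offset (0, 1).
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1
        glcm[b, a] += 1
    p = glcm / glcm.sum()
    i, j = np.indices(p.shape)
    mu_i, mu_j = (i * p).sum(), (j * p).sum()
    sd_i = np.sqrt(((i - mu_i) ** 2 * p).sum())
    sd_j = np.sqrt(((j - mu_j) ** 2 * p).sum())
    # Correlation: high for homogeneous gray value transitions.
    correlation = ((i - mu_i) * (j - mu_j) * p).sum() / (sd_i * sd_j + 1e-12)
    # Cluster prominence: fourth-order measure of gray value asymmetry.
    cluster_prominence = ((i + j - mu_i - mu_j) ** 4 * p).sum()
    return correlation, cluster_prominence
```

In practice, such features are computed in a moving window over each selected band, yielding one texture band per feature and input band.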
Furthermore, the wrapper analysis of the texture parameters confirmed the necessity of using spectral information from both the red and NIR bands for calculating texture information. In the initial iterations, the highest OOB scores were alternately achieved with texture features based on the red and NIR bands.
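The wrapper scheme described above (cf. Figure 4) amounts to a greedy forward selection driven by the RF out-of-bag score. A minimal sketch, using a synthetic dataset in place of the study's band stacks (sample sizes and RF parameters are illustrative):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

def wrapper_selection(X, y, n_select):
    """Greedy forward selection: in each iteration, add the band whose
    inclusion yields the highest RF out-of-bag (OOB) score."""
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(n_select):
        scores = {}
        for f in remaining:
            rf = RandomForestClassifier(
                n_estimators=100, oob_score=True, random_state=0, n_jobs=-1)
            rf.fit(X[:, selected + [f]], y)
            scores[f] = rf.oob_score_
        best = max(scores, key=scores.get)
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy demonstration on synthetic "band" data
X, y = make_classification(n_samples=300, n_features=6, n_informative=3,
                           random_state=42)
subset = wrapper_selection(X, y, n_select=2)
```

Stopping once the maximum OOB score no longer improves reproduces the data reduction effect discussed above.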
Texture features were included in the classification to highlight spatial differences between clover and grass plants. Based on the high spatial resolution of the original bands, the spectral reflectance of the plants could be derived in high detail, while information on the shape and structure of individual and contiguous plants was captured by the texture features. Integrating these spatial parameters significantly improved the classification results, raising OAs by up to 8%. Furthermore, no significant differences were found between a stage-adapted (TBSA) and a stage-independent (TBSI) selection of texture features. This is because the features selected for the TBSI datasets were consistently the most important features at all stages examined. Thus, a cross-stage selection of texture features could be realized in the present work.

4.3. Objective III: Development of Spatial Composition of Clover Grass Mixture

By monitoring the crop at three different timestamps during phenology, it was possible to evaluate the development of the stand composition. The classification maps revealed spatial and quantitative changes in the occurrence of clover. With respect to the spatial distribution of clover and grass based on the RF classification, an increase in the clover proportion was observed between the stages (see Figure 8). The increase from the first to the second observation date was more pronounced than that to the follow-up date in stage 3. At the same time, a significant decrease in the number of soil pixels was observed. An increasing proportion of clover over time can directly influence the N-fixing capacity of the soil and must also be considered in a spatially differentiated manner with regard to its impact on subsequent crops.
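Such stand composition changes can be quantified directly from the classification maps by counting pixels per class. A short sketch (the class codes and the random stand-in maps are hypothetical, not the study's data):

```python
import numpy as np

# Hypothetical class codes for the per-pixel classification maps
CLASSES = {0: "others", 1: "clover", 2: "grass"}

def class_fractions(class_map):
    """Return the per-class pixel fraction of one classification map."""
    counts = np.bincount(class_map.ravel(), minlength=len(CLASSES))
    return {name: counts[code] / class_map.size
            for code, name in CLASSES.items()}

# Comparing two acquisition dates (random stand-ins for real maps)
rng = np.random.default_rng(0)
stage1 = rng.integers(0, 3, size=(100, 100))
stage2 = rng.integers(0, 3, size=(100, 100))
delta = {name: class_fractions(stage2)[name] - class_fractions(stage1)[name]
         for name in CLASSES.values()}
```

A positive `delta["clover"]` between dates corresponds to the increasing clover proportion reported here.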
In particular, climatic influences on the crop can make it necessary to monitor the spatial development of clover grass over several points in time. N-fixing plants like Trifolium repens or Trifolium pratense have been shown to be more resistant to drought stress than non-N-fixing plants like Lolium perenne [58]. Additionally, the root system of the plants must be taken into account: deep-rooted plant species (e.g., Trifolium pratense) are considered more resistant to drought than shallow-rooted species (e.g., Lolium perenne) [59,60]. This was also observed in the present study. Between the first and second observation dates, little precipitation as well as temperature peaks in mid-August were recorded (see Figure 2). Analysis of the plant species distribution based on the classification maps showed a sharp increase in the proportion of clover during this period, while the proportion of grass declined. A similar dry phase was also observed between the second and third observation dates. This led to a further increase in the proportion of clover, although it was smaller than that between stages 1 and 2. Due to the weather conditions being more favorable for clover, its phenological development also progressed faster. As a result, an increase in clover leaf mass and the beginning of flowering were already evident at the second observation date, which must be taken into account when considering the development of the stand composition.

4.4. Limitations

Based on the F1 scores of the classification results, clover plants were recognized better than grass plants at all phenological stages. This is likely due to the plant structure and leaf area in conjunction with the recording geometry of the UAV camera. Despite the high spatial resolution (6 mm), grass plants were more difficult to identify, as they have a much smaller leaf mass and width than clover plants. Consequently, the risk of mixed signals in the training data is increased. Furthermore, due to their size, clover leaves are easier to identify from the spatial structures represented by the texture features. This was also taken into account when analyzing the resulting classification maps and the reported crop fractions.
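The per-class comparison above relies on F1 scores, which can be computed per class, e.g., with scikit-learn; the label vectors below are made up for illustration:

```python
import numpy as np
from sklearn.metrics import f1_score

labels = ["clover", "grass", "others"]
# Hypothetical reference and predicted labels of validation pixels
y_true = np.array(["clover", "clover", "grass", "grass", "others", "others"])
y_pred = np.array(["clover", "clover", "grass", "clover", "others", "others"])

# average=None returns one F1 score per class, in the order of `labels`
per_class_f1 = dict(zip(labels,
                        f1_score(y_true, y_pred, labels=labels, average=None)))
```

In this toy example, the grass pixel misclassified as clover lowers the grass F1 score more than the clover score, mirroring the asymmetry discussed above.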
The methodology presented requires a high spatial resolution, which, depending on the camera system, is associated with a fixed maximum flight altitude. The camera used in this study provided a lower resolution than UAV-based camera systems currently available. Consequently, the methodology should be applied with higher-resolution camera systems to enable higher flight altitudes and greater area coverage. However, the resulting increase in data volume must be taken into account. This can be mitigated by the pre-classification wrapper analysis described above, which identifies the most important parameters for classification.
Orthophotos showed strong differences in illumination in parts of the study area. Despite the illumination sensor attached to the UAV, these could not be fully corrected. Negative effects on the classification accuracy were evident whenever the plants exhibited very atypical spectral signatures due to these effects. This might explain the slight drop in classification accuracy at the second acquisition date, as differences in illumination were very pronounced during data acquisition.

5. Conclusions

The present study has shown that high-resolution UAV imagery is suitable for identifying clover and grass plants at the single-plant level. Using an RF classifier and texture feature analysis, accuracies between 86% and 91% were achieved depending on the observed phenological stage. By determining stand compositions over three observation timestamps from the classification results, phenological features of crop development can be inferred. Due to longer dry phases in the observation period, clover became increasingly dominant at the observed site. However, classification inaccuracies must always be taken into account. Including texture features was accompanied by a significant improvement in OA. In particular, features describing structural homogeneity (Haralick correlation) and gray value variation (cluster prominence) enabled precise differentiation between clover and grass independent of the observed stage.
The results can be used for site-specific management recommendations (N-fertilization, reseeding, and crop rotation management). Nevertheless, future research should focus on scaling the methodology to larger study areas. Furthermore, measured yield proportions should be included to allow statements on the (fractional) crop biomass. A combination of area-wide determination of the proportions of clover and grass and spatially differentiated biomass estimation could be used to improve yield predictions.

Author Contributions

Conceptualization, K.N., T.R. and T.J.; Methodology, K.N., B.W. and T.J.; Software, K.N. and T.R.; Validation, K.N. and T.R.; Formal Analysis, K.N., T.R. and T.J.; Investigation, K.N. and T.R.; Resources, D.T., B.W. and T.J.; Data Curation, K.N.; Writing—Original Draft Preparation, K.N. and T.R.; Writing—Review and Editing, K.N., T.J., B.W., T.R. and D.T.; Visualization, K.N. and T.J.; Supervision, T.R., K.N. and T.J.; Project Administration, T.J., K.N., T.R. and D.T.; Funding Acquisition, T.J. All authors have read and agreed to the published version of the manuscript.

Funding

We acknowledge the support of the Federal Ministry of Food and Agriculture (BMEL) and the Federal Office for Agriculture and Food (BLE) as part of the project “Experimentierfeld Agro-Nordwest” (funding code [28DE103C22]), as well as the Open Access Publishing Fund of Osnabrück University.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author.

Acknowledgments

Special thanks are due to our colleagues at Osnabrück University and the Osnabrück University of Applied Sciences who actively supported us in the acquisition of data and their processing and evaluation during our field campaign.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
UAV	unmanned aerial vehicle
MS	multispectral
RF	random forest
OOB	out-of-bag
OA	overall accuracy
IDM	inverse difference moment
OBs	original bands
TBSA	texture bands: stage-adapted
TBSI	texture bands: stage-independent

References

  1. EEC. Council Directive of 12 December 1991 Concerning the Protection of Waters against Pollution Caused by Nitrates from Agricultural Sources (91/676/EEC). Off. J. Eur. Communities 1991, 375. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:31991L0676 (accessed on 19 February 2024).
  2. Snapp, S.; Sapkota, T.B.; Chamberlin, J.; Cox, C.M.; Gameda, S.; Jat, M.L.; Marenya, P.; Mottaleb, K.A.; Negra, C.; Senthilkumar, K.; et al. Spatially differentiated nitrogen supply is key in a global food–fertilizer price crisis. Nat. Sustain. 2023, 6, 1268–1278. [Google Scholar] [CrossRef]
  3. Eriksen, J.; Askegaard, M.; Søegaard, K. Residual effect and nitrate leaching in grass-arable rotations: Effect of grassland proportion, sward type and fertilizer history. Soil Use Manag. 2008, 24, 373–382. [Google Scholar] [CrossRef]
  4. Ledgard, S.; Schils, R.; Eriksen, J.; Luo, J. Environmental impacts of grazed clover/grass pastures. Ir. J. Agric. Food Res. 2009, 48, 209–226. Available online: https://www.jstor.org/stable/20720369 (accessed on 19 February 2024).
  5. Gaudin, A.C.M.; Westra, S.; Loucks, C.E.S.; Janovicek, K.; Martin, R.C.; Deen, W. Improving resilience of northern field crop systems using inter-seeded red clover: A review. Agronomy 2013, 3, 148–180. [Google Scholar] [CrossRef]
  6. Arturi, M.J.; Aulicino, M.B.; Ansìn, O.; Gallinger, G.; Signorio, R. Combining Ability in Mixtures of Prairie Grass and Clovers. Am. J. Plant Sci. 2012, 3, 1355–1360. [Google Scholar] [CrossRef]
  7. Zarza, R.; Rebuffo, M.; La Manna, A.; Balzarini, M. Red clover (Trifolium pratense L.) seedling density in mixed pastures as predictor of annual yield. Field Crop. Res. 2020, 256, 107925. [Google Scholar] [CrossRef]
  8. Kirwan, L.; Lüscher, A.; Sebastià, M.T.; Finn, F.A.; Collins, R.P.; Porqueddu, C.; Helgadottir, A.; Baadshaug, O.H.; Brophy, C.; Coran, C.; et al. Evenness drives consistent diversity effects in intensive grassland systems across 28 European sites. J. Ecol. 2007, 95, 530–539. [Google Scholar] [CrossRef]
  9. Nyfeler, D.; Huguenin-Elie, O.; Suter, M.; Frossard, E.; Lüscher, A. Grass–legume mixtures can yield more nitrogen than legume pure stands due to mutual stimulation of nitrogen uptake from symbiotic and non-symbiotic sources. Agric. Ecosyst. Environ. 2011, 140, 150–163. [Google Scholar] [CrossRef]
  10. Suter, M.; Connolly, J.; Finn, J.A.; Loges, R.; Kirwan, L.; Sebastià, M.T.; Lüscher, A. Nitrogen yield advantage from grass-legume mixtures is robust over a wide range of legume proportions and environmental conditions. Glob. Chang. Biol. 2015, 21, 2424–2438. [Google Scholar] [CrossRef] [PubMed]
  11. Rasmussen, J.; Søegaard, K.; Pirhofer-Walzl, K.; Eriksen, J. N2-fixation and residual N effect of four legume species and four companion grass species. Eur. J. Agron. 2012, 36, 66–74. [Google Scholar] [CrossRef]
  12. Bloor, J.M.G.; Tardif, A.; Pottier, J. Spatial Heterogeneity of Vegetation Structure, Plant N Pools and Soil N Content in Relation to Grassland Management. Agronomy 2020, 10, 716. [Google Scholar] [CrossRef]
  13. Wachendorf, M.; Fricke, T.; Möckel, T. Remote sensing as a tool to assess botanical composition, structure, quantity and quality of temperate grasslands. Grass Forage Sci. 2018, 73, 1–14. [Google Scholar] [CrossRef]
  14. Skovsen, S.; Dyrmann, M.; Mortensen, A.K.; Steen, K.A.; Green, O.; Eriksen, J.; Gislum, R.; Jørgensen, R.N.; Karstoft, H. Estimation of the Botanical Composition of clover grass Leys from RGB Images Using Data Simulation and Fully Convolutional Neural Networks. Sensors 2017, 17, 2930. [Google Scholar] [CrossRef] [PubMed]
  15. Bonesmo, H.; Kaspersen, K.; Bakken, A.K. Evaluating an Image Analysis System for Mapping White Clover Pastures. Acta Agric. Scand. Sect. Soil Plant Sci. 2004, 54, 76–82. [Google Scholar] [CrossRef]
  16. Himstedt, M.; Fricke, T.; Wachendorf, M. Determining the Contribution of Legumes in Legume–Grass Mixtures Using Digital Image Analysis. Crop. Sci. 2009, 49, 1910–1916. [Google Scholar] [CrossRef]
  17. Mortensen, A.K.; Karstoft, H.; Søegaard, K.; Gislum, R.; Jørgensen, R.N. Preliminary Results of Clover and Grass Coverage and Total Dry Matter Estimation in Clover-Grass Crops Using Image Analysis. J. Imaging 2017, 3, 59. [Google Scholar] [CrossRef]
  18. Fricke, T.; Richter, F.; Wachendorf, M. Assessment of forage mass from grassland swards by height measurement using an ultrasonic sensor. Comput. Electron. Agric. 2011, 79, 142–152. [Google Scholar] [CrossRef]
  19. Lussem, U.; Bolten, A.; Menne, J.; Gnyp, M.; Martin, L.; Schellberg, J.; Bareth, G. Estimating biomass in temperate grassland with high resolution canopy surface models from UAV-based RGB images and vegetation indices. J. Appl. Remote Sens. 2019, 13, 034525. [Google Scholar] [CrossRef]
  20. Torres-Sánchez, J.A.; López-Granados, F.; De Castro, A.I.; Peña-Barragán, J.M. Configuration and Specifications of an Unmanned Aerial Vehicle (UAV) for Early Site Specific Weed Management. PLoS ONE 2013, 8, e58210. [Google Scholar] [CrossRef] [PubMed]
  21. Delavarpour, N.; Koparan, C.; Nowatzki, J.; Bajwa, S.; Sun, X. A Technical Study on UAV Characteristics for Precision Agriculture Applications and Associated Practical Challenges. Remote Sens. 2021, 13, 1204. [Google Scholar] [CrossRef]
  22. Tsouros, D.C.; Bibi, S.; Sarigiannidis, P.G. A Review on UAV-Based Applications for Precision Agriculture. Information 2019, 10, 349. [Google Scholar] [CrossRef]
  23. Michez, A.; Lejeune, P.; Bauwens, S.; Herinaina, A.A.L.; Blaise, Y.; Castro Muñoz, E.; Lebeau, F.; Bindelle, J. Mapping and Monitoring of Biomass and Grazing in Pasture with an Unmanned Aerial System. Remote Sens. 2019, 11, 473. [Google Scholar] [CrossRef]
  24. Li, K.; Burnside, N.G.; de Lima, R.S.; Peciña, M.V.; Sepp, K.; Yang, M.; Raet, J.; Vain, A.; Selge, A.; Sepp, K. The Application of an Unmanned Aerial System and Machine Learning Techniques for Red Clover-Grass Mixture Yield Estimation under Variety Performance Trials. Remote Sens. 2021, 13, 1994. [Google Scholar] [CrossRef]
  25. Albert, P.; Saadeldin, M.; Narayanan, B.; Fernandez, J.; Mac Namee, B.; Hennessey, D.; O’Conner, N.E.; McGuinness, K. Detection and quantification of broadleaf weeds in turfgrass using close-range multispectral imagery with pixel- and object-based classification. Int. J. Remote Sens. 2022, 42, 8035–8055. [Google Scholar] [CrossRef]
  26. Abduleil, A.M.; Taylor, G.W.; Moussa, M. An Integrated System for Mapping Red Clover Ground Cover Using Unmanned Aerial Vehicles, A Case Study in Precision Agriculture. In Proceedings of the 12th Conference on Computer and Robot Vision, Halifax, NS, Canada, 3–5 June 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 277–284. [Google Scholar] [CrossRef]
  27. Böhler, J.E.; Schaepman, M.E.; Kneubühler, M. Crop Classification in a Heterogeneous Arable Landscape Using Uncalibrated UAV Data. Remote Sens. 2018, 10, 1282. [Google Scholar] [CrossRef]
  28. Hahn, D.S.; Roosjen, P.; Morales, A.; Njip, J.; Beck, L.; Cruz, C.V.; Leinauer, B. Detection and quantification of broadleaf weeds in turfgrass using close-range multispectral imagery with pixel- and object-based classification. Int. J. Remote Sens. 2021, 42, 8035–8055. [Google Scholar] [CrossRef]
  29. Li, S.; Yuan, F.; Ata-Ul-Karim, S.T.; Zheng, H.; Cheng, T.; Liu, X.; Tian, Y.; Zhu, Y.; Cao, W.; Cao, Q. Combining Color Indices and Textures of UAV-Based Digital Imagery for Rice LAI Estimation. Remote Sens. 2019, 11, 1763. [Google Scholar] [CrossRef]
  30. Grüner, E.; Wachendorf, M.; Astor, T. The potential of UAV-borne spectral and textural information for predicting aboveground biomass and N fixation in legume-grass mixtures. PLoS ONE 2021, 15, e0234703. [Google Scholar] [CrossRef] [PubMed]
  31. Grüner, E.; Astor, T.; Wachendorf, M. Prediction of biomass and N fixation of legume–grass mixtures using sensor fusion. Front. Plant Sci. 2021, 11, 603921. [Google Scholar] [CrossRef] [PubMed]
  32. Grizonnet, M.; Michel, J.; Poughon, V.; Inglada, J.; Savinaud, M.; Cresson, R. Orfeo ToolBox: Open source processing of remote sensing images. Open Geospat. Data Softw. Stand. 2017, 2, 15. [Google Scholar] [CrossRef]
  33. Deutscher Wetterdienst (DWD). Open Data Bereich des Climate Data Center. Available online: https://www.dwd.de/DE/leistungen/cdc/climate-data-center.html?nn=17626 (accessed on 16 December 2023).
  34. Petrou, M.; Sevilla, P.G. Image Processing: Dealing with Texture; John Wiley & Sons: Chilchester, UK, 2006; pp. 1–10. [Google Scholar]
  35. Gallardo-Cruz, J.A.; Meave, J.A.; González, E.J.; Lebrija-Trejos, E.E.; Romero-Romero, M.A.; Pérez-García, E.A.; Gallardo-Cruz, R.; Hernández-Stefanoni, J.L.; Martorell, C. Predicting Tropical Dry Forest Successional Attributes from Space: Is the Key Hidden in Image Texture? PLoS ONE 2012, 7, e30506. [Google Scholar] [CrossRef] [PubMed]
  36. Haralick, R.M.; Shanmugam, K.; Dinstein, I. Textural features for image classification. IEEE Trans. Syst. Man Cybern. 1973, SMC-3, 610–621. [Google Scholar] [CrossRef]
  37. Cutler, A.; Cutler, D.R.; Stevens, J.R. Random Forests. Mach. Learn. 2012, 45, 157–175. [Google Scholar] [CrossRef]
  38. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  39. Bramer, M. Avoiding overfitting of decision trees. Princ. Data Min. 2007, 119–134. [Google Scholar] [CrossRef]
  40. Liaw, A.; Wiener, M. Classification and Regression by randomForest. R News 2002, 2, 18–22. [Google Scholar]
  41. Robnik-Šikonja, M. Improving Random Forests. In European Conference on Machine Learning; Boulicaut, J.F., Esposito, F., Giannotti, F., Pedreschi, D., Eds.; Springer: Berlin/Heidelberg, Germany, 2004; Volume 3201, pp. 359–370. [Google Scholar] [CrossRef]
  42. Belgiu, M.; Drăgut, L. Random forest in remote sensing: A review of applications and future directions. ISPRS J. Photogramm. Remote Sens. 2016, 114, 24–31. [Google Scholar] [CrossRef]
  43. Lan, T.; Hu, H.; Jiang, C.; Yang, C. A comparative study of decision tree, random forest, and convolutional neural network for spread-F identification. Adv. Space Res. 2020, 65, 2056–2061. [Google Scholar] [CrossRef]
  44. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar]
  45. Fernandez-Delgado, M.; Cernadas, E.; Barro, S.; Amorim, D. Do we Need Hundreds of Classifiers to Solve Real World Classification Problems? J. Mach. Learn. Res. 2014, 15, 3133–3181. [Google Scholar]
  46. Kohavi, R.; John, G.H. Wrappers for feature subset selection. Artif. Intell. 1997, 97, 273–324. [Google Scholar] [CrossRef]
  47. Cao, X.; Wei, C.; Han, J.; Jiao, L. Hyperspectral Band Selection Using Improved Classification Map. IEEE Geosci. Remote Sens. Lett. 2017, 14, 2147–2151. [Google Scholar] [CrossRef]
  48. Su, J.; Yi, D.; Coombes, M.; Liu, C.; Zhai, X.; MacDonald-Maier, K.; Chen, W. Spectral analysis and mapping of blackgrass weed by leveraging machine learning and UAV multispectral imagery. Comput. Electron. Agric. 2022, 192, 106621. [Google Scholar] [CrossRef]
  49. Breiman, L. Out-of-Bag Estimation; Technical Report; Statistics Department, University of California, Berkeley: Berkeley, CA, USA, 1996. Available online: https://www.stat.berkeley.edu/pub/users/breiman/OOBestimation.pdf (accessed on 10 April 2024).
  50. Adugna, T.; Xu, W.; Fan, J. A Comparison of Random Forest and Support Vector Machine Classifiers for Regional Land Cover Mapping Using Coarse Resolution FY-3C Images. Remote Sens. 2022, 14, 574. [Google Scholar] [CrossRef]
  51. Shetty, S.; Gupta, P.; Belgiu, M.; Srivastav, S.K. Assessing the Effect of Training Sampling Design on the Performance of Machine Learning Classifiers for Land Cover Mapping Using Multi-Temporal Remote Sensing Data and Google Earth Engine. Remote Sens. 2021, 13, 1433. [Google Scholar] [CrossRef]
  52. Maxwell, A.E.; Warner, T.A.; Guillén, L.A. Accuracy Assessment in Convolutional Neural Network-Based Deep Learning Remote Sensing Studies—Part 1: Literature Review. Remote Sens. 2021, 13, 2450. [Google Scholar] [CrossRef]
  53. Meyer, H.; Reudenbach, C.; Hengl, T.; Katurj, M.; Nauss, T. Improving performance of spatio-temporal machine learning models using forward feature selection and target-oriented validation. Environ. Model. Softw. 2018, 101, 1–9. [Google Scholar] [CrossRef]
  54. Freiesleben, T.; Grote, T. Beyond generalization: A theory of robustness in machine learning. Synthese 2023, 202, 109. [Google Scholar] [CrossRef]
  55. Ashapure, A.; Jung, J.; Chang, A.; Oh, S.; Maeda, M.; Landivar, J. A Comparative Study of RGB and Multispectral Sensor-Based Cotton Canopy Cover Modelling Using Multi-Temporal UAS Data. Remote Sens. 2019, 11, 2757. [Google Scholar] [CrossRef]
  56. Peng, Y.; He, M.; Zheng, Z.; He, Y. Enhanced Neural Network for Rapid Identification of Crop Water and Nitrogen Content Using Multispectral Imaging. Remote Sens. 2023, 13, 2464. [Google Scholar] [CrossRef]
  57. Yang, X.; Tridandapani, S.; Beitler, J.J.; Yu, D.S.; Yoshida, E.J.; Curran, W.J.; Liu, T. Ultrasound GLCM texture analysis of radiation-induced parotid-gland injury in head-and-neck cancer radiotherapy: An in vivo study of late toxicity. Med. Phys. 2012, 39, 5732–5739. [Google Scholar] [CrossRef] [PubMed]
  58. Hoekstra, N.J.; Suter, M.; Finn, J.A.; Husse, S.; Lüscher, A. Do belowground vertical niche differences between deep- and shallow-rooted species enhance resource uptake and drought resistance in grassland mixtures? Plant Soil 2015, 394, 21–34. [Google Scholar] [CrossRef]
  59. Hofer, D.; Suter, M.; Haughey, E.; Finn, J.A.; Nyncke, J.; Hoekstra, N.J.; Buchmann, N.; Lüscher, A. Yield of temperate forage grassland species is either largely resistant or resilient to experimental summer drought. J. Appl. Ecol. 2016, 53, 1023–1034. [Google Scholar] [CrossRef]
  60. Tahir, M.; Li, C.; Zeng, T.; Xin, Y.; Chen, C.; Javed, H.H.; Yang, W.; Yan, Y. Mixture Composition Influenced the Biomass Yield and Nutritional Quality of Legume–Grass Pastures. Agronomy 2022, 12, 1449. [Google Scholar] [CrossRef]
Figure 1. Study site; (A): location in Germany, (B): field location in suburban Osnabrück (Lower Saxony), (C): orthophoto with sampling area distribution (left) and example samples (right).
Figure 2. Climatic conditions during the field campaign and observation dates. Data were taken from the local weather station at Belm [33].
Figure 3. Workflow summary. Beige boxes describe processing steps. Key data sets resulting from individual processing steps are shown in green boxes. Boxes in blue, purple and red describe the final data sets for classification.
Figure 4. Scheme of the wrapper analysis. In the first iteration, RF classifiers are trained with one band each and validated using the test data from the training samples to identify the band with the highest OOB score (MaxOOBIter1). In the following step, MaxOOBIter1 is combined with each of the remaining bands individually, and the RF is then tested again on all band combinations. B1 to Bn (beige boxes) represent the individual bands tested in one iteration step. P (blue boxes) describes the number of bands considered in the respective iteration step.
Figure 5. Results of wrapper analysis of the original bands. OOB scores of the individual bands or band combinations are shown. The iteration step is equal to the number of combined bands. Lines represent the development of the maximum OOB score. In stage 3, the RedEdge band is covered by the NIR band in the first iteration.
Figure 6. Results of the wrapper analysis of the Haralick texture features. The features were combined individually with the original bands. Added features with the highest OOB score in each iteration are shown in color. Gray symbols represent features with the lowest OOB score. The iteration step is equal to the number of combined features. Lines represent the development of the maximum OOB score.
Figure 7. Comparison of overall accuracies for the classifications from the five repetitions based on the three datasets. Overall accuracies of the five individual validations are marked with white triangles.
Figure 8. Classification results for the three phenological stages based on the combined dataset of original bands and the selected texture features (TBSI).
Table 1. Dates from field campaign for data acquisition and corresponding phenological information.
| Stage | Date | Days after Mowing | Phenological Stage Grass | Phenological Stage Clover |
|---|---|---|---|---|
| Stage 1 | 27 July | 11 | Begin of tillering | Begin of tillering |
| Stage 2 | 11 August | 25 | End of tillering | Begin of flowering |
| Stage 3 | 26 August | 40 | Ear emerging | Flowering |
Table 2. Features with the max OOB score of wrapper analysis of the original bands for each iteration. The feature with the highest (additive) OOB score in the corresponding iteration is shown.
| Iteration | Stage 1 | Stage 2 | Stage 3 |
|---|---|---|---|
| 1 | NIR | NIR | NIR |
| 2 | Red | Red | Red |
| 3 | Green | Green | Blue |
| 4 | RedEdge | RedEdge | RedEdge |
| 5 | Blue | Blue | Green |
Table 3. Features with maximum OOB score of wrapper analysis of the Haralick texture features. The feature with the highest (additive) OOB score in the corresponding iteration is shown. Only the first six iterations are shown.
| Iteration | Stage 1 Feature | Band | Stage 2 Feature | Band | Stage 3 Feature | Band |
|---|---|---|---|---|---|---|
| 1 | Cluster Prominence | NIR | Cluster Prominence | NIR | Cluster Prominence | NIR |
| 2 | Haralick Correlation | Red | Haralick Correlation | Red | Haralick Correlation | NIR |
| 3 | Haralick Correlation | NIR | Haralick Correlation | NIR | Haralick Correlation | Red |
| 4 | Energy | Red | Correlation | NIR | Cluster Shade | Red |
| 5 | Cluster Shade | NIR | Entropy | NIR | IDM | Red |
| 6 | Energy | NIR | Cluster Prominence | Red | Entropy | NIR |
Table 4. Mean F1 scores and interval limits of classes “Clover”, “Grass”, and “Others” for the classifications based on the original bands (OBs), the original bands combined with the stage-adapted texture bands (TBSA), and the original bands combined with the stage-independent texture bands (TBSI).
F1 score [%], mean ± interval limit:
| Stage | Class | OB | TBSA | TBSI |
|---|---|---|---|---|
| S1 | Clover | 77.8 ± 1.5 | 89.4 ± 1.3 | 88.7 ± 1.1 |
| S1 | Grass | 77.4 ± 1.2 | 87.9 ± 0.5 | 87.5 ± 1.4 |
| S1 | Others | 92.8 ± 0.7 | 94.8 ± 0.7 | 94.6 ± 0.6 |
| S2 | Clover | 79.6 ± 1.2 | 86.3 ± 1.1 | 86.8 ± 0.7 |
| S2 | Grass | 75.6 ± 1.3 | 80.6 ± 1.3 | 79.1 ± 1.4 |
| S2 | Others | 92.8 ± 0.6 | 92.9 ± 1.2 | 92.6 ± 0.7 |
| S3 | Clover | 86.2 ± 1.3 | 91.8 ± 1.2 | 91.7 ± 1.3 |
| S3 | Grass | 79.0 ± 1.3 | 83.8 ± 1.9 | 83.8 ± 1.8 |
| S3 | Others | 90.0 ± 1.1 | 91.0 ± 1.9 | 90.6 ± 1.0 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Nahrstedt, K.; Reuter, T.; Trautz, D.; Waske, B.; Jarmer, T. Classifying Stand Compositions in Clover Grass Based on High-Resolution Multispectral UAV Images. Remote Sens. 2024, 16, 2684. https://doi.org/10.3390/rs16142684
