Article

Oil Palm Tree Detection and Health Classification on High-Resolution Imagery Using Deep Learning

by Kanitta Yarak 1, Apichon Witayangkurn 2,*, Kunnaree Kritiyutanont 3, Chomchanok Arunplod 4 and Ryosuke Shibasaki 2

1 Department of Geoinformatics, Rambhai Barni Rajabhat University, Chanthaburi 22000, Thailand
2 Center for Spatial Information Science, The University of Tokyo, Chiba 277-8568, Japan
3 Department of Information and Communication Technologies, School of Engineering and Technology, Asian Institute of Technology, Pathumthani 12120, Thailand
4 Department of Geography, Faculty of Social Sciences, Srinakharinwirot University, Bangkok 10110, Thailand
* Author to whom correspondence should be addressed.
Agriculture 2021, 11(2), 183; https://doi.org/10.3390/agriculture11020183
Submission received: 28 January 2021 / Revised: 16 February 2021 / Accepted: 20 February 2021 / Published: 23 February 2021
(This article belongs to the Special Issue Artificial Neural Networks in Agriculture)

Abstract:
Combining modern technology and agriculture is an important consideration for the effective management of oil palm trees. In this study, an alternative method for oil palm tree management is proposed by applying high-resolution imagery, combined with Faster-RCNN, for automatic detection and health classification of oil palm trees. This study used a total of 4172 bounding boxes of healthy and unhealthy palm trees, constructed from 2000 pixel × 2000 pixel images. Of the total dataset, 90% was used for training and 10% was prepared for testing using Resnet-50 and VGG-16. Three techniques were used to assess the models’ performance: model training evaluation, evaluation using visual interpretation, and ground sampling inspections. The study identified three characteristics needed for detection and health classification: crown size, color, and density. The optimal altitude to capture images for detection and classification was determined to be 100 m, although the model showed satisfactory performance up to 140 m. For oil palm tree detection, healthy tree identification, and unhealthy tree identification, Resnet-50 obtained F1-scores of 95.09%, 92.07%, and 86.96%, respectively, with respect to visual interpretation ground truth and 97.67%, 95.30%, and 57.14%, respectively, with respect to ground sampling inspection ground truth. Resnet-50 yielded better F1-scores than VGG-16 in both evaluations. Therefore, the proposed method is well suited for the effective management of crops.

1. Introduction

The oil palm is one of Thailand’s most important economic crops, as it yields more oil than other oil-producing plants such as soybean, peanut, sunflower, and rapeseed. Palm oil can be processed into various products such as cooking oil, soap, margarine, and sweetened condensed milk. It is also used as a raw material in the manufacturing of biodiesel and pulp. Oil palm trees grow well in tropical climates, which are typically found in countries situated in equatorial regions. Thus, the oil palm is a crop that is widely cultivated by farmers in Southern Thailand.
Precision agriculture requires reliable data on the current situation at the right time. Therefore, the automated detection of oil palm trees and health disorder recognition is an alternative method for farmers to manage their resources using technology instead of a manual approach. The method also provides information on plant growth and health, which is especially useful to track the age and survival rate of plants that will contribute to the oil palm tree production in the future. Oil palm tree detection and enumeration are mostly performed using high-resolution imagery. For instance, many researchers have used high-resolution satellite images [1] and unmanned aerial vehicle (UAV) images [2]. UAVs have played a vital role in remote sensing in recent years as they can provide high-resolution images when there is no cloud cover. Users can set the altitude and time to fly. Aliero et al. used UAVs for automated counting of oil palm trees based on crown properties and the plants’ response to radiation. Moreover, spatial analysis and morphological analysis were also used in their study [3]. Daliman et al. used Haar-based rectangular windows and support vector machines (SVMs) to detect oil palm trees on the WorldView-2 satellite imagery dataset [4]. Manandhar et al. presented a methodology for object detection with aerial imagery by applying shape feature characteristics for oil palm tree detection and counting. They used circular autocorrelation of the polar shape matrix to represent images as the shape feature and used a linear SVM to standardize and reduce the feature dimensions. Finally, they used local maximum detection on the spatial distribution of standardized features for oil palm tree detection [5].
Deep learning is one of various machine learning approaches with a mechanism similar to that of the human brain, and it is commonly applied to analyze visual imagery. Recently, much attention has been paid to this method, and it has been applied in many fields, such as image recognition [6], handwriting recognition [7], and medical and healthcare applications [8]. Moreover, deep learning has also been applied to agricultural management to reduce production costs, resulting in more effective agricultural production. For example, it has been used to detect and enumerate crop plants, including the classification of diseased plants. Cheang et al. proposed a system for counting and locating oil palm trees using a convolutional neural network (CNN) to classify an oil palm dataset on high-resolution satellite images with a sliding window technique [9]. Li et al. proposed using deep learning to detect plants instead of manual detection methods. They used data from a manual count to train and improve the performance of the CNN system. Then, all samples were predicted on images using the sliding window technique [10]. Sladojevic et al. studied the recognition of plant disease patterns from leaf images using deep convolutional networks. The results of their study demonstrated the ability to distinguish diseased plants from healthy plants [11]. Mubin et al. used a geographic information system (GIS) and a CNN named LeNet on WorldView-3 images for young and mature oil palm detection [12]. They used a training dataset with a mini-batch size of 20 and used GIS software to display and create maps of oil palm tree predictions.
There are various popular deep learning architectures, such as recurrent neural networks (RNNs), long short-term memory networks (LSTMs), and CNNs. RNNs and LSTMs have similar capabilities and are widely used in time series forecasting, but LSTMs can be trained for tasks that require long-term memory. The CNN is the deep neural network most commonly used in computer vision and object detection [13]. Many CNN models are available in public repositories; VGG-16 [14] and Resnet-50 [15] are examples of CNN models commonly used for image classification. In recent years, region-based detectors were introduced to improve the speed and performance of object detection. The most common models include R-CNN, Fast R-CNN [16], and Faster R-CNN [17], with Faster R-CNN performing best among the three in terms of accuracy and detection speed.
The use of remote sensing in conjunction with these deep learning techniques is common in the agricultural industry in many countries. Most research on oil palm tree detection uses satellite imagery and focuses solely on detection and counting. For example, Zheng et al. used Faster-RCNN, one of the most popular networks for object detection, to detect tree crowns from satellite images [18]. In contrast, this study uses high-resolution images from UAVs instead, so data surveys can be performed without time restrictions and under any cloud conditions. Our study focuses on oil palm health classification (healthy or unhealthy) rather than just detection and automatic counting, as in other research. Therefore, it is useful for modern precision agriculture, which focuses on reducing farmer work processes and unnecessary production costs.
This study proposes a technique to automatically detect and count oil palm trees and recognize oil palm health from high-resolution images using CNN with Faster-RCNN structure. The training data included over 4000 images of individual oil palm trees with healthy and unhealthy classes. We evaluated the model by conducting model training evaluation and comparing prediction results with visual and ground inspections. The model was also tested with images taken at different altitudes.

2. Materials and Methods

2.1. Data Used

Detection and health classification of oil palm trees using deep learning require high-resolution images as well as field survey data, both to study the important physical characteristics of oil palm trees and to evaluate the reliability of the results. Thailand is the third largest producer of palm oil in the world [19], and Southern Thailand is the region with the most oil palm plantations. The study areas included three oil palm plantations, two located in Surat Thani and one in Krabi, as shown in Figure 1, with each plantation covering approximately 40 rai (64,000 square meters). Surat Thani and Krabi are the two provinces with the largest oil palm production, ranked first and second, respectively. The recommended practice for oil palm plantations is to allocate an area of 9 m × 9 m for each oil palm tree, which results in a regular pattern of oil palm positions within an area.
The dataset used in this study consists of high-resolution RGB images captured by a DJI Phantom 4 Pro UAV with a 20-megapixel camera using a D-Log color profile, as illustrated in Figure 2. All the images were captured at an altitude of 100 m, and the ground sampling distance (GSD) was approximately 2.8 cm.
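As a rough consistency check of the reported GSD, assuming the Phantom 4 Pro’s nominal camera specifications (a 13.2 mm wide 1-inch sensor, an 8.8 mm focal length, and a 5472-pixel image width, which are not stated in the paper), the standard GSD formula gives approximately the same value:

\[
\mathrm{GSD} = \frac{\text{sensor width} \times \text{flight altitude}}{\text{focal length} \times \text{image width}} = \frac{13.2\ \text{mm} \times 100\ \text{m}}{8.8\ \text{mm} \times 5472\ \text{px}} \approx 2.7\ \text{cm/px}
\]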
Moreover, field survey data were obtained by performing a health assessment using a paper survey and ground photography. Field observation data such as crown size, crown density, crown color, and examples of problematic oil palm trees were used to study the correlation between oil palm trees in plantations and UAV images.

2.2. Methodology

This section presents the research process used during this study. First, data collection is discussed, together with the data preparation process used to prepare the high-resolution image dataset for training and testing. Second, the method used to develop the training data and a model for counting and health classification is addressed. Last, model evaluation is covered, which included three techniques: model training inspection, visual inspection, and ground inspection.

2.2.1. Data Collection and Data Preparation

The study area survey was critical given that the survey data would be used to check and monitor changes in physical characteristics and oil palm tree health. The data collection process focused on three obvious external physical characteristics that could easily be observed: color (the level of green in the canopy), crown density, and crown size. Moreover, other general information such as age, height, diameter at breast height (DBH), nutritional deficiency symptoms, diseases at various levels, and the approximate amount of production was recorded for every sample tree. All the data above could be used as indicators of oil palm health and abnormalities; a tree under stress was also classified as unhealthy. Figure 3 shows examples of unhealthy trees. A laser range finder was used to measure height and crown size (along the x- and y-axes), and the DBH was measured using a tape measure or a rope with a scale. The sample of surveyed oil palm trees was chosen using a systematic sampling method covering approximately 25% of the total area: every fourth tree in a row was selected, starting with the first tree of the row.
The ground truth labels were acquired through sample observations using the following criteria: crown size was determined by measuring the crown’s radius, and crown color was assessed by classifying the amount of green in the canopy at different heights (lower, middle, and upper fronds) into three levels. We also separated crown density into three levels: low, moderate, and high, as shown in Figure 4. The measurements and labeling were assessed in collaboration with agricultural officers specializing in oil palms. To obtain additional information, a sample leaf from the 17th frond was collected for nutrient inspection in the laboratory. However, the aim of this study is to emphasize external indicators and to focus on large-scale measurements before considering other methods, such as the use of a multispectral camera or spectroradiometer for leaf inspection. Thus, obscure health conditions in oil palms may be less well detected, and this is one of the limitations of this study.
The preparation of the processed UAV images with red, green, and blue (RGB) bands began by converting the orthophotos from .TIF to .JPG format. The image size used for training and testing had a substantial effect on memory usage and processing time. Therefore, each image was split into equal-sized 2000 pixel × 2000 pixel images without taking image overlap into account, as shown in Figure 5.
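The tiling step can be reproduced with a few lines of Python. The sketch below assumes the orthophoto has already been exported as a .JPG and that Pillow is available; the file and directory names are hypothetical.

```python
# A minimal sketch of splitting an orthophoto into non-overlapping
# 2000 x 2000 pixel tiles, as described above.
from pathlib import Path
from PIL import Image

Image.MAX_IMAGE_PIXELS = None  # orthomosaics easily exceed Pillow's default pixel limit
TILE = 2000  # tile edge length in pixels

def split_into_tiles(jpg_path: str, out_dir: str = "tiles") -> None:
    img = Image.open(jpg_path)
    Path(out_dir).mkdir(exist_ok=True)
    width, height = img.size
    for top in range(0, height, TILE):
        for left in range(0, width, TILE):
            box = (left, top, min(left + TILE, width), min(top + TILE, height))
            img.crop(box).save(Path(out_dir) / f"tile_{top}_{left}.jpg")

split_into_tiles("plantation_orthophoto.jpg")  # hypothetical input file
```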

2.2.2. Training Data Development and Oil Palm Tree Classification Model

Automatic oil palm tree detection and counting based on deep learning techniques requires data that are already classified (labeled data). The data used for training were divided into two classes, i.e., healthy oil palm trees and unhealthy oil palm trees, as illustrated in Figure 6. The healthy class included healthy oil palm trees captured under various conditions, such as blurred images and trees with incomplete canopies (where more than 50% of the canopy was still present), as well as healthy trees overlapping other objects, other plants, and other backgrounds. The unhealthy class included unhealthy oil palm trees captured under conditions similar to those of the healthy dataset. To reduce the problem of detecting the same tree in multiple images, bounding boxes were drawn only around regions that enclosed more than 50% of the canopy.
During the preparation of the training set, including healthy and unhealthy oil palm trees, the palm trees were separated from other objects of no interest by finding the Xmin, Ymin, Xmax, and Ymax values of each canopy boundary. These values were determined using the program LabelImg by drawing a bounding box around each canopy and assigning the class label. The program generated the training dataset in the Pascal VOC .xml format, which was then converted into a text file (.txt) with each line containing “filepath, Xmin, Ymin, Xmax, Ymax, class_name” before being used for processing. Additionally, the testing dataset was prepared in the same way as the training dataset and accounted for 10% of the total dataset.
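The conversion from Pascal VOC .xml files to the comma-separated text format described above can be sketched as follows. Only the Python standard library is used; the directory names and the "healthy"/"unhealthy" class names are assumptions for illustration.

```python
# A minimal sketch of converting LabelImg's Pascal VOC annotations into
# "filepath,Xmin,Ymin,Xmax,Ymax,class_name" lines.
import glob
import xml.etree.ElementTree as ET

def voc_to_txt(xml_dir: str, image_dir: str, out_file: str) -> None:
    with open(out_file, "w") as out:
        for xml_path in sorted(glob.glob(f"{xml_dir}/*.xml")):
            root = ET.parse(xml_path).getroot()
            filename = root.findtext("filename")
            for obj in root.iter("object"):
                cls = obj.findtext("name")  # e.g., "healthy" or "unhealthy"
                box = obj.find("bndbox")
                coords = ",".join(box.findtext(tag) for tag in ("xmin", "ymin", "xmax", "ymax"))
                out.write(f"{image_dir}/{filename},{coords},{cls}\n")

voc_to_txt("annotations", "images", "train_annotations.txt")  # hypothetical paths
```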
According to Figure 7, the processing procedure for classifying and counting oil palm trees began with data collection and training data development, followed by the selection of the CNN architecture and parameter optimization. The CNN architecture was then trained and tested on the dataset using Python 3. Another criterion for architecture selection was the availability of the software, which enables people from multiple disciplines to benefit from the findings of this study.
As a result, Faster-RCNN was chosen as the underlying network architecture, and the two models selected for this research were VGG-16 and Resnet-50. These models are common options for Faster-RCNN and were mostly used as baseline models for further improvements. The models were implemented using the TensorFlow and Keras deep-learning libraries for Python3. Model training was performed on a PC running Windows 10 and equipped with a GeForce RTX 2080 Ti. The parameters were optimized using 1000 iterations per epoch with a batch size of 1, as supported by Faster RCNN. The training was halted at 39 epochs for the VGG-16 network and 40 epochs for the Resnet-50 network when no further increase in accuracy was noted.
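As an illustration of the setup described above, the two backbones can be instantiated in Keras as shared feature extractors; the region proposal network and detection heads belong to whichever Faster-RCNN implementation is used and are not reproduced here. This is a sketch under those assumptions, not the authors’ exact code.

```python
# A hedged sketch of instantiating the two backbones compared in this study.
import tensorflow as tf

def build_backbone(name: str = "resnet50") -> tf.keras.Model:
    """Return a convolutional feature extractor without the classification head."""
    if name == "resnet50":
        return tf.keras.applications.ResNet50(include_top=False, weights="imagenet")
    if name == "vgg16":
        return tf.keras.applications.VGG16(include_top=False, weights="imagenet")
    raise ValueError(f"unsupported backbone: {name}")

BATCH_SIZE = 1               # Faster-RCNN is typically trained one image at a time
ITERATIONS_PER_EPOCH = 1000  # as reported in the paper

backbone = build_backbone("resnet50")
print(backbone.output_shape)  # the feature map fed to the region proposal network
```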

2.2.3. Evaluating Model Performance

Model performance was evaluated in three parts. First, a preliminary assessment of the effectiveness of the model was made by observing the accuracy and loss values; after processing, the classifier accuracy for bounding boxes from the region proposal network (RPN), four loss values, and elapsed times were compared. Second, visual inspection evaluated the model’s accuracy by comparing the number of oil palm trees predicted by the model to the number counted on UAV imagery. The performance evaluation of an object detection model commonly uses precision, recall, and F1-score without considering intersection over union (IoU), which is more suitable for image segmentation evaluation. Moreover, generating labeled data for an IoU-based evaluation is time-consuming and difficult, especially for oil palm trees, because of their overlapping crown boundaries. Therefore, a confusion matrix was used to describe the achieved model classification in terms of true positives (TP), false negatives (FN), false positives (FP), and true negatives (TN). Then, the values from the confusion matrix were used to measure performance by calculating precision, recall, and F1-score, defined by the following equations.
\[
\mathrm{Precision} = \frac{TP}{TP + FP}
\]
\[
\mathrm{Recall} = \frac{TP}{TP + FN}
\]
\[
\mathrm{F1\text{-}score} = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}
\]
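These per-class metrics are straightforward to compute from raw counts; a minimal sketch follows, with illustrative numbers only.

```python
# Precision, recall, and F1-score from TP, FP, and FN counts, as defined above.
def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Illustrative counts for a single class (not taken from the paper):
p, r, f1 = precision_recall_f1(tp=90, fp=5, fn=10)
print(f"precision={p:.2%}, recall={r:.2%}, F1={f1:.2%}")
```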
The ground inspection was performed by comparing the predicted results from the CNN model with the physical characteristics of oil palm tree samples observed from surveying and ground photography. This method showed the consistency between the predicted results of the model and the oil palm trees in plantations. The result obtained from this evaluation was the ratio of all predicted oil palms to the total number of oil palms surveyed. Further, it compared the proportion of healthy and unhealthy oil palms predicted by the model to healthy and unhealthy oil palms from the survey.

3. Results and Discussion

3.1. Physical Characteristics and Data Preparation

The physical characteristics of oil palm trees are essential, as they indicate good or bad health. The health of oil palm trees on high-resolution imagery at vertical angles can be observed using three crucial characteristics: crown size, crown color, and crown density. The crown size relates to the age of the oil palm tree. Young oil palms (less than eight years) have a small crown size, becoming larger as they grow, and reaching full size at about eight years. Aside from the oil palm’s age, the canopy’s size can also be affected by exposure to Ganoderma disease. Concerning color, most healthy oil palm trees have a green canopy. In contrast, unhealthy oil palms tend to have different colored canopies ranging from yellowish green to brown, resulting from water or essential nutrient deficiency. Additionally, crown density is another important physical feature as the number of fronds can inform whether the oil palm tree is healthy or not. Specifically, healthy oil palm trees have very dense fronds and when viewed from a vertical angle it is almost impossible to see the ground below. However, problematic oil palm trees have fewer fronds; therefore, it is possible to see the ground below. Figure 8 presents a few examples of the important physical characteristics indicative of oil palm health.
The general characteristics of healthy, mature oil palm trees are usually dark green leaves, about 18–25 fronds, and a diameter of approximately 7.5 m [20]. Nutritional deficiencies are most often the cause for the deterioration of oil palm health. Nutrients that play an essential role in changing the physical characteristics of oil palms are nitrogen, phosphorus, potassium, magnesium, and boron. The epidemic disease in the oil palms, called Ganoderma, is also a significant problem that stunts growth and reduces production.
In this study, observing the physical characteristics on the UAV images and surveying oil palms on plots suggested that most of the oil palm trees were healthy. The healthy palm trees in the images appeared with large crown sizes, had a dark green color, and a higher frond density than unhealthy oil palms. However, the classification of health according to significant external features was determined by all the characteristics mentioned above. For example, a young oil palm with a small crown size and low density could be a healthy oil palm. On the contrary, oil palms with large crown sizes and low density were classified as unhealthy.
Furthermore, oil palm trees in the images that differed in color, crown density, or crown size from healthy oil palm trees were investigated further. The survey found that a yellow canopy in the images indicated an oil palm with nitrogen deficiency, while a greenish orange canopy indicated potassium deficiency. It was also found that most of the deficient palm trees lacked not just one nutrient but several. In trees with severe nutritional deficiencies, reductions in crown density and size were noted. Additionally, some oil palm trees appeared in the images with tiny crowns, low frond density, and light green to yellow colors. Field inspection revealed that these palm trees were affected by an advanced stage of Ganoderma, a disease that causes leaves to dry out and eventually drop down against the trunk, resulting in reduced crown size and density.
A dataset of high-resolution UAV images with visible (RGB) bands was used for training and testing the deep learning models. The images were taken in May 2019 at an altitude of 100 m with a spatial resolution of approximately 2.8 cm. The images of each study area were converted into .JPG format before being split into equally sized images of 2000 pixels × 2000 pixels each, a size determined to balance memory usage and processing performance. The dataset comprised 133 images in total, of which 116 were assigned to the training set and 17 to the test set.

3.2. Training Data Development and Model for Oil Palm Tree Classification

The training dataset contained the position of each oil palm tree, known as bounding boxes. From the 116 images in the training dataset, 3780 bounding boxes were created and used for training. The dataset comprised 3035 healthy oil palms and 745 unhealthy oil palms. The testing dataset contained 392 bounding boxes, with 325 healthy oil palms and 67 unhealthy oil palms sourced from 17 images. Table 1 shows a breakdown of the 4172 bounding boxes used in this study. Data augmentation was used to reduce the impact of the imbalanced data.
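The paper does not specify which augmentations were applied, so the sketch below shows just one common option for detection data: a horizontal flip that transforms both the image and its bounding boxes. It is illustrative only.

```python
# A horizontal flip for detection data: the image is mirrored and each
# (xmin, ymin, xmax, ymax) box is remapped accordingly.
import numpy as np

def hflip_with_boxes(image: np.ndarray, boxes: list[tuple[int, int, int, int]]):
    height, width = image.shape[:2]
    flipped = image[:, ::-1, :]
    flipped_boxes = [(width - xmax, ymin, width - xmin, ymax)
                     for (xmin, ymin, xmax, ymax) in boxes]
    return flipped, flipped_boxes

# Usage with a dummy 2000 x 2000 image and one box:
img = np.zeros((2000, 2000, 3), dtype=np.uint8)
aug_img, aug_boxes = hflip_with_boxes(img, [(100, 150, 400, 450)])
print(aug_boxes)  # [(1600, 150, 1900, 450)]
```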
This study used the same dataset to train and test two different base models, namely Resnet-50 and VGG-16, both built on the Faster-RCNN structure. The training and testing results were compared to assess the ability and efficiency of each model. Additionally, we conducted experiments to test automatic oil palm tree detection at different flying altitudes and settings. We flew the UAV at low altitudes of 50 m, 80 m, and 90 m, but image mosaicking was an issue: because palm trees have a similar repeating pattern and lower-altitude images contained mostly palm canopy, the mosaicking failed in many areas. Adjusting the overlap and sidelap parameters did not solve the issue. In addition to the flights at these altitudes, we performed flights using different color profiles and at different times of day to determine the best lighting. The experimental results showed that flying with a D-Log color profile before 3:00 PM was a suitable scenario.
The experiment was performed using UAV images taken at altitudes of 100, 120, 140, 160, 180, and 200 m. The results are shown in Figure 9, and Table 2 summarizes the experimental results.
Accuracy was evaluated by comparing the results with actual data derived from visual interpretation of the high-resolution images; thus, this assessment emphasizes counting the number of oil palms. Table 2 indicates that oil palm trees are best detected on UAV images taken at a 100 m altitude, with an accuracy of 99.53%. Therefore, images taken at a height of 100 m were used in this study to detect and classify oil palm tree health. After training, the models were tested using the test dataset, which was prepared to be approximately 10% of the total dataset. The Resnet-50 model predicted 331 healthy palms and 51 unhealthy palms, detecting a total of 382 oil palm trees in the 17 images. The VGG-16 model predicted 268 healthy palms and 103 unhealthy palms, detecting a total of 371 oil palm trees in the 17 images. The prediction results of each image for the two models are shown in Table 3, and example prediction images from Resnet-50 and VGG-16 are shown in Figure 10.
Table 4 shows the comparison of the actual counts and each model’s predictions. The results show that both models were useful in terms of oil palm tree detection (healthy and unhealthy) as the number of predicted palm trees is very close to the actual count, with the Resnet-50 model performing slightly better. When comparing health classification, the number of healthy and unhealthy oil palms detected by the Resnet-50 model was more accurate than those of the VGG-16 network. Further, the accuracy and precision ratios were calculated and will be discussed in the section evaluating model performance.
The initial assessment results also indicated that errors occurred when oil palm trees were obstructed by other tree canopies. Coconut trees and other trees with physical characteristics similar to those of oil palm trees were another cause of errors. The number of detectable and undetectable oil palm trees was also affected by palms located at the edges of an image, where part of the crown extended across two images. Moreover, crown size, especially in young oil palms with small crowns, was another cause of detection error. This was due to the small number of young oil palm samples with small crowns, as the study focused on the large number of mature oil palms in the study area. However, the errors related to crown size could be addressed by increasing the number of young oil palms in the training data. Figure 11 illustrates examples of misclassification cases.

3.3. Oil Palm Tree Classification Model Performance Evaluation

The model’s performance was evaluated in three main parts: assessing the accuracy of model training, and comparing the prediction results with visual interpretation and with field surveys, using precision, recall, and F1-score as measures.

3.3.1. Model Training Inspection

Figure 12 shows the evaluation of model performance focusing on accuracy and loss. The results indicate that the Resnet-50 network achieved higher performance than the VGG-16 network in all aspects, although the values of both models increased and decreased in a similar pattern. Therefore, evaluating model performance with multiple indicators is advised to assist in choosing an appropriate model.
Apart from accuracy and loss, the time required to train a model is another indicator of its practicality. Figure 13 compares the time spent on each epoch for the two models. From the graph, it can be observed that Resnet-50 required approximately 10 min less per epoch than VGG-16. Thus, the total processing times differed by about 5 h, with Resnet-50 taking approximately 40 h and VGG-16 approximately 45 h.

3.3.2. Visual Inspection

This section discusses model performance evaluated by comparing the results of both models with actual data derived from visual interpretation of high-resolution images of another area. As mentioned before, effectiveness was measured using precision, recall, and F1-score, as shown in Figure 14.
Each model’s performance was evaluated by dividing the prediction results into three classes: oil palm tree detection, healthy, and unhealthy. Overall, the Resnet-50 network showed superior performance when comparing F1-scores. However, the Resnet-50 model had lower precision than VGG-16 when identifying healthy oil palms: Resnet-50 achieved 91.24%, while VGG-16 reached 98.13%.

3.3.3. Ground Inspection

This evaluation was performed using the results of the better-performing model, i.e., Resnet-50. The results were compared with the field survey’s sampling data to verify the model’s predictions. In addition, this evaluation method determined the pattern of physical characteristics and associated symptoms that the model can detect and classify. A total of 251 oil palm tree samples were obtained from field survey sampling, of which 236 were healthy and 15 were unhealthy. The Resnet-50 model predicted 223 healthy palm trees and 6 unhealthy palm trees among these samples. The performance evaluation was performed by calculating precision, recall, and F1-score.
Figure 15 illustrates the performance of Resnet-50. The precision of the predicted results was 100% for both the oil palm tree and unhealthy oil palm tree classes, while the precision for the healthy oil palm tree class was 96.54%. For recall, the model achieved the highest percentage in the oil palm class, at 95.62%, with the healthy and unhealthy classes achieving 94.09% and 40.00%, respectively. For the F1-score, the oil palm tree class had the highest value, at 97.76%, followed by the healthy and unhealthy classes at 95.30% and 57.14%, respectively.
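As a consistency check of the unhealthy-class figures, the reported 100% precision and the 15 surveyed unhealthy palms imply 6 true positives, 0 false positives, and 9 false negatives, which reproduce the reported recall and F1-score:

\[
\mathrm{Precision} = \frac{6}{6+0} = 100\%,\qquad \mathrm{Recall} = \frac{6}{6+9} = 40.00\%,\qquad \mathrm{F1\text{-}score} = \frac{2 \times 1.00 \times 0.40}{1.00 + 0.40} \approx 57.14\%
\]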
Evaluating the model’s predictive performance demonstrated that the model was best suited for oil palm tree detection. The model showed predictive errors in health classification, especially for unhealthy oil palms. However, it can be used for the preliminary detection of health issues, since it is effective in identifying significant physical symptoms in palm trees.
According to the survey of the 15 unhealthy oil palms on the plots, six oil palms had nitrogen deficiency, 12 had potassium deficiency, and seven had boron deficiency; in addition, two oil palms suffered from magnesium deficiency and two had Ganoderma disease. Of the six palms predicted to be unhealthy, two were lacking nitrogen, four were lacking potassium, two had boron deficiency, and one had Ganoderma disease. From these results, it can be concluded that the most common problem among the unhealthy oil palm trees was a lack of potassium. However, an abnormal oil palm usually lacks not just one nutrient but several. In addition, Ganoderma is another significant factor affecting health classification, since it always shows apparent physical symptoms; based on the evaluation, the model accurately identified 50% of the oil palm trees affected by this disease.

4. Conclusions

Since oil palms are one of the most important economic crops in Thailand, developing technology that can help to manage and maintain them is important, as it can be a tool for farmers to become more efficient and increase their income. Currently, UAVs are widely used in agriculture because they can capture many high-spatial-resolution images in a short time, whereas traditional methods such as field surveying take much longer. This research studied the detection and health classification of oil palms using high-resolution imagery in conjunction with deep learning. Our study used Faster-RCNN for object detection and evaluated the Resnet-50 and VGG-16 models.
The research used three important physical characteristics for detection and health classification: crown size, crown color, and crown density. These characteristics could indicate the age of the oil palm, nutrient deficiencies, and the presence of an epidemic disease, named Ganoderma.
In evaluating model performance, the accuracy from model training indicated that the Resnet-50 model was more accurate than VGG-16, and had fewer errors. Moreover, training on Resnet-50 was approximately 5 h faster than on VGG-16. The evaluation of the test prediction results was done by comparing them with both visual interpretations and field survey results. Next, precision, recall, and F1-score were calculated and evaluated.
Based on the study results and performance assessments, it can be concluded that the Resnet-50 network performed better in detection and health classification than the VGG-16 network. Further, the analysis of the results highlighted that the unhealthy palm trees primarily suffered from potassium deficiency and Ganoderma infection. Additionally, the results showed that the model was often unable to detect young palm trees due to their smaller crown sizes.
In conclusion, our study showed that the proposed method can support the effective management of oil palm trees in Thailand. By tracking the number and health of oil palm trees, this method can reduce fieldwork and the number of laborers required. It can also help to reduce production costs, as treatments and fertilizer can be applied only in areas where they are needed. This will be beneficial to both farmers and organizations.
As a recommendation, the short processing time is an advantage of the Faster-RCNN structure, but it requires a large and varied training dataset. Therefore, we recommend that future studies increase the amount and variety of the datasets, including varied image sizes, as this will improve model performance. Our study found that errors in oil palm tree detection often occurred at the edges of images; consequently, increasing the overlap between images should reduce prediction errors. Our study focused on only three significant physical characteristics for oil palm tree health classification, all of which can be detected in RGB images, so only a preliminary classification of oil palm health could be performed. For future studies, a multispectral camera could be used to enhance health classification.

Author Contributions

Conceptualization, A.W. and K.Y.; methodology, A.W. and K.Y.; software, K.K.; investigation, C.A.; writing—original draft preparation, K.Y.; writing—review and editing, A.W.; supervision, R.S.; funding acquisition, C.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Thai Research Organization (TRON) of The National Research Council of Thailand (NRCT) and Agricultural Development Agency (ARDA).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Santoso, H.; Tani, H.; Wang, X. A simple method for detection and counting of oil palm trees using high-resolution multispectral satellite imagery. Int. J. Remote Sens. 2016, 37, 5122–5134.
2. Kalantar, B.; Idrees, M.O.; Mansor, S.; Halin, A.A. Smart Counting—Oil Palm Tree Inventory with UAV. Coordinates 2017, 13, 17–22.
3. Aliero, M.M.; Mansur, M.A.; Al-Doksi, J. The Usefulness of Unmanned Airborne Vehicle (UAV) Imagery for Automated Palm Oil Tree Counting. J. For. 2014, 1, 1–12.
4. Daliman, S.; Abu-Bakar, S.A.R.; Azam, S.H.M.N. Development of young oil palm tree recognition using Haar-based rectangular windows. IOP Conf. Ser. Earth Environ. Sci. 2016, 37, 12041.
5. Manandhar, A.; Hoegner, L.; Stilla, U. Palm Tree Detection Using Circular Autocorrelation of Polar Shape Matrix. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, III, 465–472.
6. Yang, W.; Liu, Q.; Wang, S.; Cui, Z.; Chen, X.; Chen, L.; Zhang, N. Down image recognition based on deep convolutional neural network. Inf. Process. Agric. 2018, 5, 246–252.
7. Perwej, Y.; Chaturvedi, A. Neural Networks for Handwritten English Alphabet Recognition. Int. J. Comput. Appl. 2011, 20, 1–5.
8. Tajbakhsh, N.; Shin, J.Y.; Gurudu, S.R.; Hurst, R.T.; Kendall, C.B.; Gotway, M.B.; Liang, J. Convolutional Neural Networks for Medical Image Analysis: Full Training or Fine Tuning? IEEE Trans. Med. Imaging 2016, 35, 1299–1312.
9. Cheang, E.K.; Cheang, T.K.; Tay, Y.H. Using Convolutional Neural Networks to Count Palm Trees in Satellite Images. arXiv 2017, arXiv:1701.06462.
10. Li, W.; Fu, H.; Yu, L.; Cracknell, A. Deep Learning Based Oil Palm Tree Detection and Counting for High-Resolution Remote Sensing Images. Remote Sens. 2016, 9, 22.
11. Sladojevic, S.; Arsenovic, M.; Anderla, A.; Culibrk, D.; Stefanovic, D. Deep Neural Networks Based Recognition of Plant Diseases by Leaf Image Classification. Comput. Intell. Neurosci. 2016, 2016, 1–11.
12. Mubin, N.A.; Nadarajoo, E.; Shafri, H.Z.M.; Hamedianfar, A. Young and mature oil palm tree detection and counting using convolutional neural network deep learning method. Int. J. Remote Sens. 2019, 40, 7500–7515.
13. Shrestha, A.; Mahmood, A. Review of Deep Learning Algorithms and Architectures. IEEE Access 2019, 7, 53040–53065.
14. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
15. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. arXiv 2015, arXiv:1512.03385.
16. Girshick, R. Fast R-CNN. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 1440–1448.
17. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. In Proceedings of the 28th International Conference on Neural Information Processing Systems, Montreal, QC, Canada, 8–13 December 2015; MIT Press: Cambridge, MA, USA, 2015; pp. 91–99.
18. Zheng, J.; Li, W.; Xia, M.; Dong, R.; Fu, H.; Yuan, S. Large-Scale Oil Palm Tree Detection from High-Resolution Remote Sensing Images Using Faster-RCNN. In Proceedings of the IGARSS 2019—2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 1422–1425.
19. Chuasuwan, C. Palm Oil Industry. 2018, pp. 4–8. Available online: http://www.tft-earth.org/wp-content/uploads/2017/02/TFT-Palm-Oil-Industry-Transformation-Paper.pdf (accessed on 10 March 2020).
20. Mandang, T.; Sinambela, R.; Pandianuraga, N.R. Physical and mechanical characteristics of oil palm leaf and fruits bunch stalks for bio-mulching. IOP Conf. Ser. Earth Environ. Sci. 2018, 196, 012015.
Figure 1. Location of study area.
Figure 2. Unmanned aerial vehicle (UAV) image of the study area.
Figure 3. Examples of unhealthy trees with nutrient deficiencies and Ganoderma.
Figure 4. Examples of ground truth observations.
Figure 5. UAV images with 2000 pixel × 2000 pixel size.
Figure 6. Examples from the (a) healthy and (b) unhealthy datasets.
Figure 7. Methodology workflow.
Figure 8. Examples of important physical characteristics of oil palm trees.
Figure 9. Oil palm tree detection at different flying altitudes.
Figure 10. Example of resulting images of testing with (a) Resnet-50 and (b) VGG-16.
Figure 11. Examples of (a) young oil palm, (b) oil palm under other trees, (c) coconut tree, and (d) other trees.
Figure 12. Comparison of model training performance.
Figure 13. Processing time.
Figure 14. Comparative performance of oil palm tree classification between Resnet-50 and VGG-16, compared with visual interpretation.
Figure 15. Comparative performance of Resnet-50 results and ground sampling.
Table 1. The total amount of bounding boxes for the training and testing datasets used in this study.

Dataset | Healthy | Unhealthy | Total
Number of training bounding boxes | 3035 | 745 | 3780
Number of testing bounding boxes | 325 | 67 | 392
Table 2. Automatic detection of oil palm trees at various altitudes.

Altitude (m) | Actual Oil Palm Trees | Detected Oil Palm Trees | Accuracy (%)
100 | 856 | 852 | 99.53
120 | 856 | 836 | 97.66
140 | 856 | 769 | 89.84
160 | 856 | 422 | 49.30
180 | 856 | 75 | 8.76
200 | 856 | 3 | 0.35
Table 3. The number of oil palm trees predicted by the Resnet-50 and VGG-16 networks for each test image.

Image | Resnet-50: Oil Palm | Resnet-50: Healthy | Resnet-50: Unhealthy | VGG-16: Oil Palm | VGG-16: Healthy | VGG-16: Unhealthy
1 | 42 | 42 | 0 | 40 | 40 | 0
2 | 41 | 41 | 0 | 41 | 41 | 0
3 | 39 | 39 | 0 | 36 | 36 | 0
4 | 13 | 13 | 0 | 9 | 7 | 2
5 | 5 | 5 | 0 | 4 | 3 | 1
6 | 19 | 19 | 0 | 22 | 17 | 5
7 | 11 | 11 | 0 | 18 | 12 | 6
8 | 9 | 8 | 1 | 9 | 7 | 2
9 | 45 | 21 | 24 | 41 | 8 | 33
10 | 27 | 17 | 10 | 26 | 14 | 12
11 | 1 | 1 | 0 | 8 | 3 | 5
12 | 40 | 27 | 13 | 33 | 12 | 21
13 | 23 | 20 | 3 | 21 | 13 | 8
14 | 3 | 3 | 0 | 3 | 3 | 0
15 | 12 | 12 | 0 | 11 | 3 | 8
16 | 12 | 12 | 0 | 12 | 12 | 0
17 | 40 | 40 | 0 | 37 | 37 | 0
Total | 382 | 331 | 51 | 371 | 268 | 103
Table 4. The comparison of results to the actual count.

Dataset | Oil Palm | Healthy Oil Palm | Unhealthy Oil Palm
Actual count | 392 | 325 | 67
Resnet-50 | 382 | 331 | 51
VGG-16 | 371 | 268 | 103