Article

Comparison of Automatic Classification Methods for Identification of Ice Surfaces from Unmanned-Aerial-Vehicle-Borne RGB Imagery

by Jakub Jech 1,*, Jitka Komárková 1 and Devanjan Bhattacharya 2

1 Institute of System Engineering and Informatics, Faculty of Economics and Administration, University of Pardubice, Studentská 95, 532 10 Pardubice, Czech Republic
2 Bayes Centre, The University of Edinburgh, 47 Potterrow, Edinburgh EH8 9BT, UK
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(20), 11400; https://doi.org/10.3390/app132011400
Submission received: 1 July 2023 / Revised: 28 September 2023 / Accepted: 5 October 2023 / Published: 17 October 2023
(This article belongs to the Special Issue New Trends of GIS Technology in Environmental Studies)

Abstract

This article describes a comparison of pixel-based classification methods used to distinguish ice from other land cover types. The article focuses on processing RGB imagery, as it is very easy to obtain. The imagery was taken using UAVs and has a very high spatial resolution. Classical classification methods (ISODATA and Maximum Likelihood) and more modern approaches (support vector machines, random forests, and deep learning) were compared for image data classification. Input datasets were created from two distinct areas: Pond Skříň and the Baroch Nature Reserve. The images were classified into two classes: ice and all other land cover types. The accuracy of each classification was verified using Cohen's Kappa coefficient, with reference values obtained via manual surface identification. Deep learning and Maximum Likelihood were the best classifiers, with a classification accuracy of over 92% in the first area of interest. On average, the support vector machine was the best classifier for both areas of interest. The comparison of the selected methods, applied to highly detailed RGB images obtained with UAVs, demonstrates the potential of their utilization compared to imagery obtained using satellites or aerial technologies for remote sensing.

1. Introduction

Remote sensing is nowadays an essential tool for everyday needs [1], especially when combined with GIS tools, whether for logistics, spatial planning, or landscape monitoring [2,3,4]. The introduction of unmanned aerial vehicles (UAVs) as remote sensing carriers has opened a new era in the industry, especially for applications requiring a high spatial resolution.
Image classification is one of the essential functions of GIS [5]. It is used to process data and is, for example, an essential tool for identifying particular land cover types and monitoring their changes.
There are many methods for automatic image classification, and they can generally be divided into two groups: unsupervised automatic classification (learning without a teacher) and supervised classification (learning with a teacher).
The classification of images using automatic methods is in high demand, especially for precision agriculture [6], but it is also widely applied throughout other socio-economic and scientific fields. Along with a very high spatial resolution, high classification accuracy for particular land cover types is necessary [7,8]. Therefore, this article deals with this topic.
Over the last ten years, the classification of image data has been a very popular method of data processing across various industries, including scientific work. In the last five years, there has been a significant increase in the number of works dealing with the classification of imagery data; see Figure 1. The combination of image classification, "drone", and RGB is covered by fewer published works than the combination of deep learning with "drone" and RGB. The focus on particular classification methods is significantly lower; see Table 1 and Table 2. The term "drone" here stands for UAV, UAS, drone, RPAS, MAV, unmanned aerial vehicle, and unmanned aerial system. This limited attention is another reason for research in this area.
It is also important to mention that this article focuses on the automatic classification of RGB image data obtained from UAVs. Most remote sensing data are obtained as multispectral or hyperspectral. Such images contain from a few up to tens of spectral bands, each with its own specific spectral behaviour. Different bands offer advantages in the subsequent classification of an image; e.g., scanning in the NIR band is suitable for monitoring chlorophyll in vegetation.
The availability of UAVs is very high nowadays. They represent a non-invasive observation and data collection device. Smaller UAVs allow most users to retrieve their own data without any problems [9]. The significant advantage of UAVs is the data quality, namely a very high spatial resolution in cm × px−1. Smaller UAVs equipped with an RGB camera can also be considered a low-cost data acquisition solution [10]. The sensor is one of the main parameters that affect the price and quality of images.
When looking for relevant contributions in scientific databases, it is necessary to clarify the terminology [11,12,13] because many terms are in use. UAV, or unmanned aerial vehicle, is the technical and most widely used term for these machines; strictly speaking, a UAV designates only the machine itself. An unmanned aircraft system (UAS) describes the entire set; this designation is often used in Anglo-Saxon countries. The term drone, which comes from the French designation, can be found very often as well and has gained tremendous popularity among the public and in non-scientific contributions. The most formal and international term is remotely piloted aircraft system, or RPAS [14]. With the development of technology and the downsizing of the machines themselves, the term micro aerial vehicle (MAV) can also be found.
It is therefore necessary to consider these terms when searching for scientific contributions. Table 2 shows the frequency of occurrence of particular terms when searching the scientific databases WOS and SCOPUS. Figure 2 shows that it is appropriate to focus on the terms UAV, UAS, drone, and unmanned aerial system when searching; the other terms are not used as often. All the terms are often used in a supplementary way.
As mentioned earlier, image classification methods can be divided into two fundamental types. The first type is automatic image classification based on learning without a teacher, or unsupervised classification. This approach is based on clustering, i.e., on grouping pixels by their similarity to a given classification class. The second type is automatic classification based on learning with a teacher, or supervised classification. This approach is based on creating training sets with the correct classification outputs, i.e., the required classes. After training on the training set, the classifier is applied to the testing set [15].
Unsupervised classification methods are limited to clustering, so only a few methods for classifying image data exist. The most frequently used are ISODATA and the K-means method. Their advantage is that they are given only the number of clusters as an input criterion and then run on their own.
Supervised methods are based on learning with a teacher, thus on training and test sets. The required classification classes are selected on a training set, and the selected methods are trained on it. The trained classifiers are then applied to the test set, and the output is a classified image. A disadvantage is the error introduced when entering training areas, where overlaps of different classes may occur in the same data. The most frequently used supervised method is the Maximum Likelihood method, which has been used for a long time [16]. A newer supervised classification method is the support vector machine (SVM). In addition, deep learning methods are increasingly used for image classification in GIS and have been applied successfully in various studies: tree species classification using RGB imagery and deep learning [17] and the classification of fluvial scenes [18] may be given as examples. An automated pipeline based on deep learning was designed to identify particular animal species [19]. The identification of plant leaf diseases is another useful application of deep learning [20].
An accuracy assessment ([21], pp. 306–308) is essential to any classification project. It compares the classified image with another data source considered accurate or ground truth data. The ground truth can be collected in the field, which is time-consuming and expensive. The ground truth data can also be derived from interpreting high-resolution imagery, existing classified imagery, or GIS data layers. The ground truth data can be provided as a reference layer or created by manual identification. This process is time-consuming and requires knowledge of the research area.
The most common way to assess the accuracy of a classified map is to create a set of random points from the ground truth data and compare them with the classified data in a confusion matrix [22]. A confusion matrix, also known as an error matrix, is a specific table layout that allows the performance of an algorithm, typically a supervised one, to be visualized. Each row of the matrix represents the instances in an actual class, while each column represents the instances in a predicted class. For a binary problem, it is a table with two rows and two columns that reports the numbers of false positives, false negatives, true positives, and true negatives. It allows for a more detailed analysis than the mere proportion of correct classifications (accuracy).
The output of a confusion matrix must be summarized with a suitable measure. Cohen's Kappa index [23] measures the agreement beyond chance. It indicates the proportion of agreement beyond that expected by chance, that is, the achieved beyond-chance agreement as a proportion of the possible beyond-chance agreement.
The Cohen’s Kappa index is calculated by Equation (1), where Po is the relative observed agreement among raters; Pc is the hypothetical probability of chance agreement. The Kappa index is represented in the interval 0 to 1, where 1 means a maximum match.
K = (observed agreement − chance agreement) / (1 − chance agreement) = (P_O − P_C) / (1 − P_C)   (1)
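The accuracy assessment described above can be sketched in a few lines of Python. The confusion-matrix counts below are hypothetical and for illustration only; Po and Pc are computed exactly as in Equation (1).

```python
def cohens_kappa(matrix):
    """Cohen's Kappa from a confusion matrix.

    matrix[i][j] = number of reference points with actual class i
    that were predicted as class j.
    """
    total = sum(sum(row) for row in matrix)
    n = len(matrix)
    # Observed agreement Po: share of points on the diagonal.
    p_o = sum(matrix[i][i] for i in range(n)) / total
    # Chance agreement Pc: sum over classes of (row share * column share).
    p_c = sum(
        (sum(matrix[i]) / total) * (sum(row[i] for row in matrix) / total)
        for i in range(n)
    )
    return (p_o - p_c) / (1 - p_c)

# Hypothetical 2 x 2 matrix: rows = actual (ice, other), columns = predicted.
cm = [[45, 5],
      [8, 42]]
kappa = cohens_kappa(cm)  # (0.87 - 0.5) / (1 - 0.5) = 0.74
```

Here Po = (45 + 42)/100 = 0.87 and Pc = 0.5 × 0.53 + 0.5 × 0.47 = 0.5, so K = 0.74.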
The aim of this article is to compare selected automatic classification methods for ice detection from RGB imagery. ISODATA and Maximum Likelihood represent an older approach, while SVM, random forest, and deep learning represent a newer approach. The methods were used to classify land cover types into two classes (ice and all other land covers). Very-high-spatial-resolution UAV-borne datasets were used. The following infographics (see Figure 2) point out the importance of particular topics.

1.1. Iso Cluster

ISODATA computes class means evenly distributed in the data space and then iteratively clusters the remaining pixels using minimum-distance techniques [24]. ArcGIS software (version 10.3) implements this method in a tool called Iso Cluster.
The Iso cluster [25] tool uses a modified iterative optimization clustering procedure known as the migrating means technique. The algorithm separates all cells into the user-specified number of distinct unimodal groups in the multidimensional space of the input bands. The Iso prefix of the ISODATA clustering algorithm is an abbreviation for the iterative self-organizing way of performing clustering. This type of clustering uses a process in which, during each iteration, all samples are assigned to existing cluster centres and new means are recalculated for every class. The optimal number of classes to specify is usually unknown. The Iso Cluster algorithm is an iterative process for computing the minimum Euclidean distance when assigning each candidate cell to a cluster ([21], pp. 297–299).
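The migrating-means idea behind Iso Cluster can be illustrated with a minimal k-means loop in NumPy. This is a deliberate simplification, not ArcGIS's implementation, and the bright and dark pixel values below are synthetic stand-ins for UAV RGB data.

```python
import numpy as np

def kmeans_pixels(pixels, k, n_iter=20, seed=0):
    """Minimal k-means: assign pixels to the nearest mean, then migrate means."""
    rng = np.random.default_rng(seed)
    # Start from k randomly chosen pixels as initial cluster means.
    means = pixels[rng.choice(len(pixels), size=k, replace=False)].copy()
    for _ in range(n_iter):
        # Minimum Euclidean distance assignment of every pixel.
        dists = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each mean from its current members (the "migrating means").
        for j in range(k):
            if np.any(labels == j):
                means[j] = pixels[labels == j].mean(axis=0)
    return labels, means

# Synthetic stand-ins: bright "ice-like" pixels vs. darker "other" pixels.
rng = np.random.default_rng(1)
pixels = np.vstack([rng.normal(220, 10, size=(200, 3)),
                    rng.normal(80, 15, size=(200, 3))])

labels, means = kmeans_pixels(pixels, k=2)
```

As with Iso Cluster, the only required input criterion is the number of clusters; the loop then runs on its own.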

1.2. Maximum Likelihood

The Maximum Likelihood [16] classification assumes that the statistics for each class in each band are normally distributed and calculates the probability that a given pixel belongs to a specific class. Unless you select a probability threshold, all the pixels are classified. Each pixel is assigned to the class with the highest probability (the Maximum Likelihood). The pixel remains unclassified if the highest probability is smaller than the specified threshold.
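The per-class Gaussian model behind Maximum Likelihood classification can be sketched directly with NumPy (a simplification, not the ArcGIS implementation); the training pixels below are synthetic.

```python
import numpy as np

def fit_gaussian(samples):
    """Estimate a class's mean and covariance from its training pixels."""
    return samples.mean(axis=0), np.cov(samples, rowvar=False)

def log_likelihood(x, mean, cov):
    """Log-density of a multivariate normal at pixel x."""
    d = len(mean)
    diff = x - mean
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d * np.log(2 * np.pi) + logdet
                   + diff @ np.linalg.inv(cov) @ diff)

# Synthetic training pixels for the two classes (ice vs. all other covers).
rng = np.random.default_rng(0)
classes = [fit_gaussian(rng.normal(220, 10, size=(300, 3))),   # ice
           fit_gaussian(rng.normal(80, 15, size=(300, 3)))]    # other

# Assign a pixel to the class with the highest likelihood.
pixel = np.array([215.0, 210.0, 225.0])
scores = [log_likelihood(pixel, m, c) for m, c in classes]
label = int(np.argmax(scores))  # 0 = ice, 1 = other
```

A probability threshold, as mentioned above, would simply leave the pixel unclassified when the best score falls below a chosen cut-off.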

1.3. Random Trees

In its simplest form, a random forest [26] can be thought of as applying bagging and random feature subsets as a meta-classifier on top of a tree classifier. The random forest [27] classifier consists of a combination of tree classifiers. Each classifier is generated using a random vector sampled independently from the input vector, and each tree casts a unit vote for the most popular class to classify an input vector. The random forest introduces randomness of two types: each tree is built on slightly different rows, sampled with replacement from the original data (bagging), and each tree (or, in some cases, each split decision) is built using a small, randomly selected subset of columns [28].
Random trees [29] is the classification method used in ArcGIS software; it is based on the random forest.
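The two kinds of randomness described above can be sketched with scikit-learn's random forest (the ArcGIS random trees tool is a separate implementation); the labelled pixels below are synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic labelled pixels: 0 = ice-like (bright), 1 = other covers (darker).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(220, 10, size=(200, 3)),
               rng.normal(80, 15, size=(200, 3))])
y = np.array([0] * 200 + [1] * 200)

forest = RandomForestClassifier(
    n_estimators=100,      # number of voting trees, each on a bootstrap sample
    max_features="sqrt",   # random subset of bands considered at each split
    random_state=0,
).fit(X, y)

pred = forest.predict([[210.0, 215.0, 225.0], [75.0, 90.0, 85.0]])
```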

1.4. Support Vector Machine

A support vector machine (SVM) [30] is a supervised learning model with associated learning algorithms that analyse data for classification and regression analyses. SVM is one of the most robust prediction methods based on statistical learning frameworks, or the VC theory proposed by Vapnik (1995) and Chervonenkis (1974) [30]. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that assigns new examples to one category or the other, making it a non-probabilistic binary linear classifier. SVM [31] maps training examples to points in space to maximize the gap width between the two categories. New examples are then mapped into that space and predicted to belong to a category based on which side of the gap they fall into.
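The maximum-margin separation just described can be sketched with scikit-learn's SVM; the two pixel classes below are synthetic stand-ins for real training areas.

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic labelled pixels for the two categories (ice vs. other covers).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(220, 10, size=(200, 3)),
               rng.normal(80, 15, size=(200, 3))])
y = np.array([0] * 200 + [1] * 200)

# A linear SVM fits the maximum-margin separator between the two classes.
svm = SVC(kernel="linear").fit(X, y)

# New pixels are labelled by the side of the gap they fall on.
pred = svm.predict([[215.0, 220.0, 210.0], [70.0, 85.0, 90.0]])
```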

1.5. Deep Learning

Deep learning [32] is a subset of machine learning that uses several layers of algorithms in the form of neural networks. The input data are analysed through different layers of the network, with each layer defining specific features and patterns in the data. For example [33], if you want to identify features such as buildings and roads, the deep learning model can be trained with images of different buildings and roads, processing the images through layers within the neural network, and then finding the identifiers required to classify a building or road.
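The layered-learning idea can be illustrated with a small fully connected network on labelled pixels using scikit-learn. Real deep learning workflows (e.g., convolutional networks trained on image tiles, as in the ArcGIS tools) are far larger; this is only a toy sketch with synthetic data.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Synthetic labelled pixels, scaled to 0-1 as network inputs.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(220, 10, size=(300, 3)),
               rng.normal(80, 15, size=(300, 3))]) / 255.0
y = np.array([0] * 300 + [1] * 300)

# Two hidden layers: each layer learns features from the previous one.
net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=500,
                    random_state=0).fit(X, y)

pred = net.predict(np.array([[0.85, 0.84, 0.88],    # bright, ice-like pixel
                             [0.30, 0.35, 0.33]]))  # darker pixel
```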

2. State of the Art

The authors of one article [34] used a different approach to image classification. The advantage of that work is that the classification methods were applied to RGB data from a UAV. The Iso Cluster and Maximum Likelihood methods are older but widely used and sometimes provide good results. Images were classified into three classes at two points in time. The unsupervised classification achieved 46% accuracy (the worst result), and the supervised classification achieved 92% (the best).
The article [18] compares selected supervised classification methods for classifying fluvial scenes. The data were RGB images from an existing dataset formed by 11 rivers worldwide (Japan, Italy, Canada, the UK, and Costa Rica). In the first phase, the authors compared Maximum Likelihood, random forest, and a multilayer perceptron, with F1 scores of 70–80%. In the second phase, they used the CSC method and obtained F1 scores of 92–95%. This demonstrates the suitability of deep learning and methods based on it.
The study [35] introduces a technique that uses UAV-acquired RGB images coupled with ground information for a reliable and fast estimation of sugarcane yield for two popular varieties in Thailand. The first two approaches (OBIA and ExG) are of interest here. OBIA provided the better results, with 92% and 96% accuracy; the pixel-based method provided 84% and 88% accuracy. The utilization of a UAV with a standard RGB camera together with the OBIA method is the contribution of this study. The OBIA method was identified as a good classifier for heterogeneous data.
The primary goal of the study [36] was to measure the accuracy of the selected methods (SVM, ML) in assessing historical, current, and future land use and land cover patterns. The data, covering the river Birupa, were obtained from Landsat with a 30 m spatial resolution, and the methods were compared. The results show that both classifiers have identical accuracy for some land covers (built-up area and fallow land), while for other land cover types the results differ considerably. According to the Kappa statistics, the SVM is the better classifier, with an 86% average accuracy.
This article [37] describes an automatic method of identifying soil and water conservation measures from the centimetre-resolution imagery of UAVs. The authors chose an object-based image analysis (OBIA) approach, machine learning models, and a support vector machine (SVM). The data were obtained from a UAV with a low-cost RGB camera; the images have a very high spatial resolution in cm × px−1. After obtaining the data, they used vegetation indices as the first step and obtained 91% accuracy.
The article [38] classifies sea ice from high-resolution observations. The data were collected during OIB Arctic summer campaigns at the nominal flight altitude. RGB images were obtained from a Canon EOS 5D camera with a high spatial resolution of 10 cm × px−1 and classified into the selected classes using histograms. The article focused on classifying ice and water in the RGB spectrum.
The article [39] investigates the potential of SAR imagery for sea ice classification. The imagery was classified using two approaches (pixel-based and region-based). The classification accuracy for ice ranges from 80 to 90%, depending on the set of classification classes used.
The classification of sea ice images was the aim of the authors of [40]. In that article, hyperspectral data obtained from satellites were used. Deep learning, specifically 3D-CNNs, was used as the classifier, reaching a very high accuracy of around 98%.
The authors of another study [41] focused on differentiating snow and rock in colour (RGB) imagery. They used unsupervised and supervised approaches. The Maximum Likelihood classification and a new approach, Polynomial Thresholding, were used as supervised methods. Both supervised approaches reached an accuracy of around 95%.

3. Data Collection, Preprocessing and Methodology

The procedure used in this study consisted of the following steps:
  • Data collection:
    ◦ Flight planning
    ◦ Flight itself
  • Data preprocessing—mosaic creation
  • Data processing—classifications
  • Results visualization
The aim was to compare the used methods; no specific order of the classification methods was set.

3.1. Used Hardware and Software

Experiments, calculations, and visualization were performed on a machine with an Intel Core i7 at 2.6 GHz and 16 GB of RAM, which represents acceptable computational requirements for the used data. The only exception is deep learning, whose performance can be improved considerably by using a GPU (Nvidia GTX 960): the processing time dropped from 6 h for a CPU-only calculation to 30 min when both the CPU and GPU were used.
DJI GS PRO, ENVI OneButton, and ArcGIS software were used for the following: DJI GS PRO (on iOS, version 2.0.13) was used to plan and control flights, and ENVI OneButton (version 5.0.0.181) was used to create mosaics from collected images. ArcGIS for Desktop (version 10.6) and ArcGIS PRO (version 2.7) are GIS software used for image classification by unsupervised and supervised methods and for accuracy assessment.

3.2. Area of Interest

This study is focused on two small areas during the winter term. The first is situated close to the village of Neratov (next to the city of Lázně Bělohrad and near Pardubice) in the Pardubický region, the Czech Republic. The north-western part of Pond Skříň was monitored (see Figure 3). The total area of the pond is 269,000 m2, and the pond is used for fish farming. Pond Baroch was chosen as the second area of interest (see Figure 4). Pond Baroch is part of a nature reserve with registration number 1926. The pond is situated south to southwest of the village of Hrobice, near Pardubice. The Regional Office of the Pardubice Region manages the area. The size of the nature reserve is around 30,000 m2. The reserve is protected for its silting-up pond, adjacent reeds, forest and meadow communities, and its significance as an ornithological locality.

4. Data Collection

A Phantom 3 Pro was used for the data collection. The characteristics of this UAV are as follows: weight 1216 g; four motors; maximum flight time 25 min. Flights with the UAV were performed under suitable weather conditions. The drone contains a built-in ultra HD camera with an f/2.8 lens, a 20 mm focal length, and a viewing angle of 94 degrees. The built-in camera is attached to the drone by a three-axis gimbal. The drone itself costs approx. EUR 700–800, so it can be considered a low-cost solution [43].
A planned flight was used for comprehensive data collection. The planned flight ensures low data redundancy, selects the optimal points for capturing images (waypoints), and ensures a smooth flight over the desired area. The flight was planned in DJI GS PRO for iPad (version 2.0.16) according to the rules for obtaining high-quality data [43]. The data from the planned flight were collected as individual images with a 60% mutual overlap, at a 60 m altitude, and with a 90-degree camera angle (perpendicular to the ground); see Figure 5. In addition, the flight itself is subject to legislation on the operation of unmanned aerial vehicles; the rules on flying are administered by the Czech Civil Aviation Authority (ÚCL) [44].

5. Data Preprocessing and Processing

After obtaining the data, the next step is preprocessing: the individual images must be composed into a mosaic, for which Icaros OneButton was used. Atmospheric correction did not need to be performed because the flights took place at a low altitude and under very similar lighting conditions (sunny, at the beginning of the same month, around 10 a.m.). Figure 6 and Figure 7 show selected frames of the individual areas of interest at 2.6 cm × px−1. Processing was based on the default settings of the methods, with some modifications; see Table 3.

6. Results and Discussion

ArcGIS PRO was used for the calculations for all the methods. ArcGIS for Desktop was used to perform the accuracy assessment and data visualization. The calculations of the Kappa index were carried out in Supplementary File S1 to make the calculation easier.
The following methods were used: Iso cluster, Maximum Likelihood, random trees, support vector machine (SVM), and deep learning (supported by ArcGIS PRO version 2.6). Both collected images were used for image classification by the selected methods. The methods were set to classify land covers into two classes only, allowing their comparison from the point of view of differentiating the ice surface from all other land covers. Thus, the chosen classes were ice and all other covers. Generally, supervised methods provide better results, but this depends strongly on the selected training sets. In the case of Skříň, deep learning was the best classifier, with a 92% classification success; see Figure 8. The worst classification method was the Iso cluster, with a 69% classification success; see Figure 9. In the case of Baroch, Maximum Likelihood was the best classifier, with a 93% classification success. The worst classifier was the Iso cluster, with an 88% classification success.
There was a problem with the classification of the island edges in the Baroch image; see Figure 10. The edges consisted of frozen vegetation and pieces of pure ice. In the RGB spectrum, this combination looks like pure ice and has almost the same pixel colour as ice. In this case, the worst method had a slightly worse result.
Table 4 shows the results of all the classification methods. The support vector machine can be considered as the best method on average. The SVM provided the second-best results in both areas, with an accuracy of around 90%.
The results of all classifications for both areas of interest are visualized in Figure 11.
The results are in agreement with other authors. Overall, the supervised methods provide better results than the unsupervised methods. The classification accuracy of the supervised methods is around 90%, in line with the article [34]. Remarkably, the deep learning method provided the highest level of accuracy in our study. This result aligns with other authors [37,40]. The unsupervised method provided worse results than supervised methods, and its accuracy was around 70%. On the other hand, unsupervised methods can provide similar results to supervised methods in specific cases of input data (e.g., homogeneous areas).

7. Conclusions and Future Work

Today, it is essential to classify particular land cover types to support environmental protection, water management, and sustainable development. This article is focused on UAV-borne very-high-spatial-resolution monitoring of the ice surface.
UAVs can provide very-high-spatial-resolution data on demand, of course, with respect to the legal and weather conditions. On the other hand, cheaper UAVs may be equipped with a standard RGB camera only. In this case, not all standard remote sensing methods can be used for data processing; e.g., many spectral indices require other bands; RGB bands are insufficient.
The article provides a comparison of various pixel-based classifications, which were used to classify RGB imagery. The classification aimed to identify and distinguish the ice surface from the other land cover types.
The direct focus on identifying only ice surfaces represents a fundamental limitation of this study. We did not distinguish between different types of ice (e.g., dry and wet). RGB bands do not allow the utilization of many spectral indices and the full range of the reflectance curve.
Future work can expand on this work by including object-based classification methods, e.g., OBIA or an object-based version of deep learning, and comparing these methods.
Next, there is space to improve the deep learning training phase to prevent classification mistakes, e.g., based on frozen edges.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/app132011400/s1, File S1: The results of searches in the WoS and Scopus databases during the literature review phase.

Author Contributions

Conceptualization, J.K.; Methodology, J.J. and D.B.; Software, J.J.; Validation, J.J. and D.B.; Formal Analysis, J.J.; Investigation, J.J.; Resources, J.J. and D.B.; Data Curation, J.J.; Writing—Original Draft Preparation, J.J. and J.K.; Writing—Review and Editing, J.K. and D.B.; Visualization, J.J.; Supervision, J.K.; Project Administration, J.K. All authors have read and agreed to the published version of the manuscript.

Funding

The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: this article was supported by the Student Grant Competition of the University of Pardubice (grant No. SGS_2023_013).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The results of searches in the WoS and Scopus databases during the literature review phase are available in the attached Supplementary File S1.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kamilaris, A.; Pitsillides, A. The impact of remote sensing on the everyday lives of mobile users in urban areas. In Proceedings of the 7th International Conference on Mobile Computing and Ubiquitous Networking (ICMU), Singapore, 6–8 January 2014; pp. 153–158. [Google Scholar]
  2. Gao, J.; Liu, Y. Applications of remote sensing, GIS and GPS in glaciology: A review. Prog. Phys. Geogr. 2001, 25, 520–540. [Google Scholar] [CrossRef]
  3. Weng, Q. A remote sensing–GIS evaluation of urban expansion and its impact on surface temperature in the Zhujiang Delta, China. Int. J. Remote Sens. 2001, 22, 1999–2014. [Google Scholar]
  4. Sedlák, P.; Komárková, J.; Jech, J.; Mašín, O. Low-cost UAV as a Source of Image Data for Detection of Land Cover Changes. J. Inf. Syst. Eng. Manag. 2019, 4, em0095. [Google Scholar] [CrossRef] [PubMed]
  5. Lillesand, T.; Kiefer, R.W.; Chipman, J. Remote Sensing and Image Interpretation; John Wiley & Sons: Hoboken, NJ, USA, 2015. [Google Scholar]
  6. Mushtaq, S. Smart agriculture system and image processing. Int. J. Adv. Res. Comput. Sci. 2018, 9, 351–353. [Google Scholar] [CrossRef]
  7. Foody, G.M. Harshness in image classification accuracy assessment. Int. J. Remote Sens. 2008, 29, 3137–3158. [Google Scholar] [CrossRef]
  8. Comber, A.; Fisher, P.; Brunsdon, C.; Khmag, A. Spatial analysis of remote sensing image classification accuracy. Remote Sens. Environ. 2012, 127, 237–246. [Google Scholar] [CrossRef]
  9. Aicardi, I.; Chiabrando, F.; Grasso, N.; Lingua, A.M.; Noardo, F.; Spanò, A. UAV photogrammetry with oblique images: First analysis on data acquisition and processing. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 41, 835–842. [Google Scholar] [CrossRef]
  10. Sedlák, P.; Komárková, J.; Mašín, O.; Jech, J. The Procedure for Processing Images from a Low-cost UAV. In Proceedings of the 14th Iberian Conference on Information Systems and Technologies (CISTI), Coimbra, Portugal, 19–22 June 2019; pp. 1–4. [Google Scholar]
  11. Fahlstrom, P.; Gleason, T. Introduction to UAV Systems; John Wiley & Sons: Hoboken, NJ, USA, 2012. [Google Scholar]
  12. AltiGator. Drone, UAV, UAS, RPA or RPAS …. Available online: https://altigator.com/en/drone-uav-uas-rpa-or-rpas/ (accessed on 14 June 2022).
  13. Granshaw, S.I. RPV, UAV, UAS, RPAS… or just drone? Photogramm. Rec. 2018, 33, 160–170. [Google Scholar] [CrossRef]
  14. ICAO. Unmanned Aviation. Available online: https://www.icao.int/safety/UA/UASToolkit/Pages/FAQ.aspx (accessed on 14 June 2022).
  15. Zhao, Z.; Liu, H. Spectral feature selection for supervised and unsupervised learning. In Proceedings of the 24th International Conference on Machine Learning, Corvalis, OR, USA, 20–24 June 2007; pp. 1151–1157. [Google Scholar]
  16. Stigler, S.M. The epic story of maximum likelihood. Stat. Sci. 2007, 22, 598–620. [Google Scholar] [CrossRef]
  17. Nezami, S.; Khoramshahi, E.; Nevalainen, O.; Pölönen, I.; Honkavaara, E. Tree species classification of drone hyperspectral and RGB imagery with deep learning convolutional neural networks. Remote Sens. 2020, 12, 1070. [Google Scholar] [CrossRef]
  18. Carbonneau, P.E.; Dugdale, S.J.; Breckon, T.P.; Dietrich, J.T.; Fonstad, M.A.; Miyamoto, H.; Woodget, A.S. Adopting deep learning methods for airborne RGB fluvial scene classification. Remote Sens. Environ. 2020, 251, 112107. [Google Scholar] [CrossRef]
  19. Bothmann, L.; Wimmer, L.; Charrakh, O.; Weber, T.; Edelhoff, H.; Peters, W.; Nguyen, H.; Benjamin, C.; Menzel, A. Automated wildlife image classification: An active learning tool for ecological applications. Ecol. Inform. 2023, 77, 102231. [Google Scholar] [CrossRef]
  20. Alqahtani, Y.; Nawaz, M.; Nazir, T.; Javed, A.; Jeribi, F.; Tahir, A. An improved deep learning approach for localization and Recognition of plant leaf diseases. Expert Syst. Appl. 2023, 230, 120717. [Google Scholar] [CrossRef]
  21. Tempfli, K.; Huurneman, G.; Bakker, W.; Janssen, L.L.; Feringa, W.F.; Gieske, A.S.M.; Grabmaier, K.A.; Hecker, C.A.; Horn, J.A.; Kerle, N.; et al. Principles of Remote Sensing: An Introductory Textbook; International Institute for Geo-Information Science and Earth Observation: Enschede, The Netherlands, 2009. [Google Scholar]
  22. Powers, D.M.W. Evaluation: From precision, recall and F-measure to ROC, informedness, markedness and correlation. arXiv 2020, arXiv:2010.16061. [Google Scholar]
  23. Sim, J.; Wright, C.C. The kappa statistic in reliability studies: Use, interpretation, and sample size requirements. Phys. Ther. 2005, 85, 257–268. [Google Scholar] [CrossRef]
  24. Abbas, A.W.; Minallh, N.; Ahmad, N.; Abid, S.A.R.; Khan, M.A.A. K-Means and ISODATA clustering algorithms for landcover classification using remote sensing. Sindh Univ. Res. J. SURJ (Sci. Ser.) 2016, 48, 315–318. [Google Scholar]
  25. Norzaki, N.; Tahar, K.N. A comparative study of template matching, ISO cluster segmentation, and tree canopy segmentation for homogeneous tree counting. Int. J. Remote Sens. 2019, 40, 7477–7499. [Google Scholar] [CrossRef]
  26. Pal, M. Random forest classifier for remote sensing classification. Int. J. Remote Sens. 2005, 26, 217–222. [Google Scholar] [CrossRef]
  27. Breiman, L. Classification and Regression Trees; CRC Press: Boca Raton, FL, USA, 1984. [Google Scholar]
  28. Chen, W.; Zhang, S.; Li, R.; Shahabi, H. Performance evaluation of the GIS-based data mining techniques of best-first decision tree, random forest, and naïve Bayes tree for landslide susceptibility modeling. Sci. Total Environ. 2018, 644, 1006–1018. [Google Scholar] [CrossRef]
  29. ArcGIS PRO. Train Random Trees Classifier. Available online: https://pro.arcgis.com/en/pro-app/latest/tool-reference/spatial-analyst/train-random-trees-classifier.htm (accessed on 14 June 2022).
  30. Lazar, A.; Shellito, B.A. Classification in GIS using support vector machines. In Handbook of Research on Geoinformatics; IGI Global: Hershey, PA, USA, 2009; pp. 106–112. [Google Scholar]
  31. ArcGIS PRO. Train Support Vector Machine Classifier. Available online: https://desktop.arcgis.com/en/arcmap/10.3/tools/spatial-analyst-toolbox/train-support-vector-machine-classifier.htm (accessed on 14 June 2022).
  32. Kentsch, S.; Cabezas, M.; Tomhave, L.; Groß, J.; Burkhard, B.; Lopez Caceres, M.L.; Waki, K.; Diez, Y. Analysis of UAV-Acquired Wetland Orthomosaics Using GIS, Computer Vision, Computational Topology and Deep Learning. Sensors 2021, 21, 471. [Google Scholar] [CrossRef]
  33. ArcGIS PRO. What Is Deep Learning. Available online: https://pro.arcgis.com/en/pro-app/latest/help/analysis/deep-learning/what-is-deep-learning-.htm (accessed on 14 June 2022).
  34. Komarkova, J.; Jech, J. Processing UAV Based RGB Data to Identify Land Cover with Focus on Small Water Body Comparison of Methods. In Proceedings of the 15th Iberian Conference on Information Systems and Technologies (CISTI), Seville, Spain, 24–27 June 2020; pp. 1–6. [Google Scholar]
  35. Som-Ard, J.; Hossain, M.D.; Ninsawat, S.; Veerachitt, V. Pre-harvest sugarcane yield estimation using UAV-based RGB images and ground observation. Sugar Tech 2018, 20, 645–657. [Google Scholar] [CrossRef]
  36. Mondal, A.; Kundu, S.; Chandniha, S.K.; Shukla, R.; Mishra, P.K. Comparison of support vector machine and maximum likelihood classification technique using satellite imagery. Int. J. Remote Sens. GIS 2012, 1, 116–123. [Google Scholar]
  37. Zhang, Y.; Shen, H.; Xia, C. Automatic identification of soil and water conservation measures from centimeter-resolution unmanned aerial vehicle imagery. J. Soil Water Conserv. 2020, 75, 472–480. [Google Scholar] [CrossRef]
  38. Buckley, E.M.; Farrell, S.L.; Duncan, K.; Connor, L.N.; Kuhn, J.M.; Dominguez, R.T. Classification of sea ice summer melt features in high-resolution IceBridge imagery. J. Geophys. Res. Ocean. 2020, 125, e2019JC015738. [Google Scholar] [CrossRef]
  39. Sun, Y.; Carlström, A.; Askne, J. SAR image classification of ice in the Gulf of Bothnia. Int. J. Remote Sens. 1992, 13, 2489–2514. [Google Scholar] [CrossRef]
  40. Han, Y.; Gao, Y.; Zhang, Y.; Wang, J.; Yang, S. Hyperspectral sea ice image classification based on the spectral-spatial-joint feature with deep learning. Remote Sens. 2019, 11, 2170. [Google Scholar] [CrossRef]
  41. Burton-Johnson, A.; Wyniawskyj, N.S. Rock and snow differentiation from colour (RGB) images. Cryosphere Discuss. 2020, preprint. [Google Scholar] [CrossRef]
  42. ESRI. HERE, Garmin, Intermap, Increment P. Corp., GEBCO, USGS, FAO, NPS, NRCAN, GeoBase, IGN, Kadaster NL, Ordnance Survey; ESRI: Tokyo, Japan, 2022. [Google Scholar]
  43. DJI. Available online: https://www.dji.com/ (accessed on 14 June 2022).
  44. Unmanned Aircraft. Civil Aviation Authority Czech Republic. Available online: https://www.caa.cz/en/flight-operations/unmanned-aircraft (accessed on 14 June 2022).
Figure 1. Chart of the evolution of keyword searches focusing on image classification and classification methods (2015–2022).
Figure 2. Infographic showing the frequency of particular terms in the article.
Figure 3. Pond Skříň, data source: [42]. The red box bounds the area of interest.
Figure 4. The Baroch Nature Reserve, data source: [42]. The red box bounds the area of interest.
Figure 5. Screenshot from the flight-planning software, source: authors.
Figure 6. Area of interest—Skříň.
Figure 7. Area of interest—Baroch.
Figure 8. Skříň, the best classification.
Figure 9. Skříň, the worst classification.
Figure 10. Detail of Baroch's frozen edges.
Figure 11. Results of all classifications for both areas of interest.
Table 1. Numbers of records focusing on RGB UAV-borne data classification in WOS and SCOPUS databases (2000–2022).

| Searched Phrases in WOS and SCOPUS (2000–2022) | WOS | SCOPUS |
|---|---|---|
| image classification AND drone AND RGB | 561 | 878 |
| deep learning AND drone AND RGB | 564 | 885 |
| maximum likelihood AND drone AND RGB | 25 | 254 |
| random trees * AND drone AND RGB | 6 | 99 |
| iso cluster ** AND drone AND RGB | 1 | 27 |
| SVM *** AND drone AND RGB | 91 | 1873 |

The term drone represents UAV, UAS, drone, RPAS, MAV, unmanned aerial vehicle, unmanned aerial system; * random tree and random trees; ** Iso Cluster, ISODATA; *** SVM—support vector machine.
Table 2. Records in WOS and SCOPUS databases—summary for the terms.

| Key Word | WOS | SCOPUS |
|---|---|---|
| UAV | 28,423 | 57,575 |
| drone | 8308 | 19,923 |
| UAS | 6673 | 9839 |
| Unmanned aerial vehicle | 15,480 | 53,445 |
| Unmanned aerial system | 942 | 3391 |
| RPAS | 657 | 937 |
| MAV | 2476 | 6692 |
Table 3. Parameters of methods.

| Method | Parameters |
|---|---|
| Manual Identification | Manual vectorization of areas into 2 classes |
| Iso Cluster | Classification into 2 classes; 20 iterations |
| Maximum Likelihood | Train set based on manual selection of each class; 5 samples per class |
| Random Trees | Same train set as Maximum Likelihood; number of trees: 50; tree depth: 30; maximum number of samples per class: 1000 |
| SVM | Same train set as Maximum Likelihood; number of samples per class: 500 |
| Deep Learning | Same train set as Maximum Likelihood; U-Net (convolutional neural network) pixel classification; ResNet-34 (convolutional neural network with 34 layers) as backbone model; 10 epochs |
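The Random Trees and SVM settings in Table 3 (50 trees of depth 30; 500 samples per class) can be mirrored in a short scikit-learn sketch. This is an illustrative reconstruction on synthetic RGB pixel values, not the authors' ArcGIS Pro workflow; the class means, scales, and the sample pixel below are invented for the example.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic RGB pixel samples: bright, bluish "ice" pixels vs. darker "other"
# land-cover pixels (means and scales are hypothetical).
n = 500  # samples per class, mirroring the SVM setting in Table 3
ice = rng.normal(loc=[200, 210, 225], scale=15, size=(n, 3))
other = rng.normal(loc=[90, 100, 80], scale=25, size=(n, 3))
X = np.vstack([ice, other])
y = np.array([1] * n + [0] * n)  # 1 = ice, 0 = other

# Train both pixel-based classifiers on identical training data,
# as Table 3 specifies a shared train set across supervised methods.
svm = SVC(kernel="rbf").fit(X, y)
rf = RandomForestClassifier(n_estimators=50, max_depth=30).fit(X, y)

# Classify one bright pixel with each model.
sample = np.array([[205, 215, 230]])
print(svm.predict(sample)[0], rf.predict(sample)[0])
```

In a real workflow, `X` would hold the RGB values of pixels sampled from the orthomosaic under the manually digitized training polygons, and the fitted models would be applied to every pixel of the scene.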
Table 4. Classification results—Kappa coefficient.

| Method | Pond Skříň | Pond Baroch |
|---|---|---|
| Manual identification—reference data | 1 | 1 |
| Iso Cluster | 0.6946 | 0.8842 |
| Maximum Likelihood | 0.8464 | 0.9312 |
| Random Trees | 0.8175 | 0.9175 |
| Support Vector Machine | 0.8940 | 0.9225 |
| Deep Learning | 0.9212 | 0.8889 |
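The Kappa values in Table 4 compare each classification against the manually identified reference surfaces. As a minimal sketch of the metric itself, Cohen's kappa can be computed from a confusion matrix of reference vs. predicted labels; the 2×2 counts below are hypothetical and are not the study's actual pixel counts.

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix.

    Rows index the reference labels, columns the predicted labels.
    kappa = (p_observed - p_expected) / (1 - p_expected)
    """
    confusion = np.asarray(confusion, dtype=float)
    total = confusion.sum()
    observed = np.trace(confusion) / total  # overall agreement
    # Chance agreement from the marginal label distributions.
    expected = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / total**2
    return (observed - expected) / (1.0 - expected)

# Hypothetical counts for the two classes (ice vs. all other land cover).
cm = [[900, 50],
      [40, 1010]]
print(round(cohens_kappa(cm), 4))  # → 0.9097
```

Equivalently, `sklearn.metrics.cohen_kappa_score` computes the same statistic directly from per-pixel label vectors.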
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.