Article

Mapping of Land Cover with Optical Images, Supervised Algorithms, and Google Earth Engine

by Fernando Pech-May 1,*, Raúl Aquino-Santos 2, German Rios-Toledo 3 and Juan Pablo Francisco Posadas-Durán 4
1 Department of Computer Science, Instituto Tecnológico Superior de los Ríos, Balancán 86930, Tabasco, Mexico
2 Faculty of Telematics, University of Colima, 333 University Avenue, Colima 28040, Colima, Mexico
3 Tecnológico Nacional de México Campus Tuxtla Gutiérrez, Tuxtla Gutiérrez 29050, Chiapas, Mexico
4 Instituto Politécnico Nacional (IPN), Mexico City 07738, Mexico
* Author to whom correspondence should be addressed.
Sensors 2022, 22(13), 4729; https://doi.org/10.3390/s22134729
Submission received: 18 March 2022 / Revised: 2 June 2022 / Accepted: 7 June 2022 / Published: 23 June 2022
(This article belongs to the Special Issue Advances in Remote Sensors for Earth Observation)

Abstract:
Crops and ecosystems change constantly and face risks derived from heavy rains, hurricanes, droughts, human activities, climate change, and other factors that cause economic and social damage. Natural phenomena have destroyed crop areas, endangering food security, damaged the habitats of flora and fauna species, and flooded populated areas. To help address these problems, it is necessary to develop strategies that maximize agricultural production while reducing land wear, environmental impact, and contamination of water resources. Generating crop and land-use maps is advantageous for identifying suitable crop areas and collecting precise information about the produce. In this work, a strategy is proposed to identify and map sorghum and corn crops as well as land use and land cover. Our approach uses Sentinel-2 satellite images, spectral indices for the phenological detection of vegetation and water bodies, and supervised machine learning methods: support vector machine, random forest, and classification and regression trees. The study area is a tropical agricultural area with water bodies located in southeastern Mexico. The study was carried out from 2017 to 2019 and, considering the climate and growing seasons of the site, two seasons were defined for each year. Land use was classified as water bodies, land in recovery, urban areas, sandy areas, and tropical rainforest. Overall accuracy was 0.99 for the support vector machine, 0.95 for random forest, and 0.92 for classification and regression trees; the kappa index was 0.99, 0.97, and 0.94, respectively. The support vector machine obtained the lowest rate of false positives and the smallest margin of error, and it produced the best results in classifying land-use types and identifying crops.

1. Introduction

The World Bank considers food security one of the leading global concerns. However, in recent years, factors driven by climate change, such as fires, floods, and droughts, have put areas dedicated to food crops at risk, modifying crop cycles and reducing agricultural production [1]. In addition, the rapid increase in the world population has placed an unprecedented burden on agriculture, degrading farmland, water resources, and ecosystems and thus affecting food security [2]. It is estimated that by 2050, agricultural production will need to increase by 60% to ensure food and sustenance for the population [3].
Changes in land use caused by human activities influence the alteration of ecosystems [4]. Some organizations have proposed projects to improve crop yields but with environmentally sustainable agriculture, avoiding soil deterioration to address this situation [5,6].
Because Mexico has a diversity of climates and large extensions of farmland, agriculture is one of the country's main economic activities, and Mexico produces a great variety of agricultural products; agriculture accounts for 4% of the gross domestic product (GDP) [7]. In recent years, the demand for farmed foods has increased, causing overexploitation of natural resources. Extreme droughts and severe floods recorded in Mexico have destroyed large extensions of crops, reducing production. For this reason, it is important to obtain information and to map and identify crop areas, which enables developing strategies that counteract the effects of climate change on crops, promote sustainable agriculture, strengthen the agricultural sector, and evaluate projects already implemented, in addition to estimating agricultural production. This requires multitemporal data for monitoring crops, climate change, and human activities.
Currently, techniques and tools are being developed to monitor the Earth’s crust and determine changes in vegetation. Remote sensing is the science that collects information about the Earth’s surface, providing valuable data for land-use mapping, crop detection, etc. [8,9,10]. Artificial intelligence includes machine learning algorithms for land-use classification through satellite images [11].
Satellites orbiting the Earth provide unique information for research on natural disasters, climate change, crop monitoring, and more. They carry optical, infrared, and microwave sensors. Optical sensors provide high-resolution multispectral images. Microwave sensors provide SAR (Synthetic Aperture Radar) images with high resolution and can operate in any weather condition. Many approaches analyze land cover using optical images; however, these images may contain noise such as cloud cover. Other methods use SAR images, which require more processing.
Spectral data from optical sensors is highly correlated with the Earth’s surface, and image analysis algorithms are mainly based on visual data. Therefore, they are primarily used for land-cover analysis.
With the advancement of satellite programs, the spatial, temporal, and infrared spectral resolution have improved significantly. New indices have been developed for land-cover analysis.
This paper proposes a methodology to map corn and sorghum crops by Sentinel-2 satellite imagery, reflectance index calculations, and supervised machine learning methods. The study area belongs to the state of Tabasco, Mexico. The document is structured as follows: Section 2 describes the theoretical framework and related works; Section 3 describes the materials and methods used in the research; Section 4 presents the results of the experiments; and finally, Section 5 contains the conclusions derived from the study.

2. Background and Related Works

2.1. Remote Sensing

Remote sensing (RS) is the science responsible for collecting information from an object, area, or phenomenon without direct contact with it through sensors that capture the electromagnetic radiation emitted or reflected by the target [12,13]. Earth observation satellites orbiting the planet record the electromagnetic radiation emitted by the Earth’s surface. Its operation is based on spectral signatures (the ability of objects to reflect or emit electromagnetic energy). With spectral signatures, it is possible to identify different types of crops, water bodies, soils, and other characteristics of the Earth’s crust.
RS has evolved from visible-wavelength analog systems based on aerial platforms to digital systems using satellite platforms or unmanned aerial vehicles with global coverage [14]. Sensor resolution is the ability to record and separate information and depends on the combined effect of several components. Resolution involves four essential characteristics: (1) spatial, the area of the Earth's surface that each pixel of an image represents; (2) spectral, the number and bandwidth of the regions of the electromagnetic spectrum that can be recorded; (3) temporal, the time it takes to obtain an image of the same place with the same satellite; and (4) radiometric, the number of digital levels used to record radiation intensity.
The use of satellite images may be limited by the type of sensor: (1) sensors that operate in the optical range and (2) sensors that operate in the microwave portion of the electromagnetic spectrum. Optical sensors such as Sentinel-2's provide multispectral images with 13 bands whose characteristics differentiate components such as water, vegetation, clouds, and ground cover. However, they are affected by clouds or rain [15], which can make it impossible to acquire cloud-free images. Microwave sensors provide images that are not affected by weather conditions, since they operate at longer wavelengths and are independent of solar radiation; however, their spatial resolution and the complexity of the processing technologies and tools limit their use [16].
Optimal optical images for vegetation identification, mapping, and atmospheric monitoring are derived from multispectral sensors [17]. Data for monitoring and analysis at the local level are obtained by drones or airplanes. In contrast, data from dedicated satellite platforms are used for ground monitoring [18].
Space programs dedicated to Earth observation include Landsat [19], Aqua [20], Copernicus [21], and more.
  • Copernicus Sentinel. It is a series of space missions developed by the European Space Agency (ESA) that observe the Earth’s surface and are composed of five satellites with different objectives [21]: (1) Sentinel-1 focuses on land and ocean monitoring; (2) Sentinel-2 has the mission of Earth monitoring; (3) Sentinel-3 is dedicated to marine monitoring; (4) Sentinel-4’s main objective is the measurement of the composition of the atmosphere in Europe and Africa; and (5) Sentinel-5 measures the atmospheric composition.

2.2. Sentinel-2 Project

The Sentinel-2 mission monitors the Earth's surface with two satellites of similar characteristics (Sentinel-2A and 2B) that carry a 13-band MSI (Multi-Spectral Instrument) optical sensor (see Table 1), allowing the acquisition of high-spatial-resolution images [22]. Each Sentinel-2 image covers a 290 km swath; combined with the resolution of 10 to 60 m per pixel and the 15-day revisit frequency at the equator, this means that 1.6 TB of image data are generated daily [23].
Among the Sentinel-2 mission objectives are:
  • Provide global and systematic acquisitions of high-resolution multispectral images with a high revisit frequency.
  • Provide continuity of the multispectral images provided by the SPOT satellites and the LANDSAT thematic mapping instrument of the USGS (United States Geological Survey).
  • Provide observational data for the next generation of operational products, such as land-cover maps, land change detection maps, and geophysical variables.
Due to these characteristics, Sentinel-2 images can be used in different research fields such as water body detection [24] and land-cover classification [25]. In addition, Sentinel-2 images can be combined with images from other space projects, such as SPOT 4 and 5, enabling historical studies.
Sentinel-2 spectral bands provide data for land-cover change detection/classification, atmospheric correction, and cloud/snow separation [26]. It is essential to mention that the MSI of Sentinel-2 supports many Earth observation studies and programs. It also reduces the time needed to build a European cloud-free image archive.
The MSI passively collects sunlight reflected from Earth. The incoming light beam is split by a filter and focused onto two separate focal plane assemblies inside the instrument: one for the visible and near-infrared (VNIR) bands and one for the short-wave infrared (SWIR) bands. New data are acquired as the satellite moves along its orbital path.

2.3. Reflectance Indices

Reflectance indices are dimensionless variables resulting from mathematical combinations of two or more spectral bands. They are designed to maximize the characteristics of vegetation and water resources while reducing noise [27,28], which allows analyzing the activity of vegetation and water bodies and their seasonal and spatial changes. The most used indices in RS are:
  • Normalized Difference Vegetation Index (NDVI) [29]. An indicator of photosynthetic biomass used to assess vegetation health; it is widely used in studies of drought conditions [30,31]. Its range is between +1 and −1: the highest values reflect healthy, dense vegetation, while the lowest values reflect sparse or unhealthy vegetation. The NDVI is calculated using the following formula:
    NDVI = (NIR − RED) / (NIR + RED)
    where NIR corresponds to the near-infrared band and RED to the red band.
  • Green Normalized Difference Vegetation Index (GNDVI). A modified version of NDVI with increased sensitivity to variations in the chlorophyll content of vegetation [32]. It is calculated using the following formula:
    GNDVI = (NIR − GREEN) / (NIR + GREEN)
    where NIR corresponds to the near-infrared band and GREEN to the green band.
  • Enhanced Vegetation Index (EVI). An indicator that quantifies the greenness of vegetation, increasing sensitivity in regions with dense vegetation and correcting for atmospheric conditions that cause distortions, such as aerosols [33,34]. The formula to calculate it is:
    EVI = G × (NIR − RED) / (NIR + C1 × RED − C2 × BLUE + L)
    where L is the soil adjustment factor; C1 and C2 are the coefficients applied to the blue band to correct for the presence of aerosols in the red band; G is the gain factor; and RED, NIR, and BLUE correspond to the red, near-infrared, and blue bands, respectively.
  • Soil Adjusted Vegetation Index (SAVI). Used to suppress the effect of the soil in areas of low vegetative cover, minimizing the error caused by variation in soil brightness [35]. The formula to calculate it is:
    SAVI = (NIR − RED) / (NIR + RED + L) × (1 + L)
    where L is the soil adjustment factor, NIR is the near-infrared band, and RED is the red band.
  • Normalized Difference Water Index (NDWI). Sensitive to changes in the water content of land surfaces and less susceptible to atmospheric effects than NDVI; it is widely used in the analysis of water bodies [36]. It is calculated using the following formula:
    NDWI = (NIR − SWIR) / (NIR + SWIR)
    where NIR corresponds to the near-infrared band and SWIR to the short-wave infrared band.
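The index formulas above can be sketched directly with NumPy on per-pixel band arrays. This is a minimal illustration: the band values are synthetic reflectances invented for the example, not Sentinel-2 data.

```python
import numpy as np

# Synthetic reflectance values standing in for satellite bands (range [0, 1]).
nir = np.array([0.45, 0.50, 0.30])
red = np.array([0.10, 0.08, 0.25])
green = np.array([0.12, 0.10, 0.20])
swir = np.array([0.20, 0.18, 0.28])

def ndvi(nir, red):
    # Normalized Difference Vegetation Index: (NIR - RED) / (NIR + RED)
    return (nir - red) / (nir + red)

def gndvi(nir, green):
    # Green variant: substitutes the green band for the red band
    return (nir - green) / (nir + green)

def savi(nir, red, L=0.5):
    # Soil Adjusted Vegetation Index with soil adjustment factor L
    return (nir - red) / (nir + red + L) * (1 + L)

def ndwi(nir, swir):
    # Water index sensitive to surface water content
    return (nir - swir) / (nir + swir)

print(ndvi(nir, red))  # dense, healthy vegetation yields values near +1
```

Each function is vectorized, so the same code applies unchanged to full 2-D raster bands.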

2.4. Satellite Image Classification Algorithms

Image classification is widely used in RS. Multiband imagery is used to map land use and to recognize areas of crops, forests, water bodies, etc. Predictive machine learning methods make it possible to identify patterns contained in the images. Generally, a distinction is made between supervised and unsupervised classification.
Supervised classification techniques rely on a group of image elements with known classes, called training areas. Classification is the process by which each pixel in the image is assigned a category based on the attributes of the training areas. Supervised classification forces the result to correspond to land covers defined by the user and, therefore, of interest to them; however, it does not guarantee that the classes are statistically separable [37].
Unsupervised classification methods automatically group uniform values within an image. From the digital levels, they create clusters of pixels with similar spectral behavior. The analyst must assign the thematic meaning of the generated spectral classes, since the program does not determine it [37].
Due to the interest in classification, many automatic classifiers have been developed for RS. Some of the most used algorithms are:
  • Maximum Likelihood. It assumes that the reflectivity values of each class follow a multivariate normal probability distribution and uses the vector of means and the variance–covariance matrix to estimate the probability that a given pixel belongs to each class. The pixel is assigned to the class with the highest membership probability. Once pixels have been assigned to classes, probability thresholds are established for each category, rejecting pixels with a very low probability [38].
  • Support Vector Machine (SVM). Developed from statistical learning theory, it reduces the error related to the size of the training or sample data [39,40]. It is a machine learning algorithm used in problems where input–output dependencies are unknown, such as image classification and regression [41].
  • Random Forest (RF). A classification algorithm that aims to counteract the variations in predictions of a single decision tree caused by perturbations in the training data [42]. The algorithm is designed so that the predictor trees are as uncorrelated as possible, so that the errors of an individual tree are outvoted by the rest of the ensemble, improving precision [43]. This algorithm has been widely used in remote sensing for land-cover classification [44].
  • Classification and Regression Trees (CART). A nonparametric machine learning method [45]. It creates a predictive tree by binary splitting until a stopping rule for inductively detecting relationships between input and output attributes is met. Used in prediction and classification problems, the constructed trees are optimized to obtain the best possible prediction [46].
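The three classifiers used later in this paper (SVM, RF, and CART) can be illustrated with scikit-learn analogues on synthetic "pixel" data. This is a sketch for intuition only; the study itself runs GEE's built-in classifiers, and the features and labels below are invented.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Synthetic "pixels": 6 spectral features per sample, 3 well-separated classes.
y = np.repeat(np.arange(3), 100)
X = rng.normal(size=(300, 6)) + 2.0 * y[:, None]

classifiers = {
    "SVM": SVC(kernel="rbf"),                                   # kernel-based margin classifier
    "RF": RandomForestClassifier(n_estimators=20, random_state=0),  # ensemble of decision trees
    "CART": DecisionTreeClassifier(random_state=0),             # single classification tree
}
for name, clf in classifiers.items():
    clf.fit(X, y)
    print(name, round(clf.score(X, y), 3))
```

On real imagery, each row of `X` would hold the band and index values of one labeled pixel from the training areas.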

2.5. Google Earth Engine

Google Earth Engine (GEE) (https://developers.google.com/earth-engine, accessed on 1 March 2022) provides high-performance computational resources to process extensive georeferenced data collections [47]. GEE has a robust repository of freely accessible geospatial data that includes data from various space projects, such as Sentinel images [48,49], Landsat [50], and climate data [51], among others [52,53,54]. This web platform facilitates the development and execution of algorithms applied to collections of georeferenced images and other data types.

2.6. Related Works

Several approaches to vegetation mapping have been explored. The ones mentioned here generally use Sentinel satellite imagery and spectral indices.
Shaharum et al. [55] presented an approach for mapping oil palm plantations using Landsat 8 satellite images. The study period covered 2016 and 2017. They used the NDVI and NDWI spectral indices and three classification methods: (1) random forest, (2) classification and regression trees, and (3) support vector machines. The data and methods were processed in GEE. The results demonstrated the capacity of GEE for data processing and the generation of high-precision crop maps. They mapped the land cover of oil palms and achieved 80% overall accuracy with each of the methods used.
Borrás et al. [56] present research that addresses two objectives: (1) determining the best classification method for Sentinel-2 images and (2) quantifying the improvement of Sentinel-2 over other space missions. They selected four automatic classifiers (LDA, RF, Decision Trees, and KNN) applied in two agricultural areas (Valencia, Spain, and Buenos Aires, Argentina). Based on the kappa index, they obtained a land-use map from the best classifier. They determined that the best classifiers for Sentinel-2 images are KNN and the combination of KNN with RF, obtaining 96.52% overall accuracy; detection of abandoned soils and lucerne was best.
On the other hand, Liu et al. [57] present a pixel-based algorithm and phenological analysis to generate large-scale annual crop maps in seven areas of China. They used Landsat 8 and Sentinel-2 images from 2016 to 2018, with GEE for image processing and several spectral indices to examine the phenological characteristics of the crops. The results show the importance of spectral indices for crop phenological detection and demonstrate working with different image repositories in GEE. Overall accuracy was 78%, 76%, and 93% using Landsat and Sentinel-2.
Ashourloo et al. [58] presented a method for mapping the potato crop in Iran in 2019. They used Sentinel-2 images and the SVM and Maximum Likelihood (ML) machine learning methods. The method is based on the potato's spectral characteristics during its life cycle. The results show that SVM obtains the best results, with an overall accuracy better than 90% at the study sites. Finally, Macintyre et al. [59] tested Sentinel-2 images for vegetation mapping using the SVM, RF, Decision Tree (DT), and K-Nearest Neighbors (KNN) algorithms. SVM obtained the highest classification performance, and the authors state that Sentinel-2 is ideal for classifying vegetation composition. The results obtained were: SVM (74%), KNN (72%), RF (65%), and DT (50%).
Hudait et al. [60] mapped the heterogeneous crop area by crop type in the Purba Medinipur district of West Bengal. They used Sentinel-2 multispectral imagery and two machine learning algorithms: KNN and RF. Plot-level field information was collected from different cropland types to build the training and validation datasets for cropland classification and accuracy assessment. The resulting maps identified the cultivated surfaces of boro rice, vegetables, and betel. They achieved 95% overall accuracy, and the study showed that RF was the more accurate of the two.
Silva et al. [61] developed a phenology-based algorithm for soybean crop mapping using spectral indices and Landsat and Sentinel-2 images. The study season was 2016–2017. The algorithm is based on the soybean's phenology during its growing cycle, which they divided into two stages: (1) vegetative and (2) reproductive. The results demonstrate the difficulty of obtaining many low-noise images of the study area. The images acquired by the MODIS sensor (from the Terra satellite program) [32] were slightly better than MSI images; however, MSI images have better resolution.

3. Materials and Methods

The methodology applied for mapping crops is divided into five stages (see Figure 1) described below.

3.1. Location

The study area is located in the eastern part of Tabasco, Mexico (see Figure 2a), approximately between latitudes 17°15′29.7329″ N and 18°10′45.0525″ N and longitudes 90°59′12.4464″ W and 91°44′22.1932″ W. The area includes the municipalities of Balancán, Emiliano Zapata, and Tenosique, with an approximate size of 6079 km² (see Figure 2b). It has large volumes of aquifers and sediments collected by streams, rivers, and lagoons. The region's climate is hot and humid with abundant rains in summer; its mean annual temperature is 26.55 °C, and the average humidity is 80%, with a maximum of 85%. Due to the terrain and climate, the main activities are cattle ranching and agriculture, with corn, sorghum, and sugar cane crops.
Data. Sentinel-2 satellite images were obtained through the Google Earth Engine (GEE) platform from the COPERNICUS/S2 repository. Because crop coverage is identified across the different seasons of the year, time series were created per year considering the crop cycles and the climate of the study area. The images were grouped into two annual time series: (1) Spring–Summer (20 March–20 October) and (2) Autumn–Winter (21 October–20 March), from 2017 to 2019, yielding six collections of images.
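The two annual series can be expressed as a simple date partition. This sketch uses only the season boundaries stated above; it is illustrative, not the authors' GEE filtering code.

```python
from datetime import date

def season(d: date) -> str:
    """Assign a date to one of the two annual series used in the study:
    Spring-Summer (20 March - 20 October) or Autumn-Winter (21 October - 20 March)."""
    if date(d.year, 3, 20) <= d <= date(d.year, 10, 20):
        return "spring-summer"
    # Everything else falls in the Autumn-Winter window, which spans the year boundary.
    return "autumn-winter"

print(season(date(2018, 7, 1)))   # spring-summer
print(season(date(2018, 12, 1)))  # autumn-winter
```

In GEE, the equivalent step would be a pair of date filters applied to the COPERNICUS/S2 collection for each year.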
To delimit the study area, a shapefile obtained from the National Commission for the Knowledge and Use of Biodiversity (CONABIO) [62] was used. A filter was applied to keep only images with less than 20% cloud cover, yielding 309 images (see Table 2).

3.2. Image Selection

To obtain cleaner and sharper images, pixels with small accumulations of clouds (dense and cirrus) were removed by cloud masking using the QA60 band. Thick clouds were identified by the reflectance threshold of the blue band; to avoid erroneous detection (e.g., of snow), the SWIR reflectance and the Band 10 reflectance were used. For the identification of cirrus clouds, a filter based on morphological operations was applied to the dense and cirrus masks: (1) erosion, to eliminate isolated pixels, and (2) dilation, to fill gaps and extend the clouds.
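The QA60 band encodes cloud flags as a bitmask: bit 10 flags opaque clouds and bit 11 flags cirrus. A minimal sketch of the masking step, using NumPy in place of GEE's server-side operations:

```python
import numpy as np

# Sentinel-2 QA60 bitmask: bit 10 = opaque clouds, bit 11 = cirrus clouds.
OPAQUE_BIT = 1 << 10
CIRRUS_BIT = 1 << 11

def cloud_free_mask(qa60: np.ndarray) -> np.ndarray:
    """Return True where a pixel is free of both opaque and cirrus clouds."""
    return (qa60 & (OPAQUE_BIT | CIRRUS_BIT)) == 0

# Four example pixels: clear, opaque cloud, cirrus, both.
qa = np.array([0, OPAQUE_BIT, CIRRUS_BIT, OPAQUE_BIT | CIRRUS_BIT])
print(cloud_free_mask(qa))
```

In GEE the same test is written with `bitwiseAnd` on the QA60 band; the subsequent erosion and dilation described above would then be applied to the resulting mask.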

3.3. Preprocessing

Spectral indices were calculated for the collections of masked images. Spectral indices are based on vegetation's red and infrared spectral bands and their interactions with electromagnetic energy. For vegetation detection, the following were calculated: Normalized Difference Vegetation Index (NDVI), Green Normalized Difference Vegetation Index (GNDVI), Enhanced Vegetation Index (EVI), Soil Adjusted Vegetation Index (SAVI), and Normalized Difference Moisture Index (NDMI). For water bodies, the Normalized Difference Water Index (NDWI) was calculated. The Sentinel-2 bands used for each spectral index are:
NDVI = (B8 − B4) / (B8 + B4)
GNDVI = (B8 − B3) / (B8 + B3)
EVI = 2.5 × (B8 − B4) / (B8 + 6 × B4 − 7.5 × B2 + 1)
SAVI = (B8 − B4) / (B8 + B4 + 0.428) × (1.428)
NDWI = (B3 − B8) / (B3 + B8)
NDMI = (B8 − B11) / (B8 + B11)
For image correction, mosaics were formed by clipping to the contour of the study area, and a reduction method based on histograms and linear regression (supplied by GEE through the ee.Reducer class) was applied to aggregate the data over time. This reduces the image collection (input) to a single image (output) with the same number of bands as the input collection; each pixel in the output image bands contains summary information for the corresponding pixels in the input collection. To provide the classification methods with additional information on the dynamic range of the study area, five percentiles (10%, 30%, 50%, 70%, and 90%) and the variance of each band of the reduced image were calculated. The spectrum of values is captured by including the minimum, median, maximum, and intermediate points, forming a 78-band image.
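The arithmetic behind the 78-band image is that each of Sentinel-2's 13 bands contributes five percentiles plus a variance, i.e., 13 × 6 = 78 features per pixel. A small NumPy sketch of this temporal reduction for one pixel (the time stack is random data, standing in for a real image collection):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical stack for one pixel: 10 observations over time x 13 Sentinel-2 bands.
stack = rng.random((10, 13))

percentiles = [10, 30, 50, 70, 90]
# Reduce the time axis: one value per band for each percentile...
feats = [np.percentile(stack, p, axis=0) for p in percentiles]  # 5 arrays of 13
# ...plus the per-band variance over time.
feats.append(np.var(stack, axis=0))                             # 1 array of 13
pixel_features = np.concatenate(feats)                          # 6 x 13 = 78 features
print(pixel_features.shape)
```

In GEE this corresponds to combining `ee.Reducer.percentile` with a variance reducer over the image collection rather than a NumPy array.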

3.4. Supervised Classification

In the classification stage, the main land types of the study area were identified. This was done by visual analysis of satellite images, vegetation maps, and crop estimation maps obtained from the Agri-Food and Fisheries Information Service (SIAP, Ministry of Agriculture and Rural Development of Mexico).
Two types of crops (corn and sorghum) and six types of land use were identified: water bodies (extensions of water), land in recovery (unsown ground with little or no vegetation), urban areas (towns or cities), sandy areas (accumulations of mineral or biological sediments), forest or tropical jungle (zones with a high vegetation index), and others (grasslands, etc.). For crop and land-use identification, three supervised classification algorithms were applied: Random Forest (RF), Support Vector Machines (SVM), and Classification and Regression Trees (CART). Supervised classification methods require datasets labeled with land-use categories for learning and training. The training dataset was composed from GeoPDF documents (https://www.gob.mx/siap/documentos/mapa-con-la-estimacion-de-superficie-sembrada-de-cultivos-basicos, accessed on 3 January 2022) estimating crop sowing areas, Google Earth files provided by SIAP (https://datos.gob.mx/busca/dataset/estimacion-de-superficie-agricola-para-el-ciclo-primavera–verano, accessed on 24 August 2021) with hydrographic and vegetation maps, and visual identification.
Crop cycles and seasonal climate change cause differences in the spectral indices of crops and soil types, leading to misclassifications. To address this, two separate datasets were created from sample points (pixels) corresponding to each growing cycle. The pixels of the spring–summer and autumn–winter cycles were selected and entered manually in GEE based on the image collections from 2018 and 2019 (see Figure 3), forming two datasets with 2510 sample points for spring–summer and 3012 for autumn–winter (see Table 3).
Following the data-driven framework of machine learning models [63], and to evaluate the performance and accuracy of the classification methods while avoiding overtraining, each dataset was divided into 70% for training and 30% for evaluation.
The SVM, RF, and CART classification algorithms were evaluated and executed with different configurations on the GEE platform to improve classification efficiency.
For SVM, a radial basis function (RBF) kernel with a gamma of 0.7 and a cost of 30 was used. Two trainings were carried out: spring–summer and autumn–winter. RF was configured with a limit of 20 trees to avoid misclassifications; this configuration obtained significant improvements. The default GEE configuration was used for CART, since it produced the fewest classification errors.
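The configurations above can be mirrored outside GEE with scikit-learn. This is an illustrative analogue on invented data, not the authors' GEE code: the hyperparameters (RBF kernel, gamma 0.7, cost 30; 20 trees; default CART) and the 70/30 split follow the text, while the features and labels are synthetic.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
# Hypothetical labeled pixels: 4 land-use classes, 6 index features each.
y = np.repeat(np.arange(4), 100)
X = rng.normal(size=(400, 6)) + 2.0 * y[:, None]

# 70% training / 30% evaluation, as in Section 3.4.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.30, random_state=0)

models = {
    "SVM": SVC(kernel="rbf", gamma=0.7, C=30),                     # RBF, gamma 0.7, cost 30
    "RF": RandomForestClassifier(n_estimators=20, random_state=0),  # limited to 20 trees
    "CART": DecisionTreeClassifier(random_state=0),                 # default settings
}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    print(name, round(m.score(X_te, y_te), 3))
```

In GEE, the same settings would be passed to `ee.Classifier.libsvm` and `ee.Classifier.smileRandomForest`, with features sampled from the 78-band composite.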

4. Results Evaluation

From the data, two categories were defined: (1) types of crops and (2) types of land use. Corn (CC) and sorghum (SC) are found in crops. Soil types are water bodies (WB), land in recovery (LR), urban areas (UA), sandy areas (SA), tropical rainforest (TR), and others.
For the test of the classified maps, 30% of the sample points were used: 742 for the spring–summer season and 868 for the autumn–winter season (see Table 4).
The overall training accuracy (OA) and the kappa index (KI) were calculated for each season and classification method. Table 5 shows that SVM obtained the best performance in both seasons, with an OA and KI of 0.996. The RF method obtained an OA and KI greater than 0.99 in the spring–summer season; in the autumn–winter season, they were 0.96 and 0.95, respectively. Lastly, the CART method obtained an OA of 0.94 and a KI of 0.92 in the first season and 0.98 and 0.97, respectively, in the second. Values closer to 1 indicate better performance and more reliable results, while values near 0 indicate unreliable results.
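Both metrics follow directly from a confusion matrix: overall accuracy is the fraction of correctly classified samples, and the kappa index discounts the agreement expected by chance. A generic sketch with a made-up 3-class matrix (not the study's data):

```python
import numpy as np

def overall_accuracy(cm: np.ndarray) -> float:
    # Fraction of samples on the diagonal (correctly classified).
    return np.trace(cm) / cm.sum()

def kappa_index(cm: np.ndarray) -> float:
    n = cm.sum()
    po = np.trace(cm) / n                 # observed agreement
    pe = (cm.sum(0) @ cm.sum(1)) / n**2   # agreement expected by chance
    return (po - pe) / (1 - pe)

# Hypothetical confusion matrix (rows: reference classes, cols: predicted).
cm = np.array([[50, 2, 1],
               [3, 45, 2],
               [0, 1, 46]])
print(round(overall_accuracy(cm), 3), round(kappa_index(cm), 3))
```

Because kappa subtracts chance agreement, it is always at most the overall accuracy, which matches the pattern of the OA/KI pairs reported in Table 5.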

Coverage of Sorghum and Corn Crops with Government Data

The SIAP oversees the collection of crop data. However, these data only consider the hectares planted; hectares that do not sprout or do not grow are not discounted. This makes the data unreliable, and as a result, the margins of error between the hectares detected by the algorithms and the SIAP data are large.
The types of crops were compared with data obtained from the SIAP. Table 6 shows the hectares of produce for the spring–summer (s-s) and autumn–winter (a-w) seasons.
Figure 4 shows the maps generated by the SVM method. Table 7 shows the estimated coverage of the crop types and land use obtained with the SVM method. Results are reported in hectares, classified by municipality (zone) and by the two seasons of each year: spring–summer (s-s) and autumn–winter (a-w). The gray cells indicate the extensions with the highest coverage: for zone 1, corn in autumn–winter 2019 (1514.59 ha) and sorghum in autumn–winter 2017 (348.11 ha); for zone 2, corn in spring–summer 2018 (11,856.54 ha) and sorghum in autumn–winter 2017 (4248.01 ha).
Figure 5 shows the maps generated by the RF method, and Table 8 shows the results in land cover.
Finally, Figure 6 and Table 9 show the results obtained with the CART method.
The predictions of the three classifiers were compared with the ground truth provided by the SIAP. Percentage errors for each classifier are shown in Table 10. The results obtained by SVM were the closest to the reference data, with an overall error of 5.86% for corn and 9.55% for sorghum. On the other hand, the reference data themselves may carry a margin of error, because some land is cultivated only occasionally; small crops or plots where cultivation is intermittent are not accounted for.
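The percentage errors in Table 10 can be reproduced (up to rounding of the reported areas) by comparing classified and SIAP hectares; the helper function below is an illustrative sketch, not the authors' code:

```python
def percentage_error(detected_ha, reference_ha):
    """Absolute percentage difference between classified and reference area."""
    return abs(detected_ha - reference_ha) / reference_ha * 100.0

# Zone 1 corn, 2017 autumn-winter:
# SVM detected 1422.23 ha (Table 7), SIAP reports 1340 ha (Table 6)
err = percentage_error(1422.23, 1340)
print(f"{err:.2f}%")  # close to the 6.11% reported in Table 10
```

The small residual difference from the published 6.11% presumably comes from rounding in the tabulated areas.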

5. Discussion

Our results show that optical satellite images are well suited for producing land-use and land-cover maps. Approaches that use the same technologies and tools for land-cover mapping include [48,50,64].
These images have characteristics that support research in many fields. However, Sentinel-2 data are acquired with passive sensors, so cumulus clouds often obscure scenes; in areas with frequent cloudiness, it is difficult to collect large numbers of usable images. Southeastern Mexico, and the state of Tabasco in particular, has a high humidity index and frequent cloud cover, which complicates work with Sentinel-2 imagery and makes preprocessing necessary to obtain cleaner images. Supervised classification methods can then perform land classification, with performance depending on their configuration and the datasets used.
Some studies in the literature used Sentinel-2 and the three classification algorithms mentioned. Praticò et al. [48] and Loukika et al. [64] used Sentinel-2 with the RF, SVM, and CART algorithms, obtaining their best results with RF and SVM. Both our approach and theirs used NDWI for water-body detection and NDVI, GNDVI, and SAVI for vegetation, with GEE as the processing tool.
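The spectral indices shared by these approaches are simple band ratios. The following sketch shows the standard formulations with invented reflectance values (for Sentinel-2, B8 is NIR, B4 is Red, and B3 is Green); NDWI here uses the Green/NIR form commonly applied for open-water mapping:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red)

def gndvi(nir, green):
    """Green NDVI: (NIR - Green) / (NIR + Green)."""
    return (nir - green) / (nir + green)

def savi(nir, red, L=0.5):
    """Soil-Adjusted Vegetation Index with soil-brightness factor L."""
    return (nir - red) / (nir + red + L) * (1 + L)

def ndwi(green, nir):
    """NDWI for water bodies: (Green - NIR) / (Green + NIR)."""
    return (green - nir) / (green + nir)

# Toy surface-reflectance values for a healthy-vegetation pixel
nir, red, green = 0.45, 0.08, 0.10
print(ndvi(nir, red))    # high value -> dense vegetation
print(ndwi(green, nir))  # negative -> not water
```

All four indices range over [-1, 1]; in GEE the same computation is applied per pixel over the image bands rather than over scalars.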
It is important to note that, for NDVI and NDWI, training points and polygons were created for each class, and each pixel within a polygon represents a training sample. Since the assigned value of each pixel is known, it can be compared with the classified value to compute error and accuracy metrics.
It should be noted that a bagging technique was applied for RF training. For SVM, an instance was created that searches for an optimal hyperplane separating the decision boundaries between classes. RF and SVM receive the training data, the detectable classes, and the spectral bands (bands 2, 3, 4, 5, 8, 11, NDVI, NDWI). Furthermore, RF requires the number of trees and the number of variables at each split, while SVM requires the gamma, cost, and kernel-function parameters [65].
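The bagging step mentioned above can be sketched in plain Python. This is a toy illustration of the idea, not the GEE implementation; the data and names are invented:

```python
import random

def bootstrap_sample(data, rng):
    """Draw len(data) samples with replacement (the 'bagging' step of RF)."""
    return [rng.choice(data) for _ in range(len(data))]

def majority_vote(predictions):
    """Aggregate the per-tree class votes for one pixel."""
    return max(set(predictions), key=predictions.count)

rng = random.Random(42)
training = [("pixel%d" % i, "corn" if i < 6 else "sorghum") for i in range(10)]

# Each tree in the forest would be trained on its own bootstrap sample,
# which is what decorrelates the trees.
samples = [bootstrap_sample(training, rng) for _ in range(5)]

# Hypothetical class votes from 5 trees for a single pixel
print(majority_vote(["corn", "corn", "sorghum", "corn", "sorghum"]))
```

Because each bootstrap sample omits roughly a third of the training pixels, the individual trees see different data, and the majority vote across trees is less prone to overfitting than any single tree.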
On the other hand, Tassi et al. [50] analyzed land cover with Landsat 8 images, RF, and GEE, using two approaches: pixel-based (PB) and object-based (OB). SVM and RF were the algorithms with the best results in these approaches.
The three approaches mentioned, as well as our proposal, use supervised algorithms for land-cover classification and the Google Earth Engine for image processing, and they apply the same spectral indices for land-use and land-cover mapping. Their results are similar to ours. This evidence demonstrates the value of Sentinel-2 satellite imagery for soil classification and crop detection.
However, because Sentinel-2 images are acquired with passive sensors, cumulus clouds often obscure scenes in areas of high cloudiness, limiting the number of usable photos and making preprocessing necessary to obtain cleaner images.

6. Conclusions

Sentinel-2 satellite images have characteristics that make them useful for land-use classification, crop detection, and other research fields. However, since they are acquired with passive sensors, cumulus clouds can make it difficult to collect scenes in cloudy areas. The area and seasons studied presented a high humidity index, which made the research difficult. On the other hand, the execution capacity of the Google Earth Engine platform proved effective for land-use analysis and classification. The methods used to classify land use and sorghum and corn crops were SVM, RF, and CART, which obtained overall accuracies of 0.99, 0.95, and 0.92, respectively. SVM had the lowest percentage of false positives and the lowest margin of error compared with the reference data. According to the data obtained, corn has the greatest presence in the study area, while the presence of sorghum has decreased.
Food production in the study area does not show significant changes. Relative to population growth, production is insufficient, which poses a risk to food security in the area and makes it necessary to import products.
Future work intends to improve the sample datasets for broader data coverage, apply unsupervised learning methods, and use SAR data (Sentinel-1) and other satellites to increase the number of images and build maps with greater precision.

Author Contributions

Conceptualization, G.R.-T. and J.P.F.P.-D.; methodology, F.P.-M.; Writing—original draft preparation, F.P.-M.; writing—review and editing, R.A.-S. All authors have read and agreed to the published version of the manuscript.

Funding

This study was partly supported by the Council of Science and Technology of the state of Tabasco, Mexico (CCYTET). Thanks to National Technology of Mexico (TecNM). Council reference: 14080.22-PD.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. de Raymond, A.B.; Alpha, A.; Ben-Ari, T.; Daviron, B.; Nesme, T.; Tétart, G. Systemic risk and food security. Emerging trends and future avenues for research. Glob. Food Secur. 2021, 29, 100547. [Google Scholar] [CrossRef]
  2. Tim, C. La Agricultura en el Siglo XXI: Un Nuevo Paisaje Para la Gente, la Alimentación y la Naturaleza. 2017. Available online: https://humbertoarmenta.mx/el-rol-de-la-agricultura-en-el-siglo-xxi/ (accessed on 20 April 2021).
  3. World Bank. Food Security. Available online: https://www.worldbank.org/en/topic/food-security (accessed on 15 May 2021).
  4. Yawson, D.O.; Mulholland, B.J.; Ball, T.; Adu, M.O.; Mohan, S.; White, P.J. Effect of Climate and Agricultural Land Use Changes on UK Feed Barley Production and Food Security to the 2050s. Land 2017, 6, 74. [Google Scholar] [CrossRef] [Green Version]
  5. Ren, C.; Liu, S.; van Grinsven, H.; Reis, S.; Jin, S.; Liu, H.; Gu, B. The impact of farm size on agricultural sustainability. J. Clean. Prod. 2019, 220, 357–367. [Google Scholar] [CrossRef]
  6. Liu, S.Y. Artificial Intelligence (AI) in Agriculture. IT Prof. 2020, 22, 14–15. [Google Scholar] [CrossRef]
  7. Organización de las Naciones Unidas para la Alimentación y la Agricultura. México en Una Mirada. Available online: http://www.fao.org/mexico/fao-en-mexico/mexico-en-una-mirada/es/ (accessed on 14 June 2020).
  8. Pareeth, S.; Karimi, P.; Shafiei, M.; De Fraiture, C. Mapping Agricultural Landuse Patterns from Time Series of Landsat 8 Using Random Forest Based Hierarchial Approach. Remote. Sens. 2019, 11, 601. [Google Scholar] [CrossRef] [Green Version]
  9. Pei, T.; Xu, J.; Liu, Y.; Huang, X.; Zhang, L.; Dong, W.; Qin, C.; Song, C.; Gong, J.; Zhou, C. GIScience and remote sensing in natural resource and environmental research: Status quo and future perspectives. Geogr. Sustain. 2021, 2, 207–215. [Google Scholar] [CrossRef]
  10. Yin, J.; Dong, J.; Hamm, N.A.; Li, Z.; Wang, J.; Xing, H.; Fu, P. Integrating remote sensing and geospatial big data for urban land use mapping: A review. Int. J. Appl. Earth Obs. Geoinf. 2021, 103, 102514. [Google Scholar] [CrossRef]
  11. Pantazi, X.E.; Moshou, D.; Bochtis, D. Chapter 2 - Artificial intelligence in agriculture. In Intelligent Data Mining and Fusion Systems in Agriculture; Pantazi, X.E., Moshou, D., Bochtis, D., Eds.; Academic Press: Cambridge, MA, USA, 2020; pp. 17–101. [Google Scholar] [CrossRef]
  12. Horning, N. Remote sensing. In Encyclopedia of Ecology, 2nd ed.; Fath, B., Ed.; Elsevier: Oxford, UK, 2019; pp. 404–413. [Google Scholar] [CrossRef]
  13. Roy, P.; Behera, M.; Srivastav, S. Satellite Remote Sensing: Sensors, Applications and Techniques. Proc. Natl. Acad. Sci., India Sect. A Phys. Sci. 2017, 87, 465–472. [Google Scholar] [CrossRef] [Green Version]
  14. Read, J.M.; Chambers, C.; Torrado, M. Remote Sensing. In International Encyclopedia of Human Geography, 2nd ed.; Kobayashi, A., Ed.; Elsevier: Oxford, UK, 2020; pp. 411–422. [Google Scholar] [CrossRef]
  15. Fu, W.; Ma, J.; Chen, P.; Chen, F. Remote sensing satellites for digital Earth. In Manual of Digital Earth; Guo, H., Goodchild, M.F., Annoni, A., Eds.; Springer: Singapore, 2020; pp. 55–123. [Google Scholar] [CrossRef] [Green Version]
  16. Oliveira, E.R.; Disperati, L.; Cenci, L.; Gomes Pereira, L.; Alves, F.L. Multi-Index Image Differencing Method (MINDED) for Flood Extent Estimations. Remote. Sens. 2019, 11, 1305. [Google Scholar] [CrossRef] [Green Version]
  17. DeFries, R. Remote sensing and image processing. In Encyclopedia of Biodiversity, 2nd ed.; Levin, S.A., Ed.; Academic Press: Cambridge, MA, USA, 2013; pp. 389–399. [Google Scholar] [CrossRef]
  18. Horning, N. Remote sensing. In Encyclopedia of Ecology; Jørgensen, S.E., Fath, B.D., Eds.; Academic Press: Oxford, UK, 2008; pp. 2986–2994. [Google Scholar] [CrossRef]
  19. Emery, W.; Camps, A. Chapter 1-The history of satellite remote sensing. In Introduction to Satellite Remote Sensing; Emery, W., Camps, A., Eds.; Elsevier: Amsterdam, The Netherlands, 2017; pp. 1–42. [Google Scholar] [CrossRef]
  20. Lillesand, T.M. Remote Sensing and Image Interpretation; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2006. [Google Scholar]
  21. Jutz, S.; Milagro-Pérez, M. 1.06 - Copernicus program. In Comprehensive Remote Sensing; Liang, S., Ed.; Elsevier: Oxford, UK, 2018; pp. 150–191. [Google Scholar] [CrossRef]
  22. Louis, J.; Pflug, B.; Main-Knorn, M.; Debaecker, V.; Mueller-Wilm, U.; Iannone, R.Q.; Giuseppe Cadau, E.; Boccia, V.; Gascon, F. Sentinel-2 Global Surface Reflectance Level-2a Product Generated with Sen2Cor. In Proceedings of the IGARSS 2019—2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 8522–8525. [Google Scholar] [CrossRef] [Green Version]
  23. Drusch, M.; Bello, U.D.; Carlier, S.; Colin, O.; Fernandez, V.; Gascon, F.; Hoersch, B.; Isola, C.; Laberinti, P.; Martimort, P.; et al. Sentinel-2: ESA’s Optical High-Resolution Mission for GMES Operational Services. Remote. Sens. Environ. 2012, 120, 25–36. [Google Scholar] [CrossRef]
  24. Saini, O.; Bhardwaj, A.; Chatterjee, R.S. Detection of Water Body Using Very High-Resolution UAV SAR and Sentinel-2 Images. In Proceedings of the UASG 2019, International Conference on Unmanned Aerial System in Geomatics, Roorkee, India, 6–7 April 2019; Jain, K., Khoshelham, K., Zhu, X., Tiwari, A., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 53–65. [Google Scholar] [CrossRef]
  25. Heryadi, Y.; Miranda, E. Land Cover Classification Based on Sentinel-2 Satellite Imagery Using Convolutional Neural Network Model: A Case Study in Semarang Area, Indonesia. In Intelligent Information and Database Systems: Recent Developments; Huk, M., Maleszka, M., Szczerbicki, E., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 191–206. [Google Scholar] [CrossRef]
  26. European Space Agency. Sentinel-2—Satellite Description; ESA—Sentinel Online; European Space Agency: Paris, France, 2020. [Google Scholar]
  27. Cici, A. Normalised difference spectral indices and urban land cover as indicators of land surface temperature (LST). Int. J. Appl. Earth Obs. Geoinf. 2020, 86, 102013. [Google Scholar] [CrossRef]
  28. Kumar, V.; Sharma, A.; Bhardwaj, R.; Thukral, A.K. Comparison of different reflectance indices for vegetation analysis using Landsat-TM data. Remote Sens. Appl. Soc. Environ. 2018, 12, 70–77. [Google Scholar] [CrossRef]
  29. Rouse, J.W., Jr.; Haas, R.H.; Schell, J.A.; Deering, D.W. Monitoring Vegetation Systems in the Great Plains with Erts. In Proceedings of the Third Earth Resources Technology Satellite-1 Symposium, Washington, DC, USA, 10–14 December 1973; NASA: Washington, DC, USA, 1974; Volume 351, pp. 309–317. [Google Scholar]
  30. Drisya, J.; D, S.K.; Roshni, T. Chapter 27—Spatiotemporal Variability of Soil Moisture and Drought Estimation Using a Distributed Hydrological Model. In Integrating Disaster Science and Management; Samui, P., Kim, D., Ghosh, C., Eds.; Elsevier: Amsterdam, The Netherlands, 2018; pp. 451–460. [Google Scholar] [CrossRef]
  31. Arabameri, A.; Pourghasemi, H.R. 13—Spatial Modeling of Gully Erosion Using Linear and Quadratic Discriminant Analyses in GIS and R. In Spatial Modeling in GIS and R for Earth and Environmental Sciences; Pourghasemi, H.R., Gokceoglu, C., Eds.; Elsevier: Amsterdam, The Netherlands, 2019; pp. 299–321. [Google Scholar] [CrossRef]
  32. Gitelson, A.A.; Kaufman, Y.J.; Merzlyak, M.N. Use of a green channel in remote sensing of global vegetation from EOS-MODIS. Remote Sens. Environ. 1996, 58, 289–298. [Google Scholar] [CrossRef]
  33. Gao, X.; Huete, A.R.; Ni, W.; Miura, T. Optical–Biophysical Relationships of Vegetation Spectra without Background Contamination. Remote Sens. Environ. 2000, 74, 609–620. [Google Scholar] [CrossRef]
  34. Lin, Q. Enhanced vegetation index using Moderate Resolution Imaging Spectroradiometers. In Proceedings of the 2012 5th International Congress on Image and Signal Processing, Chongqing, China, 16–18 October 2012; pp. 1043–1046. [Google Scholar] [CrossRef]
  35. Fang, H.; Liang, S. Leaf Area Index Models. In Reference Module in Earth Systems and Environmental Sciences; Elsevier: Amsterdam, The Netherlands, 2014. [Google Scholar] [CrossRef]
  36. Bo-cai, G. NDWI—A normalized difference water index for remote sensing of vegetation liquid water from space. Remote Sens. Environ. 1996, 58, 257–266. [Google Scholar] [CrossRef]
  37. Rees, W. The Remote Sensing Data Book; Cambridge University Press: Cambridge, UK, 2005. [Google Scholar]
  38. Mather, P.; Tso, B. Classification Methods for Remotely Sensed Data; CRC Press: Boca Raton, FL, USA, 2016. [Google Scholar]
  39. Edalat, M.; Jahangiri, E.; Dastras, E.; Pourghasemi, H.R. 18-Prioritization of Effective Factors on Zataria multiflora Habitat Suitability and its Spatial Modeling. In Spatial Modeling in GIS and R for Earth and Environmental Sciences; Pourghasemi, H.R., Gokceoglu, C., Eds.; Elsevier: Amsterdam, The Netherlands, 2019; pp. 411–427. [Google Scholar] [CrossRef]
  40. Wilson, M. Support Vector Machines. In Encyclopedia of Ecology; Jørgensen, S.E., Fath, B.D., Eds.; Academic Press: Oxford, UK, 2008; pp. 3431–3437. [Google Scholar] [CrossRef]
  41. Vapnik, V.N. The Nature of Statistical Learning Theory; Springer: New York, NY, USA, 1995. [Google Scholar] [CrossRef]
  42. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef] [Green Version]
  43. Fratello, M.; Tagliaferri, R. Decision Trees and Random Forests. In Encyclopedia of Bioinformatics and Computational Biology; Ranganathan, S., Gribskov, M., Nakai, K., Schönbach, C., Eds.; Academic Press: Oxford, UK, 2019; pp. 374–383. [Google Scholar] [CrossRef]
  44. Ge, G.; Shi, Z.; Zhu, Y.; Yang, X.; Hao, Y. Land use/cover classification in an arid desert-oasis mosaic landscape of China using remote sensed imagery: Performance assessment of four machine learning algorithms. Glob. Ecol. Conserv. 2020, 22, e00971. [Google Scholar] [CrossRef]
  45. Breiman, L.; Friedman, J.; Stone, C.; Olshen, R. Classification and Regression Trees; The Wadsworth and Brooks-Cole Statistics-Probability Series; Taylor & Francis: Abingdon, UK, 1984. [Google Scholar]
  46. Choubin, B.; Zehtabian, G.; Azareh, A.; Rafiei-Sardooi, E.; Sajedi-Hosseini, F.; Kisi, O. Precipitation forecasting using classification and regression trees (CART) model: A comparative study of different approaches. Environ. Earth Sci. 2018, 77, 314. [Google Scholar] [CrossRef]
  47. Gorelick, N.; Hancher, M.; Dixon, M.; Ilyushchenko, S.; Thau, D.; Moore, R. Google Earth Engine: Planetary-scale geospatial analysis for everyone. Remote. Sens. Environ. 2017, 202, 18–27. [Google Scholar] [CrossRef]
  48. Praticò, S.; Solano, F.; Di Fazio, S.; Modica, G. Machine Learning Classification of Mediterranean Forest Habitats in Google Earth Engine Based on Seasonal Sentinel-2 Time-Series and Input Image Composition Optimisation. Remote Sens. 2021, 13, 586. [Google Scholar] [CrossRef]
  49. Chen, B.; Xiao, X.; Li, X.; Pan, L.; Doughty, R.; Ma, J.; Dong, J.; Qin, Y.; Zhao, B.; Wu, Z.; et al. A mangrove forest map of China in 2015: Analysis of time series Landsat 7/8 and Sentinel-1A imagery in Google Earth Engine cloud computing platform. ISPRS J. Photogramm. Remote. Sens. 2017, 131, 104–120. [Google Scholar] [CrossRef]
  50. Tassi, A.; Gigante, D.; Modica, G.; Di Martino, L.; Vizzari, M. Pixel- vs. Object-Based Landsat 8 Data Classification in Google Earth Engine Using Random Forest: The Case Study of Maiella National Park. Remote. Sens. 2021, 13, 2299. [Google Scholar] [CrossRef]
  51. Zarei, A.R.; Shabani, A.; Mahmoudi, M.R. Comparison of the climate indices based on the relationship between yield loss of rain-fed winter wheat and changes of climate indices using GEE model. Sci. Total. Environ. 2019, 661, 711–722. [Google Scholar] [CrossRef] [PubMed]
  52. Xiong, J.; Thenkabail, P.S.; Gumma, M.K.; Teluguntla, P.; Poehnelt, J.; Congalton, R.G.; Yadav, K.; Thau, D. Automated cropland mapping of continental Africa using Google Earth Engine cloud computing. ISPRS J. Photogramm. Remote Sens. 2017, 126, 225–244. [Google Scholar] [CrossRef] [Green Version]
  53. Tamiminia, H.; Salehi, B.; Mahdianpari, M.; Quackenbush, L.; Adeli, S.; Brisco, B. Google Earth Engine for geo-big data applications: A meta-analysis and systematic review. ISPRS J. Photogramm. Remote Sens. 2020, 164, 152–170. [Google Scholar] [CrossRef]
  54. Kumar, L.; Mutanga, O. Google Earth Engine Applications Since Inception: Usage, Trends, and Potential. Remote Sens. 2018, 10, 1509. [Google Scholar] [CrossRef] [Green Version]
  55. Nisa-Shaharum, N.; Mohd-Shafri, H.; Ghani, W.; Samsatli, S.; Al-Habshi, M.; Yusuf, B. Oil palm mapping over Peninsular Malaysia using Google Earth Engine and machine learning algorithms. Remote Sens. Appl. Soc. Environ. 2020, 17, 100287. [Google Scholar] [CrossRef]
  56. Borràs, J.; Delegido, J.; Pezzola, A.; Pereira, M.; Morassi, G.; Camps, G. Clasificación de usos del suelo a partir de imágenes Sentinel-2. Rev. Teledetección 2017, 2017, 55. [Google Scholar] [CrossRef] [Green Version]
  57. Liu, L.; Xiao, X.; Qin, Y.; Wang, J.; Xu, X.; Hu, Y.; Qiao, Z. Mapping cropping intensity in China using time series Landsat and Sentinel-2 images and Google Earth Engine. Remote Sens. Environ. 2020, 239, 111624. [Google Scholar] [CrossRef]
  58. Ashourloo, D.; Shahrabi, H.; Azadbakht, M.; Rad, A.; Aghighi, H.; Radiom, S. A novel method for automatic potato mapping using time series of Sentinel-2 images. Comput. Electron. Agric. 2020, 175, 105583. [Google Scholar] [CrossRef]
  59. Macintyre, P.; van Niekerk, A.; Mucina, L. Efficacy of multi-season Sentinel-2 imagery for compositional vegetation classification. Int. J. Appl. Earth Obs. Geoinf. 2020, 85, 101980. [Google Scholar] [CrossRef]
  60. Hudait, M.; Patel, P.P. Crop-type mapping and acreage estimation in smallholding plots using Sentinel-2 images and machine learning algorithms: Some comparisons. Egypt. J. Remote Sens. Space Sci. 2022, 25, 147–156. [Google Scholar] [CrossRef]
  61. Silva-Junior, C.; Leonel-Junior, A.; Saragosa-Rossi, F.; Correia-Filho, W.; Santiago, D.; Oliveira-Júnior, J.; Teodoro, P.; Lima, M.; Capristo-Silva, G. Mapping soybean planting area in midwest Brazil with remotely sensed images and phenology-based algorithm using the Google Earth Engine platform. Comput. Electron. Agric. 2020, 169, 105194. [Google Scholar] [CrossRef]
  62. Comisión Nacional para el Conocimiento y Uso de la Biodiversidad. Mapa Base del Estado de Tabasco. Available online: http://www.conabio.gob.mx/informacion/metadata/gis/tabaprgn.xml?_httpcache=yes&_xsl=/db/metadata/xsl/fgdc_html.xsl&_indent=no&as=.html (accessed on 13 June 2021).
  63. Brink, H.; Richards, J.; Fetherolf, M. Real-World Machine Learning; Simon and Schuster: Manhattan, NY, USA, 2016. [Google Scholar]
  64. Loukika, K.N.; Keesara, V.R.; Sridhar, V. Analysis of Land Use and Land Cover Using Machine Learning Algorithms on Google Earth Engine for Munneru River Basin, India. Sustainability 2021, 13, 13758. [Google Scholar] [CrossRef]
  65. Shetty, S. Analysis of Machine Learning Classifiers for LULC Classification on Google Earth Engine. Master’s Thesis, University of Twente, Enschede, The Netherlands, 2019. [Google Scholar]
Figure 1. Proposed methodology for land-cover classification.
Figure 2. Study area maps. (a) Tabasco in Mexico; (b) study area.
Figure 3. Sample points on the GEE platform.
Figure 4. SVM-generated maps.
Figure 5. RF-generated maps.
Figure 6. CART-generated maps.
Table 1. Spectral bands for Sentinel-2A and Sentinel-2B sensors. Bold bands were used in this research.
Band | Sentinel-2A Wavelength (nm) | Resolution (m) | Sentinel-2B Wavelength (nm) | Resolution (m)
1 Coastal aerosol | 443.9 | 60 | 442.3 | 60
2 Blue | 496.6 | 10 | 492.1 | 10
3 Green | 560 | 10 | 559 | 10
4 Red | 664.5 | 10 | 665 | 10
5 Vegetation red edge (VNIR) | 703.9 | 20 | 703.8 | 20
6 Vegetation red edge (VNIR) | 740.2 | 20 | 739.1 | 20
7 Vegetation red edge (VNIR) | 782.5 | 20 | 779.7 | 20
8 Near infrared (NIR) | 835.1 | 10 | 833 | 10
8a Narrow NIR | 864.8 | 20 | 864 | 20
9 Water vapor | 945 | 60 | 943.2 | 60
10 Short-wave infrared (SWIR) cirrus | 1373.5 | 60 | 1376.9 | 60
11 SWIR | 1613.7 | 20 | 1610.4 | 20
12 SWIR | 2202.4 | 20 | 2185.7 | 20
Table 2. Time series imaging dataset.
Season | Img 2017 | Img 2018 | Img 2019
Spring–Summer | 12 | 70 | 66
Autumn–Winter | 46 | 63 | 52
Table 3. Sample points collection.
Coverage | Spring–Summer | Autumn–Winter
Corn crop | 194 | 190
Sorghum crop | 969 | 822
Water bodies | 324 | 324
Land in recovery | 152 | 628
Urban areas | 100 | 100
Sandy areas | 191 | 233
Tropical rainforest | 353 | 378
Others | 227 | 337
Table 4. Collection of sample points for test.
Coverage | Spring–Summer | Autumn–Winter
Corn crop | 56 | 57
Sorghum crop | 285 | 262
Water bodies | 95 | 95
Land in recovery | 49 | 177
Urban areas | 28 | 38
Sandy areas | 54 | 63
Tropical rainforest | 112 | 125
Others | 63 | 107
Table 5. Overall accuracy (OA) and Kappa index (KI) of the seasons.
Season | RF OA | RF KI | SVM OA | SVM KI | CART OA | CART KI
Spring–Summer | 0.9671 | 0.9580 | 0.9973 | 0.9966 | 0.9426 | 0.9260
Autumn–Winter | 0.9920 | 0.9904 | 0.9988 | 0.9986 | 0.9815 | 0.9777
Table 6. Coverage in hectares of corn and sorghum crops provided by SIAP for the spring–summer (s-s) and autumn–winter (a-w) seasons.
Season | Emiliano Zapata (Corn) | Emiliano Zapata (Sorghum) | Balancán (Corn) | Balancán (Sorghum) | Tenosique (Corn) | Tenosique (Sorghum)
2017 s-s | 1045 | 298 | 10,267 | 2538 | 2860 | no data
2017 a-w | 1340 | 323 | 7577 | 4025 | 2146 | 339
2018 s-s | 1017 | 37 | 11,654 | 107 | 4262 | no data
2018 a-w | 1370 | 95 | 7926 | 3872 | 1738 | 262
2019 s-s | 1175 | 50 | 11,357 | 1606 | 4305 | 120
2019 a-w | 1425 | 115 | 8086 | 3972 | 2485 | 421
Table 7. Land-use coverage classified by SVM. Coverage in hectares.
Season | CC | SC | WB | LR | UA | SA | TR | Others
Zone 1. Emiliano Zapata
2017 s-s | 1128.31 | 318.12 | 2697.58 | 445.77 | 369.16 | 23.54 | 15,263.43 | 38,985.48
2017 a-w | 1422.23 | 348.11 | 4441.97 | 1141.84 | 441.71 | 224.77 | 21,411.62 | 29,799.01
2018 a-w | 1409.32 | 104.99 | 2499.00 | 820.43 | 343.68 | 432.84 | 22,769.34 | 30,354.33
2019 s-s | 1238.68 | 57.12 | 1515.61 | 339.56 | 374.70 | 47.11 | 9651.58 | 44,665.69
2019 a-w | 1514.59 | 124.28 | 2391.12 | 1079.26 | 420.81 | 92.18 | 21,616.85 | 31,992.72
Zone 2. Balancán
2017 s-s | 10,572.43 | 2720.76 | 5573.76 | 9431.75 | 1971.42 | 64.75 | 49,028.55 | 278,072.51
2017 a-w | 7820.23 | 4248.01 | 7457.56 | 5339.38 | 648.59 | 183.14 | 118,043.15 | 213,995.85
2018 s-s | 11,856.54 | 112.31 | 6350.09 | 2402.60 | 449.38 | 26.99 | 95,198.04 | 245,288.92
2018 a-w | 8354.53 | 4060.53 | 5575.26 | 6459.76 | 623.32 | 528.99 | 100,175.72 | 231,858.86
2019 s-s | 11,763.25 | 1723.65 | 4958.37 | 3374.17 | 572.66 | 131.95 | 45,152.25 | 278,353.70
2019 a-w | 8362.46 | 4150.80 | 5317.32 | 4871.12 | 788.57 | 179.20 | 107,152.55 | 226,615.16
Zone 3. Tenosique
2017 s-s | 3080.31 | 190.15 | 3569.70 | 2561.97 | 762.54 | 19.49 | 51,663.70 | 126,387.18
2017 a-w | 2245.22 | 365.40 | 4435.46 | 2900.94 | 622.71 | 86.30 | 87,333.25 | 90,146.37
2018 s-s | 4568.87 | 156.32 | 3933.85 | 1603.03 | 423.14 | 10.12 | 77,259.58 | 101,880.44
2018 a-w | 1856.00 | 290.86 | 3060.32 | 3871.29 | 593.40 | 159.43 | 76,436.89 | 101,967.70
2019 s-s | 4675.46 | 142.43 | 2885.48 | 3174.07 | 625.46 | 52.29 | 34,752.37 | 141,926.51
2019 a-w | 2269.37 | 452.24 | 3038.07 | 2497.06 | 708.77 | 64.25 | 87,729.06 | 93,745.83
Table 8. Land-use coverage classified by RF. Coverage in hectares.
Season | CC | SC | WB | LR | UA | SA | TR | Others
Zone 1. Emiliano Zapata
2017 s-s | 2769.57 | 1127.54 | 2931.90 | 472.56 | 316.34 | 10.78 | 11,841.73 | 39,760.49
2017 a-w | 2097.25 | 400.71 | 5091.82 | 1015.73 | 359.71 | 279.25 | 21,870.46 | 28,115.99
2018 s-s | 1369.75 | 577.45 | 3135.05 | 205.12 | 270.71 | 12.66 | 18,826.37 | 34,833.81
2018 a-w | 1925.99 | 498.32 | 2978.68 | 610.61 | 346.99 | 350.18 | 23,395.11 | 29,125.03
2019 s-s | 2307.29 | 1330.62 | 1500.89 | 315.94 | 316.96 | 44.63 | 11,915.94 | 41,364.37
2019 a-w | 2444.86 | 488.26 | 2853.61 | 789.13 | 400.51 | 124.95 | 21,574.19 | 30,555.41
Zone 2. Balancán
2017 s-s | 12,598.43 | 5191.36 | 5752.22 | 5840.99 | 832.99 | 32.41 | 28,248.56 | 299,238.95
2017 a-w | 6960.48 | 4725.05 | 8167.37 | 6800.37 | 524.80 | 284.70 | 119,955.66 | 210,317.24
2018 s-s | 5077.67 | 2661.62 | 6598.98 | 893.01 | 333.37 | 55.21 | 110,099.69 | 232,016.35
2018 a-w | 5848.83 | 4291.23 | 6134.84 | 6918.09 | 689.55 | 536.95 | 101,651.32 | 231,665.08
2019 s-s | 8608.91 | 4436.02 | 957.58 | 2227.57 | 495.57 | 122.34 | 42,612.84 | 294,275.07
2019 a-w | 9502.74 | 3217.59 | 5739.21 | 5667.76 | 692.72 | 308.39 | 109,065.48 | 235,420.01
Zone 3. Tenosique
2017 s-s | 4036.99 | 3230.39 | 3782.10 | 1596.70 | 495.61 | 10.09 | 47,177.75 | 127,905.39
2017 a-w | 3203.48 | 1243.79 | 4774.91 | 4049.55 | 460.66 | 273.88 | 86,276.84 | 87,951.92
2018 s-s | 2879.73 | 1664.45 | 4011.87 | 830.06 | 288.98 | 25.66 | 86,444.81 | 92,089.46
2018 a-w | 3307.66 | 1555.56 | 3313.37 | 4636.41 | 517.43 | 317.57 | 77,226.23 | 97,360.76
2019 s-s | 3651.14 | 3821.04 | 2948.24 | 1967.18 | 516.83 | 49.17 | 41,891.19 | 133,390.22
2019 a-w | 4646.64 | 1454.98 | 3256.64 | 3336.57 | 577.33 | 123.29 | 88,130.18 | 86,709.40
Table 9. Land-use coverage classified by CART. Coverage in hectares.
Season | CC | SC | WB | LR | UA | SA | TR | Others
Zone 1. Emiliano Zapata
2017 s-s | 7842.29 | 4393.24 | 2267.24 | 314.17 | 326.06 | 121.26 | 13,546.98 | 30,419.69
2017 a-w | 5441.09 | 2383.54 | 4353.35 | 1764.45 | 443.26 | 287.30 | 20,304.49 | 24,253.44
2018 s-s | 8962.11 | 1710.50 | 2633.75 | 236.01 | 247.73 | 49.14 | 17,263.76 | 28,127.93
2018 a-w | 4711.87 | 1614.52 | 1688.28 | 1194.54 | 378.51 | 1098.5 | 20,915.14 | 27,629.57
2019 s-s | 5958.21 | 3524.67 | 1718.43 | 372.18 | 285.36 | 97.77 | 10,477.66 | 36,765.65
2019 a-w | 5683.84 | 2510.76 | 1738.65 | 1306.61 | 459.65 | 913.41 | 19,569.59 | 27,048.42
Zone 2. Balancán
2017 s-s | 52,951.22 | 32,589.66 | 820.26 | 2452.76 | 1890.40 | 1095.26 | 54,479.32 | 205,457.03
2017 a-w | 24,318.22 | 22,800.81 | 7246.54 | 6072.87 | 835.25 | 612.87 | 116,901.50 | 178,947.87
2018 s-s | 52,631.68 | 13,250.76 | 634.00 | 606.79 | 381.43 | 149.91 | 120,140.88 | 164,940.43
2018 a-w | 27,318.82 | 13,204.62 | 5092.27 | 7121.10 | 867.07 | 889.55 | 98,020.03 | 205,222.44
2019 s-s | 39,514.52 | 26,606.65 | 807.00 | 1546.87 | 471.27 | 258.93 | 5200.33 | 238,330.39
2019 a-w | 29,281.72 | 15,707.49 | 3216.92 | 4647.01 | 1070.98 | 2476.39 | 110,215.11 | 191,120.28
Zone 3. Tenosique
2017 s-s | 21,547.47 | 13,259.83 | 231.72 | 1199.76 | 744.97 | 252.75 | 59,488.63 | 88,510.64
2017 a-w | 12,161.45 | 9266.72 | 4325.92 | 3491.13 | 584.33 | 284.85 | 83,586.10 | 74,534.53
2018 s-s | 25,649.19 | 6545.0 | 4153.16 | 572.72 | 273.31 | 107.84 | 91,325.50 | 59,608.26
2018 a-w | 13,428.54 | 5993.69 | 2798.79 | 4093.97 | 546.92 | 277.96 | 75,955.14 | 85,140.02
2019 s-s | 15,836.97 | 13,948.93 | 379.60 | 1638.31 | 487.28 | 174.50 | 44,401.90 | 108,368.25
2019 a-w | 13,933.62 | 8180.23 | 2161.82 | 2671.43 | 689.34 | 1112.83 | 83,780.06 | 75,705.71
Table 10. Percentage of corn and sorghum crop error by each classification method.
Season | Corn SVM | Corn RF | Corn CART | Sorghum SVM | Sorghum RF | Sorghum CART
Zone 1. Emiliano Zapata
2017 s-s | 7.9% | 62.26% | 86.67% | 6.32% | 73.57% | 93.21%
2017 a-w | 6.11% | 36.1% | 75.37% | 7.73% | 19.39% | 86.44%
2018 s-s | 7.9% | 25.75% | 88.65% | 12.42% | 95.15% | 98.36%
2018 a-w | 2.87% | 29.86% | 70.92% | 9.51% | 80.93% | 94.11%
2019 s-s | 5.41% | 49.07% | 80.38% | 10.4% | 96.24% | 98.58%
2019 a-w | 5.41% | 191.84% | 74.92% | 7.82% | 76.44% | 95.41%
Zone 2. Balancán
2017 s-s | 2.95% | 18.5% | 80.61% | 6.71% | 51.11% | 92.21%
2017 a-w | 3.21% | 8.85% | 249.37% | 5.54% | 14.81% | 82.34%
2018 s-s | 1.7% | 129.51% | 77.42% | 4.96% | 95.97% | 99.19%
2018 a-w | 5.39% | 35.51% | 70.98% | 4.85% | 9.76% | 70.67%
2019 s-s | 3.45% | 31.92% | 72.25% | 7.34% | 63.79% | 93.96%
2019 a-w | 3.41% | 14.9% | 72.38% | 4.61% | 23.44% | 74.71%
Zone 3. Tenosique
2017 s-s | 7.7% | 29.15% | 86.72% | no data | no data | no data
2017 a-w | 7.66% | 33.01% | 82.35% | 7.55% | 72.74% | 96.34%
2018 s-s | 7.2% | 47.99% | 88.84% | no data | no data | no data
2018 a-w | 6.68% | 47.45% | 87.05% | 10.68% | 83.15% | 95.62%
2019 s-s | 8.6% | 17.9% | 72.81% | 18.68% | 96.85% | 99.13%
2019 a-w | 6.07% | 13.37% | 71.11% | 6.87% | 71.06% | 94.85%
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
