Article

Mapping Center Pivot Irrigation Systems in the Southern Amazon from Sentinel-2 Images

1 Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100101, China
2 School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 101408, China
3 CNRS, UMR 6554 LETG, 35043 Rennes, France
* Author to whom correspondence should be addressed.
Water 2021, 13(3), 298; https://doi.org/10.3390/w13030298
Submission received: 10 October 2020 / Revised: 21 January 2021 / Accepted: 22 January 2021 / Published: 26 January 2021

Abstract

Irrigation systems play an important role in agriculture. Center pivot irrigation systems are popular in many countries because they are labor-saving and water-efficient. Monitoring the distribution of center pivot irrigation systems provides important information on agricultural production, water consumption and land use. Deep learning has become an effective method for image classification and object detection. In this paper, a new method to detect the precise shape of center pivot irrigation systems is proposed. It combines a lightweight real-time object detection network (PVANET) based on deep learning, an image classification model (GoogLeNet) and an accurate shape detection step (Hough transform) to detect and delineate center pivot irrigation systems and their characteristic circular shape. PVANET is lightweight and fast, GoogLeNet reduces the false detections of PVANET, and the Hough transform accurately detects the shape of center pivot irrigation systems. Experiments with Sentinel-2 images in Mato Grosso achieved a precision of 95% and a recall of 95.5%, demonstrating the effectiveness of the proposed method. Finally, with the accurate shapes of the center pivot irrigation systems detected, the irrigated area in the region was estimated.

1. Introduction

The Southern Amazon agricultural frontier has long been studied for its rapid expansion associated with high deforestation rates [1,2]. Nonetheless, the last decade has also been marked by the adoption of intensive agricultural practices, often considered an efficient strategy to ensure high agricultural productivity and diversification while limiting deforestation [3]. In this regard, a widely studied intensive practice is double cropping, which consists of cultivating two crops (usually soybean followed by maize or cotton) sequentially in the same year [4,5,6]. Although the adoption of double cropping systems depends on various socio-economic drivers [7], studies have also demonstrated the importance of regional rainfall regimes in explaining the spatial distribution of double cropping systems. For example, Arvor et al. [8] showed that regions with a longer rainy season (over 5 to 6 months) are better suited for double cropping systems. However, by adopting them, agricultural systems become more vulnerable (1) to the high spatio-temporal variability of rainfall, related to convective activity and regional climate mechanisms (e.g., the South Atlantic and Intertropical Convergence zones), and (2) to climate change. Indeed, recent studies point out that climate change in the Southern Amazon may result in a shorter rainy season, which may prevent the adoption of double cropping systems in the long run [9,10]. Consequently, many farmers have now installed irrigation systems to ensure a regular water supply for double cropping systems. Mapping center pivot irrigation systems thus appears essential to (1) monitor the evolution of the Southern Amazon agricultural frontier towards agricultural intensification and (2) assess the climate change adaptation strategies adopted by local farmers.
The emergence of new remote sensing data and techniques raises interesting prospects for the mapping of cropping practices [11]. In Mato Grosso, most case studies have focused on the analysis of vegetation index time series from MODIS images to monitor agricultural calendars and thus map the expansion of double cropping systems [4,6,12,13] and crop-livestock integration systems [14]. Yet, whereas the potential of remote sensing data to map irrigation systems and quantify water supply has long been tested [15], few studies have focused on mapping irrigation systems in the Southern Amazon [16].
Remote-sensing-based mapping of irrigated fields is usually performed through pixel-based classifications of physical values retrieved from optical [17,18], thermal [19] or radar [20] data. Yet, in large-scale flat agricultural landscapes such as those of Brazil or the US, irrigation systems mostly consist of center pivots, whose specific circular shape makes the use of object detection techniques especially relevant. The basic method to automatically detect circles is the Hough transform [21]. Its main principle is to project a binary contour image (computed by ad hoc filters) into a 3-D space that expresses the possibility that a circle (a, b, r) exists, (a, b) being the coordinates of the center and r the radius. The resulting circles are selected by voting. Though efficient for the direct detection of circular objects in satellite images, Hough transforms still suffer from limited precision, long computation times and large data storage requirements.
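To make the voting principle concrete, the following sketch implements the naive 3-D accumulator on a small synthetic circle. The function, grid size and radius range are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def hough_circle_3d(edge_points, size, r_range):
    """Vote in a (a, b, r) accumulator; return the best-scoring circle."""
    acc = np.zeros((size, size, len(r_range)), dtype=np.int32)
    thetas = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
    for (x, y) in edge_points:
        for k, r in enumerate(r_range):
            # Each edge point lies on circles whose centers sit on a
            # circle of radius r around it.
            a = np.round(x - r * np.cos(thetas)).astype(int)
            b = np.round(y - r * np.sin(thetas)).astype(int)
            ok = (a >= 0) & (a < size) & (b >= 0) & (b < size)
            np.add.at(acc, (a[ok], b[ok], np.full(ok.sum(), k)), 1)
    a, b, k = np.unravel_index(acc.argmax(), acc.shape)
    return a, b, r_range[k]

# Synthetic edge image: a circle of radius 8 centered at (20, 20).
t = np.linspace(0, 2 * np.pi, 40, endpoint=False)
pts = [(int(round(20 + 8 * np.cos(u))), int(round(20 + 8 * np.sin(u)))) for u in t]
print(hough_circle_3d(pts, 40, range(5, 12)))  # expect roughly (20, 20, 8)
```

The cubic accumulator makes the memory and speed limitations mentioned above apparent: its size grows with image area times the number of candidate radii.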
Recent years have been marked by the rapid emergence of deep learning applications in remote sensing [22]. More specifically, such approaches have been widely used for image classification [23,24], object detection [25,26] and image segmentation [27,28]. In particular, CNNs (convolutional neural networks) have been applied to recognize all kinds of objects in images, and several deep-learning-based studies have been proposed to monitor center pivot irrigation systems. For example, Zhang et al. [29] implemented a CNN approach to identify such pivots in North Colorado (USA) using Landsat data and the Crop Data Layer (CDL) to filter out non-cropland areas. They achieved high precision (95.85%) and recall (93.33%), but the method was time-consuming since it relied on a sliding window applied to the test images. In Brazil, Saraiva [30] used U-Net [31] to detect and map center pivot irrigation systems with a high precision of 99% and a relatively low recall of 88%. Similarly, Albuquerque [32] used U-Net semantic segmentation to map center pivot irrigation systems in three Brazilian study sites with an improved recall of 94.57% and a precision of 98.26%. However, the training samples of center pivots needed to be accurately delineated, which is labor-intensive and time-consuming. In addition, the processing was slow, especially when the size of the overlapping windows was large, a necessary condition to achieve high performance levels.
Although center pivot irrigation systems have a simple characteristic shape, CNN-based methods fail to detect them precisely because they rely on neighborhood information for classification. Since CNNs learn implicit representations of shape and texture relative to other shapes and locations [33], other means are needed to locate objects accurately. Zhang et al. [29] used a variance-based approach to locate center pivot irrigation systems, with preprocessing to reduce false detections. U-Net can locate the borders of center pivot irrigation systems through pixel segmentation, but it essentially extracts border pixels rather than detecting shapes.
Given prior knowledge of the circular shape of center pivot irrigation systems, Tang [34] proposed a combined PVANET-Hough method to automatically detect their accurate location. The method first relied on a lightweight real-time object detection network, PVANET [35], to detect circles and then applied a Hough transform to delineate the shape of center pivot irrigation systems and eliminate false detections of PVANET. The method was fast, and building the sample database was less demanding, requiring only annotated bounding boxes of center pivot irrigation systems. However, Hough transforms are prone to a high rate of false detections.
The aim of this study is to propose a method to detect and accurately locate the shapes of center pivot irrigation systems to estimate the irrigated area. The proposed method of detection-recognition-location combines PVANET, GoogLeNet and Hough transform. In the proposed method, we added an image classification process to the PVANET-Hough approach. After applying PVANET to locate potential center pivot irrigation systems, we trained an image classification model to further discriminate them from other circular image objects. We then used Hough transform to accurately delineate the center pivots. The proposed method is fast, accurate and can precisely delineate center pivot irrigation systems, thus improving the estimates of irrigated areas and water supply.

2. Materials and Methods

2.1. Study Area

The study area is in the Brazilian state of Mato Grosso, in the center-west region and on the southern edge of the Amazon basin (Figure 1). The third largest state by area in Brazil, Mato Grosso spans 903,357 square kilometers. It is an important agricultural state, being the leading national producer of soybeans and maize, and agriculture accounts for over 40% of the state's GDP. Mato Grosso experienced a rapid and significant increase in the number of center pivot irrigation systems between 2010 and 2017 [36], whose monitoring is of great importance for analyzing the process of agricultural intensification and diversification in the region.

2.2. Image Data

TCI (True Color Image) products of Sentinel-2 are used to detect the center pivot irrigation systems. The spatial resolution is 10 m. The images used in the study cover three major Amazon watersheds in Mato Grosso (Juruena, Teles Pires and Xingu rivers), covering an area of 750,000 square kilometers and two thirds of Mato Grosso. We analyzed a total of 77 image tiles, acquired between June and August 2017 and filtered to select images with low cloud cover. The image size is 10,980 × 10,980 pixels.

2.3. Methods

The proposed method consists of three parts: in the first part, PVANET was used to detect the center pivot irrigation systems candidates; in the second part, GoogLeNet [37] was used to further discriminate center pivot irrigation systems from false detections in the result of PVANET; in the third part, Hough transform was applied to accurately delineate the shape of center pivot irrigation systems. The proposed method is illustrated in Figure 2.

2.3.1. PVANET for Detection of Center Pivot Irrigation Systems

PVANET (Performance vs Accuracy Network) is a deep but lightweight neural network for real-time object detection. This method can achieve real-time object detection performance without losing accuracy compared to other state-of-the-art methods.
PVANET (Figure 3) follows the pipeline of Faster R-CNN [25], i.e., "CNN feature extraction + region proposal + RoI classification", with modifications to the feature extraction part: it adopts building blocks of concatenated ReLU (C.ReLU) [38], Inception [37] and HyperNet [39] to make the network thin and lightweight, maximizing computational efficiency.
C.ReLU (Figure 4) is inspired by the observation that filters in the early layers of CNNs tend to come in pairs, i.e., for every filter there is another filter of almost the opposite phase. Based on this observation, C.ReLU halves the number of convolution channels and concatenates each output with its negation (the output multiplied by −1), thus doubling the computational speed of the early stages without losing accuracy. Scaling and shifting after concatenation allow each channel's slope and activation threshold to differ from those of its opposite channel.
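A minimal NumPy sketch of the C.ReLU activation described above; the array shapes and the per-channel scale/shift values are illustrative assumptions, not taken from PVANET's configuration:

```python
import numpy as np

def crelu(x, scale, shift):
    # x: (channels, H, W) pre-activation from a half-width convolution.
    # Concatenate the response with its negation, apply ReLU, then a
    # per-channel scale and shift (as appended in PVANET).
    y = np.concatenate([x, -x], axis=0)          # double the channels
    y = np.maximum(y, 0.0)                       # ReLU
    return y * scale[:, None, None] + shift[:, None, None]

x = np.array([[[1.0, -2.0]]])                    # one channel, 1 x 2 map
scale, shift = np.ones(2), np.zeros(2)           # identity scale/shift
out = crelu(x, scale, shift)
print(out.shape)                                 # (2, 1, 2): channels doubled
```

Note how both the positive and the negative phase of the input survive the ReLU, one in each half of the channel stack, which is exactly the paired-filter behavior C.ReLU exploits.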
Inception is one of the most cost-effective building blocks for capturing both small and large objects in an input image. The receptive fields of CNN features should be large enough to capture large objects, yet small enough to accurately localize small ones. Inception fulfills both requirements by aggregating kernels of different sizes in the convolution layers, as illustrated in Figure 5. The 1 × 1 convolution plays an important role by preserving the receptive field of the previous layer so that small objects can be captured precisely. In PVANET, the 5 × 5 convolution is replaced with a sequence of two 3 × 3 convolutions.
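The equivalence behind that last substitution can be checked with the standard receptive-field recurrence for stacked convolutions (a small illustrative helper, not from the paper):

```python
def receptive_field(kernels, strides=None):
    """Receptive field of a stack of convolutions (stride 1 by default)."""
    strides = strides or [1] * len(kernels)
    rf, jump = 1, 1
    for k, s in zip(kernels, strides):
        rf += (k - 1) * jump   # each layer widens the field by (k-1)*jump
        jump *= s              # stride compounds the effective step
    return rf

print(receptive_field([5]))     # 5
print(receptive_field([3, 3]))  # 5 -> same coverage with fewer parameters
```

Two 3 × 3 kernels need 2 × 9 = 18 weights per channel pair versus 25 for one 5 × 5 kernel, while covering the same 5 × 5 input window, which is why the substitution is cheaper at equal receptive field.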
Multi-scale features have proven beneficial in many deep learning tasks [39,40]. Combining shallow fine-grained details with deep, highly abstracted information in the feature layers helps the subsequent region proposal and classification networks detect objects of different scales. PVANET combines the last layer and two intermediate layers whose scales are 2× and 4× that of the last layer, respectively. The mid-size layer serves as the reference: the 4× layer is down-scaled (pooling) and the last layer is up-scaled (linear interpolation) before the three are concatenated.
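The reference-layer concatenation can be sketched as follows, using average pooling for the down-scaling and nearest-neighbour repetition as a simple stand-in for linear interpolation; the channel counts and map sizes are illustrative, not PVANET's actual dimensions:

```python
import numpy as np

def downscale(f, k):
    """k x k average pooling (assumes H and W divisible by k)."""
    c, h, w = f.shape
    return f.reshape(c, h // k, k, w // k, k).mean(axis=(2, 4))

def upscale(f, k):
    """Nearest-neighbour upsampling (stand-in for linear interpolation)."""
    return f.repeat(k, axis=1).repeat(k, axis=2)

# Three feature maps at 4x, 2x (reference) and 1x the last layer's scale.
f4 = np.random.rand(16, 32, 32)   # shallow, fine-grained
f2 = np.random.rand(32, 16, 16)   # mid-size reference layer
f1 = np.random.rand(64, 8, 8)     # deep, highly abstracted

hyper = np.concatenate([downscale(f4, 2), f2, upscale(f1, 2)], axis=0)
print(hyper.shape)  # (112, 16, 16): all scales aligned on the reference grid
```

All three maps are resampled onto the reference layer's grid before channel-wise concatenation, so the downstream networks see fine detail and abstract context at every position.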

2.3.2. GoogLeNet for Recognition of Center Pivot Irrigation Systems

As CNN-based object detection methods can only predict the location and shape of objects approximately, objects with a circular shape, such as forest patches, may be mistakenly detected as center pivot irrigation systems (Figure 9). We therefore used a GoogLeNet image classification model to qualify whether the objects detected by PVANET actually correspond to center pivot irrigation systems or are false detections. GoogLeNet is a convolutional neural network for image classification that achieves high performance while keeping the number of parameters low and remaining computationally efficient. GoogLeNet won first place in the ILSVRC 2014 Classification Challenge, using 12 times fewer parameters than AlexNet [41], the winning architecture of the ILSVRC 2012 Classification Challenge, while being significantly more accurate. GoogLeNet has 22 layers and uses Inception modules in its architecture to improve performance, as described in Section 2.3.1.

2.3.3. Hough Transform for Accurate Location of Center Pivot Irrigation Systems

Because the crops under center pivot irrigation systems form distinctive circles, we can locate these shapes and obtain the coordinates of the centers and the radii using the Hough transform. The Hough transform is highly reliable and robust to noise, transformation and deformation [42]. It finds the maxima accumulated in a parameter space through a voting algorithm, from which the set of distinctive shapes can be obtained. The Hough transform can therefore be used to detect objects with particular shapes; here, we only detect circles.
In Hough circle detection, the edge pixels of the image space are mapped to a 3-D parameter space: an arbitrary point on a circle is transformed into a right circular cone, and the cones of all the points on one circle intersect at a single point in the 3-D space. However, searching a 3-D Hough space for the centroid and radius of each circular object requires far more memory and is much slower. To address these problems, improved methods have been proposed, including the 2-1 Hough transform (21HT) [43], in which a 2-D accumulator and a 1-D histogram substitute for the 3-D Hough search space, thus reducing storage requirements.
This paper used 21HT to achieve fast processing. First, edge detection is performed on the image. Second, for every non-zero point in the edge image, the local gradient is considered, and a 2-D accumulator accumulates votes along the normal direction of each edge point. A 1-D histogram of the distances between each edge point and a candidate center is then used to identify the radius. False peaks in the center-finding stage can lead to significant computational cost in the second stage, especially if a low threshold is used to detect small circles. Since only a single 2-D accumulator and a 1-D histogram are used, the required storage space is quite small. Moreover, based on prior knowledge, the radius can be limited to the range (rmin, rmax), which further improves the detection speed. 21HT runs much faster and overcomes the otherwise sparse population of the 3-D accumulator. With the method described above, the circular center pivot irrigation systems can be accurately detected and located, as shown in Figure 6.
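A pure-NumPy sketch of the two-stage idea on a synthetic circle. Because the points lie exactly on a circle, the gradient normals are known analytically; a real implementation would estimate them from the edge detector. All names and sizes here are illustrative:

```python
import numpy as np

def two_stage_hough(points, normals, size, r_min, r_max):
    """Sketch of 21HT: a 2-D accumulator finds the center by voting
    along each edge point's gradient normal, then a 1-D radius
    histogram of distances to that center picks the radius."""
    acc = np.zeros((size, size), dtype=np.int32)
    for (x, y), (nx, ny) in zip(points, normals):
        for t in range(r_min, r_max + 1):         # march along the normal
            for s in (1, -1):                     # both directions
                a = int(round(x + s * t * nx))
                b = int(round(y + s * t * ny))
                if 0 <= a < size and 0 <= b < size:
                    acc[a, b] += 1
    ca, cb = np.unravel_index(acc.argmax(), acc.shape)
    # Stage 2: 1-D histogram of distances to the candidate center.
    d = [int(round(np.hypot(x - ca, y - cb))) for (x, y) in points]
    hist = np.bincount(d, minlength=r_max + 1)
    return ca, cb, int(hist[r_min:r_max + 1].argmax()) + r_min

# Synthetic circle of radius 10 centered at (25, 25); on a circle the
# gradient normal points radially, so it is known in closed form.
t = np.linspace(0, 2 * np.pi, 60, endpoint=False)
pts = [(25 + 10 * np.cos(u), 25 + 10 * np.sin(u)) for u in t]
nrm = [(np.cos(u), np.sin(u)) for u in t]
print(two_stage_hough(pts, nrm, 50, 5, 15))  # expect (25, 25, 10)
```

Compared with the 3-D accumulator, only a 2-D array and a 1-D histogram are stored, and restricting t to (r_min, r_max) mirrors the radius bound mentioned above.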

2.3.4. Training Datasets and Training of PVANET and GoogLeNet

A set of images with center pivot irrigation systems was sampled as the training and validation data for PVANET. In this study, images of size 500 × 500 were randomly cropped from a Sentinel-2 tile of Mato Grosso whose acquisition time differed from that of the test image tiles acquired in July 2017. In total, 613 images with center pivot irrigation systems were selected from the cropped images, subsequently annotated, and then used as the dataset to train and validate PVANET. Examples of the annotated samples are shown in Figure 7; both complete and incomplete footprints of center pivot irrigation systems were included, randomly located within the images. Each sample image may contain a different number of center pivot irrigation systems. Ninety percent of the dataset was used as training data and ten percent was reserved as validation data.
The dataset used to train and validate GoogLeNet consisted of images with and without center pivot irrigation systems, since GoogLeNet classifies the detections of PVANET into two classes: center pivot irrigation systems and non-center pivot irrigation systems. Images with center pivot irrigation systems were cropped from the training samples of PVANET according to the bounding box annotations (Figure 8a). For the non-center pivot irrigation system samples, we used images with objects of similar shape, such as forest patches (Figure 8b), because PVANET tends to mistakenly detect these objects as center pivot irrigation systems. To obtain these samples, image patches of size 250 × 250 were randomly cropped from a Sentinel-2 tile of Mato Grosso whose acquisition time differed from that of the test image tiles acquired in July 2017, and the patches containing objects with shapes similar to center pivot irrigation systems were selected. Examples of the sample images are shown in Figure 8. There were 1142 samples of center pivot irrigation systems and 1057 samples of non-center pivot irrigation systems. Ninety percent of the dataset was used as training data and ten percent as validation data. As GoogLeNet has a fixed input size of 224 × 224, all the training and test images were resized to 224 × 224.
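The 90/10 split and the resizing to GoogLeNet's fixed 224 × 224 input can be sketched as follows; nearest-neighbour resizing is used here for simplicity, since the paper does not specify its preprocessing at this level of detail, and the seed and helper names are illustrative:

```python
import numpy as np

def resize_nn(img, out_h=224, out_w=224):
    """Nearest-neighbour resize to GoogLeNet's fixed 224 x 224 input."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows[:, None], cols]

def split(samples, train_frac=0.9, seed=0):
    """90/10 train/validation split, as used for both networks."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(samples))
    cut = int(train_frac * len(samples))
    return [samples[i] for i in idx[:cut]], [samples[i] for i in idx[cut:]]

patch = np.zeros((250, 250, 3), dtype=np.uint8)   # a 250 x 250 sample patch
print(resize_nn(patch).shape)                      # (224, 224, 3)
train, val = split(list(range(2199)))              # 1142 + 1057 samples
print(len(train), len(val))                        # 1979 220
```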
With the prepared datasets, we fine-tuned a PVANET model pre-trained on the ILSVRC2012 training images for 1000-class image classification, with a learning rate of 0.001. For GoogLeNet, we fine-tuned a model pre-trained on ImageNet [44], with a learning rate of 0.0002. PVANET and GoogLeNet were implemented using Caffe [45]. Training was done on a machine with an Intel Xeon E5-2620 CPU with 32 cores, 126 GB of RAM and 4 NVIDIA TITAN Xp graphics cards.

2.3.5. Evaluation

Each of the 77 image tiles in Mato Grosso (10,980 × 10,980 pixels each) was cropped into blocks of 500 × 500 pixels with an overlap of 200 pixels between neighboring blocks, which were fed into PVANET to detect the center pivot irrigation systems. After all blocks of an image tile were processed, duplicate detections between blocks were removed to obtain the detections for the whole tile. The detections were then cropped from the image tiles and fed into GoogLeNet to recognize whether each one was a center pivot irrigation system. Finally, the Hough transform was applied to obtain the accurate shape of the center pivot irrigation systems.
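The tiling and duplicate-removal steps can be sketched as follows. The IoU threshold used to merge duplicates is an assumption, as the paper does not state its criterion:

```python
def tile_origins(size=10980, block=500, overlap=200):
    """Top-left corners of 500 x 500 blocks with 200 px overlap,
    clamped so the last block still fits inside the tile."""
    step = block - overlap
    xs = list(range(0, size - block + 1, step))
    if xs[-1] != size - block:
        xs.append(size - block)
    return [(x, y) for x in xs for y in xs]

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    if inter == 0:
        return 0.0
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def deduplicate(boxes, thr=0.5):
    """Keep one box per object when overlapping blocks detect it twice."""
    kept = []
    for b in boxes:
        if all(iou(b, k) < thr for k in kept):
            kept.append(b)
    return kept

origins = tile_origins()
print(len(origins))  # blocks per tile
dets = [(10, 10, 60, 60), (12, 11, 61, 62), (200, 200, 260, 260)]
print(len(deduplicate(dets)))  # 2: the first two boxes are duplicates
```

The 200-pixel overlap guarantees that a pivot of up to about 200 pixels across appears whole in at least one block, at the cost of the duplicate detections the last step removes.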
We used two quantitative indices to evaluate the results: precision and recall, or equivalently the false detection rate and missed detection rate, since missed detection rate = 1 − recall and false detection rate = 1 − precision. Precision is defined as the number of correct detections divided by the number of correct detections plus the number of false detections; it tells us how many of the detected center pivot irrigation systems are correct. Recall is defined as the number of correct detections divided by the number of ground-truth systems; it tells us how many of the center pivot irrigation systems that should be detected are detected. We manually identified all the center pivot irrigation systems in the image tiles; there were 641 in the image tiles of Mato Grosso.
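Applied to the counts reported in the Results section, the two indices and their complements can be computed with a trivial helper (for illustration only):

```python
def precision_recall(correct, false_pos, ground_truth):
    """Precision, recall and their complements
    (false detection rate and missed detection rate)."""
    precision = correct / (correct + false_pos)
    recall = correct / ground_truth
    return precision, recall, 1 - precision, 1 - recall

# Counts after GoogLeNet recognition: 612 correct, 32 false, 641 ground truth.
p, r, fdr, mdr = precision_recall(correct=612, false_pos=32, ground_truth=641)
print(round(p * 100, 1), round(r * 100, 1),
      round(fdr * 100, 1), round(mdr * 100, 1))  # 95.0 95.5 5.0 4.5
```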

3. Results

The result after the recognition of GoogLeNet is shown in Table 1. There were 644 detected candidate center pivot irrigation systems, 612 of the detections were correct, 32 of the detections were false and 29 center pivot irrigation systems were missed. The precision after the recognition of GoogLeNet is 95%, the recall is 95.5%, the missed detection rate is 4.5% and the false detection rate is 5%.
In the results of PVANET (Table 1), there were 846 detected candidates of center pivot irrigation systems, of which 619 were correct and 227 were false, and 22 center pivot irrigation systems were missed. The precision of PVANET is 73.2%, the recall is 96.6%, the missed detection rate is 3.4% and the false detection rate is 26.8%. PVANET clearly has a very high false detection rate. Examples of false detections are shown in Figure 9: many forest patches with near-circular contours, as well as riverbanks, were mistakenly detected as center pivot irrigation systems.
Adding GoogLeNet decreased the number of false detections of PVANET from 227 to 32. Examples of the remaining false detections, mainly circular cropland, are shown in Figure 10. Of all the candidate detections, 612 remained correct after GoogLeNet recognition and 29 center pivot irrigation systems were missed, meaning that 7 more were missed because of GoogLeNet. Examples of center pivot irrigation systems missed by GoogLeNet are shown in Figure 11; they are systems with irregular shapes or cloud cover. Overall, the false detection rate decreased by a large margin at the cost of only a very small decrease in recall.
As the results show, PVANET alone cannot delineate the shape of center pivot irrigation systems and produces many false detections. GoogLeNet distinguishes center pivot irrigation systems from other objects among the detections of PVANET, reducing the false detections, and the Hough transform accurately locates their shape. Examples of the location of center pivot irrigation systems using the Hough transform are shown in Figure 12. The detection of an image of 10,980 × 10,980 pixels by PVANET took 83 s; the filtering of all the PVANET detections by GoogLeNet took 21 s; and the location of all the detections using the Hough transform took 2 s.
After identifying the center coordinates and radii of the center pivot irrigation systems, we were able to calculate the total irrigated area in the region. Based on a manual delineation of center pivot irrigation systems in images of the region, we estimated a total area of 74,221 ha, which we consider to represent "ground truth" for the purposes of this analysis. Based on our automated detections, we estimated a total irrigated area of 74,133 ha, an error rate of 0.12%. Filtering out false detections decreased our estimate to 70,654 ha, an error rate of 4.81%. This difference in error rate is an artifact: over the entire region, the total area of false detections (errors of commission) happens to compensate for the total area of missed detections (errors of omission). Our manual and automated estimates of irrigated area in 2017 were 8% and 12% lower, respectively, than the 80,234 ha reported by ANA (National Water Agency of Brazil) [36] for the study area in Mato Grosso for the same year.
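Given detected radii, the area computation reduces to summing circle areas. The sketch below converts Sentinel-2 pixel radii (10 m per pixel) to meters and reports hectares; the example radii are hypothetical, not detections from the paper:

```python
import math

def irrigated_area_ha(radii_m):
    """Total irrigated area in hectares from pivot radii in meters."""
    return sum(math.pi * r ** 2 for r in radii_m) / 10_000  # m^2 -> ha

# Hypothetical detections: radii in Sentinel-2 pixels (10 m per pixel).
radii_px = [50, 80]
radii_m = [r * 10 for r in radii_px]                 # 500 m and 800 m
print(round(irrigated_area_ha(radii_m), 1))          # 279.6 ha for two pivots
```

Because incomplete pivots are still fitted with full circles (Section 4.1.2), an estimate built this way is an upper bound for irregularly shaped systems.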

4. Discussion

4.1. The Effect of Several Factors on the Detection and Location

There are many factors that affect the detection and location of center pivot irrigation systems, including their size and shape, as well as the acquisition time of images. In this section, we will discuss the influence of several factors on the detection and location of center pivot irrigation systems.

4.1.1. The Size of Center Pivot Irrigation Systems

Center pivot irrigation systems have variable sizes; typical systems in Mato Grosso have radii of 500–800 m [46]. With the Inception modules and multi-scale feature concatenation in PVANET, our method can detect center pivot irrigation systems of different sizes. The radii of the detected systems range from 31 to 87 pixels (310–870 m), which demonstrates this ability. Although our method missed some smaller center pivot irrigation systems, most of the mid-size and large ones were successfully detected.

4.1.2. The Shape of Center Pivot Irrigation Systems

Most center pivot irrigation systems have the shape of a complete circle, but a few form incomplete circles, as shown in Figure 13. The proposed method can still detect these systems; however, as the Hough transform can only produce complete circles, the detected delineations tend to overestimate the irrigated area for such irregular shapes (Figure 13).

4.1.3. The Acquisition Time of the Images

At different times of the year, the crops under center pivot irrigation systems are at different phenological phases, and the areas surrounding the crops also differ. Therefore, center pivot irrigation systems and their surroundings show different appearances and contrasts in satellite images. As shown in Figure 14a, in June, when the crops are still growing, the center pivot irrigation systems contrast strongly with their surroundings. In October, when the crops have just been planted, the contrast is lower (Figure 14b), and detection under these conditions suffers from a higher missed detection rate (Figure 14c,d). Center pivot irrigation systems are installed to ensure the water supply at the end of the rainy season (usually in May-June) to enable the safrinha, the second crop after soybean [46]. These are the reasons why we chose images acquired between June and August to detect the center pivot irrigation systems in the study area.
Our manual and automated estimates of irrigated area in the study area in 2017, based on images acquired between June and August, were 8% and 12% lower, respectively, than those reported by ANA for the same year. This discrepancy is likely due to the fact that the images used to produce ANA's irrigation maps had different acquisition times than the images used in this study: to obtain a more accurate estimate of the irrigated area for that year, ANA's estimation was presumably based on images from throughout the year, which explains why it is larger than ours. The estimation of our approach could likewise be improved by detecting and analyzing images throughout the year.

4.1.4. Cloud Cover

The selected 77 images in the study area have low cloud cover. When there is cloud cover in a center pivot irrigation system, it will inevitably affect the detection. We tested our method in images with cloud cover to see how it affects the detection of center pivot irrigation systems. The results show that our method successfully detects some center pivot irrigation systems with cloud cover, even with heavy cloud cover (Figure 15), which demonstrates the robustness of the proposed method. However, there are also some center pivot irrigation systems that our method fails to detect.

4.2. Using a State-of-the-Art Detection Network

To evaluate the performance of our approach relative to more common ones, we compared it with the well-known detection network YOLOv4 [47]. The results of YOLOv4 are shown in Table 2. The precision of YOLOv4 is 88.1% and its recall is 97.2%, both better than PVANET. After GoogLeNet recognition, the precision is 98.9% and the recall is 96.1%, slightly better than the precision and recall of PVANET after GoogLeNet recognition.
Though YOLOv4 outperforms PVANET in precision and recall, its number of false detections (84) remains substantial, corresponding to a false detection rate of 11.9%. With the proposed detection-recognition-location framework, the false detection rate falls to 1.1% (with a small decrease in recall), which is, in our opinion, a noteworthy result.
As for computation time, training YOLOv4 requires 12 h, and detection on an image tile of 10,980 × 10,980 pixels takes 40 s, with GoogLeNet recognition of all YOLOv4 detections taking a further 16 s. For PVANET, training requires 8 h and detection of an image tile of the same size takes 83 s, with GoogLeNet recognition of all PVANET detections taking 21 s. All these experiments are reported in Table 3 and Table 4.

4.3. Implications for the Monitoring of Agricultural Dynamics in the Southern Amazon

Beyond these methodological considerations, this study raises important perspectives for a better understanding of agricultural dynamics in the Southern Amazon. First, the accurate location of center pivots could serve as a basis for monitoring the sequential cropping practices applied in irrigation systems. Indeed, surveys with local farmers indicated that irrigation is mainly used to secure double cropping systems, enabling anticipated sowing before the beginning of the rainy season and delayed harvest. Some producers also reported the possibility of using irrigation systems to harvest three crops per year. In this regard, the approach presented here could be complemented by monitoring phenological cycles in irrigated areas based on MODIS time series of vegetation indices, whose potential to monitor crop calendars and especially double cropping systems has long been proven [4,6,13].
Second, long-term monitoring of center pivot irrigation systems could provide relevant information on current strategies towards agricultural intensification. Counting the number of pivots and estimating their average areas is important for assessing the speed and scale of adoption of irrigation practices in Mato Grosso. In addition, determining the average duration that a center pivot remains active on a single field would improve understanding of inter-annual rotation strategies promoted by farmers.
Third, mapping center pivot irrigation systems is essential for assessing the impacts of agricultural practices on water resources. The results introduced in this study could serve as a basis to estimate the amounts of water used for irrigation and its potential cumulative effects on the hydrological network [48]. In addition, it could help in refining current maps of artificial water bodies, which fail to characterize the final use of small farm dams [49,50,51]. Indeed, we suggest that fine-scale analysis of the distances between irrigation systems and farm dams may help to discriminate reservoirs intended for fish farming from those designed to supply water for irrigation systems.
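The proximity analysis suggested above, matching each detected pivot to its nearest mapped farm dam, can be sketched in a few lines. The coordinates (UTM metres) and the 2 km threshold below are purely illustrative assumptions, not values from the study:

```python
import math

# Hypothetical pivot centres and dam locations, in UTM metres.
pivot_centers = [(512300.0, 8567400.0), (498150.0, 8571200.0)]
dam_locations = [(512900.0, 8567100.0), (530000.0, 8590000.0)]

def nearest_dam(pivot, dams):
    """Return (index, distance in metres) of the dam closest to a pivot centre."""
    px, py = pivot
    return min(
        ((i, math.hypot(px - dx, py - dy)) for i, (dx, dy) in enumerate(dams)),
        key=lambda pair: pair[1],
    )

THRESHOLD_M = 2000.0  # assumed maximum pumping distance (illustrative)
for pivot in pivot_centers:
    i, d = nearest_dam(pivot, dam_locations)
    use = "irrigation supply" if d <= THRESHOLD_M else "other use (e.g., fish farming)"
    print(f"pivot at {pivot}: nearest dam {i} at {d:.0f} m -> {use}")
```

In practice the threshold would need calibration against field surveys, but the nearest-neighbor matching itself is straightforward once both layers are mapped.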
Finally, our results lend insight into farmers’ adaptation strategies in the face of climate change. Soy producers still consider local climate conditions (i.e., a long and regular rainy season) a major asset for soybean cultivation in the Southern Amazon, and many deny climate change [52,53,54], even though numerous scientific studies have documented significant trends towards a shortening of the rainy season that may prevent double cropping systems from being sustained over the long run [9,10]. Analyzing the medium- to long-term spatio-temporal proliferation of center pivot irrigation systems at regional scales may thus provide important information on the concrete actions carried out by farmers to mitigate the expected impacts of climate change on agricultural production [55,56].

5. Conclusions

In this paper, a detection-recognition-location framework combining PVANET, GoogLeNet and the Hough transform is proposed for the detection and accurate location of center pivot irrigation systems. Detection with PVANET, which is lightweight and fast, yields all candidate center pivot irrigation systems; recognition with GoogLeNet reduces the false detections among these candidates, as it further distinguishes circular center pivot irrigation systems from other objects with a circular shape; and the Hough transform accurately locates the shape of each system. Once the center coordinates and radii of the center pivot irrigation systems are obtained, the total irrigated area in the region can be estimated. The estimated irrigated area is 70,654 ha, with an error rate of 4.81%, which has important implications for understanding annual water consumption and for precise management of water resources in the study area. The most important contribution of this paper is the detection-recognition-location framework itself, especially for objects that can be visually interpreted by shape. The approach is flexible: other state-of-the-art detection networks, such as YOLOv4, can be substituted in the same framework, and, besides GoogLeNet, other classification models, such as ResNet and EfficientNet [57], can serve as the classification component. In future work, we will carry out a more comprehensive evaluation of combinations of detection and classification models beyond PVANET and GoogLeNet.
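The final area estimate follows mechanically from the Hough output: each detected circle contributes an area of πr². A minimal sketch, assuming the Hough transform returns each pivot's radius in pixels and using Sentinel-2's 10 m ground sampling distance; the radii listed are hypothetical:

```python
import math

PIXEL_SIZE_M = 10.0  # Sentinel-2 ground sampling distance for the bands used

def pivot_area_ha(radius_px):
    """Irrigated area of one circular center pivot, in hectares."""
    radius_m = radius_px * PIXEL_SIZE_M
    return math.pi * radius_m ** 2 / 10_000.0  # m^2 -> ha

# Hypothetical radii (in pixels) as returned by Hough circle detection.
radii_px = [50, 62, 45]
total_ha = sum(pivot_area_ha(r) for r in radii_px)
print(f"estimated irrigated area: {total_ha:.1f} ha")
```

For example, a pivot with a 50-pixel Hough radius spans 500 m on the ground and irrigates about 78.5 ha; summing over all detected pivots gives the regional total.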

Author Contributions

Conceptualization, J.T., D.A., T.C. and P.T.; methodology, J.T. and P.T.; formal analysis, J.T. and P.T.; investigation, J.T. and P.T.; resources, J.T. and P.T.; data curation, J.T.; writing—original draft preparation, J.T., D.A. and P.T.; writing—review and editing, J.T., D.A., T.C. and P.T.; visualization, J.T. and D.A.; supervision, P.T.; project administration, P.T.; funding acquisition, P.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (grant numbers: 41971396, 41701397 and 41701399), by the French National Centre for Scientific Research (CNRS) through the SCOLTEL International Emerging Action (grant number: 231438) and by the French National Centre for Space Research (CNES) through the CASTAFIOR project (grant number: 181670).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data available in a publicly accessible repository.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Fearnside, P.M. Soybean cultivation as a threat to the environment in Brazil. Environ. Conserv. 2001, 28, 23–38.
2. Morton, D.C.; DeFries, R.S.; Shimabukuro, Y.E.; Anderson, L.O.; Arai, E.; del Bon Espirito-Santo, F.; Freitas, R.; Morisette, J. Cropland expansion changes deforestation dynamics in the southern Brazilian Amazon. Proc. Natl. Acad. Sci. USA 2006, 103, 14637–14641.
3. Arvor, D.; Meirelles, M.; Dubreuil, V.; Bégué, A.; Shimabukuro, Y.E. Analyzing the agricultural transition in Mato Grosso, Brazil, using satellite-derived indices. Appl. Geogr. 2012, 32, 702–713.
4. Arvor, D.; Jonathan, M.; Meirelles, M.S.P.; Dubreuil, V.; Durieux, L. Classification of MODIS EVI time series for crop mapping in the state of Mato Grosso, Brazil. Int. J. Remote Sens. 2011, 32, 7847–7871.
5. Kastens, J.H.; Brown, J.C.; Coutinho, A.C.; Bishop, C.R.; Esquerdo, J.C.D.M. Soy moratorium impacts on soybean and deforestation dynamics in Mato Grosso, Brazil. PLoS ONE 2017, 12, e0176168.
6. Brown, J.C.; Kastens, J.H.; Coutinho, A.C.; de Castro Victoria, D.; Bishop, C.R. Classifying multiyear agricultural land use data from Mato Grosso using time-series MODIS vegetation index data. Remote Sens. Environ. 2013, 130, 39–50.
7. VanWey, L.K.; Spera, S.; de Sa, R.; Mahr, D.; Mustard, J.F. Socioeconomic development and agricultural intensification in Mato Grosso. Philos. Trans. R. Soc. Biol. Sci. 2013, 368, 20120168.
8. Arvor, D.; Dubreuil, V.; Ronchail, J.; Simões, M.; Funatsu, B.M. Spatial patterns of rainfall regimes related to levels of double cropping agriculture systems in Mato Grosso (Brazil). Int. J. Climatol. 2013, 34, 2622–2633.
9. Fu, R.; Yin, L.; Li, W.; Arias, P.A.; Dickinson, R.E.; Huang, L.; Chakraborty, S.; Fernandes, K.; Liebmann, B.; Fisher, R.; et al. Increased dry-season length over southern Amazonia in recent decades and its implication for future climate projection. Proc. Natl. Acad. Sci. USA 2013, 110, 18110–18115.
10. Arvor, D.; Funatsu, B.; Michot, V.; Dubreuil, V. Monitoring Rainfall Patterns in the Southern Amazon with PERSIANN-CDR Data: Long-Term Characteristics and Trends. Remote Sens. 2017, 9, 889.
11. Bégué, A.; Arvor, D.; Bellon, B.; Betbeder, J.; de Abelleyra, D.; Ferraz, R.P.D.; Lebourgeois, V.; Lelong, C.; Simões, M.; Verón, S.R. Remote Sensing and Cropping Practices: A Review. Remote Sens. 2018, 10, 99.
12. Chen, Y.; Lu, D.; Moran, E.; Batistella, M.; Dutra, L.V.; Sanches, I.D.; da Silva, R.F.B.; Huang, J.; Luiz, A.J.B.; de Oliveira, M.A.F. Mapping croplands, cropping patterns, and crop types using MODIS time-series data. Int. J. Appl. Earth Obs. Geoinf. 2018, 69, 133–147.
13. Picoli, M.C.A.; Camara, G.; Sanches, I.; Simões, R.; Carvalho, A.; Maciel, A.; Coutinho, A.; Esquerdo, J.; Antunes, J.; Begotti, R.A.; et al. Big earth observation time series analysis for monitoring Brazilian agriculture. ISPRS J. Photogramm. Remote Sens. 2018, 145, 328–339.
14. Kuchler, P.C.; Bégué, A.; Simões, M.; Gaetano, R.; Arvor, D.; Ferraz, R.P. Assessing the optimal preprocessing steps of MODIS time series to map cropping systems in Mato Grosso, Brazil. Int. J. Appl. Earth Obs. Geoinf. 2020, 92, 102150.
15. Karthikeyan, L.; Chawla, I.; Mishra, A.K. A review of remote sensing applications in agriculture for food security: Crop growth and yield, irrigation, and crop losses. J. Hydrol. 2020, 586, 124905.
16. Zaiatz, A.; Zolin, C.; Lopes, T.; Vendrusculo, L. Classificação de áreas irrigadas por pivos centrais durante o ano de 2014 em uma sub-bacia do alto rio Teles Pires. Jorn. Cient. Embrapa Agrossilvilpastor. 2017, 5, 44–47.
17. Pervez, M.S.; Brown, J.F. Mapping Irrigated Lands at 250-m Scale by Merging MODIS Data and National Agricultural Statistics. Remote Sens. 2010, 2, 2388–2412.
18. Sharma, A.; Hubert-Moy, L.; Buvaneshwari, S.; Sekhar, M.; Ruiz, L.; Bandyopadhyay, S.; Corgne, S. Irrigation History Estimation Using Multitemporal Landsat Satellite Images: Application to an Intensive Groundwater Irrigated Agricultural Watershed in India. Remote Sens. 2018, 10, 893.
19. Lebourgeois, V.; Chopart, J.L.; Bégué, A.; Mézo, L.L. Towards using a thermal infrared index combined with water balance modelling to monitor sugarcane irrigation in a tropical environment. Agric. Water Manag. 2010, 97, 75–82.
20. Sharma, A.K.; Hubert-Moy, L.; Sriramulu, B.; Sekhar, M.; Ruiz, L.; Bandyopadhyay, S.; Mohan, S.; Corgne, S. Evaluation of Radarsat-2 quad-pol SAR time-series images for monitoring groundwater irrigation. Int. J. Digit. Earth 2019, 12, 1177–1197.
21. Duda, R.O.; Hart, P.E. Use of the Hough transformation to detect lines and curves in pictures. Commun. ACM 1972, 15, 11–15.
22. Zhu, X.X.; Tuia, D.; Mou, L.; Xia, G.; Zhang, L.; Xu, F.; Fraundorfer, F. Deep Learning in Remote Sensing: A Comprehensive Review and List of Resources. IEEE Geosci. Remote Sens. Mag. 2017, 5, 8–36.
23. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2015, arXiv:1409.1556.
24. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
25. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149.
26. Redmon, J.; Farhadi, A. YOLOv3: An Incremental Improvement. arXiv 2018, arXiv:1804.02767.
27. Shelhamer, E.; Long, J.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. arXiv 2016, arXiv:1605.06211.
28. Takikawa, T.; Acuna, D.; Jampani, V.; Fidler, S. Gated-SCNN: Gated Shape CNNs for Semantic Segmentation. arXiv 2019, arXiv:1907.05740.
29. Zhang, C.; Yue, P.; Di, L.; Wu, Z. Automatic Identification of Center Pivot Irrigation Systems from Landsat Images Using Convolutional Neural Networks. Agriculture 2018, 8, 147.
30. Saraiva, M.; Protas, É.; Salgado, M.; Souza, C. Automatic Mapping of Center Pivot Irrigation Systems from Satellite Images Using Deep Learning. Remote Sens. 2020, 12, 558.
31. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. arXiv 2015, arXiv:1505.04597.
32. de Albuquerque, A.O.; de Carvalho Júnior, O.A.; Carvalho, O.L.F.d.; de Bem, P.P.; Ferreira, P.H.G.; de Moura, R.d.S.; Silva, C.R.; Trancoso Gomes, R.A.; Fontes Guimarães, R. Deep Semantic Segmentation of Center Pivot Irrigation Systems from Remotely Sensed Data. Remote Sens. 2020, 12, 2159.
33. Milletari, F.; Ahmadi, S.A.; Kroll, C.; Plate, A.; Rozanski, V.; Maiostre, J.; Levin, J.; Dietrich, O.; Ertl-Wagner, B.; Bötzel, K.; et al. Hough-CNN: Deep learning for segmentation of deep brain regions in MRI and ultrasound. Comput. Vis. Image Underst. 2017, 164, 92–102.
34. Tang, J.W.; Arvor, D.; Corpetti, T.; Tang, P. Pvanet-Hough: Detection and location of center pivot irrigation systems from Sentinel-2 images. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, V-3-2020, 559–564.
35. Kim, K.H.; Hong, S.; Roh, B.; Cheon, Y.; Park, M. PVANET: Deep but Lightweight Neural Networks for Real-time Object Detection. arXiv 2016, arXiv:1608.08021.
36. Catálogo de Metadados da ANA. Available online: https://metadados.snirh.gov.br/geonetwork/srv/por/catalog.search#/metadata/e2d38e3f-5e62-41ad-87ab-990490841073 (accessed on 14 December 2020).
37. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going Deeper with Convolutions. arXiv 2014, arXiv:1409.4842.
38. Shang, W.; Sohn, K.; Almeida, D.; Lee, H. Understanding and improving convolutional neural networks via concatenated rectified linear units. In Proceedings of the 33rd International Conference on International Conference on Machine Learning, ICML’16, New York, NY, USA, 19–24 June 2016; JMLR.org: New York, NY, USA, 2016; Volume 48, pp. 2217–2225.
39. Kong, T.; Yao, A.; Chen, Y.; Sun, F. HyperNet: Towards Accurate Region Proposal Generation and Joint Object Detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 845–853.
40. Bell, S.; Zitnick, C.L.; Bala, K.; Girshick, R. Inside-Outside Net: Detecting Objects in Context with Skip Pooling and Recurrent Neural Networks. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 2874–2883.
41. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90.
42. Chuang, C.H.; Lo, Y.C.; Chang, C.C.; Cheng, S.C. Multiple Object Motion Detection for Robust Image Stabilization Using Block-Based Hough Transform. In Proceedings of the 2010 Sixth International Conference on Intelligent Information Hiding and Multimedia Signal Processing, Darmstadt, Germany, 15–17 October 2010; pp. 623–626.
43. Yuen, H.; Princen, J.; Illingworth, J.; Kittler, J. Comparative study of Hough Transform methods for circle finding. Image Vis. Comput. 1990, 8, 71–77.
44. Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255.
45. Jia, Y.; Shelhamer, E.; Donahue, J.; Karayev, S.; Long, J.; Girshick, R.; Guadarrama, S.; Darrell, T. Caffe: Convolutional Architecture for Fast Feature Embedding. In Proceedings of the ACM International Conference on Multimedia—MM’14, Orlando, FL, USA, 3–7 November 2014; ACM Press: Orlando, FL, USA, 2014; pp. 675–678.
46. Souza, C.A.D.; Aquino, B.G.; Queiroz, T.M.D. Expansão da agricultura irrigada por pivô central na região do Alto Teles Pires-MT utilizando sensoriamento remoto. Rev. Geama 2020, 6, 11–16.
47. Bochkovskiy, A.; Wang, C.Y.; Liao, H.Y.M. YOLOv4: Optimal Speed and Accuracy of Object Detection. arXiv 2020, arXiv:2004.10934.
48. Arvor, D.; Tritsch, I.; Barcellos, C.; Jégou, N.; Dubreuil, V. Land use sustainability on the South-Eastern Amazon agricultural frontier: Recent progress and the challenges ahead. Appl. Geogr. 2017, 80, 86–97.
49. Pekel, J.F.; Cottam, A.; Gorelick, N.; Belward, A.S. High-resolution mapping of global surface water and its long-term changes. Nature 2016, 540, 418–422.
50. Arvor, D.; Daher, F.R.; Briand, D.; Dufour, S.; Rollet, A.J.; Simões, M.; Ferraz, R.P. Monitoring thirty years of small water reservoirs proliferation in the southern Brazilian Amazon with Landsat time series. ISPRS J. Photogramm. Remote Sens. 2018, 145, 225–237.
51. Souza, C.; Kirchhoff, F.; Oliveira, B.; Ribeiro, J.; Sales, M. Long-Term Annual Surface Water Change in the Brazilian Amazon Biome: Potential Links with Deforestation, Infrastructure Development and Climate Change. Water 2019, 11, 566.
52. Dubreuil, V.; Funatsu, B.M.; Michot, V.; Nasuti, S.; Debortoli, N.; de Mello-Thery, N.A.; Tourneau, F.M.L. Local rainfall trends and their perceptions by Amazonian communities. Clim. Chang. 2017, 143, 461–472.
53. Funatsu, B.M.; Dubreuil, V.; Racapé, A.; Debortoli, N.S.; Nasuti, S.; Tourneau, F.M.L. Perceptions of climate and climate change by Amazonian communities. Glob. Environ. Chang. 2019, 57, 101923.
54. de Mello-Théry, N.A.; de Lima Caldas, E.; Funatsu, B.M.; Arvor, D.; Dubreuil, V. Climate Change and Public Policies in the Brazilian Amazon State of Mato Grosso: Perceptions and Challenges. Sustainability 2020, 12, 5093.
55. Cohn, A.S.; VanWey, L.K.; Spera, S.A.; Mustard, J.F. Cropping frequency and area response to climate variability can exceed yield response. Nat. Clim. Chang. 2016, 6, 601–604.
56. Costa, M.H.; Fleck, L.C.; Cohn, A.S.; Abrahão, G.M.; Brando, P.M.; Coe, M.T.; Fu, R.; Lawrence, D.; Pires, G.F.; Pousa, R.; et al. Climate risks to Amazon agriculture suggest a rationale to conserve local ecosystems. Front. Ecol. Environ. 2019, 17, 584–590.
57. Tan, M.; Le, Q.V. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. arXiv 2020, arXiv:1905.11946.
Figure 1. The location of Mato Grosso in Brazil.
Figure 2. Flowchart of the proposed method.
Figure 3. The structure of PVANET.
Figure 4. C.ReLU.
Figure 5. Inception.
Figure 6. Center pivot irrigation systems accurately detected by Hough circle detection.
Figure 7. Examples of training samples for PVANET.
Figure 8. Examples of training samples for GoogLeNet. (a) A sample image of center pivot irrigation systems. (b) A sample image of non-center pivot irrigation systems.
Figure 9. Examples of false detections of PVANET.
Figure 10. Examples of false detections after the recognition of GoogLeNet.
Figure 11. Examples of missed detections after the recognition of GoogLeNet.
Figure 12. Examples of the location of center pivot irrigation systems using Hough transform.
Figure 13. Center pivot irrigation systems with a shape of an incomplete circle. (a,b) Examples of center pivot irrigation systems with a shape of incomplete circle. (c,d) Detection and delineation of center pivot irrigation systems with a shape of an incomplete circle.
Figure 14. Center pivot irrigation systems at different times of the year. (a) Center pivot irrigation systems in June. (b) Center pivot irrigation systems in October. (c) Detection of center pivot irrigation systems in June. (d) Detection of center pivot irrigation systems in October.
Figure 15. Center pivot irrigation systems with cloud cover. (ac) Center pivot irrigation systems with cloud cover successfully detected. (df) Missed detection of center pivot irrigation systems due to cloud cover.
Table 1. The result of the proposed method.

| Method | Detected Candidate Pivots | Correctly Detected Pivots | Precision | Recall |
|---|---|---|---|---|
| PVANET | 846 | 619 | 73.2% | 96.6% |
| PVANET-GoogLeNet | 644 | 612 | 95% | 95.5% |
Table 2. Comparison of PVANET and YOLOv4.

| Method | Detected Candidate Pivots | Correctly Detected Pivots | Precision | Recall |
|---|---|---|---|---|
| PVANET | 846 | 619 | 73.2% | 96.6% |
| YOLOv4 | 707 | 623 | 88.1% | 97.2% |
| PVANET-GoogLeNet | 644 | 612 | 95% | 95.5% |
| YOLOv4-GoogLeNet | 623 | 616 | 98.9% | 96.1% |
Table 3. Computation time of the detection of PVANET and YOLOv4.

| Detector | Computation Time for an Image Tile (10,980 × 10,980) |
|---|---|
| PVANET | 83 s |
| YOLOv4 | 40 s |
Table 4. Computation time of the recognition of GoogLeNet for all the detections of PVANET and YOLOv4.

| Configuration | Computation Time |
|---|---|
| GoogLeNet for PVANET | 21 s |
| GoogLeNet for YOLOv4 | 16 s |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

