Project Report

Estimation of Electrical Energy Consumption in Irrigated Rice Crops in Southern Brazil

by Daniel Lima Lemes 1, Matheus Mello Jacques 1, Natalia Bastos Sousa 1, Daniel Pinheiro Bernardon 1, Mauricio Sperandio 1,*, Juliano Andrade Silva 2, Lucas M. Chiara 2 and Martin Wolter 3

1 Headquarters Campus, Federal University of Santa Maria, Santa Maria 97105-900, RS, Brazil
2 CPFL Energia, Campinas 13088-900, SP, Brazil
3 Institute for Electrical Energy Systems (IESY), Otto-von-Guericke University Magdeburg (OVGU), 39106 Magdeburg, Sachsen-Anhalt, Germany
* Author to whom correspondence should be addressed.
Energies 2023, 16(18), 6742; https://doi.org/10.3390/en16186742
Submission received: 1 July 2023 / Revised: 4 September 2023 / Accepted: 6 September 2023 / Published: 21 September 2023
(This article belongs to the Section F5: Artificial Intelligence and Smart Energy)

Abstract

On average, 70% of the world’s freshwater is used in agriculture, with farmers transitioning to electrical irrigation systems to increase productivity, reduce climate uncertainties, and decrease water consumption. In Brazil, where agriculture is a significant part of the economy, this transition has reached record levels over the last decade, further increasing the impact of energy consumption. This paper presents a methodology that uses the U-Net model to detect flooded rice fields in Sentinel-2 satellite images and estimates the electrical energy consumption required to pump water for this irrigation. The proposed approach groups the detected flooded areas by applying k-means clustering to the geographical coordinates of electricity customers provided by the Power Distribution Company. The methodology was evaluated on a dataset of satellite images from southern Brazil, and the results demonstrate the potential of using U-Net models to identify rice fields. Furthermore, comparing the estimated electrical energy consumption required for irrigation in each cluster with the billed energy values provides valuable insights into the sustainable management of rice production systems and the electricity grid, helping to identify non-technical losses and improve irrigation efficiency.

1. Introduction

Remote sensing plays a crucial role in understanding the Earth and its various systems. It provides a unique perspective, allowing us to observe and gather information about the planet from a distance. This technology has a wide range of applications, including environmental monitoring [1], resource management, disaster response [2], cropland expansion [3], crop yield estimation [4], and urban planning. The importance of remote sensing cannot be overstated, as it provides crucial information that helps farmers make decisions and plans and supports efforts to address global challenges such as sustainability.
Additionally, Deep Convolutional Neural Networks (DCNNs) are a pivotal component of modern computer vision and have revolutionized the field with their remarkable ability to learn hierarchical representations of visual data. These networks consist of multiple convolutional and pooling layers, which are trained end-to-end on large datasets to extract high-level features from raw input images. The effectiveness of DCNNs lies in their capacity to learn and encode rich, discriminative features that capture both low-level and high-level information, thereby enabling robust and accurate performance in various vision tasks, such as image classification, object detection, and semantic segmentation. The importance of DCNNs stems from their wide range of applications in multiple domains, such as autonomous driving, medical imaging, image processing, and biometrics, among others [5].
Remote sensing and DCNNs have been widely applied together in a variety of fields, and electrical energy is no exception. In recent years, the combination of these two technologies has shown great potential for improving the accuracy and efficiency of electrical energy monitoring and management [6]. The use of DCNNs in the electrical energy field allows for improved automation and scalability, reducing the need for manual inspections and enabling more efficient large-scale monitoring.
Manual inspections for detecting non-technical losses in energy consumption on farms can be a challenging and time-consuming process. Non-technical losses refer to energy losses that are due to theft, meter tampering, billing errors, and other factors that are not related to the technical performance of the electrical system. To detect these losses, manual inspections typically involve physically visiting each farm, checking meters and electrical equipment, and interviewing farmers and other stakeholders. However, this process can be difficult and prone to errors, particularly in large and geographically dispersed farming operations, as in Brazil. Inspectors may face challenges such as harsh weather conditions, limited access to electrical equipment, and resistance by farmers. Furthermore, manual inspections are often labor-intensive and require significant resources, making them an expensive and inefficient method for detecting non-technical losses. As a result, there is a growing interest in exploring alternative methods for detecting non-technical losses, such as remote sensing and machine learning techniques.
The electrical energy consumption of crops is an important aspect of modern agriculture that can significantly impact the sustainability and efficiency of food production. Electricity is used for a variety of tasks on farms, such as powering irrigation pumping systems, heating and cooling greenhouses, operating farm machinery, and providing lighting. Especially in rice planting, irrigation plays a crucial role in plant growth and productivity. Therefore, understanding and managing the electrical energy consumption of crops is crucial for ensuring the long-term sustainability and competitiveness of agriculture [7]. In particular, the development of energy-efficient technologies and practices, as well as the optimization of energy use through monitoring and control systems, can help to decrease energy costs, improve energy efficiency, and reduce the carbon footprint of agriculture [8,9,10].
Irrigated rice crops play a vital role in the agroindustry of southern Brazil, especially in the state of Rio Grande do Sul, which accounts for 42.6% of the total production [11]. With an increasing demand for rice, it is crucial for a power distribution company to gather precise and up-to-date information about the planted area to effectively plan and allocate resources. This ensures a reliable supply of energy to farmers for optimal crop growth and production. Moreover, accurate data on rice crop areas can offer valuable insights to optimize the energy usage of pumping systems, minimize waste, identify non-technical losses, and support the transition towards a more sustainable agriculture industry [12]. However, the need for methodologies that consider regional variations in rice cultivation practices and the limited research on using remote sensing to monitor electrical energy usage in irrigated rice crops indicate that further investigation is required in this field.
Remote sensing images collected from satellite-based platforms have become a valuable source of information for various applications in agriculture, including crop monitoring and analysis. The mapping of irrigated areas through remote sensing can essentially be divided into two approaches. The first one involves research that utilizes optical sensors, such as [13,14,15,16]. The second approach utilizes microwave sensors, which include [17,18,19]. However, the majority of crop mapping studies are applied to areas that are small or have a very low resolution, as in the case of [20].
The Sentinel-2 mission provides multi-spectral images with a high spatial and temporal resolution. Hence, this paper aims to explore these satellite images to estimate the electrical energy consumption in irrigated rice production. The study focuses on the processing and analysis of Sentinel-2 images using U-Net to extract relevant information for electrical consumption, and the development of algorithms to accurately estimate energy based on this information. The results of this study have implications for improving the efficiency of agricultural practices, reducing the carbon footprint, and supporting sustainable agriculture practices.
This paper is organized as follows: Section 2 describes the materials and methods, Section 3 presents the results, Section 4 discusses them, and Section 5 presents the conclusions.

2. Materials and Methods

2.1. Study Area

The pilot study area, where the method was applied in a real-world scenario, is the municipality of Uruguaiana, highlighted in Figure 1. It is a city located in the western region of the state of Rio Grande do Sul, Brazil. The municipality has an area of approximately 5700 square kilometers and a population of around 126,800 people [21]. The region has a predominantly flat relief, with a few hills and valleys in the surrounding areas. The local economy is based on agriculture, with large plantations of rice, soybeans, and corn.
This area was chosen because it is relevant to the study, as it is the largest rice producer in the state of Rio Grande do Sul. In addition, the power energy distribution company provided all the necessary data from electricity customers. These data were crucial to enable the development and implementation of the proposed method and allowed a more accurate assessment of the electricity consumption patterns of the different rice production units, allowing the identification of areas of improvement and opportunities for energy savings.

2.2. Images

Sentinel-2 is an Earth observation mission developed by the European Space Agency (ESA) as part of the Copernicus Programme. Launched in 2015, Sentinel-2 is equipped with a multispectral imager that provides high-resolution optical images of the Earth’s land surfaces, coastal zones, and inland waterways. The main objective of the mission is to support a wide range of applications, such as land cover mapping, agricultural monitoring, forest management, and disaster response. The satellite captures images in 13 spectral bands, which allows for the identification and analysis of different types of vegetation, water bodies, and urban areas [22].
The images captured by Sentinel-2 have a high resolution, with a spatial resolution of 10, 20, or 60 m, depending on the spectral band. This level of detail enables the detection of small changes in land cover and the identification of individual objects on the ground. The multispectral imager of the satellite captures data in the visible, near-infrared, and shortwave infrared regions of the electromagnetic spectrum, which makes it possible to distinguish between different types of vegetation, such as crops, forests, and grasslands. The images by Sentinel-2 are also used for monitoring water quality, tracking the spread of wildfires, and assessing the impact of natural disasters, such as floods and landslides.
The data generated by Sentinel-2 are freely available to the public, which makes it an important resource for scientific research and environmental monitoring. The mission is expected to have a significant impact on global efforts to address climate change, by providing accurate information on land use, deforestation, and carbon storage. The images captured by Sentinel-2 are also used for urban planning, infrastructure development, and natural resource management. As the mission continues to collect data, its images will provide valuable insights into the changes taking place on our planet, and help us to make more informed decisions about how to manage and protect the Earth’s resources.
Sentinel-2 comprises twin satellites in the same orbit, phased at 180° from each other, which allows a revisit frequency of five days at the Equator. The data from Sentinel-2 are split into fixed-size scenes of 100 × 100 km², which are ortho-images in a UTM/WGS84 projection. The study region is covered by four scenes, shown in Figure 2. Level-1C and Level-2A are two of the data products generated by the mission: Level-1C provides orthorectified top-of-atmosphere reflectance, while Level-2A provides atmospherically corrected bottom-of-atmosphere reflectance. In this work, Level-2A images were used because they performed better with the neural network.
In the analyzed area, the cycle of the flooded rice fields usually starts in September and ends in March. It consists of four main stages: land preparation, seeding, growing, and harvesting. In the land preparation phase, the fields are plowed, leveled, and flooded with water, creating an environment suitable for the growth of rice plants. Then, the seeds are sown and the plants begin to grow; they are irrigated and fertilized throughout the growing phase. Harvest, usually performed by machines, takes place when the grains reach the ideal degree of ripeness and humidity, and the rice is then processed and stored for commercialization.
To locate rice fields, January is considered the best month to acquire the images. This is because rice fields are at the vegetative stage during this month, which means that they can be better distinguished from other crops or land cover types [23].
In the municipality of Uruguaiana, there are 445 electricity customers classified as irrigators. The local power distribution company uses this classification because these customers have a different tariff for irrigation activity [24].

2.3. U-Net Topology

U-Net is a highly effective deep-learning architecture primarily used for image segmentation tasks [25]. It was first introduced in 2015 and has since become a popular choice among researchers in various fields, including medical imaging, satellite imaging, and robotics. The architecture is symmetrical, with a contracting path and an expansive path that allow for both high-level context and low-level detail to be captured and processed. The U-Net structure is made up of multiple layers of convolutions, pooling, and up-sampling, with skip connections between the contracting and expansive paths to preserve spatial information from early layers.
One of the key advantages of the U-Net model is its ability to handle complex and highly variable images. By incorporating skip connections, the model is able to preserve fine-grained details that are often lost during the downsampling process of traditional CNNs. This makes it particularly useful for medical imaging applications, where accurate segmentation of tumors, organs, and other structures are critical for diagnosis and treatment planning.
The U-Net model was chosen for this study because it is a highly effective and versatile tool for image segmentation, with a fully convolutional network architecture, in which the input image and the output mask have a one-to-one correspondence. This facilitates the segmentation of satellite images, and consequently the identification of rice crops. Another characteristic is the encoder–decoder topology, as illustrated in Figure 3, where z is a vector representing what the neural network identifies when compressing the input image x. Meanwhile, the decoding function y = f(z) represents the output y based on z. The vector z stores the semantic information that the neural network deems most crucial for predicting the output y. This feature reduces computational efforts as the image is reduced multiple times in the contraction stage to capture the context, achieved through convolutional and pooling layers.
The drawback of this topology is the loss of information owing to the loss of resolution in this stage. However, for this study, this is not relevant since there is no need to identify small objects. During the contraction stage, the neural network can identify patterns in the images but loses information about location. Reconstruction of the area where the main features were located is left to the expansion stage, which uses information from the previous stage and is concatenated with the same level between the encoder and decoder.
In the expansion stage, transposed convolution layers are used for image reconstruction. These characteristics make the U-Net a suitable topology for this study because a very large training dataset is not required. The ability of the U-Net to accurately segment images of rice crops using a limited number of training samples is a significant advantage in practice; therefore, it is a promising tool for rice crop analysis and management.
To implement the U-Net topology for this study, the Python programming language was used along with the TensorFlow [26] and Keras [27] libraries. These libraries are popular for deep learning applications, as they provide a high-level interface for building and training neural networks. TensorFlow is a widely-used open-source platform for numerical computation and machine learning, while Keras is a user-friendly neural network library written in Python. The combination of these two libraries allowed for the efficient implementation of the U-Net topology, making it easier to build, train, and evaluate the model for rice crop segmentation. Additionally, Python’s extensive ecosystem of scientific computing libraries, such as NumPy and Matplotlib, enabled data manipulation and visualization tasks, providing a comprehensive framework for the study.
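For concreteness, the sketch below shows how an encoder–decoder network of this kind can be assembled with the Keras functional API. The depth, filter counts, number of input bands, and loss function are illustrative assumptions and do not reproduce the exact configuration trained in this study.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    # Two 3x3 convolutions with ReLU, as in the original U-Net design
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(512, 512, 3), base_filters=16):
    # Illustrative values: band count and filter sizes are assumptions,
    # not the configuration reported in this paper.
    inputs = layers.Input(shape=input_shape)

    # Contracting path (encoder)
    c1 = conv_block(inputs, base_filters)
    p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, base_filters * 2)
    p2 = layers.MaxPooling2D()(c2)
    c3 = conv_block(p2, base_filters * 4)
    p3 = layers.MaxPooling2D()(c3)

    # Bottleneck (the compressed representation z discussed above)
    b = conv_block(p3, base_filters * 8)

    # Expansive path (decoder) with skip connections
    u3 = layers.Conv2DTranspose(base_filters * 4, 2, strides=2, padding="same")(b)
    c4 = conv_block(layers.concatenate([u3, c3]), base_filters * 4)
    u2 = layers.Conv2DTranspose(base_filters * 2, 2, strides=2, padding="same")(c4)
    c5 = conv_block(layers.concatenate([u2, c2]), base_filters * 2)
    u1 = layers.Conv2DTranspose(base_filters, 2, strides=2, padding="same")(c5)
    c6 = conv_block(layers.concatenate([u1, c1]), base_filters)

    # One-channel sigmoid output: per-pixel probability of "rice field"
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c6)
    return Model(inputs, outputs)

model = build_unet()
model.compile(optimizer="adam", loss="binary_crossentropy")
```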

2.4. Images Preprocessing

Before feeding the images into the neural network, several preprocessing steps were performed. In this study, each scene of 10,980 × 10,980 pixels was first divided into smaller images of 512 × 512 pixels, producing 1849 smaller images per scene. The tile size was chosen based on the number of layers in the U-Net topology, which works best with image dimensions that are powers of two. In total, 11,094 images were used in this study; they were randomly divided into training and validation datasets before being fed into the neural network. The test set consisted of 1849 images, manually selected to represent both classes proportionally.
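A minimal sketch of the tiling step is shown below, assuming scenes are already loaded as NumPy arrays. The non-overlapping stride is a simplification of this sketch, since the exact stride and padding used to obtain the reported tile counts are not detailed above.

```python
import numpy as np

def tile_scene(scene: np.ndarray, tile_size: int = 512) -> np.ndarray:
    """Split a (H, W, bands) scene array into tile_size x tile_size patches.

    Edge pixels that do not fill a complete tile are discarded in this
    simplified version; overlapping or padded tilings are possible variations.
    """
    h, w = scene.shape[:2]
    tiles = []
    for row in range(0, h - tile_size + 1, tile_size):
        for col in range(0, w - tile_size + 1, tile_size):
            tiles.append(scene[row:row + tile_size, col:col + tile_size])
    return np.stack(tiles)

# Usage on a small dummy array (real Sentinel-2 scenes are 10,980 x 10,980 pixels)
dummy = np.zeros((2048, 2048, 3), dtype=np.uint16)
print(tile_scene(dummy).shape)  # (16, 512, 512, 3) with non-overlapping tiles
```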
To assess the balance of the dataset, the classification of each pixel was analyzed using the generated labels. Approximately 20% of the pixels in the final dataset represent rice crops and 80% represent areas where no crops are present. Therefore, training methods that take this class imbalance into account were necessary, as sketched below.
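One common way to account for such an imbalance is a class-weighted loss. Since the exact scheme used in this study is not specified here, the snippet below is only an illustrative sketch based on inverse class frequency.

```python
import tensorflow as tf

# Weights derived from the ~80/20 pixel split (assumption: inverse class frequency)
W_BACKGROUND = 1.0 / 0.8   # weight for "no crop" pixels
W_RICE = 1.0 / 0.2         # weight for "rice" pixels

def weighted_bce(y_true, y_pred):
    """Pixel-wise binary cross-entropy with a larger weight on rice pixels."""
    weights = tf.where(tf.equal(y_true, 1.0), W_RICE, W_BACKGROUND)
    bce = tf.keras.losses.binary_crossentropy(y_true, y_pred)
    # binary_crossentropy reduces the channel axis, so align the weight shape
    weights = tf.reduce_mean(weights, axis=-1)
    return tf.reduce_mean(weights * bce)

# Usage: model.compile(optimizer="adam", loss=weighted_bce)
```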

2.5. Labels

One of the main challenges posed by Artificial Intelligence (AI) applications is the construction of the training dataset. The lack of available datasets also affects applications that utilize satellite images, as this is a recent area of AI utilization. The construction of a quality dataset requires significant effort, as it involves the collection, annotation, and preparation of large amounts of data. This process is further complicated in the case of satellite imagery owing to factors such as cloud cover, changing lighting conditions, and the presence of shadows. Addressing these challenges requires novel approaches to data collection and preparation, as well as the development of advanced algorithms capable of handling complex and diverse datasets [28,29].
In this study, the maximum likelihood classification method [30] was employed in combination with visual photo interpretation by experts. The first method considers the weighting of average distances using static parameters. The training sets define the scatter plot of classes and their probability distributions, considering the normal probability distribution for each training class. Thus, the probability of each pixel belonging to a specific cultivar type is generated. Subsequently, experts classified the crops through photo interpretation.
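To make the maximum likelihood step concrete, the sketch below fits one multivariate Gaussian per training class and assigns each pixel to the class with the highest likelihood. The two-band features, class names, and toy training samples are placeholders for illustration, not the labeling workflow actually used.

```python
import numpy as np
from scipy.stats import multivariate_normal

def fit_ml_classifier(samples_by_class):
    """Estimate a mean vector and covariance matrix per training class."""
    params = {}
    for name, samples in samples_by_class.items():
        params[name] = (samples.mean(axis=0), np.cov(samples, rowvar=False))
    return params

def classify(pixels, params):
    """Assign each pixel (rows of spectral values) to its most likely class."""
    names = list(params)
    scores = np.column_stack([
        multivariate_normal(mean=m, cov=c, allow_singular=True).logpdf(pixels)
        for m, c in (params[n] for n in names)
    ])
    return np.array(names)[scores.argmax(axis=1)]

# Toy usage with two spectral bands and two classes (placeholder data)
rng = np.random.default_rng(0)
train = {
    "rice": rng.normal([0.3, 0.6], 0.05, size=(200, 2)),
    "other": rng.normal([0.1, 0.2], 0.05, size=(200, 2)),
}
params = fit_ml_classifier(train)
print(classify(np.array([[0.29, 0.58], [0.12, 0.21]]), params))
```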
Figure 4 displays the classified crops in two scenes in the year 2021. The yellow polygons represent rice crops, the purple ones show soybean crops, and the blue polygons indicate water bodies. These pre-labeled scenes were divided into 512 × 512 images and used to train the U-Net model.
These two scenes were selected for this study owing to the abundance of rice crops, which provided a substantial number of labels for training purposes. By selecting regions with a high concentration of rice cultivation, the dataset can be enriched with a greater diversity of rice plant types, growth stages, and environmental conditions, which are all important factors for accurately training machine learning algorithms to recognize and classify rice crops. As a result, these scenes were deemed suitable for acquiring a comprehensive and diverse dataset to aid in the development of more robust and accurate computer vision models for the rice crop analysis.

2.6. Metric

Selecting the appropriate evaluation metric to measure the performance of a neural network is essential [31]. In this study, due to the imbalanced nature of the dataset, the choice of metric is even more critical. For example, in an image with only 10% rice fields, if the neural network fails to detect any of them, it would still have an accuracy of 90% if the metric were based on the number of pixels. In refs. [32,33], the authors presented a review of several popular metrics, including Precision, Recall, F-Measure, Area Under Curve, Intersection Over Union (IoU), Dice Coefficient, and Warping Error. Among these metrics, IoU emerged as the most reliable and precise for diverse applications.
Therefore, this study used the IoU [33] method, which involves calculating the ratio of the intersection of the predicted segmentation and the ground truth label to the union of the predicted segmentation and the ground truth label. This index ranges from 0 to 100% and compensates for the unbalanced distribution between the rice fields and other areas in an image. The IoU metric has become increasingly popular in computer vision applications, particularly in object detection and segmentation tasks. It provides a more comprehensive evaluation of the network’s performance by taking into account both false positives and false negatives and it is robust to class imbalance issues.
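A minimal sketch of the IoU computation on binary masks follows; the 0.5 threshold applied to the network output is an assumption of this sketch.

```python
import numpy as np

def iou_score(pred: np.ndarray, truth: np.ndarray, threshold: float = 0.5) -> float:
    """Intersection over Union between a predicted mask and the ground truth.

    pred: predicted probabilities (or a binary mask); truth: binary ground truth.
    """
    pred_bin = pred >= threshold
    truth_bin = truth.astype(bool)
    intersection = np.logical_and(pred_bin, truth_bin).sum()
    union = np.logical_or(pred_bin, truth_bin).sum()
    return float(intersection) / union if union else 1.0

# Example: two overlapping 2x2 squares share 1 pixel out of 7 -> IoU ~ 0.143
a = np.zeros((4, 4)); a[:2, :2] = 1
b = np.zeros((4, 4)); b[1:3, 1:3] = 1
print(iou_score(a, b))
```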

2.7. Crop Allocation

The regional Power Distribution Company responsible for the study area has provided the geographic coordinates of the electricity meters of all farms and their corresponding billed energy. The region of study encompasses a total of 524 customers’ farms.
Allocating each crop to its corresponding meter posed a challenge, since assigning each crop to its nearest meter was not feasible, as illustrated in Figure 5. The blue markers represent meters, and the light green regions represent the detected crops. The two crops shown in the figure belong to the meter highlighted in red; notably, the distance between these crops and their meter is not smaller than the distance between the crops and the other meters. Furthermore, 15.07% of the meters have the same geographic coordinates as at least one other meter. This phenomenon mainly arises from the extensive distances between farms; as a result, customers share the costs of energy infrastructure and metering points located at the same site.
To address the challenge of allocating crops to their corresponding meters, clusters had to be created. This was achieved by using the k-means method [34], which involves partitioning data points into k distinct clusters based on their proximity to the centroid of the cluster. Additionally, the database of the Rural Environmental Registry, known as the CAR in Brazil, was used. This is a nationwide electronic public record, mandatory for all rural properties, with the purpose of integrating environmental information from rural properties and possessions for control, monitoring, environmental and economic planning, and combat against deforestation.
The CAR requires farmers and landowners to provide detailed information about their properties, including location, area size, land use, and environmental conservation measures, among others. This database contains the geographical boundaries of rural properties, which are illustrated in Figure 6, depicting the farms within the study region. The system enables better monitoring and enforcement of environmental regulations, and it serves as a tool for developing public policies for sustainable land use and rural development. Overall, the CAR is an essential instrument for advancing Brazil’s environmental and social sustainability goals.
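As an illustration of the clustering step, the snippet below applies scikit-learn's k-means to a handful of placeholder meter coordinates. The coordinates, the number of clusters, and the choice of library are assumptions of this sketch rather than details of the actual implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

# Placeholder meter coordinates (longitude, latitude); the real data comes from
# the Power Distribution Company and is not reproduced here.
meters = np.array([
    [-57.08, -29.76],
    [-57.05, -29.74],
    [-56.98, -29.80],
    [-57.12, -29.70],
    [-56.90, -29.85],
    [-56.92, -29.83],
])

k = 3  # illustrative number of clusters
kmeans = KMeans(n_clusters=k, n_init=10, random_state=0).fit(meters)
print(kmeans.labels_)           # cluster index assigned to each meter
print(kmeans.cluster_centers_)  # centroid coordinates of each cluster
```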

2.8. Estimation of Energy Consumption

The methodology used in this work to estimate the energy consumption of irrigated rice crops is presented in [35]; the author studied the area and provides average values for all the necessary parameters. The energy consumption is calculated using Equation (1):

E = (q A H t) / η        (1)

where q is the flow rate, i.e., the quantity of water pumped to the crop per unit of time; a value of 1.5 L per second per hectare is used in this work. A is the irrigated area (m²) and accounts for the total irrigated area of rice crops. H (head) represents the energy necessary to raise the water from the reservoir to the highest point of the crops; according to [35], the average head in Uruguaiana is 10 m. The system efficiency η is defined as 65% by [35]. Lastly, t corresponds to the average time during which the pump remains working, which is 21 h per day over a 100-day crop period.
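For illustration, the sketch below evaluates this estimate numerically with the parameter values quoted above. The water specific weight (ρg) and the unit conversions needed to express the result in kWh are assumptions added here; the exact constants used in [35] may differ, so the resulting per-hectare figure is indicative only.

```python
# Sketch of the per-cluster energy estimate described above.
RHO_G = 1000.0 * 9.81      # specific weight of water, N/m^3 (assumed constant)
Q_PER_HA = 1.5 / 1000.0    # 1.5 L/s per hectare, converted to m^3/s per hectare
HEAD_M = 10.0              # average head in Uruguaiana, per [35]
EFFICIENCY = 0.65          # pumping system efficiency, per [35]
HOURS = 21 * 100           # 21 h/day over a 100-day crop period

def estimated_energy_kwh(area_ha: float) -> float:
    """Estimated pumping energy (kWh) for a given irrigated area in hectares."""
    flow = Q_PER_HA * area_ha                      # m^3/s
    hydraulic_power_w = RHO_G * flow * HEAD_M      # W
    electrical_power_kw = hydraulic_power_w / EFFICIENCY / 1000.0
    return electrical_power_kw * HOURS

print(round(estimated_energy_kwh(100.0)))  # about 4.75e4 kWh for 100 ha under these assumptions
```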

3. Results

3.1. Topology

After training the neural network for 30 epochs on the areas mentioned above, with a loss value close to 0.26 on the test dataset, a precision of 90% and an IoU of 68.84% were achieved. These results suggest that the trained model has a high level of accuracy in predicting the target variable and indicate a good overlap between the predicted and actual values. Figure 7 compares RGB images with the images generated through photo interpretation and by the neural network implemented in this study, where detected rice fields are shown in white and other areas in black.
While the neural network may not detect small details in the crop, it exhibited an overall strong performance in our study. For calculations of area, these small details become less relevant and the network’s ability to accurately identify and classify larger features is critical. Therefore, we believe that our neural network approach holds great potential for improving crop monitoring and management practices.
As an example, Figure 8 displays the complete image of scene 21JVG and its corresponding mask generated by our implemented neural network. The mask effectively identifies and segments the relevant features within the image, demonstrating the network’s ability to accurately classify and distinguish between different objects and regions. The image on the left depicts the scene with real colors captured by the satellite, while the image on the right represents its mask, where the detected rice crops are depicted in white.
For the purpose of comparison, Table 1 shows the IoU percentages of the current study, another relevant study utilizing the same metric from the literature, and a baseline. This comparison allows for a clear assessment of the effectiveness and superiority of our proposed method, and it provides valuable information for future research in this area.

3.2. Energy Consumption Estimation

By employing the proposed method, summarized in Table 2, the expected consumption of each cluster was calculated based on the total area of rice fields associated with it. Table 3 presents examples listing the cluster number, the number of customer farms within the cluster, the total area of rice fields, the estimated energy consumption, and the billed energy consumption provided by the Power Distribution Company. Considering the mean and standard deviation of the approximation errors, we conclude that a difference of up to 50% is acceptable. However, when there is a significant discrepancy between the calculated and billed energy consumption values, further inspection and investigation are required, as shown in Table 4.
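As a sketch of this comparison step, the snippet below flags clusters whose billed energy deviates from the estimate by more than the 50% tolerance. The records are placeholders, and the deviation is expressed relative to the billed energy, consistent with how the percentage differences in Tables 3 and 4 appear to be computed.

```python
TOLERANCE = 0.50  # clusters differing by more than 50% are sent for inspection

# Placeholder records: (cluster id, estimated kWh, billed kWh)
clusters = [
    (1, 430_000, 334_000),
    (2, 450_000, 2_000),
    (3, 150_000, 160_000),
]

for cluster_id, estimated, billed in clusters:
    difference = (billed - estimated) / billed  # relative to the billed energy
    flagged = abs(difference) > TOLERANCE
    print(f"cluster {cluster_id}: difference {difference:+.1%} "
          f"{'-> inspect' if flagged else '-> ok'}")
```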

4. Discussion

The two groups of rice farms in Table 3 and Table 4 provide a comprehensive evaluation of the electricity consumption on the farms. The average energy consumption in the study area is 578.86 kWh/ha. The results show that 26.5% of the clusters presented significant differences between the estimated energy and the amount that was billed, which indicates the need for further investigation.
These discrepancies in energy consumption may have occurred for various reasons, including malfunctioning energy meters, electricity theft, farms with distributed energy generation, human error in data collection or entry, or variations in weather conditions affecting crop growth and water needs. Some farms in the dataset may have implemented energy-saving measures or may use gravity-fed irrigation systems, resulting in a lower energy consumption than expected. These potential factors highlight the complexity of accurately estimating energy consumption in the agricultural sector.
Addressing these factors will be crucial to ensure the development of more reliable and accurate methods for estimating energy consumption, which can have significant implications for energy planning and sustainability in the agriculture industry. As such, future studies should consider a wider range of variables and potential factors that could impact energy consumption in agricultural settings. By utilizing these findings, electricity consumption can also be compared across clusters, thereby generating efficiency indicators that assist farmers in making informed decisions regarding the upgrade of their electrical equipment.
However, these results already provide valuable insights into the sustainable management of rice production systems and the electricity grid in southern Brazil. They can be used as input for an AI network in conjunction with other variables, such as meteorological data, soil characteristics, and productivity, in future studies.
The methodology developed in this study not only provides valuable insights for optimizing resource management in the context of the specific crop and region analyzed, but it also holds potential for broader applications across other crops and regions. By adapting the framework to the characteristics and requirements of different agricultural systems, this methodology could serve as a versatile tool to support more efficient food production systems worldwide.

5. Conclusions

It was shown that the U-Net model can effectively segment satellite images to identify rice plantations, with a precision of 90% and an IoU of 68.84%. Then, the k-means algorithm was used to group the meters of customer farms and estimate the electrical energy used for irrigation purposes.
Comparing the estimated consumption with the energy billed by the utility, properties with very different patterns could be distinguished; in particular, five clusters in which the estimated consumption exceeded the billed value by more than 500%. This method can be a powerful tool for promoting more efficient electricity usage and reducing inspection costs, with applications in other fields as well. Additionally, these techniques can be applied to energy management and planning.
The findings of this study can be used to feed another AI system, along with new data such as meteorological information, soil data, and other relevant factors. This integration of additional data can further enhance the accuracy and refinement of the findings.

Author Contributions

Conceptualization, D.P.B.; Methodology, D.L.L.; Software, D.L.L., M.M.J. and N.B.S.; Validation, N.B.S. and M.S.; Investigation, D.L.L. and M.M.J.; Resources, J.A.S. and L.M.C.; Data curation, J.A.S. and L.M.C.; Writing—original draft, D.L.L.; Writing—review & editing, M.S.; Supervision, L.M.C. and M.W.; Project administration, D.P.B.; Funding acquisition, M.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was financed by CPFL Energia through the project “Sistema de Detecção de Perdas não Técnicas em Áreas de Irrigação Empregando Técnicas de Inteligência Artificial”, developed under the ANEEL R&D Program PD-00063-3065/2020. It was also financed by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior—CAPES, PROEX (Financing Code 001) and PrInt (Aux 2458/2018 Proc 88881.310202/2018-01).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bu, L.; Lai, Q.; Qing, S.; Bao, Y.; Liu, X.; Na, Q.; Li, Y. Grassland Biomass Inversion Based on a Random Forest Algorithm and Drought Risk Assessment. Remote Sens. 2022, 14, 5745. [Google Scholar] [CrossRef]
  2. Zhang, M.; Liu, D.; Wang, S.; Xiang, H.; Zhang, W. Multisource Remote Sensing Data-Based Flood Monitoring and Crop Damage Assessment: A Case Study on the 20 July 2021 Extraordinary Rainfall Event in Henan, China. Remote Sens. 2022, 14, 5771. [Google Scholar] [CrossRef]
  3. Vieira, D.C.; Sanches, I.D.; Montibeller, B.; Prudente, V.H.R.; Hansen, M.C.; Baggett, A.; Adami, M. Cropland expansion, intensification, and reduction in Mato Grosso state, Brazil, between the crop years 2000/01 to 2017/18. Remote Sens. Appl. Soc. Environ. 2022, 28, 100841. [Google Scholar] [CrossRef]
  4. Saad El Imanni, H.; El Harti, A.; El Iysaouy, L. Wheat Yield Estimation Using Remote Sensing Indices Derived from Sentinel-2 Time Series and Google Earth Engine in a Highly Fragmented and Heterogeneous Agricultural Region. Agronomy 2022, 12, 2853. [Google Scholar] [CrossRef]
  5. Mishkin, D.; Sergievskiy, N.; Matas, J. Systematic evaluation of convolution neural network advances on the Imagenet. Comput. Vis. Image Underst. 2017, 161, 11–19. [Google Scholar] [CrossRef]
  6. Rustowicz, R.; Cheong, R.; Wang, L.; Ermon, S.; Burke, M.; Lobell, D. Semantic Segmentation of Crop Type in Africa: A Novel Dataset and Analysis of Deep Learning Methods. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019. [Google Scholar]
  7. Gorjian, S.; Fakhraei, O.; Gorjian, A.; Sharafkhani, A.; Aziznejad, A. Sustainable Food and Agriculture: Employment of Renewable Energy Technologies. Curr. Robot. Rep. 2022, 3, 153–163. [Google Scholar] [CrossRef] [PubMed]
  8. Yang, L.; Sun, Q.; Zhang, N.; Li, Y. Indirect Multi-Energy Transactions of Energy Internet with Deep Reinforcement Learning Approach. IEEE Trans. Power Syst. 2022, 37, 4067–4077. [Google Scholar] [CrossRef]
  9. Cui, G.; Liu, B.; Luan, W.; Yu, Y. Estimation of Target Appliance Electricity Consumption Using Background Filtering. IEEE Trans. Smart Grid 2019, 10, 5920–5929. [Google Scholar] [CrossRef]
  10. Guzman, C.; Cardenas, A.; Agbossou, K. Local Estimation of Critical and Off-Peak Periods for Grid-Friendly Flexible Load Management. IEEE Syst. J. 2020, 14, 4262–4271. [Google Scholar] [CrossRef]
  11. CONAB-Companhia Nacional de Abastecimento. Acompanhamento da Safra Brasileira de Grãos, safra 2021/22 n. 8. 2022. Available online: https://www.conab.gov.br/info-agro/safras/graos/boletim-da-safra-de-graos/item/download/45085_57c7824c98301706be01288c77f460b7 (accessed on 21 April 2023).
  12. World Food and Agriculture—Statistical Yearbook 2022; FAO: Rome, Italy, 2022. [CrossRef]
  13. Ozdogan, M.; Gutman, G. A new methodology to map irrigated areas using multi-temporal MODIS and ancillary data: An application example in the continental US. Remote Sens. Environ. 2008, 112, 3520–3537. [Google Scholar] [CrossRef]
  14. Peña-Arancibia, J.L.; McVicar, T.R.; Paydar, Z.; Li, L.; Guerschman, J.P.; Donohue, R.J.; Dutta, D.; Podger, G.M.; van Dijk, A.I.; Chiew, F.H. Dynamic identification of summer cropping irrigated areas in a large basin experiencing extreme climatic variability. Remote Sens. Environ. 2014, 154, 139–152. [Google Scholar] [CrossRef]
  15. Ambika, A.K.; Wardlow, B.; Mishra, V. Remotely sensed high resolution irrigated area mapping in India for 2000 to 2015. Sci. Data 2016, 3, 2052–4463. [Google Scholar] [CrossRef] [PubMed]
  16. Christopher, R.H.; Wade, T.C.; Martha, C.A.; Yilmaz, M.T. Diagnosing Neglected Soil Moisture Source–Sink Processes via a Thermal Infrared–Based Two-Source Energy Balance Model. J. Hydrometeorol. 2015, 16, 1070–1086. [Google Scholar] [CrossRef]
  17. Patricia, M.L.; Joseph, A.S.J.; Sujay, V.K. Irrigation Signals Detected From SMAP Soil Moisture Retrievals. Geophys. Res. Lett. 2017, 44, 11860–11867. [Google Scholar] [CrossRef]
  18. Gao, Q.; Zribi, M.; Escorihuela, M.J.; Baghdadi, N.; Segui, P.Q. Irrigation Mapping Using Sentinel-1 Time Series at Field Scale. Remote Sens. 2018, 10, 1495. [Google Scholar] [CrossRef]
  19. Kumar, S.V.; Peters-Lidard, C.D.; Santanello, J.A.; Reichle, R.H.; Draper, C.S.; Koster, R.D.; Nearing, G.; Jasinski, M.F. Evaluating the utility of satellite soil moisture retrievals over irrigated areas and the ability of land data assimilation methods to correct for unmodeled processes. Hydrol. Earth Syst. Sci. 2015, 19, 4463–4478. [Google Scholar] [CrossRef]
  20. Siebert, S.; Doll, P.; Hoogeveen, J.; Faures, J.M.; Frenken, K.; Feick, S. Development and validation of the global map of irrigation areas. Hydrol. Earth Syst. Sci. 2005, 9, 535–547. [Google Scholar] [CrossRef]
  21. Instituto Brasileiro de Geografia e Estatistica. População no último Censo 2010. 2021. Available online: https://cidades.ibge.gov.br/brasil/rs/uruguaiana/panorama (accessed on 7 April 2023).
  22. European Space Agency. Open Access Hub. Available online: https://scihub.copernicus.eu/ (accessed on 11 March 2020).
  23. Ramadhani, F.; Pullanagari, R.; Kereszturi, G.; Procter, J. Automatic Mapping of Rice Growth Stages Using the Integration of SENTINEL-2, MOD13Q1, and SENTINEL-1. Remote Sens. 2020, 12, 3613. [Google Scholar] [CrossRef]
  24. Agência Nacional de Energia Elétrica. Homologation Resolution n. 1858, Federal Government of Brazil, 27 February 2015. Available online: https://www2.aneel.gov.br/cedoc/reh20151858.pdf (accessed on 7 April 2023).
  25. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015; Proceedings, Part III 18. Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241. [Google Scholar]
  26. Abadi, M.; Barham, P.; Chen, J.; Chen, Z.; Davis, A.; Dean, J.; Devin, M.; Ghemawat, S.; Irving, G.; Isard, M.; et al. Tensorflow: A system for large-scale machine learning. In Proceedings of the 12th USENIX Conference on Operating Systems Design and Implementation (OSDI’16), Savannah, GA, USA, 2–4 November 2016; Volume 16, pp. 265–283. [Google Scholar]
  27. Keras, C.F. GitHub—Keras-Team/Keras: Deep Learning for Humans—github.com. Available online: https://github.com/fchollet/keras (accessed on 17 June 2022).
  28. Choumos, G.; Koukos, A.; Sitokonstantinou, V.; Kontoes, C. Towards Space-to-Ground Data Availability for Agriculture Monitoring. In Proceedings of the 2022 IEEE 14th Image, Video, and Multidimensional Signal Processing Workshop (IVMSP), Nafplio, Greece, 26–29 June 2022; pp. 1–5. [Google Scholar]
  29. Volpi, M.; Tuia, D. Dense semantic labeling of subdecimeter resolution images with convolutional neural networks. IEEE Trans. Geosci. Remote Sens. 2016, 55, 881–893. [Google Scholar] [CrossRef]
  30. Crosta, A.P. Processamento Digital de Imagens de Sensoriamento Remoto; UNICAMP/Instituto de Geociencias: São Paulo, Brazil, 1992. [Google Scholar]
  31. Younis, M.C.; Keedwell, E.; Savic, D. An Investigation of Pixel-Based and Object-Based Image Classification in Remote Sensing. In Proceedings of the 2018 International Conference on Advanced Science and Engineering (ICOASE), Duhok, Iraq, 9–11 October 2018. [Google Scholar] [CrossRef]
  32. Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015. [Google Scholar] [CrossRef]
  33. Khan, M.Z.; Gajendran, M.K.; Lee, Y.; Khan, M.A. Deep Neural Architectures for Medical Image Semantic Segmentation: Review. IEEE Access 2021, 9, 83002–83024. [Google Scholar] [CrossRef]
  34. MacQueen, J.B. Some Methods for Classification and Analysis of MultiVariate Observations. In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability; Cam, L.M.L., Neyman, J., Eds.; University of California Press: Berkeley, CA, USA, 1967; Volume 1, pp. 281–297. [Google Scholar]
  35. Kopp, L.M. Índices de Desempenho Para Estações de Bombeamento em Lavouras de Arroz Irrigado. Ph.D. Thesis, Universidade Federal de Santa Maria—UFSM, Santa Maria, Brazil, 2015. [Google Scholar]
  36. Rakhlin, A.; Davydow, A.; Nikolenko, S. Land Cover Classification from Satellite Imagery with U-Net and Lovász-Softmax Loss. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 18–22 June 2018. [Google Scholar] [CrossRef]
Figure 1. The area highlighted in red corresponds to the municipality of Uruguaiana, where the study was performed.
Figure 2. Four Sentinel-2 scenes that covered the region of study (21JVG, 21JWG, 21JVH, and 21JWH).
Figure 3. Encoder and decoder topology.
Figure 4. Classified Crops of two Sentinel-2 scenes.
Figure 5. Some crops (light green) and their corresponding meter (blue marker in red circle).
Figure 6. Border of rural properties in Uruguaiana.
Figure 7. Comparison between RGB images with images generated through photointerpretation and by the neural network implemented.
Figure 8. Scene 21JVG and its mask.
Table 1. Summary of studies utilizing the same topology and evaluation method for comparison in this work.
Year | Method | IoU | Work
- | Maximum Likelihood | 59.00% | Baseline
2018 | U-Net + m46 encoder | 62.40% | [36]
2023 | U-Net | 68.84% | Present study
Table 2. Steps of the proposed methodology.
Number | Step | Description
1 | Collection of Satellite Images | Acquire satellite images from reliable sources.
2 | Pre-processing | Clean, enhance, normalize, and standardize the image data.
3 | Crop Detection | Utilize the U-Net to identify crop types and planted areas.
4 | Crop Information Extraction | Extract relevant information about the crop fields.
5 | Clustering | Cluster the detected crops with their respective consumer units or geographical regions.
6 | Data Compilation | Gather and organize data related to crop types, areas, and consumer units.
7 | Estimate Consumption | Estimate the energy consumption for each crop-consumer unit cluster.
8 | Comparison | Compare the estimated and billed energy consumption.
Table 3. Approved Clusters.
Cluster | Number of Meters | Area (ha) | Estimated Energy (kWh) | Billed Energy (kWh) | Difference (%)
26 | 4 | 861.54 | 430,155 | 333,893 | −28.8%
40 | 4 | 189.34 | 91,620 | 77,149 | −18.8%
54 | 15 | 1110.24 | 560,211 | 497,515 | −12.6%
7 | 11 | 4447.74 | 2,446,136 | 2,411,419 | −1.4%
17 | 3 | 308.27 | 149,439 | 160,255 | 6.7%
32 | 10 | 1341.95 | 670,195 | 820,174 | 18.3%
21 | 7 | 425.04 | 209,876 | 277,676 | 24.4%
27 | 11 | 992.17 | 497,994 | 686,065 | 27.4%
24 | 3 | 331.35 | 161,398 | 237,009 | 31.9%
20 | 7 | 1595.97 | 877,737 | 1,456,666 | 39.7%
Table 4. Clusters that should be inspected.
Cluster | Number of Meters | Area (ha) | Estimated Energy (kWh) | Billed Energy (kWh) | Difference (%)
44 | 12 | 902.22 | 450,468 | 2040 | −21,982%
53 | 10 | 763.84 | 381,376 | 12,529 | −2944%
42 | 10 | 3360.51 | 1,848,189 | 76,793 | −2307%
30 | 9 | 1562.44 | 859,299 | 51,758 | −1560%
38 | 5 | 768.78 | 383,843 | 56,731 | −577%
36 | 5 | 961.06 | 479,846 | 123,987 | −287%
48 | 12 | 2143.18 | 1,178,693 | 339,711 | −247%
9 | 5 | 1699.17 | 934,497 | 323,378 | −189%
28 | 15 | 2299.50 | 1,264,665 | 473,766 | −167%
34 | 4 | 1244.75 | 684,581 | 274,904 | −149%
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

