Article

Image-Based River Water Level Estimation for Redundancy Information Using Deep Neural Network

by Gabriela Rocha de Oliveira Fleury 1,†, Douglas Vieira do Nascimento 1,†, Arlindo Rodrigues Galvão Filho 1,*, Filipe de Souza Lima Ribeiro 2, Rafael Viana de Carvalho 1 and Clarimar José Coelho 1
1 Scientific Computing Lab, Pontifical Catholic University of Goiás, Goiânia 74175-720, GO, Brazil
2 Jirau Hydroelectric Power Plant, Energia Sustentável do Brasil, Porto Velho 76840-000, RO, Brazil
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Energies 2020, 13(24), 6706; https://doi.org/10.3390/en13246706
Submission received: 1 November 2020 / Revised: 25 November 2020 / Accepted: 15 December 2020 / Published: 18 December 2020

Abstract:
Monitoring and management of water levels have become essential tasks in hydroelectric power generation. Activities such as water resources planning, supply basin management and flood forecasting are mediated and defined through such monitoring. Measurements performed by sensors installed on the river facilities are used to obtain precise water level estimates. Since weather conditions influence the results obtained by these sensors, redundant approaches are necessary to maintain the high accuracy of the measured values. A staff gauge monitored by conventional cameras is a common redundancy method to keep track of the measurements. However, this method has low accuracy and is not reliable, since it depends on human observation. This work proposes to automate this process by using image processing of the staff gauge to measure, and a deep neural network to estimate, the water level. To that end, three neural network models were compared: a residual network (ResNet50), MobileNetV2 and a proposed convolutional neural network (CNN). The results show that ResNet50 and MobileNetV2 produce inferior results compared to the proposed CNN.

Graphical Abstract

1. Introduction

Image-based water level estimation corresponds to a visual-sensing technique that uses an imaging process to automatically inspect readings of the water-line instead of the human eye [1]. Monitoring the water level has become an essential task for regulatory control of rivers in order to manage disaster risk assessment, flood warnings, water resources planning, public and industrial supply [2]. In hydro-power energy production, it is essential to monitor the rainfall, inflows and water level in order to maximize energy revenue, while taking into account dam safety risks [3].
Different methods are used redundantly in order to guarantee the availability and accuracy of measurements. Automatic water-level gauges monitor the water level with sensors (i.e., float-type, pressure-type, ultrasonic-type and radar-type gauges) [4,5,6]. Moreover, video surveillance has become widely used as a redundancy system for monitoring and measurement at hydro-power stations [7]. The problem with this method is that human observation is unreliable and subject to errors, which compromises the security of the system. Therefore, defining an accurate and reliable method to monitor the water level is a challenge for hydro-power system control. Estimation models based on deep neural networks provide an ideal solution for monitoring the water level and stream flow in hydroelectric power plants [8].
A deep neural network is a set of algorithms inspired by the functioning of the human brain. Each unit acts as a neuron designed to recognize patterns and to group and analyze real-world data such as images, sounds, texts and time series. Deep learning neural networks have applications in areas such as image classification [9], segmentation and object detection [10]. Recent advances in deep learning techniques show good performance on fine-grained image classification, distinguishing subordinate-level categories [11].
The convolutional neural network (CNN) is a class of feed-forward deep neural network that uses a variation of the multilayer perceptron designed to require as little pre-processing as possible. There are many variants of CNN architectures, but their basic components are very similar [12]. LeNet-5, for example, consists of three types of layers: convolutional, pooling, and fully connected layers [13]. Various improvements in CNN learning methodology and architecture are making CNNs scalable to large, heterogeneous, complex, and multiclass problems. New features in CNNs include modified processing units, new approaches to parameter and hyper-parameter optimization, and new layer connectivity [14].
CNNs have grown in depth, from AlexNet and the Visual Geometry Group network (VGG) [15] through Inception to residual networks, showing that deeper networks achieve better results for image segmentation and classification than other techniques [16]. However, training a deep CNN is difficult due to the phenomena of exploding/vanishing gradients and degradation. Nowadays, various techniques are being proposed for training deeper networks, such as initialization strategies, better optimizers, skip connections, knowledge transfer and layer-wise training [17,18,19].
Residual neural networks (ResNets) are a continuation of deep networks [20]. ResNet drastically changed CNN architecture by introducing the concept of residual learning and defined an efficient methodology for training deep networks [21]. ResNet achieved state-of-the-art classification results in the Large Scale Visual Recognition Challenge (ILSVRC) 2015 [22]. ResNet uses identity shortcut connections that enable the flow of information across layers without the attenuation that would be caused by multiple stacked non-linear transformations, resulting in improved optimization [23]. Its residual connections skip layers and are implemented as non-linear double or triple layer jumps with batch normalization between them [24]. Such residual connections followed by normalization layers enabled ResNet to mitigate the exploding/vanishing gradient problem. ResNet applications have achieved great success in the image processing field over the last five years [25].
MobileNet is a simplification of the standard CNN for real-time applications such as image classification, captioning, object detection and semantic segmentation [26]. The MobileNet architecture is based on depthwise convolutions, which require roughly one-eighth of the computational cost of standard convolutions [27]. MobileNetV2 is very similar to MobileNet, except that it uses inverted residual blocks with bottlenecking features, and it has fewer parameters than the original MobileNet. It is a general architecture that can serve multiple use cases; depending on the use case, it can use different input layer sizes and width factors [28]. This allows narrower models that reduce the number of multiply-adds and thereby the inference cost on mobile devices. The MobileNetV2 architecture is based on an inverted residual structure in which shortcut connections link thin bottleneck layers [29]. Removing non-linearities in the narrow layers is important to maintain representational power and improve performance, and it results in a very memory-efficient inference model. MobileNetV2 improves the performance of mobile models on multiple tasks. It uses a factorized version of the convolution operator that splits the convolution into two separate layers [30].
In the literature, some works relate water level identification to neural network models. Wang et al. [31] proposed a deep CNN, based on a multidimensional densely connected CNN, for identifying water in the Poyang Lake area. Han et al. [32] combine the max-RGB method and the shades-of-gray method to enhance underwater vision; a CNN is used to solve the weak-illumination problem of underwater images by learning a mapping that yields an illumination map, and, after image processing, a deep CNN performs underwater detection and classification according to the characteristics of underwater vision. Song et al. [33] exploit the self-learning ability of deep learning in a modified Mask R-CNN (an extension of Faster R-CNN), which integrates bottom-up and top-down processes for water recognition. Gan and Zailah [34] propose a water level classification model for a flood monitoring system by integrating it with artificial intelligence technology and a CNN.
This study proposes an automatic detection approach that can be used for redundancy of the sensor techniques, in order to assure a secure water level monitoring system. Image analysis techniques and deep neural networks are used to automatically measure and estimate the water level from conventional camera images of the staff gauge. In this context, this article proposes a CNN model and compares it with two other models for the redundant system, in order to corroborate that it is a suitable model for water level estimation.

2. Case Study

The Madeira River is the largest tributary of the Amazon River in South America. The river supports a power generation capacity of 3350 MW, and its flow can reach 60,000 m³/s, enough to supply approximately ten million homes. The Jirau Hydroelectric Plant is one of two power stations installed on the Madeira River, located in the state of Rondônia, Brazil (Figure 1). It is composed of 50 generating units, managed by the Consortium Energia Sustentável do Brasil (ESBR), which provided all data for this study.
The continuous monitoring of inflows and water level is an essential tool for hydropower dam operators, providing real-time data for decision making in power generation and planning. The water level, measured in meters relative to mean sea level, requires maximum accuracy and must be efficiently available to hydropower control systems and operators. The water level at the Jirau plant reservoir ranges from 82.5 m to 90 m, and its monitoring system consists of sensors that measure the water level at the dam and send data to the control room. Moreover, the water level is monitored through real-time video of the staff gauge displayed in the control room, where expert operating staff compare the sensor data with the level read on the staff gauge.
For this study, real-time videos of the staff gauges were recorded between 31 May and 6 June 2020. The dataset consists of thirty-five real-time videos of staff gauges under different conditions, i.e., day, night, and varying weather. The lowest level registered in the dataset is 86.35 m and the highest is 88.89 m, covering an amplitude of 31.87% of the water level range with a distribution uniformity of 0.2995. The measurement error/uncertainty when using the conventional staff gauge is at most 0.6%. The aim of this paper is to estimate the water level of a river at a hydro-power plant. The videos were recently recorded and provided by the Jirau Hydroelectric Plant specifically for this case study.

3. Methodology

The images from the real-time videos are preprocessed in order to determine measurements of the river level, and a deep neural network model is used to automatically measure and estimate the water level. The videos are separated into frames, 3364 images in total, and the water level is measured for each image. Initially, the images are preprocessed to remove date and time stamps and prepared for extraction of the region of interest (ROI) [35]. Once the water level is detected by image analysis, each image is labeled with its level and the set is divided in two: 2706 images for the training set and 658 for the testing set. Thereafter, deep neural networks are used to train and estimate the water level.
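The frame split described above (2706 training and 658 test images out of 3364) can be sketched as follows. The `train_test_split` helper and the use of a seeded random shuffle are illustrative assumptions, since the paper does not state how frames were assigned to each set:

```python
import random

def train_test_split(frames, n_test=658, seed=0):
    """Shuffle the extracted frames and split them into training and test sets.

    The random shuffle and fixed seed are assumptions for illustration;
    the paper only reports the final 2706 / 658 split.
    """
    rng = random.Random(seed)
    shuffled = list(frames)
    rng.shuffle(shuffled)
    return shuffled[n_test:], shuffled[:n_test]

train_set, test_set = train_test_split(range(3364))
```

With 3364 frames this reproduces the paper's split sizes (2706 training, 658 test, roughly 80/20).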

3.1. Image Processing

Digital image processing is applied to remove noise for better visualization and to extract the ROI of the staff gauge. For the provided dataset, the camera angle is not suitable for direct ROI extraction: the captured images show that the staff gauge is not vertically aligned, which affects correct detection of the water level in the final analysis. The original staff gauge image to be binarized for ROI extraction is shown in Figure 2.
To improve image quality, a vertical shearing filter was applied to straighten the image [36]. Next, non-uniform illumination was corrected by applying image segmentation. A morphological opening filter was applied for noise reduction, and a gamma filter to enhance contrast (Figure 3).
To extract the ROI, the enhanced image was binarized, and the borders of the ROI were detected and cropped from the image. The resulting region of interest is depicted in Figure 4.
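A minimal sketch of the gamma-correction, binarization and crop steps, using only NumPy; the function names are hypothetical, and the shearing and morphological opening filters used in the paper are omitted for brevity:

```python
import numpy as np

def gamma_correct(img, gamma=0.6):
    """Gamma filter: enhance contrast of a grayscale image scaled to [0, 1]."""
    return np.clip(img, 0.0, 1.0) ** gamma

def binarize(img, threshold=0.5):
    """Binarize the enhanced image; returns a boolean foreground mask."""
    return img > threshold

def crop_roi(mask):
    """Crop to the bounding box of the foreground, i.e., the detected ROI."""
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return mask[r0:r1 + 1, c0:c1 + 1]
```

In practice the threshold and gamma values would be tuned to the camera and lighting; the fixed values here are placeholders.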
After extracting ROI, water-level management was performed in order to identify the water level measured at each image.

3.2. Water Level Management

The staff gauge contains count marks that indicate the level the water has reached. To measure the water level by image analysis of the ROI, it is necessary to define a window size for these count marks, in pixel coordinates, and obtain the number of counts present on the staff gauge. In addition, the distance between the beginning of the staff gauge and the river's surface line is also defined. Considering a fixed mark in meters, the number of counts, and the defined surface line, it is possible to relate them according to Equation (1):
$$ l = \left( r\,c + \frac{d}{s} \right) \cdot 0.1 \tag{1} $$
where l is the water level of the river, r is the fixed mark, c is the number of counts on the staff gauge, d is the difference between the water surface line and the beginning of the staff gauge, and s is the size of each count mark in pixel coordinates. To express the result in centimeters, the value is multiplied by 0.1.
The final result shows the preprocessed image and the detected water level measurement (Figure 5). Once the images were processed, deep neural networks were used to train and test on the dataset in order to estimate the river level for hydro-power control.
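The relationship above translates directly into code. This sketch assumes the reconstructed form of Equation (1); the helper name is hypothetical:

```python
def water_level(r, c, d, s):
    """Water level l from staff-gauge image measurements (Equation (1)).

    r: fixed reference mark (meters); c: number of count marks read on the gauge;
    d: pixel distance between the water surface line and the gauge start;
    s: pixel size of one count mark (so d / s converts pixels to count units).
    The 0.1 factor is the paper's unit-conversion multiplier.
    """
    return (r * c + d / s) * 0.1
```

For example, with r = 1.0, c = 5 counts, and a surface offset of two count marks (d = 20 px, s = 10 px), the function returns (5 + 2) · 0.1 = 0.7.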

3.3. Residual Neural Network Model

ResNets are extensively used in computer vision tasks [21]. ResNet consists of a residual learning framework that simplifies the training of deep networks, with an architecture composed of residual blocks. A deep network is composed of many nonlinear functions whose inter-layer dependencies can be highly complex, making gradient computations unstable [21]. To circumvent this problem, ResNet introduces identity skip connections that bypass residual layers, allowing information to pass directly to subsequent layers. Therefore, instead of layers fitting the desired underlying mapping directly, they fit a residual mapping.
Considering input x and an underlying mapping H ( x ) to be fit by stacked layers, residual mapping is defined as
$$ F(x) = H(x) - x. \tag{2} $$
The residual unit structure is represented by
$$ \hat{x} = \sigma(F(x, W)) + H(x), \tag{3} $$
where x̂ is the output of the residual unit, H(x) is an identity mapping, W is the set of weights, σ is the activation function of the nodes (ReLU) and F(x, W) is the residual mapping to be learned [21,37].
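The residual unit above can be sketched numerically. Here F is a single linear map for illustration (an assumption; real residual blocks stack two or three convolutions with batch normalization):

```python
import numpy as np

def relu(z):
    """Activation function sigma (ReLU)."""
    return np.maximum(z, 0.0)

def residual_unit(x, W):
    """One residual unit: x_hat = sigma(F(x, W)) + H(x), with H(x) = x.

    F is reduced to a single linear map W @ x for illustration only.
    """
    F = W @ x            # residual mapping F(x, W) to be learned
    return relu(F) + x   # identity shortcut adds x through unchanged
```

Even when the learned mapping contributes nothing (W = 0), the input still flows through the shortcut; this is the mechanism that keeps information and gradients from attenuating across many stacked units.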
The ResNet used in this study, ResNet50, consists of 50 layers, composed of a convolutional layer followed by residual blocks. The dataset provides the preprocessed images as input, each resized to 224 × 224 within the model. A convolutional layer with a 7 × 7 filter is applied first, followed by residual blocks, each a series of residual layers with 3 × 3 filters. The last layer is connected to a fully connected layer followed by a regression layer [21].

3.4. MobileNetV2 Model

MobileNetV2 is a neural network characterized by optimized memory consumption and execution at low computational cost [38]. Its architecture is built on depthwise separable convolutions and an inverted residual structure similar to residual blocks [39,40]. As described in [41], depthwise separable convolutions factor a full convolution operator into two separate layers. The first layer is a depthwise convolution, which applies a single filter to each feature map. The second layer is a 1 × 1 convolution kernel, named pointwise, which builds new features by computing linear combinations of the feature maps. A regular convolution operator processes an image across height, width and channel dimensions at the same time, whereas separable convolutions process height and width in the first layer and the channel dimension in the second layer [42].
According to [30], the computational cost of a regular convolution operator is
$$ C_{regular} = h_i \cdot w_i \cdot d_i \cdot d_j \cdot k \cdot k \tag{4} $$
and for separable convolutions the cost is
$$ C_{separable} = h_i \cdot w_i \cdot d_i \cdot (d_j + k \cdot k), \tag{5} $$
where C_regular is the cost of a regular convolution operator, C_separable that of a separable convolution, i and j are the input and output layer indices, h and w are the height and width of the feature maps, d_i and d_j are the numbers of input and output feature maps, respectively, and k is the filter size. The computational advantage of separable convolutions is therefore quantified by Equation (6):
$$ \frac{C_{regular}}{C_{separable}} = \frac{d_j \cdot k^2}{d_j + k^2}. \tag{6} $$

3.5. Evaluation Criteria

The deep neural network methods were evaluated by three criteria: root mean square error (RMSE), Equation (7); mean absolute error (MAE), Equation (8); and the coefficient of determination (R²), Equation (9):
$$ RMSE = \sqrt{\frac{1}{N} \sum_{i=1}^{N} (\hat{y}_i - y_i)^2} \tag{7} $$
$$ MAE = \frac{1}{N} \sum_{i=1}^{N} |\hat{y}_i - y_i| \tag{8} $$
$$ R^2 = 1 - \frac{\sum_{i=1}^{N} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{N} (y_i - \bar{y})^2} \tag{9} $$
where N is the number of samples, ŷ_i is the estimated value, y_i is the measured value and ȳ is the average of the measured water level. These criteria measure the accuracy of the estimation model for training and testing. RMSE and MAE measure the magnitude of the errors in meters, with lower values indicating better performance; MAE averages, over the dataset, the absolute difference between the estimates and the water levels measured by image analysis. R² represents the ratio between the variance explained by the estimation and the total variance of the data. All evaluation results, accuracy and errors for each tested model are presented individually in Section 5.

4. Convolutional Neural Networks: Proposed Method

The proposed CNN model is composed of 19 layers, the first of which is responsible for normalizing the image dimensions to guarantee an output of 224 rows, 224 columns and three channels (RGB). The next five layers contain sets of filters that perform convolution operations with their subsequent layers. Each filter in a set produces an activation map, and the maps are stacked along the depth dimension to produce the output volume. As a parameter for all layers, we used a kernel of size 3 × 3. The CNN architecture is summarized in Figure 6.
The model is empirically calibrated with an initial learning rate of 0.001 for 350 epochs, values chosen independently for faster training and better accuracy compared to renowned models [43]. To avoid rapid reduction of the matrix dimensions, zero-padding is applied after each convolutional layer. For each layer, the non-linearity involves a nonlinear activation function that takes a single number and performs a fixed mathematical operation on it, complementing the convolution operator [44].
The pooling layer is responsible for reducing the spatial size, the number of parameters and the computation in the network. We used average pooling to reduce the activation maps, and applied a dropout layer of 20% to further reduce the data. The fully connected layer performs classification of the features extracted by all convolutional layers [43]. To perform estimations, a regression layer is added at the end of the CNN architecture. Figure 6 shows the six convolutional layers with their respective filters in brackets, with batch normalization, ReLU and an average pooling layer after each convolutional layer, one dropout layer, a fully connected layer and the regression layer that makes the estimations.
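The role of the zero-padding mentioned above can be seen from the standard convolution output-size formula. This is a generic sketch, not the authors' code; it assumes "padding" refers to one pixel of zero-padding per side:

```python
def conv_out(n, k=3, p=1, s=1):
    """Spatial output size of a convolution: floor((n - k + 2p) / s) + 1.

    n: input size, k: kernel size, p: zero-padding per side, s: stride.
    """
    return (n - k + 2 * p) // s + 1
```

With 3 × 3 kernels and p = 1, a 224-pixel input keeps its size at every layer, so stacking convolutions does not shrink the activation maps; without padding (p = 0), each layer would lose two pixels, reducing 224 to 212 after six convolutions.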

5. Results

Three different deep neural networks were tested in order to identify the model that best fits the problem: the proposed CNN, ResNet50 [21] and MobileNetV2 [30]. They were evaluated by training and testing on the images under the same parameter calibration (i.e., initial learning rate, epochs, bias, validation frequency, etc.).
Table 1 shows the RMSE, MAE and R² results for the training and test datasets. Comparing the coefficient of determination, ResNet50 reached 0.7808 on the training dataset and 0.7692 on the test dataset, while MobileNetV2 reached 0.7803 and 0.7612, respectively. The proposed model's coefficient of determination was 0.9004 for training and 0.8868 for testing. The difference in the other criteria (RMSE and MAE) is also evident in Table 1, indicating that the proposed model obtained the best result in all criteria compared with the ResNet50 and MobileNetV2 networks.
Figure 7 compares the estimates obtained with ResNet50, MobileNetV2 and the proposed CNN model against the values from image analysis (Expected) and the model outputs (Predicted), all under the same configuration. Due to noise in some images caused by acquisition and/or weather conditions (i.e., rain), the predicted level was not precise where image analysis could not compensate for the error; these situations appear as spikes in the plots. Figure 7a,b compares results for the training and test sets of ResNet50, respectively. For this model, both results are similar and remain within a narrow range without much variation. Since the ResNet50 architecture is less susceptible to error, its predicted level shows fewer spikes than the other models. However, the model underestimates the predicted level relative to the expected level in comparison to the other models under the same configuration. These results corroborate the evaluation presented in Table 1. Figure 7c,d shows the values for the MobileNetV2 training and test sets, respectively. In this case, there is considerable variation between predicted and expected levels compared to the ResNet50 model. Moreover, since its architecture is more susceptible to error, the predicted level presents more spikes in the plots. However, the estimated level for MobileNetV2 stayed within the range of the expected level, which also corroborates the evaluation criteria. Figure 7e,f shows the values of the proposed CNN model for the training and test sets. Despite some spikes in the predictions, this model's susceptibility to error is lower than MobileNetV2's, though still greater than ResNet50's. Nevertheless, it produced the best agreement between predicted and expected levels for both training and test sets among the compared models, which also confirms the results of Table 1.

6. Conclusions and Discussion

This work proposes water level detection using images and a deep neural network to automatically estimate the level of a river at a hydro-power plant. The image dataset was built from preprocessed videos of staff gauge readings provided by the Jirau Hydroelectric Power Plant. Digital image processing consisted of applying filters (i.e., a vertical shearing filter, image segmentation, a morphological opening filter and a gamma filter) to enhance the images and identify the region of interest. Moreover, to identify the water measurement from the images, we defined a window size for the count marks, in pixel coordinates, and obtained the number of counts present on the staff gauge. This strategy of identifying the water measurement by correlating the river level, fixed marks and number of counts on the staff gauge provided a qualified dataset for training and testing deep neural networks.
As the deep neural network, a CNN model was proposed to detect the water level using the preprocessed images, with separate training and test datasets. The estimation capacity of the proposed model was evaluated in terms of RMSE, MAE and R², resulting in low errors for both sets. The determination coefficient of the proposed model reached 0.9004 for the training images and 0.8868 for testing. Moreover, the MobileNetV2 and ResNet50 networks were implemented with their standard parameters in order to compare them with the proposed model, using the same dataset, configuration and test criteria.
The evaluation of MobileNetV2 reached a determination coefficient of 0.7803 for the training set and 0.7612 for the test set, showing that the proposed model is superior to MobileNetV2. The evaluation of ResNet50 was slightly higher, reaching a determination coefficient of 0.7808 for training and 0.7692 for testing, still below the proposed CNN model (0.9004 for training and 0.8868 for testing). Moreover, the implementation complexity and computational cost of ResNet50 diminish its appeal relative to the proposed CNN model. On average, the proposed CNN model performed 49.82% better, in percent change, than ResNet50, and 26.30% better than MobileNetV2.
However, this methodology has some limitations. The dataset is restricted to a short recording period and to a specific staff gauge, since this was a new procedure at the Jirau Power Plant. Therefore, extending the data with measurements over a wider range of water levels and across different seasons would increase the accuracy of the models. Moreover, noise in the images caused by weather conditions and/or acquisition also affects the efficiency of the models that are more susceptible to errors. Fine-tuning the parameters of the deep neural networks (i.e., bias, learning rate, validation frequency, etc.) could therefore improve not only the proposed CNN model but also the compared models, providing a more precise estimation of the water level.
Therefore, it can be concluded that the CNN-based strategy is a promising approach for water level detection on the Madeira River, and can contribute to data-integrity security through information redundancy via an independent acquisition paradigm. This safety is essential for efficiency studies of the Jirau Hydroelectric Power Plant and for energy production, as well as hydropower control and management.
As future work, these limitations can be addressed by enlarging the dataset with new images from different dates, water levels and seasons, providing better training and test sets for the model. Furthermore, different calibrations and configurations of the deep neural networks can be explored in order to better adapt the model to the problem.

Author Contributions

Conceptualization, D.V.d.N. and C.J.C.; Data curation, G.R.d.O.F., D.V.d.N. and F.d.S.L.R.; Formal analysis, A.R.G.F.; Funding acquisition, C.J.C.; Investigation, G.R.d.O.F. and A.R.G.F.; Methodology, A.R.G.F.; Project administration, F.d.S.L.R., R.V.d.C. and C.J.C.; Resources, F.d.S.L.R.; Software, G.R.d.O.F. and D.V.d.N.; Validation, A.R.G.F.; Writing—original draft, G.R.d.O.F., D.V.d.N., A.R.G.F., R.V.d.C. and C.J.C.; Writing—review & editing, R.V.d.C. and C.J.C. All authors have read and agreed to the published version of the manuscript.

Funding

R&D Program of Energia Sustentável do Brasil S.A. (PD-06631-0007/2018).

Acknowledgments

The authors thank Energia Sustentável do Brasil for their support in conducting this study ("Project regulated by ANEEL and developed within the scope of the R&D Program of Energia Sustentável do Brasil S.A. (PD-06631-0007/2018)").

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhang, Z.; Zhou, Y.; Liu, H.; Zhang, L.; Wang, H. Visual Measurement of Water Level under Complex Illumination Conditions. Sensors 2019, 19, 4141.
  2. Kishor, N.; Saini, R.P.; Singh, S.P. A review on hydropower plant models and control. Renew. Sustain. Energy Rev. 2007, 11, 776–796.
  3. Katole, S.; Bhute, Y.A. Real Time Water Quality Monitoring System based on IoT Platform. Int. J. Recent Innov. Trends Comput. Commun. 2017, 5, 302–305.
  4. Zheng, G.; Zong, H.; Zhuan, X.; Wang, L. High-accuracy surface-perceiving water level gauge with self-calibration for hydrography. IEEE Sens. J. 2010, 10, 1893–1900.
  5. Li, G.B.; Ha, Q.; Qiu, W.B.; Xu, J.C.; Hu, Y.Q. Application of guided-wave radar water level meter in tidal level observation. J. Ocean Technol. 2018, 37, 19–23.
  6. Katole, S.; Bhute, Y.A. Discharge-Measurement System Using an Acoustic Doppler Current Profiler with Applications to Large Rivers and Estuaries; US Government Printing Office: Washington, DC, USA, 1993; p. 32.
  7. Shin, I.; Kim, J.; Lee, S.G. Development of an internet-based water-level monitoring and measuring system using CCD camera. In Proceedings of ICMIT 2007: Mechatronics, MEMS, and Smart Materials, Gifu, Japan, 16–18 December 2007; Volume 6794.
  8. Galvão Filho, A.R.; de Carvalho, R.V.; Ribeiro, F.S.L.; Coelho, C.J. Generation of Two Turbine Hill Chart Using Artificial Neural Networks. In Proceedings of the IEEE 10th International Conference on Intelligent Systems (IS), Varna, Bulgaria, 28–30 August 2020; pp. 457–462.
  9. Rawat, W.; Wang, Z. Deep Convolutional Neural Networks for Image Classification: A Comprehensive Review. Neural Comput. 2017, 2352–2449.
  10. Sultana, F.; Sufian, A.; Dutta, P. Evolution of Image Segmentation using Deep Convolutional Neural Network: A Survey. arXiv 2020, arXiv:2001.04074.
  11. Dhillon, A.; Verma, G.K. Convolutional neural network: A review of models, methodologies and applications to object detection. Prog. Artif. Intell. 2020, 9, 85–112.
  12. Bressem, K.K.; Adams, L.C.; Erxleben, C.; Hamm, B.; Niehues, S.M.; Vahldiek, J.L. Comparing different deep learning architectures for classification of chest radiographs. Sci. Rep. 2020.
  13. Khan, A.; Sohail, A.; Zahoora, U.; Qureshi, A.S. A survey of the recent architectures of deep convolutional neural networks. Artif. Intell. Rev. 2020, 53, 5455–5516.
  14. Sindagi, V.A.; Patel, V.M. A survey of recent advances in CNN-based single image crowd counting and density estimation. Pattern Recognit. Lett. 2018, 107, 3–16.
  15. Yang, W.; Zhang, X.; Tian, Y.; Wang, W.; Xue, J.-H.; Liao, Q. Deep Learning for Single Image Super-Resolution: A Brief Review. arXiv 2019, arXiv:1808.03344.
  16. Alom, M.Z.; Taha, T.M.; Yakopcic, C.; Westberg, S.; Sidike, P.; Nasrin, M.S.; Van Essen, B.C.; Awwal, A.A.S.; Asari, V.K. The History Began from AlexNet: A Comprehensive Survey on Deep Learning Approaches. arXiv 2018, arXiv:1803.01164.
  17. Guan, B.; Zhang, J.; Sethares, W.A.; Kijowski, R.; Liu, F. SpecNet: Spectral Domain Convolutional Neural Network. arXiv 2019, arXiv:1905.10915.
  18. Alom, M.Z.; Hasan, M.; Yakopcic, C.; Taha, T.M.; Asari, V.K. Improved inception-residual convolutional neural network for object recognition. Neural Comput. Appl. 2020, 32, 279–293.
  19. Cao, X.; Li, C. Evolutionary Game Simulation of Knowledge Transfer in Industry-University-Research Cooperative Innovation Network under Different Network Scales. Sci. Rep. 2020, 4, 4027.
  20. Luo, J.-H.; Wu, J. Neural Network Pruning with Residual-Connections and Limited-Data. arXiv 2020, arXiv:1911.08114.
  21. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. arXiv 2015, arXiv:1512.03385.
  22. Targ, S.; Almeida, D.; Lyman, K. Resnet in Resnet: Generalizing Residual Architectures. arXiv 2016, arXiv:1603.08029.
  23. Hinton, G.E.; Srivastava, N.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R.R. Improving neural networks by preventing co-adaptation of feature detectors. arXiv 2012, arXiv:1207.0580.
  24. Zhang, L.; Schaeffer, H. Forward Stability of ResNet and Its Variants. J. Math. Imaging Vis. 2019, 62, 328–351.
  25. He, F.; Liu, T.; Tao, D. Why ResNet works? Residuals generalize. arXiv 2020, arXiv:1904.01367v1.
  26. Pashaei, M.; Kamangir, H.; Starek, M.J.; Tissot, P. Review and Evaluation of Deep Learning Architectures for Efficient Land Cover Mapping with UAS Hyper-Spatial Imagery: A Case Study Over a Wetland. Remote Sens. 2020, 12, 959.
  27. Sinha, D.; El-Sharkawy, M. Thin MobileNet: An Enhanced MobileNet Architecture. In Proceedings of the IEEE 10th Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON), New York, NY, USA, 10–12 October 2019.
  28. Sun, X.; Choi, J.; Chen, C.-Y.; Wang, N. Hybrid 8-bit Floating Point (HFP8) Training and Inference for Deep Neural Networks. In Proceedings of the 33rd Conference on Neural Information Processing Systems (NeurIPS), Vancouver, BC, Canada, 8–14 December 2019; Volume 32.
  29. Mehta, S.; Rastegari, M.; Shapiro, L.; Hajishirzi, H. ESPNetv2: A Light-Weight, Power Efficient, and General Purpose Convolutional Neural Network. arXiv 2019, arXiv:1811.11431.
  30. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.-C. Comparing different deep learning architectures for classification of chest radiographs. arXiv 2019, arXiv:1801.04381. [Google Scholar]
  31. Wang, G.; Wu, M.; Wei, X.; Song, H. Water Identification from High-Resolution Remote Sensing Images Based on Multidimensional Densely Connected Convolutional Neural Networks. Remote. Sens. 2020, 12, 795. [Google Scholar] [CrossRef] [Green Version]
  32. Han, F.; Yao, J.; Zhu, H.; Wang, C. Underwater Image Processing and Object Detection Based on Deep CNN Method. Hindawi J. Sens. 2020. [Google Scholar] [CrossRef]
  33. Song, S.; Liu, J.; Liu, Y.; Feng, G.; Han, H.; Yao, Y.; Du, M. Intelligent Object Recognition of Urban Water Bodies Based on Deep Learning for Multi-Source and Multi-Temporal High Spatial Resolution Remote Sensing Imagery. Sensors 2020, 2, 397. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  34. Gan, J.L.; Zailah, W. Water Level Classification for Flood Monitoring System Using Convolutional Neural Network. Lecture Notes in Electrical Engineering. In Proceedings of the 11th National Technical Seminar on Unmanned System Technology, Kuantan, Malaysia, 2–3 December 2019; Volume 666, pp. 299–318. [Google Scholar]
  35. Zhou, Q.; Ma, L.; Celenk, M.; Chelberg, D. Real-Time Video Object Recognition Using Convolutional Neural Network. Multimed. Tools Appl. 2015, 27, 251–281. [Google Scholar] [CrossRef]
  36. Schneider, A.; Hommel, G.; Blettner, M. Linear Regression Analyzes. Dtsch. Ärztebl. Int. 2010, 107, 776–782. [Google Scholar] [PubMed]
  37. Veit, A.; Wilber, M.; Belongie, S. Residual Networks Behave Like Ensembles of Relatively Shallow Networks. In Proceedings of the Advances in Neural Information Processing Systems 29 (NIPS), Barcelona, Spain, 5–10 December 2016; pp. 1–9. [Google Scholar]
  38. Behrmann, J.; Grathwohl, W.; Chen, R.T.Q.; Duvenaud, D.; Jacobsen, J.-H. Invertible Residual Networks. arXiv 2019, arXiv:1811.00995. [Google Scholar]
  39. Abdi, M.; Nahavandi, S. Multi-Residual Networks: Improving the Speed and Accuracy of Residual Networks. arXiv 2017, arXiv:1609.05672. [Google Scholar]
  40. Tung, F.; Mori, G. Similarity-Preserving Knowledge Distillation. arXiv 2019, arXiv:1907.09682. [Google Scholar]
  41. Veit, A.; Belongie, S. Convolutional Networks with Adaptive Inference Graphs. arXiv 2020, arXiv:1711.11503. [Google Scholar]
  42. Siu, C. Residual Networks Behave Like Boosting Algorithms. arXiv 2019, arXiv:1909.11790. [Google Scholar]
  43. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. Commun. ACM 2017, 84–90. [Google Scholar] [CrossRef]
  44. Ahn, B. Real-Time Video Object Recognition Using Convolutional Neural Network. In Proceedings of the International Joint Conference on Neural Networks (IJCNN), Killarney, Ireland, 12–17 July 2015; pp. 1–7. [Google Scholar] [CrossRef]
Figure 1. Madeira river basin.
Figure 2. Original image from recorded video.
Figure 3. Image enhancement.
Figure 4. Extraction of the region of interest.
Figure 5. Water level detected in a preprocessed image.
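Figures 2–5 trace the preprocessing pipeline from the raw camera frame to the detected water line on the staff gauge. The sketch below mirrors those steps on a synthetic frame; the contrast-stretch enhancement, the fixed-crop region of interest, the gradient-based line detector, and all function names are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def enhance(frame):
    # Contrast stretch to the full 0-255 range, a stand-in for the
    # image-enhancement step (Figure 3); the paper's exact filters differ
    f = frame.astype(float)
    lo, hi = f.min(), f.max()
    return ((f - lo) / (hi - lo + 1e-9) * 255).astype(np.uint8)

def crop_roi(frame, top, bottom, left, right):
    # Region-of-interest extraction (Figure 4): keep only the gauge area
    return frame[top:bottom, left:right]

def water_line_row(roi):
    # Locate the water level (Figure 5) as the row with the strongest
    # brightness change between consecutive rows
    row_means = roi.mean(axis=1)
    return int(np.argmax(np.abs(np.diff(row_means)))) + 1

# Synthetic 100x40 frame: bright gauge above row 60, dark water below
frame = np.full((100, 40), 200, dtype=np.uint8)
frame[60:, :] = 40
roi = crop_roi(enhance(frame), 0, 100, 5, 35)
print(water_line_row(roi))  # detected row index (60 for this synthetic frame)
```

The detected row index would then be mapped to a metric water level via the known graduations on the staff gauge.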
Figure 6. Convolutional neural network architecture.
Figure 7. Estimation results using training and test dataset for (a,b) ResNet50, (c,d) MobileNetV2 and (e,f) proposed model.
Table 1. Estimation results: root mean square error (RMSE), mean absolute error (MAE) and R² for the train and test datasets using all three deep neural networks.

Model        | Train RMSE (m) | Train MAE (m) | Train R² | Test RMSE (m) | Test MAE (m) | Test R²
ResNet50     | 0.4211         | 0.3891        | 0.7808   | 0.4178        | 0.3841       | 0.7692
MobileNetV2  | 0.3683         | 0.2709        | 0.7803   | 0.3734        | 0.2773       | 0.7612
Proposed CNN | 0.2668         | 0.1991        | 0.9004   | 0.2928        | 0.2228       | 0.8868
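The figures in Table 1 are standard regression error metrics. A minimal sketch of how RMSE, MAE and R² are computed in plain Python follows; the helper names and sample water-level values are illustrative, not taken from the paper's dataset.

```python
import math

def rmse(y_true, y_pred):
    # Root mean square error: square root of the mean squared residual
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def mae(y_true, y_pred):
    # Mean absolute error: mean of the absolute residuals
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def r2(y_true, y_pred):
    # Coefficient of determination: 1 - SS_res / SS_tot
    mean_t = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Hypothetical measured vs. estimated water levels (metres)
levels = [70.1, 70.4, 70.9, 71.3]
preds  = [70.0, 70.5, 70.8, 71.5]
print(round(rmse(levels, preds), 4),
      round(mae(levels, preds), 4),
      round(r2(levels, preds), 4))
```

Lower RMSE/MAE and higher R² indicate better agreement with the sensor readings, which is the basis for the comparison between ResNet50, MobileNetV2 and the proposed CNN.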