Article

Convolutional Neural Network for High-Resolution Cloud Motion Prediction from Hemispheric Sky Images

Institute for Meteorology and Climatology, Leibniz Universität Hannover, Herrenhäuser Straße 2, 30419 Hannover, Germany
* Author to whom correspondence should be addressed.
Energies 2021, 14(3), 753; https://doi.org/10.3390/en14030753
Submission received: 26 December 2020 / Revised: 26 January 2021 / Accepted: 27 January 2021 / Published: 1 February 2021

Abstract

A novel high-resolution method for forecasting cloud motion from all-sky images using deep learning is presented. A convolutional neural network (CNN) was created and trained with more than two years of all-sky images, recorded by a hemispheric sky imager (HSI) at the Institute of Meteorology and Climatology (IMUK) of the Leibniz Universität Hannover, Hannover, Germany. Using the haze index post-processing algorithm, cloud characteristics were identified, and the deformation vector of each cloud was computed and used as ground truth. The CNN training process was built to predict cloud motion up to 10 min ahead, in a sequence of HSI images, tracking clouds frame by frame. The first two simulated minutes show a strong similarity between simulated and measured cloud motion, which allows photovoltaic (PV) companies to make accurate horizon time predictions and better marketing decisions for primary and secondary control reserves. This cloud motion algorithm principally targets global irradiance predictions as an application for electrical engineering and PV output predictions. Comparisons between the region of interest of a cloud predicted by the proposed method and the real cloud position show a mean Sørensen–Dice similarity coefficient (SD) of 94 ± 2.6% (mean ± standard deviation) for the first minute, outperforming the persistence model (89 ± 3.8%). As the forecast time window increased, the index decreased to 44.4 ± 12.3% for the CNN and 37.8 ± 16.4% for the persistence model for the 10-min ahead forecast. In addition, the global horizontal irradiance up to 10 min ahead was also derived using a feed-forward artificial neural network technique for each CNN forecasted image. Overall, the new algorithm presented here increases the SD by approximately 15% compared to the reference persistence model.

1. Introduction

Short-term cloud motion prediction has a huge impact on the future behavior of the power generation output of solar photovoltaic (PV) power plants [1]. Clouds are a major modulator of the global horizontal irradiance (GHI) and a source of severe fluctuations when, for example, passing in front of the sun. Clouds can even increase the solar radiation at the surface by reflection and/or forward scattering [2,3,4]. To compensate for these ramp events, very short-term forecasts can help power plant operators to accurately manage PV power plants. The analysis of clouds plays an important role in both scientific and business enterprises, where these severe fluctuations in the energy output are incompatible with the established safety standards for electricity distribution systems [5].
In this context, the suitability of hemispheric sky imager (HSI) systems as efficient ground-based equipment for cloud data assessment has already been proven by various authors [6]. However, even with good high-resolution cloud detection, cloud movement forecasting is still a topic of research due to its high degree of complexity [7,8,9]. Figure 1 shows how quickly cloud changes can occur within three minutes.
Looking at cloud detection methods, we can mention threshold-based algorithms [10] and machine learning methods [11,12,13,14,15]. Threshold-based algorithms normally use a red/blue ratio of the three RGB (red, green, and blue) channels from the image pixels for cloud classification [16,17,18,19]. Cloud pixels are identified by high R and B values, while sky pixels have low R and high B values. However, this method has some weaknesses, primarily in distinguishing or detecting clouds near the horizon and close to the sun [20]. In addition, different pattern recognition algorithms have been developed. Ren and Malek [21] proposed a cloud segmentation algorithm utilizing superpixels. This algorithm divides the image into blocks (or clusters), and the division is based mainly on the continuity of cloud contours and the texture and brightness of each pixel. A hybrid framework has been proposed to forecast hourly global solar radiation [22]. This approach combines two different methods: support vector machines and machine learning techniques. The results showed that it is possible to predict next-day hourly values of solar radiation, reducing the relative mean absolute error (rMAE) by 15.2% compared to the reference persistence model.
Machine learning methods have been used successfully for cloud detection [23] and cloud coverage estimation [24]. Crisosto [25] developed a method using HSI images to predict cloud concentrations one minute in advance using artificial neural networks (ANNs). The results showed a 30% reduction of errors when compared to the persistence model under diverse cloud conditions. In addition, a similar method has been used as an important step for predicting GHI one hour in advance at one-minute intervals [26]. Other advanced and sophisticated techniques, such as convolutional neural networks (CNNs), have been developed and applied in recent years to forecast solar irradiance [27], as they offer significant advantages for large image datasets [28] and can capture non-linearities and other more complex relationships [29].
The main objective of this work is to propose a preliminary pre-processing method for solar irradiance predictions that allows companies to make more accurate horizon time predictions. The CNN algorithm can be important for very short-term GHI forecasts, and subsequently for better marketing decisions for primary and secondary control reserves (cloud position and GHI up to 5 min in advance). In this paper, cloud deformation vectors up to 10 min ahead were determined under different cloud and all-weather conditions. To validate the methodology, we applied an ANN technique to estimate the respective GHI of the forecasted cloud clusters. Section 2 briefly describes the data acquisition methods. The methodologies of this study are described in Section 3. The results are given in Section 4. The conclusions and future work are discussed in Section 5.

2. Data

The HSI equipment used was a digital compact charge-coupled device camera with a fish-eye objective with a field of view of 183° inside a weatherproof box, which provided hemispherical images of the entire sky [30]. The exposure time was 1/1000 s, and images were acquired at 1-min intervals. In total, 150,000 pictures were produced between 2014 and 2016. From these 150,000 manually segmented images, 5000 were selected for testing (i.e., these pictures were independent of the training data). The system is installed on the roof of the Institute of Meteorology and Climatology (IMUK) of the Leibniz University Hannover in Germany (52.4° N, 9.7° E). Completely overcast images were not used in this analysis, since GHI values under 100 W/m² are usually not relevant for the production of solar energy and we were more interested in larger GHI ramp effects. In addition, the GHI data were obtained using a CMP11 pyranometer (Kipp & Zonen, Delft, The Netherlands) [31].

3. Methodology

3.1. Cloud Identification

The method used to identify and separate cloud and sky pixels is an improved sky index image-processing algorithm [32]. The haze index identifies cloud pixels by combining the red, blue, and green channels, as detailed by Schrempf [33], and serves as an improvement for hazed areas. Every pixel is then classified as cloudy or clear sky based on a threshold (see Figure 3). Equation (1) presents the haze index, which is applied only to hazed areas, based on thresholds of the sky index.
$$\mathrm{Haze\ Index} = \frac{\frac{count_{red} + count_{blue}}{2} - count_{green}}{\frac{count_{red} + count_{blue}}{2} + count_{green}} \qquad (1)$$
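As an illustration, Equation (1) can be evaluated per pixel with NumPy. This is a minimal sketch, not the authors' implementation; in particular, the threshold value used below is hypothetical, since the paper derives its thresholds from the sky index.

```python
import numpy as np

def haze_index(rgb):
    """Per-pixel haze index of an RGB image array of shape (H, W, 3),
    following Eq. (1): ((R + B)/2 - G) / ((R + B)/2 + G)."""
    rgb = rgb.astype(float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    rb_mean = (r + b) / 2.0
    return (rb_mean - g) / (rb_mean + g + 1e-9)  # epsilon guards black pixels

def cloud_mask(rgb, threshold=0.0):
    """Binary cloud/sky classification; the threshold here is illustrative."""
    return haze_index(rgb) > threshold
```

For a grayish pixel (equal channels) the index is zero, while a magenta-tinted hazy pixel (high R and B, low G) yields a clearly positive value.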

3.2. Semantic Segmentation (Acquiring Labelled Data)

Deep learning and specifically CNNs have drastically improved the way in which intelligent algorithms learn. With convolutional layers, pooling layers, and fully connected layers, CNNs allow computational models to represent data with multiple levels of abstraction [34]. With the cloud–sky separation derived by the haze index algorithm, automatic cloud segmentation is realized. Cloud clusters are therefore labelled as ground truth for the automatic segmentation, and thus for further cloud motion forecasts. Sky clusters were not taken into consideration. Figure 2 shows the process of acquiring the ground truth for the input parameters of the CNN. The first column shows the original image. The results of the haze index algorithm can be seen in the second column. The third column shows the two classes: cloud clusters in white and sky clusters in gray. We can see how the CNN learns to recognize different regions of interest (ROIs) for further simulations. The training process consists of learning how clouds change frame by frame consecutively.

3.3. CNN Development and Training

Once the ROIs were identified by the haze index, they were used as ground truth parameters for training the CNN. The input parameters were the original HSI images and their corresponding cloud clusters (see Figure 2). The network accepts one HSI image (.jpg) and one cloud cluster image (.jpg) and learns exactly, frame by frame, where the clouds are.
The input images were resized to 256 × 256, with max pooling of 2 × 2, resulting in an output layer of 256 × 256. In the training phase, a pre-trained CNN for classification and detection (VGG-16) [34] was selected and extended to automatically learn the ROI changes in whole pictures frame by frame. The training process finished when the network had learned the ROI changes as accurately as possible. The binary cross entropy function was minimized during the training process, and the activation layers were simple rectified linear units, or ReLUs, defined as ReLU(z) = max(0, z), or variants of this function proposed by He et al. [35]. Adaptive moment estimation (Adam) [36] was chosen as the stochastic optimization method, and the batch size was 16. We trained the model for 100 epochs.
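The two mathematical ingredients named above, the ReLU activation and the binary cross entropy loss, can be stated compactly. The following is a minimal NumPy sketch of these two functions only, not of the VGG-16 network or the training loop itself.

```python
import numpy as np

def relu(z):
    """Rectified linear unit, ReLU(z) = max(0, z), applied element-wise."""
    return np.maximum(0.0, z)

def binary_cross_entropy(y_true, y_pred, eps=1e-7):
    """Mean binary cross entropy between ground-truth cloud masks (0/1)
    and predicted cloud probabilities; predictions are clipped to (eps, 1-eps)
    for numerical stability."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.clip(np.asarray(y_pred, dtype=float), eps, 1.0 - eps)
    return float(-np.mean(y_true * np.log(y_pred) + (1.0 - y_true) * np.log(1.0 - y_pred)))
```

Minimizing this loss over the labelled cloud/sky masks is what the Adam optimizer does during the 100 training epochs.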
Once the trained network was ready (the network had learned the cloud movements frame by frame in more than 145,000 images), the simulation phase began. The program first identified the location of the current ROI at t and, after saving this information, went back 5 min (t − 5Δt). Then the program went forward, frame by frame, to forecast the new ROI at t + Δt by applying the probabilistic accuracies learned in the training phase. In other words, the trained network delivers frame by frame the best matching cloud location, and the output is the new (estimated) ROI (cloud location) for the next minute. Furthermore, the ROI estimated for t + Δt is the base ROI for t + 2Δt, and so on.
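This autoregressive rollout, in which each one-minute forecast becomes the input for the next step, can be sketched as follows. Here `predict_next` is a hypothetical stand-in for the trained CNN's one-minute-ahead step.

```python
def forecast_rois(base_roi, predict_next, horizon_minutes=10):
    """Autoregressive rollout: apply a one-step ROI predictor repeatedly,
    so the ROI estimated for t + k becomes the input for t + k + 1.

    predict_next -- callable mapping one ROI to the ROI one minute later
                    (stands in for the trained CNN).
    Returns the forecast ROIs for t+1 ... t+horizon_minutes.
    """
    forecasts = []
    roi = base_roi
    for _ in range(horizon_minutes):
        roi = predict_next(roi)  # output at t+k feeds the step for t+k+1
        forecasts.append(roi)
    return forecasts
```

Because errors compound through the loop, forecast quality naturally degrades with lead time, which matches the decreasing SD values reported in Section 4.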

3.4. An Artificial Neural Network Used for Validating Our Model

To validate the results of our algorithm, we applied an extra artificial neural network (ANN), as explained by Crisosto et al. [26]. This ANN only needs an HSI image to predict GHI one hour ahead at one-minute resolution. This input image is the output of our CNN method. Therefore, the new predicted ROIs of our algorithm were fed into the ANN as input parameters to derive their corresponding GHI values 10 min ahead.

3.5. Statistical Metrics

The Sørensen–Dice (SD) similarity coefficient [37] and the volume overlap (VO) [38] were used for the method evaluation. SD is defined as the ratio of twice the number of elements common to both sets to the sum of the numbers of elements in each set (Equation (2)). VO is defined as the quotient of the intersection and the union of the two segmentations X and Y (Equation (3)).
$$SD(X, Y) = \frac{2\,|X \cap Y|}{|X| + |Y|} \qquad (2)$$
$$VO(X, Y) = \frac{|X \cap Y|}{|X \cup Y|} \qquad (3)$$
where $|X|$ and $|Y|$ are the cardinalities of the two sets. The mathematical definitions of the root mean square error (RMSE) and the coefficient of determination ($R^2$) are expressed as follows:
$$RMSE = \sqrt{\frac{1}{N} \sum_{i=1}^{N} (y_i - x_i)^2} \qquad (4)$$
$$R^2 = \frac{\sum_{i=1}^{N} (y_i - \bar{y})(x_i - \bar{x})}{\left[ \sum_{i=1}^{N} (y_i - \bar{y})^2 \, \sum_{i=1}^{N} (x_i - \bar{x})^2 \right]^{1/2}} \qquad (5)$$
where $y_i$ is the forecast value, $x_i$ is the measured value, and $N$ is the total number of samples. Additionally, $\bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i$ and $\bar{y} = \frac{1}{N}\sum_{i=1}^{N} y_i$ are the respective means.
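For two binary cloud masks and two irradiance series, Equations (2)–(5) translate directly into code. A small NumPy sketch follows; note that the right-hand side of Equation (5) as printed reduces to the Pearson correlation coefficient of the two series.

```python
import numpy as np

def sorensen_dice(x, y):
    """Eq. (2): SD = 2|X ∩ Y| / (|X| + |Y|) for two boolean masks."""
    x, y = np.asarray(x, bool), np.asarray(y, bool)
    return float(2.0 * np.logical_and(x, y).sum() / (x.sum() + y.sum()))

def volume_overlap(x, y):
    """Eq. (3): VO = |X ∩ Y| / |X ∪ Y| for two boolean masks."""
    x, y = np.asarray(x, bool), np.asarray(y, bool)
    return float(np.logical_and(x, y).sum() / np.logical_or(x, y).sum())

def rmse(forecast, measured):
    """Eq. (4): root mean square error between two series."""
    f, m = np.asarray(forecast, float), np.asarray(measured, float)
    return float(np.sqrt(np.mean((f - m) ** 2)))

def r2_eq5(forecast, measured):
    """Eq. (5) as printed; its right-hand side equals the Pearson
    correlation coefficient of the two series."""
    f, m = np.asarray(forecast, float), np.asarray(measured, float)
    num = np.sum((f - f.mean()) * (m - m.mean()))
    den = np.sqrt(np.sum((f - f.mean()) ** 2) * np.sum((m - m.mean()) ** 2))
    return float(num / den)
```

Applied to a forecast mask against the observed mask, `sorensen_dice` and `volume_overlap` return exactly the percentages reported in Tables 1 and 2 (after scaling by 100).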
The new algorithm was compared with the scaled persistence model [39], defined via the ROI configuration vector (ROI_CV), where the next minute is assumed identical to the current minute (ROI_CV_{t+1} = ROI_CV_t). This model is the reference model for short-term solar forecasting [40]. For the irradiance evaluation, only the movement of the sun was taken into consideration.
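The persistence reference is trivially simple: it repeats the current ROI for every lead time, which is what makes it a useful baseline. A one-function sketch:

```python
def persistence_forecast(current_roi, horizon_minutes=10):
    """Persistence reference model: every future ROI equals the current one,
    i.e. ROI_CV(t + k) = ROI_CV(t) for all lead times k."""
    return [current_roi for _ in range(horizon_minutes)]
```

Any model worth deploying must beat this baseline, since persistence is free to compute and exact at lead time zero.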

4. Results

For a better visual representation of the results, the following example cases were selected to show the effectiveness and efficiency of the algorithm: 15 April 2014 at 15:54, 11 July 2014 at 13:09, and 1 June 2014 at 17:42. After that, 5000 cloud images with different cloud positions were simulated and the results are presented.

4.1. Analysis of the Example Cases

Figure 3 shows the one-minute ahead simulation for the three days, including the observed (target) ROI to be simulated (column 1), the ROI simulated by our model (column 2), and the ROI simulated using the persistence model (column 3). In all cases, we can see that our CNN model performs better than the persistence model for the first minute. The SD values were 92.4 ± 2.9% (mean ± standard deviation), 92.7 ± 2.6%, and 88 ± 3.4% for the three cases, while the SD values for the persistence model were 85.8 ± 4.9%, 85.8 ± 5.6%, and 84 ± 7.1%, respectively.
The statistical comparison for forecasts up to 10 min ahead can be seen in Table 1. As expected, the performance of the algorithm decreases substantially as the simulated time progresses. Nevertheless, our model outperforms the persistence model over the full 10-min period. For longer timescale forecasts (for example, 5 and 10 min), these results were already expected; notably, however, for very short forecasts (for example, 1 and 2 min), our CNN also shows clear improvements in forecasting cloud movement.

4.2. Analysis of the Simulation for All Simulated Datasets

Table 2 presents the statistical indicators of the cloud ROI changes in the 5000 tested images. Table 1 and Table 2 show the quality of the results decreasing in forecasts for longer time scales, with S D values up to 44.4 ± 12.3% and V O of 37.7 ± 15.3% obtained for 10-min ahead forecasts.

4.3. Application of the Presented CNN Algorithm

To validate our algorithm, we applied an ANN as described in Section 3.4. For each predicted image from our CNN, the GHI value was predicted at the same time. Figure 4 shows a comparison between the target images and the simulated images one-minute ahead generated as output by the CNN, together with their corresponding measured and simulated GHI.
Figure 5 shows the results for two examples of 10-min ahead simulations utilizing our method as an irradiance simulation tool. Table 3 shows a comparison between 100 observed images and their corresponding simulated values. Figure 6 shows the distribution of the relative deviations as a boxplot for different time horizons.

5. Conclusions

In this study, a novel method to forecast cloud motion up to 10 min ahead was presented. A convolutional neural network (CNN) was trained using hemispherical sky images as inputs, and a statistical approach for forecasting future cloud motion was performed.
According to the simulation results, the method presented here is capable of predicting cloud changes for the first minute with very high confidence using the CNN, with a coefficient of determination ($R^2$) of 0.97 and a Sørensen–Dice similarity coefficient (SD) of 94 ± 2.6%. For the same simulated datasets, the persistence model reached an $R^2$ of 0.92 and an SD of 89 ± 3.8%. The method was also tested for different forecast time scales; however, unsatisfactory results (an $R^2$ of 0.40 and an SD of 49 ± 11.8%) were obtained by our model for 10-min simulations, although they were still better than the results of the persistence model. In addition, the global horizontal irradiance (GHI) predicted from the CNN output showed good forecast accuracy one-minute ahead, achieving an RMSE of 32 W/m² and an $R^2$ of 0.81; the persistence model achieved an RMSE of 45 W/m² and an $R^2$ of 0.76. However, for the GHI prediction 10 min ahead, the RMSE was 148 W/m² and the $R^2$ was 0.42 for our model, versus an RMSE of 187 W/m² and an $R^2$ of 0.39 for the persistence model.
The research presented here can be used as a first step for PV companies to understand cloud movement and to implement an end-to-end forecasting system (as a pipeline) within a fully automated server with the goal of forecasting global horizontal irradiance minutes ahead. This fully automated pipeline implementation will help to allow PV companies to make accurate horizon time predictions and better marketing decisions for primary and secondary control reserves (i.e., up to 5 min in advance).
Future research is needed to better understand cloud movement through wind speed and wind direction, and also to understand how to improve forecast results for periods longer than 1 min or when the sky is totally covered. Different methodologies and maybe different analyses of data should be considered.
Despite the good results, the existence of other models offers new ways to process big data. For example, long short-term memory networks (LSTMs) appear to be an alternative. Since the architecture of these networks is more complex, LSTMs are suitable for processing long data sequences while avoiding the vanishing or exploding gradients that CNNs currently still suffer from. As an outlook for further projects, the utilization of LSTMs and hybrid models should be taken into consideration.

Author Contributions

Conceptualization, C.C. and E.W.L.; Data curation, C.C.; Formal analysis, C.C., E.W.L. and G.S.; Investigation, C.C.; Methodology, C.C.; Project administration, G.S.; Resources, G.S.; Software C.C.; Supervision, G.S.; Validation, C.C. and E.W.L.; Visualization, C.C. and G.S.; Writing–original draft, C.C. and E.W.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The publication of this article was funded by the Open Access fund of Leibniz Universität Hannover.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chauvin, R.; Nou, J.; Thil, S.; Grieu, S. Cloud motion estimation using a sky imager. AIP Conf. Proc. 2016, 1734, 150003.
  2. Dev, S.; Savoy, F.M.; Lee, Y.H.; Winkler, S. Short-term prediction of localized cloud motion using ground-based sky imagers. In Proceedings of the Region 10 Conference (TENCON), Singapore, 22–25 November 2016; pp. 2563–2566.
  3. Kleissl, J. Solar Energy Forecasting and Resource Assessment, 1st ed.; Elsevier: Amsterdam, The Netherlands, 2013.
  4. Marquez, R.; Coimbra, C.F.M. Intra-hour DNI forecasting based on cloud tracking image analysis. Sol. Energy 2013, 91, 327–336.
  5. Dissawa, D.M.L.H.; Ekanayake, M.P.B.; Godaliyadda, G.M.R.I.; Ekanayake, J.B.; Agalgaonkar, A.P. Cloud motion tracking for short-term on-site cloud coverage prediction. In Proceedings of the 17th International Conference on Advances in ICT for Emerging Regions (ICTer), Colombo, Sri Lanka, 6–9 September 2017; pp. 332–337.
  6. Quaschning, V. Statistiken. 2017. Available online: https://www.volker-quaschning.de/datserv/pv-welt/index.php (accessed on 24 October 2019).
  7. Escrig, H.; Batlles, F.J.; Alonso, J.; Baena, F.M.; Bosch, J.L.; Salbidegoitia, I.B.; Burgaleta, J.I. Cloud detection, classification and motion estimation using geostationary satellite imagery for cloud cover forecast. Energy 2013, 55, 853–859.
  8. Huang, H.; Xu, J.; Peng, Z.; Yoo, S.; Yu, D.; Huang, D.; Qin, H. Cloud motion estimation for short term solar irradiation prediction. In Proceedings of the 2013 IEEE International Conference on Smart Grid Communications (SmartGridComm), Vancouver, BC, Canada, 21–24 October 2013; pp. 696–701.
  9. Dev, S.; Lee, Y.H.; Winkler, S. Color-Based Segmentation of Sky/Cloud Images From Ground-Based Cameras. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 1–12.
 10. Kreuter, A.; Zangerl, M.; Schwarzmann, M.; Blumthaler, M. All-sky imaging: A simple, versatile system for atmospheric research. Appl. Opt. 2009, 48, 1091–1097.
 11. Heinle, A.; Macke, A.; Srivastav, A. Automatic cloud classification of whole sky images. Atmos. Meas. Tech. 2010, 3, 557–567.
 12. Yang, J.; Lu, W.; Ma, Y.; Yao, W. An automated cirrus cloud detection method for a ground-based cloud image. J. Atmos. Ocean. Technol. 2012, 29, 527–537.
 13. Long, C.N.; Sabburg, J.M.; Calbó, J.; Pagès, D. Retrieving cloud characteristics from ground-based daytime color all-sky images. J. Atmos. Ocean. Technol. 2006, 23, 633–652.
 14. Liu, S.; Zhang, L.; Zhang, Z.; Wang, C.; Xiao, B. Automatic cloud detection for all-sky images using superpixel segmentation. IEEE Geosci. Remote Sens. Lett. 2015, 12, 354–358.
 15. Kumler, A.; Xie, Y.; Zhang, Y. A New Approach for Short-Term Solar Radiation Forecasting Using the Estimation of Cloud Fraction and Cloud Albedo; Report Number NREL/TP-5D00-72290; National Renewable Energy Laboratory: Golden, CO, USA, 2018.
 16. Yang, J.; Lv, W.T.; Ma, Y.; Yao, W.; Li, Q.Y. An automatic ground-based cloud detection method based on adaptive threshold. J. Appl. Meteorol. Sci. 2009, 20, 713–721.
 17. Cazorla, A.; Olmo, F.J.; Alados-Arboledas, L. Development of a sky imager for cloud cover assessment. J. Opt. Soc. Am. A 2008, 25, 29–39.
 18. Ren, X.F.; Malik, J. Learning a classification model for segmentation. In Proceedings of the 9th IEEE International Conference on Computer Vision, Nice, France, 13–16 October 2003; pp. 10–17.
 19. Peng, B.; Zhang, L.; Zhang, D. Automatic image segmentation by dynamic region merging. IEEE Trans. Image Process. 2011, 20, 3592–3605.
 20. Le Goff, M.; Tourneret, J.; Wendt, H.; Ortner, M.; Spigai, M. Deep learning for cloud detection. In Proceedings of the ICPRS 8th International Conference of Pattern Recognition Systems, Madrid, Spain, 11–13 July 2017; pp. 1–6.
 21. Malek, E. Evaluation of effective atmospheric emissivity and parameterization of cloud at local scale. Atmos. Res. 1997, 45, 41–54.
 22. Jiménez-Pérez, P.; López, L. Modeling and forecasting hourly global solar radiation using clustering and classification techniques. Sol. Energy 2016, 135, 682–691.
 23. Li, Z.; Shen, H.; Cheng, Q.; Liu, Y.; You, S.; He, Z. Deep learning based cloud detection for medium and high resolution remote sensing images of different sensors. ISPRS J. Photogramm. Remote Sens. 2019, 150, 197–212.
 24. Onishi, R.; Sugiyama, D. Deep convolutional neural network for cloud coverage estimation from snapshot camera images. SOLA 2017, 13, 235–239.
 25. Crisosto, C. Autoregressive Neural Network for Cloud Concentration Forecast from Hemispheric Sky Images. Int. J. Photoenergy 2019, 2019, 4375874.
 26. Crisosto, C.; Hofmann, M.; Mubarak, R.; Seckmeyer, G. One-Hour Prediction of the Global Solar Irradiance from All-Sky Images Using Artificial Neural Networks. Energies 2018, 11, 2906.
 27. Moncada, A.; Richardson, W.; Vega-Avila, R. Deep learning to forecast solar irradiance using a six-month UTSA SkyImager dataset. Energies 2018, 11, 1988.
 28. Sun, Y.; Venugopal, V.; Brandt, A.R. Convolutional Neural Network for Short-term Solar Panel Output Prediction. In Proceedings of the 2018 IEEE 7th World Conference on Photovoltaic Energy Conversion (WCPEC), Waikoloa Village, HI, USA, 10–15 June 2018; pp. 2357–2361.
 29. Siddiqui, T.A.; Bharadwaj, S.; Kalyanaraman, S. A deep learning approach to solar-irradiance forecasting in sky-videos. In Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV), Waikoloa Village, HI, USA, 7–11 January 2019; pp. 2166–2174.
 30. Tohsing, K.; Schrempf, M.; Riechelmann, S.; Schilke, H.; Seckmeyer, G. Measuring high-resolution sky luminance distributions with a CCD camera. Appl. Opt. 2013, 52, 1564–1573.
 31. Kipp & Zonen. CMP11 Pyranometer. Available online: http://www.kippzonen.com/Product/13/CMP11-Pyranometer#.WXi1sK3qh-U (accessed on 23 May 2018).
 32. Yamashita, M.; Yoshimura, M.; Nakashizuka, T. Cloud Cover Estimation using Multitemporal Hemisphere Imageries. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2004, 35, 826–829.
 33. Schrempf, M. Entwicklung eines Algorithmus zur Wolkenerkennung in Digitalbildern des Himmels. Master's Thesis, Institut für Meteorologie und Klimatologie, Hanover, Germany, 2012.
 34. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. In Proceedings of the International Conference on Learning Representations, San Diego, CA, USA, 7–9 May 2015.
 35. He, K.; Zhang, X.; Ren, S.; Sun, J. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. arXiv 2015, arXiv:1502.01852.
 36. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
 37. Zou, K.H.; Warfield, S.K.; Bharatha, A.; Tempany, C.M.; Kaus, M.R.; Haker, S.J.; Wells, W.M., III; Jolesz, F.A.; Kikinis, R. Statistical validation of image segmentation quality based on a spatial overlap index. Acad. Radiol. 2004, 11, 178–189.
 38. Stegmann, M.B.; Delgado, D. A Brief Introduction to Statistical Shape Analysis. Informatics and Mathematical Modelling, 2002. Available online: http://graphics.stanford.edu/courses/cs164-09-spring/Handouts/paper_shape_spaces_imm403.pdf (accessed on 1 January 2021).
 39. Notton, G.; Voyant, C. Forecasting of Intermittent Solar Energy Resource. In Advances in Renewable Energies and Power Technologies; Elsevier: Amsterdam, The Netherlands, 2018; Volume 1, pp. 77–114.
 40. Wilks, D.S. Statistical Methods in the Atmospheric Sciences; Academic Press: Cambridge, MA, USA, 2011.
Figure 1. From (a–c): hemispheric sky imager (HSI) images showing examples of cloud shape changes within a 3-min interval. The gray area corresponds to sky pixels and the white area corresponds to clouds. All cloud identification was derived from the original HSI pictures.
Figure 2. Automatic segmentation of a picture taken on 22 June 2014 at 12:53. The first column shows the original image. The results of the haze index algorithm can be seen in the second column. The third column shows the two classes: cloud clusters in white and sky clusters in gray.
Figure 3. Cloud region of interest (ROI) changes forecasting results. Column 1 represents the target images in gray. Column 2 shows segmentation ROIs for the new model. The segmentation ROIs of the persistence model are presented in Column 3. (a) Images for 15 April 2014 at 15:54. (b) Images for 11 July 2014 at 13:09. (c) Images for 1 June 2014 at 17:42.
Figure 4. Comparison between the observed images and simulated images one-minute ahead generated as output by the CNN and their corresponding measured and simulated global horizontal irradiance (GHI) values; (a) corresponds to 24 August 2014 at 09:45 with (b) as its simulations; (c) corresponds to 8 June 2015 at 15:10 with (d) as its simulations; (e) corresponds to 9 May 2017 at 10:42 with (f) as its simulations.
Figure 5. Comparison results of the simulation performed by the new model and the persistence model against the measured dataset at one-minute intervals; (a,b) show a good performance: R² = 0.82 and R² = 0.74, respectively, for the first two minutes. For the persistence model, R² = 0.75 and R² = 0.67, respectively, for the first two minutes.
Figure 6. The mean relative deviation boxplot of all 782 simulation cases derived for different time horizons. The symmetry of the central 50% of the data decreases as the program advances in time. Narrower interquartile ranges occur for higher sample sizes, and the number of outliers (+) decreases as the program receives more information.
Table 1. Statistical indicators of the proposed convolutional neural network (CNN) model and the persistence model for four example days: the mean Sørensen–Dice similarity coefficient (SD) and the overlap (VO) for 1-, 2-, 5-, and 10-min forecasts.

| Day | Model | SD (%), 1-min | 2-min | 5-min | 10-min | VO (%), 1-min | 2-min | 5-min | 10-min |
|---|---|---|---|---|---|---|---|---|---|
| 24 March 2014 at 09:48 | CNN | 93 | 83 | 71 | 51 | 91 | 80 | 69 | 47 |
|  | Persistence | 89 | 79 | 68 | 49 | 86 | 79 | 60 | 42 |
| 1 June 2014 at 17:37 | CNN | 92 | 82 | 69 | 57 | 90 | 81 | 62 | 49 |
|  | Persistence | 88 | 78 | 64 | 52 | 84 | 75 | 57 | 43 |
| 24 May 2014 at 14:02 | CNN | 94 | 87 | 73 | 62 | 92 | 84 | 69 | 51 |
|  | Persistence | 87 | 82 | 62 | 48 | 87 | 79 | 59 | 48 |
| 9 May 2015 at 15:26 | CNN | 87 | 79 | 62 | 51 | 85 | 78 | 57 | 46 |
|  | Persistence | 83 | 71 | 58 | 49 | 80 | 70 | 55 | 40 |
Table 2. Comparison between the statistical indicators of the proposed CNN model and the persistence model: the mean Sørensen–Dice similarity coefficient (SD) and overlap (VO) of the 5000 simulated cases for 1-, 2-, 5-, and 10-min forecasts.

| Model | SD (%), 1-min | 2-min | 5-min | 10-min | VO (%), 1-min | 2-min | 5-min | 10-min |
|---|---|---|---|---|---|---|---|---|
| CNN | 94 | 83 | 60 | 49 | 92 | 80 | 58 | 43 |
| Persistence | 89 | 78 | 55 | 44 | 86 | 69 | 45 | 37 |
Table 3. Statistical indicators for the comparison between our model and the persistence model: RMSE and R² of all 100 compared cases.

| Model | RMSE (W/m²), 1-min | 2-min | 5-min | 10-min | R², 1-min | 2-min | 5-min | 10-min |
|---|---|---|---|---|---|---|---|---|
| CNN | 32 | 54 | 101 | 148 | 0.81 | 0.65 | 0.53 | 0.42 |
| Persistence | 45 | 72 | 125 | 187 | 0.76 | 0.61 | 0.48 | 0.39 |