3.1. Snow Cover Analyzed as a Continuous Variable
The two cameras at the Kenttärova site produced very similar FSC estimates for both the early and melting seasons in 2015 and 2016 (Figure 8). Consequently, the RMSEs of the two cameras were also very similar (Table 4). The FSC estimates were likewise good in both the early and melting seasons for the Sodankylä ground and wetland cameras (Figure 9 and Figure 10). For all sites, the late winter results include a large number of days with high error. In the figures, these days seem to cover a large portion of the data, but days with low error are in fact far more numerous throughout the year; because the markers in the graphs are drawn on top of each other, the density of the data points is not readily visible (Figure 11, Figure 12, Figure 13 and Figure 14). Seasonal RMSE results show that the errors for the winter days are still in a range comparable with the other seasons (Table 4). FSC estimates from image processing are in reasonably good agreement with the visual observations for all sites, with R-squared values above 0.65. The slopes of the regression lines are mostly between 0.7 and 0.9, meaning the algorithm consistently underestimated the snow cover relative to the observer (Figure 11, Figure 12, Figure 13 and Figure 14).
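The agreement metrics used above (RMSE, R-squared, and regression slope) can be computed for any pair of daily series. The sketch below is illustrative only: the paired camera/observer values are invented, not data from the study.

```python
import numpy as np

# Hypothetical paired daily values (FSC as a fraction, 0-1):
# camera-derived estimates vs. visual observations. Invented numbers.
fsc_visual = np.array([1.0, 0.9, 0.7, 0.5, 0.3, 0.1, 0.0])
fsc_camera = np.array([0.78, 0.74, 0.58, 0.39, 0.26, 0.07, 0.01])

# Root-mean-square error between the two series.
rmse = np.sqrt(np.mean((fsc_camera - fsc_visual) ** 2))

# Least-squares slope and R-squared of camera estimates vs. observations.
slope, intercept = np.polyfit(fsc_visual, fsc_camera, 1)
pred = slope * fsc_visual + intercept
ss_res = np.sum((fsc_camera - pred) ** 2)
ss_tot = np.sum((fsc_camera - fsc_camera.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

# A slope below 1 means the camera algorithm systematically
# underestimates snow cover relative to the observer.
```

With these invented values the fit is tight (high R-squared) while the slope sits below 1, the same signature of consistent underestimation reported for the sites.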
The images for which the fractional snow cover results had large errors were inspected further to understand the reasons for the failures. The factors that cause failures were divided into four groups: (1) changes in the camera view, (2) environmental components that are classified as snow, (3) environmental components that hide the snow cover, and (4) phenomena that disturb the histogram. These factors occur in different circumstances, and their effects on the results differ.
Changes in the camera view occurred on two of the four cameras. The view direction of the Kenttärova canopy camera shifted about 5–10 degrees to the right in the winter of 2015–2016. The movement not only changed the ROI, but also caused the reference plate in front of the camera to cover most of the ROI (Figure 15a). In addition, in late winter, snow accumulating on the reference plate masked the field of view almost completely (Figure 15b). Later in the same winter, the camera moved again, this time rotating 90 degrees to the right, as if it had fallen over (Figure 15c). These movements changed the ROI completely, so the images from these periods were discarded from the analyses. Changes in zoom level and focus occurred on the Sodankylä ground camera. In this case, the ROI did not change as much and still covered the same area (Figure 15d,e), so the images were not discarded from the analysis.
Environmental components that were classified as snow are objects or vegetation that simply look like snow at the pixel level, even to the human eye. An example is the lichen on the ground, visible from the Sodankylä ground camera during the summer seasons (Figure 15f). The high reflectance of lichen in the blue channel [31], next to soil and green vegetation, causes it to be detected as snow. The error in fractional snow cover caused by lichen is relatively low: the summer RMSE for Sodankylä is higher than for the other cameras, but still only 3.6% (Table 4). Another example is water on the ground. The high reflectance of water accumulated on bare soil in the Kenttärova field of view after rain, and on the wetland in the Sodankylä field of view during the melting season, produces high values in the blue channel, depending on the direction of the incoming light (Figure 15g). This effect causes the wet areas to be classified as snow. Objects with high blue channel reflectance (e.g., reference plates, snow sticks, masts) were also classified as snow (Figure 15g). Thus, such objects should not be included in ROIs, and they should be fastened so that they do not drift into the ROIs if they come loose.
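The inflation of FSC by bright non-snow pixels, and the benefit of masking fixed objects out of the ROI, can be seen directly in a per-pixel blue-channel classification. The sketch below is illustrative only: the ROI values, the object positions, and the fixed threshold (a stand-in for the histogram-derived one) are all invented.

```python
import numpy as np

# Hypothetical 4x4 ROI of blue-channel values (0-255); bright pixels
# are treated as snow. All values and object positions are invented.
blue = np.array([
    [230, 225,  40,  35],
    [228, 220,  38,  42],
    [ 50,  45, 210,  30],   # 210: a wet, reflective patch (not snow)
    [ 48, 240,  36,  33],   # 240: a reference plate edge inside the ROI
])
threshold = 128              # fixed stand-in for the automatic threshold

# 6 of 16 pixels exceed the threshold although only 4 are real snow:
# the wet patch and the plate edge inflate the estimate.
fsc = (blue > threshold).mean()

# Excluding the plate from the ROI removes one of the false positives;
# the wet patch remains, since it is part of the observed surface.
mask = np.ones_like(blue, dtype=bool)
mask[3, 1] = False
fsc_masked = (blue > threshold)[mask].mean()
```

Masking removes errors from fixed installations, but surface effects such as standing water still have to be handled by the classification itself.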
Environmental components that hide the snow cover are objects and vegetation that block the field of view at the pixel level. Litter from trees and dirt are the most common examples, and their effect is most visible when full snow cover is present (Figure 15h). Another example is long branches, either from ground vegetation or from trees, which change position under the weight of accumulating snow. Even though the ROIs were selected so that this situation does not disturb the analyses, some images still have branches in the field of view, for example when a branch breaks and falls onto another branch.
Phenomena that disturb the histogram include shadows in the field of view cast by objects, vegetation, and clouds, as well as snow properties (e.g., roughness, irregularities) (Figure 15i–l). Under full cloud cover, the illumination of the field of view is almost uniform. The same holds under clear skies when the ROI contains no shadows, either because there is no object to cast one or because the direction of the incoming light casts the shadows elsewhere. Under uniform illumination, the histogram of the ROI can have two different signatures, as explained in the methods section. When shadowing occurs, however, different parts of the ROI receive different levels of illumination, and the shadowed parts are much darker than the rest. This doubles the number of distribution components (peaks) in the histogram (Figure 16). In that case, the threshold selected automatically by the algorithm classifies the shadowed areas as snow-free and the sunlit areas as snow-covered, regardless of whether the pixels actually correspond to snow cover. The error caused by this phenomenon can reach 99%, and it is observed in almost all images with an error larger than 50%.
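The failure mechanism can be reproduced with synthetic pixel intensities. The sketch below uses Otsu's method as a generic stand-in for the paper's automatic threshold selection; the class means, shadow factor, and sample sizes are invented. Shading part of the ROI doubles the histogram peaks, and the single threshold then lands between the shaded and sunlit snow, so shaded snow is counted as snow-free.

```python
import numpy as np

def otsu_threshold(pixels, bins=256):
    """Pick the threshold that maximizes between-class variance
    (Otsu's method), used here as a generic automatic threshold."""
    hist, edges = np.histogram(pixels, bins=bins, range=(0.0, 255.0))
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    best_t, best_var = centers[0], -1.0
    for t in range(1, bins):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0.0 or w1 == 0.0:
            continue
        m0 = (p[:t] * centers[:t]).sum() / w0
        m1 = (p[t:] * centers[t:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, centers[t]
    return best_t

rng = np.random.default_rng(0)
n = 5000
snow = rng.normal(200.0, 8.0, n)   # bright snow pixels (invented values)
soil = rng.normal(60.0, 8.0, n)    # darker snow-free ground

# Uniform illumination: two histogram peaks, threshold lands between them.
uniform = np.concatenate([snow, soil])
fsc_uniform = (uniform > otsu_threshold(uniform)).mean()   # correct, ~0.5

# Half of each surface in shadow (intensities scaled down): four peaks.
shade = 0.4
shaded = np.concatenate([snow[: n // 2] * shade, snow[n // 2:],
                         soil[: n // 2] * shade, soil[n // 2:]])
# The threshold now separates sunlit snow from everything else, so the
# shadowed snow is counted as snow-free and the FSC estimate collapses.
fsc_shaded = (shaded > otsu_threshold(shaded)).mean()
```

With these parameters the true cover stays at 0.5 in both cases, but the shaded estimate drops toward 0.25, which mirrors how a shadow crossing the ROI can produce very large FSC errors from a single global threshold.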
Histogram disturbance by shadows is the most significant failure mode, as it causes the largest errors. Failures caused by environmental components occurred mostly in summer and can simply be excluded from the analyses. Changes in the camera view are also easy to spot because they generally span an interval of time. The shadow phenomenon, however, depends on the cloud cover, the environment, and the direction of sunlight, and these can change within minutes. One could inspect all the images and list the problematic ones in order to discard them, but such intervention runs counter to the idea of automated processing and would also mean losing a large amount of data. Instead, the algorithm should be developed or trained with information about the histograms under different illumination conditions, possibly by supervising the training with visual inspection and classification of sunlit/shaded images.
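One way to automate part of the screening suggested above is to flag ROIs whose histograms show more than the expected two modes, since partial shadowing doubles the peak count. The sketch below is a simple heuristic and not the paper's method; the Gaussian mode positions and the noise floor are invented, and the histograms are idealized noise-free curves.

```python
import numpy as np

def count_modes(hist, min_frac=0.05):
    """Count local maxima above a noise floor in a smooth histogram.
    More than two modes suggests a partially shadowed ROI that could
    be flagged for exclusion or separate handling."""
    floor = min_frac * hist.max()
    return sum(
        1 for i in range(1, len(hist) - 1)
        if hist[i] > hist[i - 1] and hist[i] >= hist[i + 1] and hist[i] > floor
    )

x = np.arange(256, dtype=float)
gauss = lambda mu, sigma: np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Idealized, noise-free histograms (invented mode positions):
# uniform illumination -> snow and snow-free surfaces give two modes.
uniform = gauss(200, 6) + gauss(60, 6)
# partial shadow -> each surface appears both lit and shaded: four modes.
shadowed = gauss(200, 6) + gauss(80, 6) + gauss(60, 6) + gauss(24, 6)

flag = count_modes(shadowed) > 2   # True: flagged as shadow-affected
```

Real camera histograms are noisy, so a practical version would need smoothing and a tuned prominence criterion; the flagged images could then feed the supervised sun/shade classification proposed above.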