Article

Snow Cover Detection Using Multi-Temporal Remotely Sensed Images of Fengyun-4A in Qinghai-Tibetan Plateau

1 School of Electronics and Information Engineering, Nanjing University of Information Science and Technology, Nanjing 210044, China
2 School of Internet of Things Engineering, Wuxi University, Wuxi 214105, China
3 School of Automation, Nanjing University of Information Science and Technology, Nanjing 210044, China
4 School of Atmospheric Science and Remote Sensing, Wuxi University, Wuxi 214105, China
5 Anhui Meteorological Information Center, Hefei 230031, China
6 Tiantai Meteorological Bureau, Taizhou 317200, China
* Author to whom correspondence should be addressed.
Water 2023, 15(19), 3329; https://doi.org/10.3390/w15193329
Submission received: 18 August 2023 / Revised: 14 September 2023 / Accepted: 19 September 2023 / Published: 22 September 2023

Abstract:
Differentiating between snow and clouds presents a formidable challenge in the context of mapping snow cover over the Qinghai–Tibetan Plateau (QTP). The frequent presence of cloudy conditions severely complicates the discrimination of snow cover from satellite imagery. To accurately monitor the spatiotemporal evolution of snow cover, it is imperative to address these challenges and enhance the segmentation schemes employed for snow cover assessment. In this study, we devised a pixel-wise classification algorithm based on the Support Vector Machine (SVM), called the 3-D Orientation Gradient algorithm (3-D OG), which captures the variations in the gradient direction of snow and clouds across spatiotemporal dimensions using multi-spectral, multi-temporal optical imagery from the geostationary satellite “Fengyun-4A” (FY-4A). The algorithm assumes that clouds and snow move at different speeds and in different directions, leading to discrepancies in their gradient characteristics in time and space. Accordingly, the gradients of the images in the spatiotemporal dimensions are calculated first, and the movement angle and trend are then derived from them. Finally, the feature space is composed of the multi-spectral image, the gradient image, and the movement feature maps, which serve as the input to the SVM. Our results demonstrate that the proposed algorithm identifies snow and clouds more accurately during snowfall by exploiting FY-4A’s high-temporal-resolution imagery. Weather station data collected during snowstorms in the QTP were used to evaluate the accuracy of our algorithm. The overall accuracy of snow cover segmentation using the 3-D OG algorithm is improved by at least 12% and 10% compared to the snow products of Fengyun-2 and MODIS, respectively. Overall, the proposed algorithm overcomes the axial swing errors present in geostationary satellite imagery and is successfully applied to cloud and snow segmentation over the QTP. Furthermore, our study underscores that the visible and near-infrared bands of Fengyun-4A can be used for near-real-time snow cover monitoring with high performance using the 3-D OG algorithm.

1. Introduction

Snow cover represents a pivotal land surface parameter, particularly within mountainous regions such as the Qinghai–Tibetan Plateau (QTP). Its significance arises from a multitude of factors, including its high albedo, thermal properties, capabilities for water retention, and intricate connections to global climate processes [1,2,3,4]. In the context of the QTP, the world’s loftiest and most expansive highland, this region exerts a profound influence on both regional and global climate dynamics, operating through a combination of mechanical forcing and thermodynamic mechanisms [5]. The snowpack present on the QTP exerts a substantial impact on the availability of water during the summer season for numerous major Asian rivers, encompassing the Yellow, Yangtze, Indus, Ganges, Brahmaputra, and others [6]. Moreover, given its critical role as a pastoral hub in China, snow cover significantly influences traditional livelihoods, especially in the domain of animal husbandry. Additionally, snow cover may affect the transportation system, thereby directly affecting the daily lives of residents [7]. Consequently, the acquisition of information pertaining to the spatiotemporal distribution of snow cover over the QTP is of paramount importance, serving as an essential resource for scientific investigations and effective disaster management protocols [8].
The investigation of alpine glacier variations within the QTP commenced in the past few decades [9,10,11]. Snow cover, recognized as a crucial indicator of temperature and precipitation patterns across the QTP and its environs, has garnered substantial attention from the research community [12,13]. The earlier snow cover mappings mainly came from meteorological stations. Nonetheless, owing to the spatial discontinuity and inconsistency of meteorological stations, such observations cannot adequately capture the spatial distribution of snow cover. Remotely sensed images have therefore been used for snow monitoring because of their wide coverage [14,15,16,17]. Since the launch of the Television Infrared Observation Satellite (TIROS-1) in 1960, an escalating number of satellites have been deployed for the purpose of snow cover monitoring [9,18,19,20]. Notable satellites and instruments currently employed in snow monitoring include the National Oceanic and Atmospheric Administration’s (NOAA) Advanced Very High Resolution Radiometer (AVHRR), TERRA and AQUA’s Moderate Resolution Imaging Spectroradiometer (MODIS), Landsat’s Thematic Mapper/Operational Land Imager (TM/OLI), ENVISAT’s Advanced Along-Track Scanning Radiometer (AATSR), ERS-2’s Along Track Scanning Radiometer (ATSR-2), and Fengyun-2’s Stretched Visible and Infrared Spin Scan Radiometer (VISSR-2), among others [21,22]. AVHRR data offer the advantages of a broad field of view, a relatively short revisit period, and robust objectivity. Nevertheless, its observation data, characterized by low spatial and temporal resolution, often result in mixed pixels comprising clouds and snow, rendering the discrimination between the two a challenging task [23]. The TERRA and AQUA satellites, equipped with MODIS, provide higher spatial and temporal resolution and an expanded range of spectral bands. However, despite their twice-daily observations, cloud interference remains a significant concern [9,16].
The latest generation of Landsat satellites, Landsat-8, carries the Operational Land Imager (OLI) and the Thermal Infrared Sensor (TIRS), with spatial resolutions of 30 m and 100 m, respectively. However, its revisit period is long, up to 16 days [24]. China’s meteorological satellite “Fengyun-3” belongs to the second generation of polar-orbiting meteorological satellites. Nonetheless, the extended revisit cycle associated with polar-orbiting satellites is not conducive to snow disaster monitoring in certain regions [25,26].
Furthermore, as outlined in the research conducted by Gao X et al. [9], the QTP typically experiences total cloud cover ranging from 45% to 70% over the course of a year. This poses a huge challenge for snow monitoring with optical sensors on polar-orbiting satellites. Firstly, objects under clouds are difficult to distinguish because of cloud occlusion. Secondly, clouds and snow are difficult to separate because of their similar reflectance characteristics in certain bands [27]. Parajka and Blöschl [28] reported a 95% accuracy rate in snow detection over Austria on cloud-free days. However, they also underscored that in regions characterized by persistent cloud cover, such as Austria, where 63% of days are cloudy, the monitoring of snow dynamics can be highly irregular. Similarly, Hall and Riggs [29] identified a comparable trend and emphasized that a noteworthy source of error in snow mapping stemmed from the misclassification between snow and clouds, as only pixels devoid of cloud cover were considered for snow detection. Nowadays, MODIS ice and snow products are widely used in snow monitoring [4,12,18]. Studies [30,31,32] have shown that the mainstream snow-cover products have higher omission and commission errors in the QTP than in other regions. The greatest shortcoming of polar-orbiting satellites is the difficulty of sustained daily monitoring of a given location. Nonetheless, it is essential to determine which pixels are covered by snow, even in cloudy scenes, to facilitate the mapping of alterations in spatial coverage, grain size, and the prevalence of light-absorbing particles within the snowpack on a daily, weekly, and monthly basis [33]. The paucity of daily data acquired from polar-orbiting satellites, typically limited to one or two observations per day, underscores the primary constraint in generating daily snow products, primarily due to cloud obscuration [34].
At present, the international mainstream snow products mainly use multi-day synthesis to remove clouds, or merge data from multiple satellites for correction (such as microwave sensor data) [35]. The limitation of the former method is that persistent cloud cover over consecutive days can hide snow cover during this period [29]. Although the latter method is not disturbed by clouds, it requires a lot of auxiliary data and is greatly affected by snow depth [36,37]. Moreover, over the QTP, snow conditions change rapidly, and multi-day synthetic results can hardly reflect the real snow cover area or provide real-time data. An alternative is to utilize a geostationary satellite, which can be used for long-term monitoring of a given area [18]. In the realm of daily snow-cover mapping, the most effective strategy for mitigating the challenges posed by cloud obscuration is to augment the frequency of observations within a single day [36]. Geostationary satellites, owing to their high temporal resolution, offer a promising solution to this issue, as they provide frequent observations that enable nearly real-time monitoring of specific areas [36]. The current operational geostationary satellites include the Geostationary Operational Environmental Satellites (GOES), Meteosat Second Generation (MSG), Multi-functional Transport Satellite-2 (MTSAT-2), Fengyun-2 (FY-2), FY-4, etcetera. GOES data have been used for daily snow cover imagery [38]. Researchers such as Terzago et al. [39] employed MSG data for snow mapping and validated their results using meteorological station data in South-Western Piedmont. Wang et al. [40] leveraged clear-sky multi-channel observations from FY-2/VISSR to produce daily cloud-reduced fractional snow cover (FSC) over China. Oyoshi et al.
[41] employed MTSAT data to create snow-cover maps and observed a high degree of consistency between their maps and those generated using AVHRR and MODIS data. Snow detection using geostationary satellites can thus solve the problem of identifying objects under clouds: the high temporal resolution of geostationary satellites enables the detection of objects beneath clouds by analyzing temporal variations. Therefore, FY-4A/AGRI Level 1 products with 1 km resolution are used in this research; they have been preprocessed, including quality inspection, geographical positioning, and radiometric calibration. We mainly exploit their high spatiotemporal resolution to alleviate the above-mentioned problems in snow cover mapping. Snow detection with optical satellite data mostly relies on the reflectance characteristics of snow in different wavebands. The majority of snow products are based on spectral threshold algorithms, which set a threshold for selected channels [18,29,36,37]. These products generally use the NDSI (Normalized Difference Snow Index) as the main criterion, which relies on the difference in reflectance between snow and snow-free surfaces in the visible and near-infrared bands [9]. However, the spectral characteristics of some clouds and snow covers in these bands are difficult to distinguish; therefore, infrared bands are needed to further separate clouds and snow [42]. The threshold of this method is affected by many factors, such as terrain, landform, altitude, and temperature. Given the substantial elevation disparity between the eastern and western regions of the QTP, employing a fixed threshold proves challenging when considering both areas [35].
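For context on the threshold approach, the NDSI contrasts a band in which snow is bright with one in which it is dark. A minimal sketch follows; the function name, the green/shortwave-infrared band pairing, and the 0.4 threshold are conventional illustrative choices, not the FY-4A/AGRI band set (which, as noted, lacks the bands some threshold tests require):

```python
import numpy as np

def ndsi(green, swir, eps=1e-6):
    """Normalized Difference Snow Index from green and SWIR reflectance arrays."""
    green = np.asarray(green, dtype=float)
    swir = np.asarray(swir, dtype=float)
    # eps guards against division by zero over dark pixels
    return (green - swir) / (green + swir + eps)

# Snow is typically bright in green and dark in SWIR, so NDSI is high;
# a common (illustrative) rule flags pixels with NDSI > 0.4 as snow.
snow_mask = ndsi(np.array([0.8]), np.array([0.1])) > 0.4
```

A fixed threshold like this is exactly what breaks down over terrain with large elevation and illumination differences, motivating the learned approach below.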
In recent years, machine learning techniques have garnered considerable attention and have emerged as a focal point of research. The latest developments in machine learning have compellingly showcased their remarkable capacity to construct robust remote sensing models when dealing with large datasets. Many studies have proved that the machine learning method can overcome the shortcomings of traditional remote sensing methods. Capitalizing on this inherent advantage, machine learning has found widespread application within the realm of remote sensing image processing [43,44,45]. For example, He G et al. [46] used the SVM algorithm to detect snow in mountain regions and acquired a higher accuracy than the threshold algorithm. Abbas et al. [47] fused feature extraction into K-Means to classify landmarks. Lagrange et al. [48] utilized a Gaussian Mixture Model (GMM) to perform terrain classification with high-dimensional remote sensing data and achieved commendable classification accuracy while maintaining efficiency in computation time. Ishida et al. [49] proposed an integration method that uses feature extraction and SVM to detect clouds. However, these algorithms are generally aimed at satellite data with high spatial resolution, and the characteristics of multi-temporal data are not taken into account. Although the classification accuracy is high in high spatial resolution data, it is difficult to cover a large area in a short time. Therefore, in this article, we tried to use the moderate resolution remotely sensed image coming from FY-4A as input to explore a snow cover algorithm.
How can the accuracy of long-term snow cover monitoring over the QTP be enhanced, overcoming the bottleneck of low precision under cloud obscuration while distinguishing between clouds and snow with the few spectral bands of FY-4A/AGRI 1 km data? The traditional threshold algorithms are no longer applicable to these data. First of all, the FY-4A/AGRI 1 km data contain fewer bands, which cannot cover all the bands involved in the threshold algorithms. Secondly, there is the problem of varying radiation caused by open terrain with huge elevation differences. Taking the QTP as an example, the altitude of the eastern region is much lower than that of the western region. Significant variations in the reflectance of identical features make it difficult for a fixed threshold to discriminate between them. Therefore, in this work, we try to solve the above problems by proposing a 3-D Orientation of Gradient algorithm based on machine learning, because machine learning methods have better adaptivity than threshold algorithms. Generally speaking, the main idea of classification using machine learning methods is to first extract features of different objects, which allows for greater inter-class distances, before passing them to the classifier [49]. We designed a novel extractor to construct a high-dimensional feature space from various features, thus enhancing the distinguishability of different objects. To compensate for the lack of information caused by the small number of wavebands, we exploit the dynamic characteristics of clouds versus snow. On a time series, clouds are dynamic objects and snow is stationary, which should make them distinguishable. However, it is not that simple in practice. By observing time series of Fengyun-4A/AGRI 1 km data, we found that the snow cover in the same area is not fixed: affected by the data acquisition process of the satellite sensor, it usually vacillates. This brings great challenges to the attempt to distinguish clouds and snow by dynamic characteristics. Therefore, in the 3-D OG algorithm, we specifically added this swing-related feature extraction to distinguish them. Finally, a linear Support Vector Machine was utilized as the baseline classifier throughout the study.
In this paper, we commence with a concise overview of prior research on snow cover detection in Section 1. In Section 2, we provide a detailed description of the study area and data sets. Section 3 offers a detailed description of the 3-D OG algorithm. The experimental validation of our method is expounded upon in Section 4. Finally, we encapsulate our findings and draw conclusions in Section 5 and Section 6, respectively.

2. Study Area and Data

2.1. Study Area

The QTP is situated in the southwest region of China, nestled between the Pamirs and Hengduan mountains. This vast plateau covers an expansive area of approximately 296.8 × 10⁴ km² and boasts an average elevation of 4500 m [50]. The QTP is renowned for exhibiting the most significant variation in snow cover across China. Its distinctive geographical setting wields profound influence over both China’s and the world’s water and energy cycles. The plateau experiences a semi-arid climate, characterized by an annual average precipitation of less than 450 mm [51]. Anomalies in snow cover within the QTP serve as highly indicative markers of precipitation and drought effects, significantly impacting their prediction. These anomalies play a pivotal role in modulating the spatial and temporal distribution of precipitation in China through the monsoon, thereby influencing the broader water resources system. As depicted in Figure 1, the Digital Elevation Model (DEM) of the study area reveals a notable discrepancy in altitude between the eastern and western regions of the Qinghai–Tibet Plateau. This substantial elevation difference presents formidable challenges in the realm of snow cover mapping.

2.2. Data

The FY-4A meteorological satellite was launched on 11 December 2016, and was officially delivered on 25 September 2017. The FY-4A carries the Advanced Geosynchronous Radiation Imager (AGRI), Geosynchronous Interferometric Infrared Sounder (GIIRS), Lightning Mapping Imager (LMI), and Space Environment Monitoring Instrument Package (SEP) [52]. In this research, the FY-4A/AGRI L1 products are used for training and testing. These data are Level 1 products and have been preprocessed, including quality inspection, geographical positioning, and radiometric calibration.
We trained and tested the 3-D OG algorithm on FY-4A/AGRI 1 km resolution data sets over the QTP. The dataset comprises three channels with a spectral bandwidth ranging from 0.45 to 0.90 µm, encompassing both visible and near-infrared bands, as detailed in Table 1. Notably, FY-4A/AGRI data exhibit high temporal resolution, with the ability to generate regional observation images within as little as one minute. Within the FY-4A/AGRI dataset employed in this article, the shortest time interval is 15 min, spanning data collection from 2:30 UTC to 10:00 UTC daily. We partitioned the dataset into a training set and a testing set. The training set comprises 20,000 pixels observed between 1 November and 10 November 2017. The images from 6 February 2019 to 11 February 2019 are all contained in the testing set. The labels used for training come from Landsat-8/OLI data because of its high spatial resolution. We manually labeled the Landsat-8/OLI images and subsequently extracted the corresponding FY-4A/AGRI data based on the geographic coordinates of the Landsat-8 images. It is essential to highlight that a majority of the products compared in this study are cumulative products computed on a daily basis. While MOD10A1 and MYD10A1 are not strictly daily products, their acquisition times do not align with those of Landsat-8. Furthermore, differences in satellite viewing angles further preclude their suitability for verification purposes. In contrast, FY-4A, owing to its multi-temporal capabilities, allows for the identification of corresponding time-specific data for training based on the transit time of Landsat-8. Indeed, to validate the outcomes of our algorithm, we are constrained to rely solely on meteorological station data obtained from the China National Meteorological Information Center (NMIC).
Given the specific constraints and differences in data acquisition among various remote sensing products, meteorological station data serve as the most appropriate and reliable source for our validation. The distribution of these stations is shown in Figure 1. We used weather station data from 6 February 2019 to 11 February 2019 for verification. There are 242 stations in total, comprising both manual and automatic weather stations. We chose this period for verification because the QTP experienced a wide range of snow disasters during it. The data, provided by the China Meteorological Administration (CMA), contain information about snow and precipitation, collected from 8:00 BJT to 8:00 BJT the next day.

3. Materials and Methods

3.1. Overall Framework of the 3-D OG Algorithm

The motivation of the 3-D OG algorithm is to distinguish clouds and snow in satellite images based on the discrepancy in their movement. As can be seen in Figure 2, a cloud may move in a certain direction over time while a snow-covered area remains stationary. However, when we enlarge the images and play the continuous frames as an animation, we find that the snow is not fixed in these series of satellite images: the snow in the same area sways from side to side, and its boundaries flicker across the multi-temporal images. This change complicates the distinction between clouds and snow. The objective of the 3-D OG algorithm is to construct a multi-dimensional feature space from their radiance in the VIS and NIR bands, shape, gradient, and movement parameters. This feature space is used to increase the distance between clouds and snow.
Figure 3 presents an overview of our algorithm framework. It consists of two main components: the 2-D edge detector (blue dotted box) and the 3-D orientation of the gradient detector (red dotted box). Firstly, the texture features of different objects are extracted by edge detectors, including along the time dimension. In this step, we used the edge extractor to calculate the image gradient. Then, after the gradient of the image was obtained, the motion angle was calculated. Next, we used a parameter to quantize the motion angle into a specified number of directions. These direction feature maps are used to express the motion characteristics of clouds and snow. Finally, the original image (standardized data) and the features expressing texture and motion information are used to construct the feature space; they are concatenated before being input to the SVM classifier. The gradient transformation over a multi-temporal image differs because of the difference between the motion characteristics of clouds and snow. First, the combined features of spectral bands and texture content provide more surface information. Second, we can exploit the variation characteristics of dynamic objects, taken from the orientation of the gradient on the merged layer over the time series, to separate them.

3.2. Feature Detector in Spatiotemporal Dimensions

The goal of the 2-D detector is to distinguish snow and clouds from other objects through physical characteristics and texture information. It is very difficult to directly separate snow from other objects (excluding clouds) using physical characteristics alone in FY-4A/AGRI 1 km three-band imagery, because the FY-4A/AGRI VIS data contain only two bands, so other types of objects are easily confused. This results in the misclassification of pixels that have similar reflectivity but belong to different classes. However, clouds and snow have rich texture information that distinguishes them from other objects.
Figure 4 (blue dotted box) illustrates the flow of the 2-D edge extractor. Let $x_t \in \mathbb{R}^{H \times W \times C}$ be the intermediate frame in the set of images $X = \{x_j \mid j = t-n, \ldots, t, \ldots, t+n\}$, where H, W, and C represent the height, width, and depth of the image, respectively. In this article, the constant C is the number of bands; that is, $x_t$ contains C layers, denoted $x_t^i \in \mathbb{R}^{H \times W}$, $i = 1, 2, 3$. Since X is a set of series images, $X \in \mathbb{R}^{H \times W \times C \times T}$, where T is the number of frames in the time dimension. In this study, T is set to 7, meaning that the three frames before and after the middle image to be classified are used in the 3-D OG algorithm. We began with an original image that contains near-infrared (NIR) and visible (VIS) waveband data. Firstly, the problem of reflectivity differences across large areas should be solved before the next step. To achieve this, standardization can be used to eliminate the effects of excessive differences. The standardization function is formulated as
x' = (x - \mu) / \sigma,
where μ and σ are the mean and standard deviation of every layer, respectively. The reflectivity of a large area is expected to smooth after this step.
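The per-band standardization above can be sketched as follows; this is a minimal NumPy illustration, and the function name and the small epsilon guard against zero variance are our additions, not the authors' code:

```python
import numpy as np

def standardize_bands(image):
    """Standardize each band of an (H, W, C) image to zero mean, unit variance."""
    mu = image.mean(axis=(0, 1), keepdims=True)    # per-band mean over all pixels
    sigma = image.std(axis=(0, 1), keepdims=True)  # per-band standard deviation
    return (image - mu) / (sigma + 1e-12)          # epsilon avoids division by zero
```

Applying this to each frame puts the bright western and darker eastern parts of the scene on a comparable scale before gradient extraction.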
For the next step, the texture information of one image is, in general, treated as a kind of pattern, which is agnostic to spatial information. In our method, one such texture information could be computed by
\{ f_V = G_V * x_t \mid V \in \{x, y\} \},
where $f_V$ is the result of convolving $x_t$ with $G_x$ or $G_y$; the operator $*$ denotes convolution. Note that we use $V \in \{x, y\}$ for brevity, i.e., V can be replaced by either x or y. $G_V$ is the Sobel filter kernel, which can be expressed as
G_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix} \quad \text{and} \quad G_y = \begin{bmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{bmatrix}
The gradient approximations for each pixel of the image can be formulated as
f_{xy} = \sqrt{f_x^2 + f_y^2},
where f x and f y represent edge detection results in the horizontal and vertical directions, respectively.
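Assuming the edge extractor is the Sobel operator given above, the spatial gradients and their magnitude can be sketched in NumPy/SciPy; the function name and boundary-handling choice are illustrative:

```python
import numpy as np
from scipy.ndimage import convolve

# Sobel kernels, matching G_x and G_y in the text
GX = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)
GY = np.array([[1, 2, 1],
               [0, 0, 0],
               [-1, -2, -1]], dtype=float)

def sobel_gradients(band):
    """Return horizontal gradient f_x, vertical gradient f_y, and magnitude f_xy."""
    fx = convolve(band, GX, mode="nearest")  # edge response in x
    fy = convolve(band, GY, mode="nearest")  # edge response in y
    fxy = np.hypot(fx, fy)                   # sqrt(fx**2 + fy**2)
    return fx, fy, fxy
```

On a vertical step edge, only f_x responds and the magnitude equals |f_x|, which matches the formula above.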
The gradient of the time series is denoted $f_z$, the edge detection result along the time axis. Because of the swing of geostationary satellite imagery, the gradient in the time dimension also responds to static objects. Therefore, although it contains a wealth of dynamic information, for the robustness of the algorithm we replace the raw edges with the orientation of the gradient. This can be formulated as
f_{xz} = \arctan(f_x, f_z)
and
f_{yz} = \arctan(f_y, f_z),
which represent the orientations of the gradient in the x–t and y–t planes, respectively. Finally, in order to quantize the motion angle to specific orientations, we introduce the parameter bin, a variable that can be adjusted during training. The quantized orientations are denoted as f_xz^bin and f_yz^bin. In this article, the bin size is set to 64.
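The orientation computation and bin quantization might be sketched as follows. This is a hypothetical reading of the text: the two-argument arctangent and the uniform binning over the full angular range are our assumptions, with bins = 64 as stated:

```python
import numpy as np

def orientation_features(fx, fy, fz, bins=64):
    """Quantize spatiotemporal gradient orientations into `bins` directions."""
    # Two-argument arctangent gives the angle of the (spatial, temporal)
    # gradient pair in (-pi, pi]
    theta_xz = np.arctan2(fx, fz)
    theta_yz = np.arctan2(fy, fz)
    width = 2.0 * np.pi / bins  # angular width of one bin
    # Shift into [0, 2*pi) and assign each angle to an integer bin index
    fxz_bin = np.floor((theta_xz + np.pi) / width).astype(int) % bins
    fyz_bin = np.floor((theta_yz + np.pi) / width).astype(int) % bins
    return fxz_bin, fyz_bin
```

The bin maps discard raw gradient amplitude, which is what makes the feature robust to the static-object response caused by satellite swing.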

3.3. Incorporating Feature Maps for Support Vector Machine

The idea of these feature extractors is to make the inter-class distance larger and the intra-class distance smaller. In other words, the pixels in the same class have a similar feature, and the difference in pixels of different classes is greater. This will result in a large gap between the different classes. To achieve this, we mapped the original low-dimensional features to the high-dimensional feature space by feature extraction. This feature contains raw reflectivity information, spatial gradient information, and time information. We combined the new features into one image, which is denoted by
f = \{ x', f_{xy}, f_{xz}^{bin}, f_{yz}^{bin} \}.
x′ denotes the original input after the standardization process. These feature maps are concatenated together as the input of the SVM.
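Concatenating the standardized bands with the gradient and orientation maps into per-pixel feature vectors could look like the following sketch; the shapes and the function name are illustrative assumptions:

```python
import numpy as np

def build_feature_space(x_std, fxy, fxz_bin, fyz_bin):
    """Stack standardized bands with gradient and orientation maps, one row per pixel."""
    # Give the single-channel maps a trailing axis so they stack with (H, W, C) bands
    maps = [x_std] + [m[..., np.newaxis] for m in (fxy, fxz_bin, fyz_bin)]
    stacked = np.concatenate(maps, axis=-1)        # (H, W, C + 3)
    return stacked.reshape(-1, stacked.shape[-1])  # (H*W, C + 3) rows for the SVM
```

Each row is one pixel's feature vector f, combining reflectivity, spatial texture, and motion information.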
For simplicity of training and for speed, we used a linear SVM as the baseline classifier throughout the study. It can be expressed as the following optimization problem.
\min_{w, b, \xi} \; \frac{1}{2}\|w\|^2 + c \sum_{k=1}^{N} \xi_k
\text{s.t.} \quad y_k (w^\top f_k + b) \ge 1 - \xi_k, \quad \xi_k \ge 0, \quad k = 1, 2, \ldots, N.
In Equation (10), c > 0 is the penalty parameter and ξ_k are the slack variables. In this study, the value of c is set to 0.3. The learned model is then used for the classification task; detailed information on the linear SVM can be found in [53].
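A linear SVM with the stated penalty c = 0.3 can be trained, for instance, with scikit-learn's LinearSVC. The toy features and labels below are synthetic stand-ins for the real per-pixel feature vectors, not the authors' training data:

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 6))                        # toy per-pixel feature vectors
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)  # toy snow / no-snow labels

# C here plays the role of the penalty parameter c in Equation (10)
clf = LinearSVC(C=0.3)
clf.fit(X_train, y_train)
pred = clf.predict(X_train)
```

At inference time, the predicted labels are reshaped back to the (H, W) grid to form a per-pixel snow/cloud map.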
Finally, the outputs of the SVM are collected and synthesized. Owing to the influence of illumination, the data in the VIS band are relatively stable between 02:00 UTC and 11:00 UTC, so we chose the data in this period to calculate and synthesize the final product. During this period there were 69 frames; the exact number varies because the acquisition interval of FY-4A/AGRI is not fixed.

3.4. Performance Metrics

The performance of the 3-D OG method applied to FY-4A/AGRI 1 km data was evaluated by comparing its results with snow measurements obtained from meteorological stations in the QTP. A detailed introduction to the ground stations can be found in Section 2.2. Table 2 shows the confusion matrix that contrasts the daily snow-cover product data with in-situ observations. SS represents the total number of correctly identified snow-covered pixels, and NN represents the total number of correctly identified no-snow pixels. NS denotes the total number of pixels erroneously identified as snow-covered, and SN the total number of snow-covered pixels mistakenly identified as no-snow. Our evaluation criteria encompass several key indicators, including:
Overall Accuracy: This metric gauges the proportion of accurately classified pixels, both snow and no-snow, in relation to the total pixel count. The equation is defined as:
Overall accuracy = (SS + NN) / (SS + NS + SN + NN)
Snow Detection Rate: This indicator quantifies the fraction of correctly detected snow-covered pixels out of the total number of actual snow-covered pixels. The equation is formulated as:
Snow detection rate = SS / (SS + SN)
Omission Error: This metric assesses the proportion of actual snow-covered pixels that were incorrectly categorized as no-snow. The equation is defined as:
Omission error = SN / (SS + SN + NS + NN)
Commission Error: It quantifies the fraction of pixels classified as snow-covered that were, in reality, no-snow. The commission error is formulated as:
Commission error = NS / (SS + SN + NS + NN)
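The four metrics can be computed directly from the confusion-matrix counts, following the denominators given in the equations above (the function name is ours):

```python
def snow_metrics(SS, NN, NS, SN):
    """Accuracy metrics from the confusion-matrix counts, as defined in the text."""
    total = SS + NN + NS + SN
    return {
        "overall_accuracy": (SS + NN) / total,
        "snow_detection_rate": SS / (SS + SN),
        # Note: the text normalizes omission/commission errors by the total
        # pixel count, not by the class totals.
        "omission_error": SN / total,
        "commission_error": NS / total,
    }
```

For example, with SS = 50, NN = 30, NS = 10, SN = 10, the overall accuracy is 0.80 and both error rates are 0.10.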
These evaluation metrics collectively provide a comprehensive assessment of the 3-D OG method’s performance in snow cover detection and its alignment with ground station measurements.

4. Results

4.1. Snow Cover over the QTP

Figure 5a provides an illustrative example of the FY-4A/AGRI RGB composite. Figure 5b showcases the outcome of the 3-D OG algorithm, visually depicted in highlighted blue. This scene covers the QTP in China on 11 February 2019 at 06:30 UTC. The snow distribution shows a large snow cover area surrounding the QTP. Snow is mainly distributed over the Kunlun Mountains (labeled 1), Himalayas (labeled 2), Hengduan Mountains (labeled 3), Tanggula Mountains (labeled 4), and the east of the QTP (labeled 5).
The visual results indeed appear satisfactory. Figure 5b demonstrates a compelling consistency between the distribution of snow cover and the topography. The mountains (labels 1–5) exhibit perpetual snow cover due to their high altitude. Consequently, the distribution of snow cover aligns seamlessly with the topographical features. This alignment underscores the efficacy of preserving spatial-dimension feature information in the designed feature space of the 3-D OG algorithm. Furthermore, the distinction between snow and clouds is notably clear. Snow, being dispersed on the ground, is inherently tied to the characteristics of the underlying terrain, which dictates the extent of snow cover. In contrast, clouds, as seen in the region between label 1 and label 5 in Figure 5a, exhibit more intricate and variable shapes compared to the relatively stable characteristics of snow. Figure 5b effectively excludes clouds from the scene. Moreover, the results in Figure 5b showcase the algorithm’s adaptability across the eastern and western sections of the QTP, which vary significantly in altitude. Notably, the left side of the RGB composite exhibits higher brightness than the right side. Despite this, the 3-D OG algorithm effectively mitigates such interference, successfully identifying snow cover in both the eastern and western parts of the plateau. This adaptability underscores the algorithm’s robust performance.

4.2. Comparison with Different Snow Products

To evaluate the 3-D OG algorithm, we compared it with other snow products: the Interactive Multisensor Snow and Ice Mapping System (IMS) product (Version 3, 1 km resolution), the MODIS snow products MOD10A1 and MYD10A1 (V006), GlobSnow SE (Version 2.0, 1 km resolution), and the FY-2/VISSR (FY-2F, FY-2G, FY-2H) snow cover (SNW) products. The performance metrics are summarized in Table 3.
From the results, the 3-D OG algorithm achieves the best overall performance, attaining the optimal value in three indicators: overall accuracy (84.38%), snow detection rate (66.67%), and omission error (4.16%). This not only demonstrates that the algorithm is feasible but also shows that the multi-temporal capability of geostationary satellites is very important for snow cover mapping, especially in cloudy areas. Compared to MOD10A1 and MYD10A1, the overall accuracy of the 3-D OG algorithm is improved by 11.33% and 10.39%, respectively; the snow detection rate is improved by 798% and 70.38%; and the omission error is reduced by 80.38% and 80.85%. The algorithm therefore not only detects more snow cover but also does so more accurately, benefiting both from the multi-temporal FY-4A/AGRI data and from the algorithm design. The improvement over the geostationary FY-2 series snow products is also evident. Compared with FY-2F, FY-2G, and FY-2H, the overall accuracy is improved by 12.72%, 15.87%, and 12.51%, respectively; the snow detection rate by 15.05%, 5.12%, and 16.19%; the omission error is reduced by 56.80%, 50%, and 57.38%; and the commission error by 26.02%, 39.20%, and 37.55%. These results show that the 3-D OG algorithm outperforms the FY-2 series products, which are likewise synthesized from multi-temporal satellite data. In summary, the 3-D OG algorithm is feasible, exploits the advantages of FY-4A/AGRI multi-temporal data, and performs well in practical application. The visualized results are presented in Figure 6, where (a) and (b) are RGB composites from FY-4A imagery captured at 04:30 UTC and 08:30 UTC on 6 February 2019, respectively.
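The percentage improvements quoted above are relative changes computed from the absolute values in Table 3, e.g., (84.38 − 75.79)/75.79 ≈ 11.33%. A small helper reproduces them:

```python
def relative_change(ours, baseline):
    """Signed percentage change of `ours` relative to `baseline`."""
    return (ours - baseline) / baseline * 100.0

# Overall accuracy of 3-D OG (84.38%) vs. MOD10A1 (75.79%) and MYD10A1 (76.44%):
print(round(relative_change(84.38, 75.79), 2))   # 11.33
print(round(relative_change(84.38, 76.44), 2))   # 10.39
# Omission error of 3-D OG (4.16%) vs. MOD10A1 (21.20%), i.e. reduced by ~80.38%:
print(round(-relative_change(4.16, 21.20), 2))   # 80.38
```

The same convention explains the large 798% figure for the snow detection rate, since MOD10A1's baseline rate (7.42%) is very small.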
Upon visual inspection of the results, the GlobSnow dataset exhibits the most pronounced problem, namely a notably low snow detection rate. The MOD10A1 and MYD10A1 datasets display similar shortcomings, largely attributable to their reliance on polar-orbiting satellites with a single daily pass. Furthermore, the algorithm used in this study occasionally misidentifies mist as cloud cover, which reduces the snow detection rate. Notably, in Figure 6, the FY-2F, FY-2G, and FY-2H products miss snow (omissions) in regions A and B. In region B, only IMS and the 3-D OG algorithm detect the snow in the lower portion of the region. Collectively, the visualized results demonstrate the superior snow detection performance of the algorithm employed in this research.

4.3. Comparison with MODIS Ice Snow Product

Figure 7 shows a comparison over the northern part of the QTP on 11 February 2019, at 04:45 UTC between our algorithm result and MOD10A1 ice and snow product: (1) the daytime FY-4A RGB composite (Figure 7a), (2) the 3-D OG snow cover binary map (Figure 7b) where snow is represented in highlighted blue, and (3) the MODIS ice snow product MOD10A1 snow cover binary map where snow is represented in highlighted blue (Figure 7c). This region was covered by a mixture of snow and clouds, which can show the difference between 3-D OG and MOD10A1. Snow cover in Figure 7b is broader than that in Figure 7c.
To verify that the 3-D OG results are correct, we labeled four regions where the 3-D OG snow cover and MOD10A1 differ with highlighted red Arabic numerals in Figure 8a. In the still image of Figure 8a alone, clouds and snow cannot be clearly distinguished. We therefore present a sequence of remote sensing images acquired on 11 February 2019 from 04:19 to 04:53 UTC in Figure 8. In Figure 8a, the regions indicated by the arrows provide references for comparison across the time series. For easier comparison, Figure 8 includes finer latitude and longitude gridlines, so that changes can be observed on either side of the middle line of each image. Color changes appear only at the edges of the reference regions, while the main area remains unchanged, so the main part should be snow. This shows that the omission rate can be greatly reduced by applying the 3-D OG algorithm to the time-series AGRI images of FY-4A.
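The manual frame-by-frame comparison in Figure 8 corresponds to a simple per-pixel criterion: over a short window, reflectance above snow stays nearly constant, while moving cloud changes. A minimal numpy sketch of this idea (the threshold value is illustrative, not a parameter of the paper):

```python
import numpy as np

def temporal_stability_mask(stack, threshold=0.05):
    """stack: (T, H, W) array of reflectances from consecutive AGRI slots.
    Returns a boolean (H, W) mask of temporally stable pixels, i.e.
    candidate snow/ground; pixels with high temporal variability are
    more likely moving cloud."""
    return stack.std(axis=0) < threshold

# Synthetic example: one stable 'snow' pixel, one drifting 'cloud' pixel.
t = np.linspace(0, 1, 5)
stack = np.zeros((5, 1, 2))
stack[:, 0, 0] = 0.8                # bright and constant -> snow-like
stack[:, 0, 1] = 0.5 + 0.3 * t      # drifting brightness -> cloud-like
mask = temporal_stability_mask(stack)
print(mask)  # [[ True False]]
```

In practice the 3-D OG algorithm uses gradient-based movement features rather than a fixed variance threshold; this sketch only captures the underlying intuition.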
In addition, to illustrate the capability of the 3-D OG algorithm in identifying snow below clouds, we selected a Landsat-8 image for verification. The scene shown in Figure 9 was obtained from FY-4A AGRI over the north of the QTP on 2 February 2019, at 04:45 UTC. The FY-4A RGB composite is shown in the left panel, the Landsat-8 RGB composite in the middle panel, and the 3-D OG algorithm result over the same region in the right panel. The result shows that the 3-D OG algorithm is capable of ‘seeing’ snow below thin clouds. However, the snow in the upper left corner of the image (yellow box) is missed, and the clouds in the red box are misclassified as snowpack. As Figure 9a shows, the snow in the upper left corner is indistinct, which may be related to snow depth and sensor resolution. Figure 9b is the Landsat-8 RGB composite, whose spatial resolution is 30 m; the snow is much clearer there than in the FY-4A/AGRI RGB composite. Moreover, clouds over snow can have similar motion characteristics, which are difficult to discriminate at the coarser spatial resolution of the FY-4A/AGRI data.

4.4. Temporal Evolution

A special feature of the 3-D OG algorithm is its ability to show the temporal evolution of snow cover. An example is presented in Figure 10; the daily precipitation and daily average temperature products of China from 6–11 February 2019 are available at http://data.cma.cn/data/online/t/1 (accessed on 20 July 2023). The daily precipitation amount is accumulated from the measurements between 00:00 and 24:00 UTC each day. Figure 10a–f show the 3-D OG snow cover binary maps of Shigatse from 6–11 February 2019, each accumulating all the results of one day. The daily precipitation products show a large area of precipitation over the QTP on 7 and 8 February. Figure 10a–d show that the snow cover expanded from the southwestern part of Shigatse to the east from 6–9 February. On 10 February there was no precipitation, and in Figure 10e the snow cover is smaller than on 9 February, although the snow cover in the southwest, south, and east of Shigatse had not decreased significantly. Accordingly, the result for 11 February (Figure 10f) shows a reduction in the northern part of Shigatse compared with 10 February. Overall, the extent of snow cover varies with precipitation. The temporal agreement is consistent: since we used daily precipitation data as the reference, precipitation is not resolved within the day or night, yet the corresponding snow identification results respond well. We chose the daily average temperature as an additional reference because studies have shown that precipitation falls mainly as snow below minus four degrees Celsius [54].
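The daily maps in Figure 10 accumulate all single-slot results of a day. Assuming the per-slot outputs are binary snow masks (which the paper's binary maps suggest), such a daily composite is a per-pixel logical OR:

```python
import numpy as np

def daily_snow_composite(slot_masks):
    """slot_masks: iterable of (H, W) boolean snow masks from one day.
    A pixel counts as snow-covered if any slot of the day flags it,
    which lets cloud-free moments fill gaps left by cloudy ones."""
    return np.logical_or.reduce(np.stack(list(slot_masks)))

# Two toy slots: snow appears at different pixels in each.
a = np.array([[True, False], [False, False]])
b = np.array([[False, False], [True, False]])
print(daily_snow_composite([a, b]))
# [[ True False]
#  [ True False]]
```

The OR rule is our assumption of how "accumulated all the results of a day" is realized; a voting scheme would be an equally plausible variant.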

5. Discussion

Clouds and snow are difficult to discriminate using optical satellite sensors, especially in a single static satellite image. One way to address this problem is to use multi-temporal satellite data. We therefore proposed the 3-D Orientation Gradient algorithm to detect snow cover using FY-4A/AGRI 1 km data. These data contain only three bands (VIS and NIR), but their temporal resolution is high. To verify the effectiveness of the multi-temporal satellite images and the algorithm, we visualized the 3-D OG results and validated them against meteorological stations. Furthermore, to demonstrate the advantages of the algorithm, we compared our results with mainstream snow products. Finally, we monitored the temporal evolution of a snowfall event in Shigatse to illustrate its effectiveness in practice.
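The core of the 3-D OG feature construction is the gradient of the image stack in both spatial and temporal dimensions, from which a movement direction and trend are derived. The sketch below is our simplified reading of that step, using central differences; it is not the authors' exact implementation:

```python
import numpy as np

def orientation_gradient_features(stack):
    """stack: (T, H, W) single-band image time series.
    Returns per-pixel spatial gradient orientation and a 'movement trend'
    angle measuring how strongly intensity changes along the time axis
    relative to space -- moving clouds score high, static snow low."""
    gt, gy, gx = np.gradient(stack.astype(float))  # d/dt, d/drow, d/dcol
    orientation = np.arctan2(gy, gx)               # gradient direction in the image plane
    spatial_mag = np.hypot(gx, gy)
    trend = np.arctan2(gt, spatial_mag + 1e-12)    # inclination toward the time axis
    return orientation, trend

stack = np.random.rand(5, 8, 8)
o, tr = orientation_gradient_features(stack)
assert o.shape == tr.shape == (5, 8, 8)
```

In the paper these maps, together with the multi-spectral and gradient images, form the feature space fed to the SVM.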
Firstly, the results of 3-D OG on 11 February 2019, at 06:30 UTC, show that snow can be identified in both the eastern and western regions. This indicates that the standardization step is useful and can be applied to imagery of areas with large elevation differences. Secondly, the comparison with MOD10A1 and MYD10A1 shows that snow under clouds can be identified. Figure 8 shows clouds over snow at the locations labeled with red arrows. This means that multi-temporal imagery is useful for snow detection under cloudy conditions, because a moving cloud progressively exposes previously blocked areas. Thirdly, validation against meteorological station data shows that our algorithm achieves better overall accuracy. Compared with MODIS snow products, the accuracy of the 3-D OG algorithm is improved by at least 10%, and the omission error is reduced by at least 80%. This means our approach retrieves more snow under cloud by using multi-temporal FY-4A/AGRI information; it also indicates that the snow cover of MODIS products is smaller than the real snow cover area, while our algorithm is more inclined to identify snow. This difference is likely because the period studied was persistently cloudy. Compared with FY-2 series snow products, our method achieves better overall accuracy, snow detection rate, omission error, and commission error, which may be because our algorithm links multi-temporal data and FY-4A/AGRI provides better data than FY-2. IMS snow and ice products are mainly derived from a fusion of microwave sensor data, so they are not affected by cloud interference [31]. Liu X et al. [31] pointed out that IMS products overestimate the snow cover area owing to their poor identification of fractional snow cover and the uneven distribution of meteorological stations in the QTP. GlobSnow SE exhibits poor performance in snow detection rate.
This may be limited by the satellite data: GlobSnow SE is mainly derived from ATSR-2 and AATSR [37], which, as optical sensors, cannot see through clouds. As for some outliers, beyond the above analysis, we speculate that they are related to the geographical topography and weather conditions of the QTP and the uneven distribution of meteorological stations [9,13,23,37]. Finally, we monitored the temporal evolution of a snowfall event in the Shigatse region. The extent of snow cover in our results matches the precipitation process, showing that our study can achieve near-real-time observation of snow cover. Our results can therefore be used to monitor current snow cover in local areas, providing necessary and accurate snow cover information for disaster warnings. The multi-temporal characteristics compensate for the limited number of spectral bands, and the validation results show that this scheme works.
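Compensating for the few spectral bands amounts to stacking the reflectance channels with the gradient and movement feature maps into per-pixel feature vectors for the SVM. A hedged sketch of that assembly step (layer counts and names are ours; any SVM implementation could consume the result):

```python
import numpy as np

def build_feature_matrix(bands, gradient_maps, movement_maps):
    """bands, gradient_maps, movement_maps: lists of (H, W) arrays,
    e.g. the 0.47/0.65/0.83 um reflectances plus gradient and movement
    feature maps. Returns an (H*W, F) matrix, one row per pixel, ready
    for a pixel-wise SVM classifier."""
    layers = list(bands) + list(gradient_maps) + list(movement_maps)
    return np.stack([layer.ravel() for layer in layers], axis=1)

# Toy 4x4 scene: 3 spectral bands, 2 gradient maps, 2 movement maps.
bands = [np.random.rand(4, 4) for _ in range(3)]
grads = [np.random.rand(4, 4) for _ in range(2)]
moves = [np.random.rand(4, 4) for _ in range(2)]
X = build_feature_matrix(bands, grads, moves)
print(X.shape)  # (16, 7)
```

Stacking per-pixel rows is what makes the classification "pixel-wise": each row is an independent sample, so the SVM sees no neighborhood context, which is consistent with the limitation noted below that pixel-to-pixel relationships are not considered.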
There are still areas in which our research can be improved. For the FY-4A AGRI data, the main limitations are (a) low spatial resolution, (b) few spectral bands, and (c) image instability; the FY-4A/AGRI data preprocessing therefore needs improvement, which also indicates that FY-4A data still have great potential for snow cover monitoring. Our algorithm performs well, but the relationship between each pixel and its surrounding pixels is not considered. In addition, our algorithm exploits the temporal jitter of FY-4A/AGRI data, so it may need modification for satellite data with good stability. We have also not considered some long-term influencing factors (e.g., sun elevation and seasonal factors).

6. Conclusions

A snow extraction algorithm, referred to as the 3-D OG algorithm, has been developed specifically for the 1 km data obtained from FY-4A/AGRI. It effectively leverages the advantages of FY-4A’s multi-temporal remotely sensed observations and the unique geographical conditions of the QTP. Data from February over the QTP were used to validate the algorithm and to monitor a snowstorm in the Shigatse region. The 3-D OG algorithm exhibited strong performance in detecting snow even in the presence of thin cloud cover, thereby enhancing the precision of snow cover extent determination; this assertion is supported by the validation results, underscoring the algorithm’s reliability. Both the FY-4A/AGRI 1 km data and the 3-D OG algorithm show promise for snow cover analysis under thin cloud cover. The proposed algorithm is robust against a multitude of interfering factors, including the vast geographical range and complex terrain of the QTP, noise from satellite sensors, and data collection disruptions. It achieves this by harnessing the 0.47 μm, 0.65 μm, and 0.83 μm channels for snow detection while leveraging multi-temporal data for distinguishing between clouds and snow. Through the real-time monitoring of a snowstorm in the Shigatse region of the QTP, it is evident that both the algorithm and the data can be effectively applied to snowstorm monitoring, enabling the timely tracking of snowfall processes and coverage within the QTP. Furthermore, this auxiliary information has the potential to contribute to long-term studies on snow distribution and changes in snow cover across the QTP. Importantly, the algorithm may also be applicable in conjunction with other instruments that provide multi-temporal data.

Author Contributions

Conceptualization, G.M., L.Z. and Y.Z.; Data curation, Y.Z., Y.F. and T.Y.; Funding acquisition, L.Z. and Y.Z.; Investigation, K.T.C.L.K.S. and Y.F.; Methodology, G.M.; Project administration, Y.Z.; Resources, Y.Z., Y.F. and T.Y.; Software, G.M., L.Z. and T.Y.; Supervision, Y.F.; Validation, G.M., L.Z., K.T.C.L.K.S., Y.F. and T.Y.; Visualization, L.Z., Y.F. and T.Y.; Writing—original draft, G.M. and K.T.C.L.K.S.; Writing—review & editing, G.M., L.Z., Y.Z. and K.T.C.L.K.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key Research and Development Program of China, grant number 2021YFE0116900; the National Natural Science Foundation of China, grant number 42175157; the Fengyun Application Pioneering Project (FY-APP) of China, grant number FY-APP-2022.0604; the Natural Science Foundation of the Jiangsu Higher Education Institutions of China, grant number 23KJB170025; and the Wuxi University Research Start-up Fund for Introduced Talents, grant number 2022r035.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the National Satellite Meteorological Center (NSMC) and the China Meteorological Administration (CMA) for providing Fengyun-4A satellite data and in situ observation data.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Dozier, J.; Painter, T.H.; Rittger, K.; Frew, J.E. Time–space continuity of daily maps of fractional snow cover and albedo from MODIS. Adv. Water Resour. 2008, 31, 1515–1526.
  2. Zhang, G.; Xie, H.; Yao, T.; Liang, T.; Kang, S. Snow cover dynamics of four lake basins over Tibetan Plateau using time series MODIS data (2001–2010). Water Resour. Res. 2012, 48, W10529.
  3. Tang, Z.; Wang, X.; Wang, J.; Wang, X.; Li, H.; Jiang, Z. Spatiotemporal variation of snow cover in Tianshan Mountains, Central Asia, based on cloud-free MODIS fractional snow cover product, 2001–2015. Remote Sens. 2017, 9, 1045.
  4. Huang, X.; Deng, J.; Wang, W.; Feng, Q.; Liang, T. Impact of climate and elevation on snow cover using integrated remote sensing snow products in Tibetan Plateau. Remote Sens. Environ. 2017, 190, 274–288.
  5. Li, C.; Su, F.; Yang, D.; Tong, K.; Meng, F.; Kan, B. Spatiotemporal variation of snow cover over the Tibetan Plateau based on MODIS snow product, 2001–2014. Int. J. Climatol. 2018, 38, 708–728.
  6. Immerzeel, W.; Droogers, P.; de Jong, S.; Bierkens, M. Large-scale monitoring of snow cover and runoff simulation in Himalayan river basins using remote sensing. Remote Sens. Environ. 2009, 113, 40–49.
  7. Heqimi, G.; Gates, T.J.; Kay, J.J. Using spatial interpolation to determine impacts of annual snowfall on traffic crashes for limited access freeway segments. Accid. Anal. Prev. 2018, 121, 202–212.
  8. Tang, Z.; Wang, J.; Li, H.; Yan, L. Spatiotemporal changes of snow cover over the Tibetan plateau based on cloud-removed moderate resolution imaging spectroradiometer fractional snow cover product from 2001 to 2011. J. Appl. Remote Sens. 2013, 7, 073582.
  9. Mayewski, P.A.; Jeschke, P.A. Himalayan and Trans-Himalayan glacier fluctuations since AD 1812. Arct. Alp. Res. 1979, 11, 267–287.
  10. Bishop, M.P.; Olsenholler, J.A.; Shroder, J.F.; Barry, R.G.; Raup, B.H.; Bush, A.B.G.; Copland, L.; Dwyer, J.L.; Fountain, A.G.; Haeberli, W.; et al. Global Land Ice Measurements from Space (GLIMS): Remote sensing and GIS investigations of the Earth’s cryosphere. Geocarto Int. 2004, 19, 57–84.
  11. Ye, Q.; Kang, S.; Chen, F.; Wang, J. Monitoring glacier variations on Geladandong mountain, central Tibetan Plateau, from 1969 to 2002 using remote-sensing and GIS technologies. J. Glaciol. 2006, 52, 537–545.
  12. Hall, D.K.; Riggs, G.A.; Salomonson, V.V.; DiGirolamo, N.E.; Bayr, K.J. MODIS snow-cover products. Remote Sens. Environ. 2002, 83, 181–194.
  13. Xu, W.; Ma, H.; Wu, D.; Yuan, W. Assessment of the daily cloud-free MODIS snow-cover product for monitoring the snow-cover phenology over the Qinghai-Tibetan plateau. Remote Sens. 2017, 9, 585.
  14. Berman, E.E.; Bolton, D.K.; Coops, N.C.; Mityok, Z.K.; Stenhouse, G.B.; Moore, R.D. Daily estimates of Landsat fractional snow cover driven by MODIS and dynamic time-warping. Remote Sens. Environ. 2018, 216, 635–646.
  15. Girona-Mata, M.; Miles, E.S.; Ragettli, S.; Pellicciotti, F. High-resolution snowline delineation from Landsat imagery to infer snow cover controls in a Himalayan catchment. Water Resour. Res. 2019, 55, 6754–6772.
  16. Salomonson, V.V.; Appel, I. Estimating fractional snow cover from MODIS using the normalized difference snow index. Remote Sens. Environ. 2004, 89, 351–360.
  17. Dong, T.X.; Jiang, H.B.; Chen, C.; Qin, Q.M. A Snow Depth Inversion Method for the HJ-1B Satellite Data. Spectrosc. Spectr. Anal. 2011, 31, 2784–2788.
  18. Yang, J.; Jiang, L.; Shi, J.; Wu, S.; Sun, R.; Yang, H. Monitoring snow cover using Chinese meteorological satellite data over China. Remote Sens. Environ. 2014, 143, 192–203.
  19. Wang, Z.; Zhang, X.; Zhou, Z. Research progress of satellite data utilization for snow monitoring in pastoral areas. Pratacultural Sci. 2009, 26, 32–39.
  20. Hüsler, F.; Jonas, T.; Wunderle, S.; Albrecht, S. Validation of a modified snow cover retrieval algorithm from historical 1-km AVHRR data over the European Alps. Remote Sens. Environ. 2012, 121, 497–515.
  21. Li, S.; Yan, H.; Liu, C. Study of snow detection using FY-2C satellite data. J. Remote Sens. 2007, 11, 406–413.
  22. Notarnicola, C.; Duguay, M.; Moelg, N.; Schellenberger, T.; Tetzlaff, A.; Monsorno, R.; Costa, A.; Steurer, C.; Zebisch, M. Snow cover maps from MODIS images at 250 m resolution, Part 1: Algorithm description. Remote Sens. 2013, 5, 110–126.
  23. Chedin, A.; Scott, N.A.; Wahiche, C.; Moulinier, P. The improved initialization inversion method: A high resolution physical method for temperature retrievals from satellites of the TIROS-N series. J. Clim. Appl. Meteorol. 1985, 24, 128–143.
  24. Roy, D.P.; Wulder, M.A.; Loveland, T.R.; Woodcock, C.E.; Allen, R.G.; Anderson, M.C.; Helder, D.; Irons, J.R.; Johnson, D.M.; Kennedy, R.; et al. Landsat-8: Science and product vision for terrestrial global change research. Remote Sens. Environ. 2014, 145, 154–172.
  25. Zhang, P.; Yang, J.; Dong, C.; Lu, N.; Yang, Z.; Shi, J. General introduction on payloads, ground segment and data application of Fengyun 3A. Front. Earth Sci. China 2009, 3, 367–373.
  26. Johnson, N.L.; Stansbery, E.; Liou, J.-C.; Horstman, M.; Stokely, C.; Whitlock, D. The characteristics and consequences of the break-up of the Fengyun-1C spacecraft. Acta Astronaut. 2008, 63, 128–135.
  27. Marchane, A.; Jarlan, L.; Hanich, L.; Boudhar, A.; Gascoin, S.; Tavernier, A.; Filali, N.; Le Page, M.; Hagolle, O.; Berjamy, B. Assessment of daily MODIS snow cover products to monitor snow cover dynamics over the Moroccan Atlas mountain range. Remote Sens. Environ. 2015, 160, 72–86.
  28. Parajka, J.; Blöschl, G. Validation of MODIS snow cover images over Austria. Hydrol. Earth Syst. Sci. 2006, 10, 679–689.
  29. Hall, D.K.; Riggs, G.A. Accuracy assessment of the MODIS snow products. Hydrol. Process. Int. J. 2007, 21, 1534–1547.
  30. Guo, Y.; Zhai, P.; Li, W. Snow cover in China, derived from NOAA satellite remote sensing and conventional observation. J. Glaciol. Geocryol. 2004, 26, 755–760.
  31. Liu, X.; Jin, X.; Ke, C.Q. Accuracy evaluation of the IMS snow and ice products in stable snow covers regions in China. J. Glaciol. Geocryol. 2014, 36, 500–507.
  32. Pu, Z.; Xu, L.; Salomonson, V.V. MODIS/Terra observed seasonal variations of snow cover over the Tibetan Plateau. Geophys. Res. Lett. 2007, 34, L06706.
  33. Dozier, J.; Painter, T.H. Multispectral and hyperspectral remote sensing of alpine snow properties. Annu. Rev. Earth Planet. Sci. 2004, 32, 465–494.
  34. Liang, T.G.; Huang, X.D.; Wu, C.X.; Liu, X.Y.; Li, W.L.; Guo, Z.G.; Ren, J.Z. An application of MODIS data to snow cover monitoring in a pastoral area: A case study in Northern Xinjiang, China. Remote Sens. Environ. 2008, 112, 1514–1526.
  35. Yang, J.; Jiang, L.; Ménard, C.B.; Luojus, K.; Lemmetyinen, J.; Pulliainen, J. Evaluation of snow products over the Tibetan Plateau. Hydrol. Process. 2015, 29, 3247–3260.
  36. de Wildt, M.R.; Seiz, G.; Gruen, A. Operational snow mapping using multitemporal Meteosat SEVIRI imagery. Remote Sens. Environ. 2007, 109, 29–41.
  37. Metsämäki, S.; Pulliainen, J.; Salminen, M.; Luojus, K.; Wiesmann, A.; Solberg, R.; Böttcher, K.; Hiltunen, M.; Ripper, E. Introduction to GlobSnow Snow Extent products with considerations for accuracy assessment. Remote Sens. Environ. 2015, 156, 96–108.
  38. Romanov, P.; Tarpley, D. Automated monitoring of snow cover over South America using GOES Imager data. Int. J. Remote Sens. 2003, 24, 1119–1125.
  39. Terzago, S.; Cremonini, R.; Cassardo, C.; Fratianni, S. Analysis of snow precipitation during the period 2000-09 and evaluation of a snow cover algorithm in SW Italian Alps. Geogr. Fis. Din. Quat. 2012, 35, 91–99.
  40. Wang, G.; Jiang, L.; Wu, S.; Shi, J.; Hao, S.; Liu, X. Fractional snow cover mapping from FY-2 VISSR imagery of China. Remote Sens. 2017, 9, 983.
  41. Oyoshi, K.; Takeuchi, W.; Yasuoka, Y. Evaluation of snow-cover maps over Northeastern Asia derived from AVHRR, MODIS and MTSAT data. In Proceedings of the 28th Asian Conference on Remote Sensing (ACRS), Kuala Lumpur, Malaysia, 12–16 November 2007.
  42. Riggs, G.A.; Hall, D.K. Reduction of cloud obscuration in the MODIS snow data product. In Proceedings of the 60th Eastern Snow Conference, Sherbrooke, QC, Canada, 4–6 June 2003.
  43. Latry, C.; Panem, C.; Dejean, P. Cloud detection with SVM technique. In Proceedings of the 2007 IEEE International Geoscience and Remote Sensing Symposium, Barcelona, Spain, 23–27 July 2007; IEEE: Barcelona, Spain, 2007; pp. 448–451.
  44. Thüring, T.; Schoch, M.; van Herwijnen, A.; Schweizer, J. Robust snow avalanche detection using supervised machine learning with infrasonic sensor arrays. Cold Reg. Sci. Technol. 2015, 111, 60–66.
  45. Li, P.; Dong, L.; Xiao, H.; Xu, M. A cloud image detection method based on SVM vector machine. Neurocomputing 2015, 169, 34–42.
  46. He, G.; Xiao, P.; Feng, X.; Zhang, X.; Wang, Z.; Chen, N. Extracting snow cover in mountain areas based on SAR and optical data. IEEE Geosci. Remote Sens. Lett. 2015, 12, 1136–1140.
  47. Abbas, A.W.; Minallh, N.; Ahmad, N.; Abid, S.A.R.; Khan, M.A.A. K-Means and ISODATA clustering algorithms for landcover classification using remote sensing. Sindh Univ. Res. J. SURJ (Sci. Ser.) 2016, 48, 315–318.
  48. Lagrange, A.; Fauvel, M.; Grizonnet, M. Large-scale feature selection with Gaussian mixture models for the classification of high dimensional remote sensing images. IEEE Trans. Comput. Imaging 2017, 3, 230–242.
  49. Ishida, H.; Oishi, Y.; Morita, K.; Moriwaki, K.; Nakajima, T.Y. Development of a support vector machine based cloud detection method for MODIS with the adjustability to various conditions. Remote Sens. Environ. 2018, 205, 390–407.
  50. Zhang, Y.L.; Li, B.Y.; Zheng, D. A discussion on the boundary and area of the Tibetan Plateau in China. Geogr. Res. 2002, 21, 1–8.
  51. Wang, G.; Hu, H.; Li, T. The influence of freeze–thaw cycles of active soil layer on surface runoff in a permafrost watershed. J. Hydrol. 2009, 375, 438–449.
  52. Yang, J.; Zhang, Z.; Wei, C.; Lu, F.; Guo, Q. Introducing the new generation of Chinese geostationary weather satellites, Fengyun-4. Bull. Am. Meteorol. Soc. 2017, 98, 1637–1658.
  53. Burges, C.J.C. A tutorial on support vector machines for pattern recognition. Data Min. Knowl. Discov. 1998, 2, 121–167.
  54. McCabe, G.J.; Wolock, D.M. Long-term variability in Northern Hemisphere snow cover and associations with warmer winters. Clim. Chang. 2010, 99, 141–153.
Figure 1. Distribution of meteorological stations across the QTP in China. Black dots represent the locations of meteorological stations, while the color bar provides altitude information for the QTP. The map in the upper right corner provides an overview of the study area’s location within China.
Figure 2. The movement trend of clouds. Clouds and snow have different moving directions, and driven by the wind field, the moving direction of clouds generally moves in a fixed direction. The red box shows the change of the same cloud over a period.
Figure 3. Overview of the 3-D Orientation of the gradient method. The blue dotted box shows the step of the 2-D feature extractor. The step of the 3-D feature extractor is shown in the red dotted box.
Figure 4. Diagram of the 3-D orientation of gradient algorithm. The 2-D feature extractor extracts the context feature (blue box). The 3-D OG considers the changes with time (red box).
Figure 5. The (a) RGB composite and (b) 3-D OG RGB composite over QTP in China on 11 February 2019, at 06:30 UTC. The red numbers are the Kunlun Mountains (labeled 1), the Himalayas (labeled 2), the Hengduan Mountains (labeled 3), the Tanggula Mountains (labeled 4), and the east of the QTP (labeled 5).
Figure 6. The RGB composite of FY-4A on (a) 6 February 2019 at 04:30 UTC, (b) 6 February 2019 at 08:30 UTC, and the snow cover maps of (c) MOD10A1, (d) MYD10A1, (e) GlobSnow SE, (f) IMS, (g) FY-2F, (h) FY-2G, (i) FY-2H, and (j) 3-D OG algorithm.
Figure 7. (a) The RGB composite of FY-4A AGRI, (b) the 3-D OG RGB composite, (c) and the MOD10A1 RGB composite on 11 February 2019, at 04:45 UTC.
Figure 8. FY-4A AGRI 1 km resolution RGB composite on (a) 11 February 2019 at 04:19 UTC, (b) 11 February 2019 at 04:30 UTC, (c) 11 February 2019 at 04:45 UTC, (d) 11 February 2019 at 04:53 UTC. The red arrows indicate the locations where comparisons are required.
Figure 9. The FY-4A AGRI RGB composite (a), Landsat RGB composite (b), and the 3-D OG algorithm RGB composite (c) over the QTP on 4 February 2019, at 04:45 UTC.
Figure 10. The merged results of the 3-D OG algorithm in Shigatse on 6 February (a), 7 February (b), 8 February (c), 9 February (d), 10 February (e), and 11 February (f), 2019.
Table 1. Waveband information of FY-4A/AGRI (1 km resolution).
Channel | Spatial Resolution (km) | Central Wavelength (µm) | Wavelength (µm)
VIS | 1 | 0.47 | 0.45–0.49
VIS | 0.5 | 0.65 | 0.55–0.75
NIR | 1 | 0.83 | 0.75–0.90
Table 2. Confusion matrix comparing daily snow cover product against in situ observations.
Ground Observation | Product: Snow | Product: No Snow
Snow | SS | SN
No snow | NS | NN
Table 3. Comparison of different snow products against weather station data.
Product | Overall Accuracy (%) | Snow Detection Rate (%) | Omission Error (%) | Commission Error (%)
MOD10A1 | 75.79 | 7.42 | 21.20 | 3.01
MYD10A1 | 76.44 | 39.13 | 21.72 | 1.83
GlobSnow SE | 76.70 | 2.28 | 22.38 | 0.91
IMS | 73.69 | 28.57 | 16.36 | 9.94
FY-2F | 74.86 | 57.95 | 9.63 | 15.49
FY-2G | 72.82 | 63.42 | 8.32 | 18.85
FY-2H | 71.87 | 57.38 | 9.76 | 18.35
3-D OG | 84.38 | 66.67 | 4.16 | 11.46
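The indices in Table 3 can be reproduced from the confusion matrix counts of Table 2 (SS, SN, NS, NN). Note that for each product, overall accuracy, omission error, and commission error sum to 100%, which is consistent with defining both error rates over all samples. The sketch below assumes these standard definitions (the paper's exact formulas are not reproduced in this excerpt), and the example counts are hypothetical, not the study's data:

```python
def snow_metrics(ss, sn, ns, nn):
    """Evaluation indices from a snow/no-snow confusion matrix.

    ss: snow observed, snow detected      sn: snow observed, no snow detected
    ns: no snow observed, snow detected   nn: no snow observed, no snow detected
    All values returned as percentages.
    """
    total = ss + sn + ns + nn
    return {
        # Fraction of all samples classified correctly
        "overall_accuracy": 100.0 * (ss + nn) / total,
        # Fraction of observed snow that the product detects
        "snow_detection_rate": 100.0 * ss / (ss + sn),
        # Missed snow and falsely detected snow, each over all samples,
        # so overall + omission + commission = 100%
        "omission_error": 100.0 * sn / total,
        "commission_error": 100.0 * ns / total,
    }

# Hypothetical counts for illustration only:
m = snow_metrics(ss=40, sn=20, ns=6, nn=134)
```

With this convention, the three percentages partition the sample set, matching the row sums observed in Table 3.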

Ma, G.; Zhu, L.; Zhang, Y.; Lim Kam Sian, K.T.C.; Feng, Y.; Yu, T. Snow Cover Detection Using Multi-Temporal Remotely Sensed Images of Fengyun-4A in Qinghai-Tibetan Plateau. Water 2023, 15, 3329. https://doi.org/10.3390/w15193329