Article

An Automatic Procedure for Early Disaster Change Mapping Based on Optical Remote Sensing

1 Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences, Beijing 100094, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
* Author to whom correspondence should be addressed.
Remote Sens. 2016, 8(4), 272; https://doi.org/10.3390/rs8040272
Submission received: 27 October 2015 / Revised: 7 March 2016 / Accepted: 11 March 2016 / Published: 26 March 2016
(This article belongs to the Special Issue Earth Observations for Geohazards)

Abstract: Disaster change mapping, which can provide accurate and timely change information (e.g., damaged buildings, road accessibility, and shelter sites) for decision makers to guide and support coordinated emergency rescue, is critical for early disaster response. In this paper, we focus on optical remote sensing data and propose an automatic procedure that reduces the impact of the limitations of optical data and provides emergency information in the early phases of a disaster. The procedure employs a series of new methods, such as an Optimizable Variational Model (OptVM) for image fusion and a scale-invariant feature transform (SIFT) constrained optical flow method (SIFT-OFM) for image registration, to produce cloudless backdrop maps and change-detection maps of catastrophic event regions, helping users grasp the whole scope of the disaster and assess the distribution and magnitude of damage. These product maps achieve rather high accuracy because they are built on preprocessing results with high spectral consistency and geometric precision, as shown by qualitative and quantitative comparisons with traditional fusion and registration methods. The procedure is fully automated, without any manual intervention, to save response time, and it can be applied to many situations.

Graphical Abstract

1. Introduction

Natural disasters, such as earthquakes, landslides, avalanches and debris flows, occur unexpectedly and suddenly, claiming huge losses of life and property and causing significant damage to the surrounding environment. A large number of cases show that rapid emergency response is effective in reducing the casualties and losses caused by disasters. However, making proper contingency plans quickly and effectively remains a difficult problem for governments and experts [1].
Remote sensing technologies have the unique ability to help emergency managers streamline response and recovery by providing a backdrop of situational awareness, which can be invaluable for assessing the impacts of the damage and for guiding the disaster rescue [2]. Remote sensing plays an important role in disaster rescue from the very early phase of a disaster right through to long-term recovery [3]. Especially during the early stages of disaster response, a coordinated and reasonable plan for search and rescue (S&R) activities, logistics planning, and monitoring can save lives and property. However, the information needed to support rapid response (e.g., the geographical scope of the disaster areas, the magnitude and spatial distribution of damage, and the condition of transportation infrastructure) is limited. Remote sensing is a critical option for obtaining comprehensive information about the disaster and for assessing the scope of damage, in particular in remote areas, where other means of assessment or mapping either fail or are of insufficient quality [4].
Various types of remote sensing sensors, platforms and techniques can be considered for emergency mapping [5]. The choice is based mainly on the type of disaster, the approximate extent of the affected areas and the requirements for monitoring the event. Generally, the main source of data for response activities is satellite remote sensing, as it can monitor a wide footprint on the ground with limited or no access. In addition, modern satellite platforms can be triggered to change the acquisition angle to cover the affected areas in a short time, increasing the observation frequency over the regions of interest; satellite imagery of disaster-hit areas can be obtained every day or even every few hours. Regarding the sensor type, synthetic aperture radar (SAR) data and optical high-resolution (HR) or very high-resolution (VHR) images are generally acquired when a disaster occurs. SAR and InSAR systems are of great value for disaster response, a utility further enhanced by their all-weather capability, especially when persistent cloud cover over the affected areas makes optical data unusable [6]. Success has been achieved in detecting surface displacement, offsets and height change in disaster areas by using the intensity, coherence or phase information of post-event SAR or InSAR data. However, for structural damage or some temporary changes (e.g., damaged buildings and shelter sites), SAR and InSAR data are insufficiently sensitive, and the final mapping results have been less conclusive and marked by uncertainties. Moreover, most SAR-based change detection approaches suffer from a lack of archive data with the same acquisition parameters as the post-crisis imagery [7]. In addition, SAR imagery is less intuitive and difficult for nonexperts to interpret, as they must rely on sophisticated software to analyze the interferometric or amplitude coherence, which requires a longer processing time. Almost from the very onset of a disaster, optical satellite imagery is available and provides the first glimpse of the devastation. Optical high-resolution imagery is the preferred choice to delineate features of interest (e.g., buildings, tents) and their current status (e.g., destroyed, burned, or moved) [5]. Generally, optical data can provide useful information to discriminate between damaged and non-damaged areas; even though no further classification with respect to damage levels can be retrieved, the information derived from optical imagery can be effective [8]. The advantage of optical data is that its interpretation is intuitive for nonexperts; it can provide an overview of affected regions and sufficiently detailed information for decision-makers or aid workers to make S&R plans. However, the presence of clouds and shadows, variations in observation angle, and geometric distortions limit the application of optical imagery for rapid emergency mapping [9].
This paper focuses on optical remote sensing image processing for disaster mapping. One widely used approach to damage mapping is visual interpretation [10,11], which is tedious and labor intensive. Several automatic methodologies have been presented for disaster mapping; however, geometric distortions and improper co-registration between pre- and post-disaster imagery can result in a high false alarm rate in these automatic change detection approaches [7]. Therefore, a balance must be found between product accuracy and labor cost, and a compromise must be struck between response time, analysis depth, and mapping accuracy. In this respect, we propose an automatic Optimizable Variational Model (OptVM) and a scale-invariant feature transform (SIFT) constrained optical flow method (SIFT-OFM) to improve the accuracy of change mapping and produce a preliminary damage map, which usually has a rather coarse character. In the early period of a disaster, these initial maps can quickly provide crucial information to relief teams and support the planning and coordination of rescue operations for emergency response. At the same time, they give professional analysts or the public a guide for further checking whether a detected change is correct, improving the accuracy of the damage map through visual validation. With the availability of further earth observation data and more in-depth image analysis, the maps can be updated to incorporate the new information and provide a refined damage assessment [7].
Therefore, we propose an automatic optical remote sensing process to quickly produce a cloud-free product as a situation awareness map for identifying the overall damage and potential cascading effects, a change detection map for estimating the impacts of damage, and a temporal change map of key areas (such as heavily damaged areas and temporary resettlement sites for victims) to monitor the recovery process and guide the disaster recovery. The process relies on the newest available optical data to rapidly produce up-to-date maps, cover large parts of the affected area, and enable disaster managers to obtain an overview of the situation, assess the damage, and supply local logistic teams with reliable information on very short notice. The procedure and methods are introduced in Section 2. The experimental results and analyses, using the Nepal magnitude 8.1 earthquake of 25 April 2015 as an example, are given in Section 3. A discussion is presented in Section 4. The last section concludes the paper with the significance of this research and proposed future work.

2. The Procedure and Methods

The timeliness of receiving data and the length of time needed for data processing are key factors in rapid emergency mapping for disaster events. When a large disaster occurs (e.g., the Haiti earthquake, the forest fires in Australia, and the Nepal earthquake), many international organizations or commercial companies decrease the satellite revisit time for the affected areas by adjusting the acquisition angle [12,13]. This timely triggering provides enough data for disaster mapping. However, these agencies offer only the original satellite imagery rather than processing or analysis results. Orderly organization and management for effective processing is scarce, and these multi-source and multi-temporal data alone are not sufficient for deriving information applicable to humanitarian and natural crisis situations.
In this paper, a procedure is proposed to deal automatically with multi-source and multi-temporal remote sensing data, in order to produce products that can serve the early stages of disasters. The procedure is shown in Figure 1 and includes image fusion, image registration, and map production. It ultimately offers a cloudless remote sensing map, a change detection map, and a local temporal variation map of some typical areas over a short period of time. The proposed procedure is based mainly on the general rapid mapping framework proposed by Ajmar and Boccardo [14]; it is a specific implementation of that general framework, except for map dissemination. Because the procedure uses several new techniques and methods, the accuracy of the products at each step has been greatly improved. Additionally, the whole procedure is fully automated without any manual intervention. The methods applied in each step of the procedure are described below.

2.1. Image Fusion

Recently, high spatial resolution images, such as those from QuickBird, WorldView, GeoEye and GF-1, have been favored in many remote sensing applications, especially for the assessment of disasters [15]. Usually, high spatial resolution satellites provide image data as a low-resolution multispectral (LRM) image and a high-resolution panchromatic (HRP) image [16]. In order to obtain the high-resolution multispectral (HRM) images needed for detailed damage assessment, an image fusion process must be implemented. This is the first key step in our procedure.
Much research has been done on image fusion. The methods can be categorized into five types: variable substitution (VS) methods, such as the IHS and PCA algorithms [17,18]; modulation fusion (MF) methods, such as the Brovey algorithm [19]; multi-scale analysis (MSA) methods, such as the wavelet fusion algorithm [20]; restoration-based (RB) methods, such as the adjustable model-based fusion algorithm [21]; and compressed sensing (CS) methods [16,22]. The first three types can be viewed as traditional methods, which tend to cause color distortion; in addition, spatial information is weakened by the wavelength extension of the panchromatic band on newer satellites. Inspired by the rapid development of super-resolution techniques, RB methods have been applied routinely to image restoration and fusion. These methods view the LRM image and the HRP image as observations of the HRM image via an image degradation model, and the HRM image is obtained by setting up and solving that model. The CS methods are based on sparse representation, which exploits the strong correlation between the LRM and HRP images to obtain sparse coefficients, construct two coupled dictionaries, and reconstruct the fused image from the fused measurement [23]. These methods usually need a large collection of training images to obtain the trained dictionaries, requiring significant computing time.
In this paper, an improved fusion model, OptVM, is applied to produce high-quality HRM images automatically. The flowchart of OptVM is shown in Figure 2. First, a panchromatic image is simulated from the MS image, and the simulated PAN and the original HRP image are matched. Next, the restored MS image is obtained from the spectral and grayscale relationships between the simulated PAN and LRM images. Finally, the model (Equation (1)) is optimized to obtain the HRM image. The model defines pan-sharpening as the optimization of a linear over-determined system based on gray-value and spectral sensitivity relationships between the original multispectral (MS), panchromatic (PAN), and fused images.
The OptVM constructs a cost function to generate the HRM image based on three constraining hypotheses: (1) the radiation of an object in the LRM image should be equal to the integral average radiation of the same object in the HRM image; (2) the spatial details of the HRP image and the HRM image (the target fused image) should be similar; and (3) the values in the panchromatic band should be the weighted sum of the respective multispectral bands, provided the spectral response range of the LRM image is nearly the same as that of the HRP image. According to these three assumptions, the cost function is defined as follows in Equation (1):
$$E(P_H, XS_H, XS_L) = \lambda_1 \sum_{n=1}^{4} \int \left( XS_L^n(x) - k \times XS_H^n(x) \right)^2 dx + \lambda_2 \sum_{n=1}^{4} \int \left( XS_H^n(x) - \sum_{s \in N(x)} W_{sx}\, XS_H^n(s) \right)^2 dx + \lambda_3 \int \left( \sum_{n=1}^{4} W_n\, XS_H^n(x) - P_H(x) \right)^2 dx \tag{1}$$
where XS_L, XS_H and P_H are the radiation values of pixels in the LRM image, the HRM image and the HRP image, respectively. The variables λ1, λ2 and λ3 represent the weights of the three terms in the total energy function and can be adjusted according to the fusion result; in the following experiment, λ1, λ2 and λ3 are all set to 1. The variable n is the index of the fused band. The parameter k reflects the fact that the low-resolution pixels are formed from the high-resolution ones by low-pass filtering followed by subsampling [24]; k represents this low-pass filtering and subsampling, as expressed in Equation (2):
$$XS_L(x) = k \times \sum_{j \in M(x)} P_H(j) \tag{2}$$
In Equation (2), pixel j belongs to the pixel set M(x) in the HRP image that corresponds to pixel x in the LRM image. For example, if the resolution of the LRM image is 8 m and that of the HRP image is 2 m, each pixel in the LRM image corresponds to a 4 × 4 pixel set in the HRP image covering the same object. In Equation (1), W_sx represents the weight of pixel s within the neighborhood window of pixel x in the HRP image, as expressed in Equation (3), where σ_x is the standard deviation:
$$W_{sx} = e^{-\left( P_H(x) - P_H(s) \right)^2 / 2\sigma_x^2} \tag{3}$$
In Equation (3), the size of the neighborhood window is set to 9 × 9 in the following experiment. In Equation (1), W_n is the band weight of the multispectral image in the spectral response relationship, as expressed in Equation (4):
$$P_H(\lambda) = \sum_{n=1}^{4} W_n\, XS_L^n(\lambda) \tag{4}$$
In Equation (4), λ represents the spectral value. Image fusion amounts to finding the global minimum of E(P_H, XS_H, XS_L), which is a linear combination of the three basic constraints. The least-squares technique LSQR [25] is applied to optimize the model. An important step in the processing is the registration of the HRP image with the panchromatic image simulated from the LRM image, based on the optical flow and feature constraint method introduced in Section 2.2, to avoid distortion and spectral offset in the fused image.
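To make the structure of Equation (1) concrete, the sketch below assembles the three terms as one sparse linear least-squares system and solves it with SciPy's LSQR. It is a minimal illustration under simplifying assumptions rather than the implementation used in this paper: a block-average operator stands in for k, the neighborhood window is reduced to 3 × 3 to keep the system small, σ_x is taken from the local patch variance, and the band weights W_n are passed in as given.

```python
# A minimal sketch, under simplifying assumptions, of the OptVM cost function in
# Equation (1) assembled as a sparse linear least-squares problem and solved with
# LSQR. The block-average downsampling operator, the 3 x 3 window, and the use of
# the local patch variance for sigma_x are illustrative choices, not the authors'
# exact implementation.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr


def downsample_op(H, W, r):
    """Block-average operator mapping an H*W high-res band to an (H//r)*(W//r) low-res band."""
    h, w = H // r, W // r
    rows, cols, vals = [], [], []
    for i in range(h):
        for j in range(w):
            for di in range(r):
                for dj in range(r):
                    rows.append(i * w + j)
                    cols.append((i * r + di) * W + (j * r + dj))
                    vals.append(1.0 / (r * r))
    return sp.csr_matrix((vals, (rows, cols)), shape=(h * w, H * W))


def neighbour_weights(pan, win=3):
    """Row-normalised weights W_sx = exp(-(P(x) - P(s))^2 / (2 sigma_x^2)) on a small window."""
    H, W = pan.shape
    half = win // 2
    rows, cols, vals = [], [], []
    for i in range(H):
        for j in range(W):
            patch = pan[max(0, i - half):i + half + 1, max(0, j - half):j + half + 1]
            sigma2 = patch.var() + 1e-6
            entries = []
            for di in range(-half, half + 1):
                for dj in range(-half, half + 1):
                    si, sj = i + di, j + dj
                    if 0 <= si < H and 0 <= sj < W and (di or dj):
                        wgt = np.exp(-(pan[i, j] - pan[si, sj]) ** 2 / (2 * sigma2))
                        entries.append((si * W + sj, wgt))
            total = sum(w for _, w in entries)
            for s, wgt in entries:
                rows.append(i * W + j)
                cols.append(s)
                vals.append(wgt / total)
    return sp.csr_matrix((vals, (rows, cols)), shape=(H * W, H * W))


def optvm_fuse(lrm, pan, band_weights, r, lam=(1.0, 1.0, 1.0)):
    """lrm: (4, h, w) multispectral array; pan: (H, W) panchromatic array with H = h * r."""
    n_bands = lrm.shape[0]
    H, W = pan.shape
    N = H * W
    D = downsample_op(H, W, r)                    # term 1: consistency with the LRM bands
    S = sp.identity(N) - neighbour_weights(pan)   # term 2: spatial similarity to the PAN image
    blocks_A, rhs = [], []
    for n in range(n_bands):
        row = [sp.csr_matrix((D.shape[0], N))] * n_bands
        row[n] = np.sqrt(lam[0]) * D
        blocks_A.append(sp.hstack(row))
        rhs.append(np.sqrt(lam[0]) * lrm[n].ravel())
        row = [sp.csr_matrix((N, N))] * n_bands
        row[n] = np.sqrt(lam[1]) * S
        blocks_A.append(sp.hstack(row))
        rhs.append(np.zeros(N))
    # term 3: the weighted sum of the fused bands should reproduce the PAN band
    blocks_A.append(sp.hstack([np.sqrt(lam[2]) * band_weights[n] * sp.identity(N)
                               for n in range(n_bands)]))
    rhs.append(np.sqrt(lam[2]) * pan.ravel())
    A = sp.vstack(blocks_A).tocsr()
    b = np.concatenate(rhs)
    x = lsqr(A, b, atol=1e-6, btol=1e-6)[0]
    return x.reshape(n_bands, H, W)
```

For full GF-1 scenes the system would be solved tile by tile or with a matrix-free operator, since the explicit sparse matrix grows quickly with image size.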

2.2. Image Registration

Image registration is an important procedure in remote sensing applications [26] and a crucial step in our chain. Previous image registration methods can be divided into three categories: area-based methods, such as cross-correlation (CC) [27], mutual information (MI) [28] and the sequential similarity detection algorithms (SSDAs) [29]; feature-based methods, such as SIFT [30,31,32]; and physically-based methods, such as the multi-resolution elastic model [33] and the optical flow model [34,35]. The first two categories, which register images based on gray-level information and spatial features, cannot meet real-time requirements, as they demand large computing capacity and highly complex algorithms; they are also not very robust. Physically-based methods use physical models to describe the process of image deformation and solve the physical model to achieve registration; they have great advantages in speed and accuracy. In this paper, a physically-based method, the SIFT feature constraint optical flow method (SIFT-OFM), is proposed to register images. Figure 3 shows the flowchart of the algorithm. First, homologous points in the reference and sensed images are obtained using SIFT feature matching [36]. Then, affine transform coefficients are calculated and used as a constraint for the optical flow model. Finally, the parameters of the optical flow model are optimized through an iterative process to obtain high-accuracy registered images.
The optical flow model method is based mainly on the assumptions of a local gradient model and an overall smoothness model of the pixel gray values, which regard the change in image gray scale as the change in optical flow [35]. The algorithm obtains a high-precision matched image by treating the optical flow as the displacement between the reference image and the sensed image to be registered.
The optical flow model is based on the assumption of brightness constancy, which means that the gray value of the same object does not change with time and its radiance remains consistent in motion [37]. Thus, the spatial variation term of the optical flow model can be expressed as Equation (5):
$$E_{gray} = \int_\Omega \Psi\left( \left( I(x+u, y+v) - I(x, y) \right)^2 \right) dx\, dy \tag{5}$$
where I is the image intensity function over the interval Ω, (x, y) ∈ Ω; u and v are the displacements in the x and y directions, respectively, i.e., the offset between the reference image and the sensed image; and Ψ is a differentiable function on Ω. To reduce the impact of illumination changes, a gradient energy term is added to the total energy; it is defined as Equation (6):
$$E_{gradient} = \int_\Omega \Psi\left( \left| \nabla I(x+u, y+v) - \nabla I(x, y) \right|^2 \right) dx\, dy \tag{6}$$
where ∇I = [I_x, I_y]^T is the gradient vector of the image. For every pixel there is only one equation (the sum of Equations (5) and (6)) but two unknowns, u and v, which leads to an underdetermined solution and the aperture problem. A regularization constraint is therefore needed; it plays an important role in the estimation intensity, the edge smoothness, and the edge retention of the optical flow field. In this algorithm, the global smoothness constraint proposed by Horn [37] is used as the regularization term; it is described in Equation (7):
$$E_{smooth} = \int_\Omega \Psi\left( |\nabla u|^2 + |\nabla v|^2 \right) dx\, dy \tag{7}$$
where ∇ := (∂/∂x, ∂/∂y)^T is the two-dimensional differential operator.
The optical flow model is not suitable for matching images with large displacements, since it is derived by expanding the Taylor series and omitting the high-order terms. Therefore, SIFT-OFM adds the SIFT features as a constraint in the optical flow model. The generation of SIFT feature points includes four steps: scale-space extrema detection, precise positioning of the SIFT feature points, determination of the main direction of each feature point, and construction of the feature descriptors. Based on these SIFT feature points, affine coefficients are calculated and used as the initial value of the optical flow field to match the image. The total energy of SIFT-OFM is defined in Equation (8):
$$\begin{aligned} E &= E_{gray} + \lambda E_{gradient} + \alpha E_{smooth} + \tau E_{SIFT} \\ &= \int_\Omega \Psi\left( \left( I(x+u, y+v) - I(x, y) \right)^2 \right) dx\, dy + \lambda \int_\Omega \Psi\left( \left| \nabla I(x+u, y+v) - \nabla I(x, y) \right|^2 \right) dx\, dy \\ &\quad + \alpha \int_\Omega \Psi\left( |\nabla u|^2 + |\nabla v|^2 \right) dx\, dy + \tau \int_\Omega \Psi\left( (u - u_{SIFT})^2 + (v - v_{SIFT})^2 \right) dx\, dy \end{aligned} \tag{8}$$
where λ, α and τ are the coefficients of the last three components. Based on previous experience, they are set to λ = 1, α = 50 and τ = 1 in the following experiment.
The values of u and v that minimize the energy function E are solved using the successive over-relaxation (SOR) iterative method [38]; they give the displacement between the reference image and the sensed image. The sensed image is then resampled according to u and v to obtain the high-precision matched image.
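As an illustration of the SIFT-constrained idea, the following sketch uses OpenCV: the affine transform estimated from SIFT matches removes the large displacement, and a dense optical-flow refinement then corrects the residual local distortion. Farneback flow is used here only as a stand-in for the variational model of Equation (8) solved with SOR, so this is an approximation of SIFT-OFM, not the exact algorithm.

```python
# An OpenCV sketch of the SIFT-constrained registration idea: an affine transform
# estimated from SIFT matches removes the large displacement, and a dense optical
# flow then refines the residual local distortion. Farneback flow is used only as
# a stand-in for the variational model of Equation (8) solved with SOR, so this is
# an approximation of SIFT-OFM rather than the exact algorithm.
import cv2
import numpy as np


def sift_ofm_register(reference, sensed):
    """reference, sensed: single-band uint8 images of the same size; returns sensed aligned to reference."""
    sift = cv2.SIFT_create()
    kp_ref, des_ref = sift.detectAndCompute(reference, None)
    kp_sen, des_sen = sift.detectAndCompute(sensed, None)

    # Ratio-test matching of SIFT descriptors (sensed -> reference)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des_sen, des_ref, k=2)
    good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    src = np.float32([kp_sen[m.queryIdx].pt for m in good])
    dst = np.float32([kp_ref[m.trainIdx].pt for m in good])

    # Global affine model from the SIFT matches handles the large displacement
    affine, _ = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)
    h, w = reference.shape
    coarse = cv2.warpAffine(sensed, affine, (w, h))

    # Dense flow between the reference and the coarsely aligned image
    flow = cv2.calcOpticalFlowFarneback(reference, coarse, None, pyr_scale=0.5,
                                        levels=4, winsize=21, iterations=5,
                                        poly_n=5, poly_sigma=1.1, flags=0)
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    return cv2.remap(coarse, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```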

2.3. Map Production

The procedure mainly provides two kinds of products: cloudless maps of the disaster regions and change detection maps for the early stage of the disaster. The flowchart of map production is shown in Figure 4. After the high-precision matched disaster images are obtained, the clouds in the images are detected and removed to produce cloudless remote sensing maps for determining the overall scope of the disaster, and the changes between the pre- and post-disaster cloudless images are detected automatically. For the automated change map, manual checking is needed to exclude erroneous detections caused by the different sensor types and observation conditions of the two images. In cases of emergency, the automated coarse change map can be used directly to offer approximate information for managing crisis issues.
Much research has focused on cloud removal to relieve the influence of clouds. In fact, cloud removal is a reconstruction process for missing information. The proposed approaches can be grouped into three categories [39,40]: (1) spatial-based methods, which use no auxiliary information source; (2) spectral-based methods, which extract complementary information from other spectral bands; and (3) temporal-based methods, which extract complementary information from data acquired at the same position but at different times. Spatial-based methods are often used for small missing regions through interpolation, such as the scan-line corrector (SLC) gaps of Landsat-7 [41]; they are not appropriate for large-scale cloud removal. Spectral-based methods rely on the relationships between spectra in clear regions and restore the contaminated band by modeling its relationship with an auxiliary band, using methods such as HOT [42] and STARFM [43]; they are usually constrained by spectral compatibility and have difficulty with thick clouds. Temporal-based methods utilize multi-temporal clear data of the same region to simulate the data in the cloud-covered area and acquire cloud-free images, such as the local linear histogram matching (LLHM) approach [44] and the spatio-temporal MRF model [45]. In this paper, we produce the cloud-free maps with the temporal phase simulation method proposed by Gui [46], which uses spectral relationships, a vegetation index and HOT parameters [47] to detect clouds in the images and calculates the feature difference between two temporal images to fill in the cloud regions. The simulation method uses the minimum of the feature difference and a multi-grid optimization algorithm: it converts the features of one image to the features of the reference image and then fills the cloud areas from the converted image, which is cloud-free in the same region. This generates a cloud-free image of the disaster districts. In this study, the time interval between the images is very short, so the generated image can truly reflect the overall situation after the disaster.
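The following sketch is a simplified stand-in for this step, not the method of Gui [46]: a crude threshold mask built from brightness, a HOT-like haze index and NDVI, followed by a per-band linear radiometric adjustment of a near-date clear image (fitted on pixels that are clear in both dates) used to fill the masked regions. The thresholds, the clear-sky slope in the HOT term, and the assumed band order are illustrative.

```python
# A simplified stand-in for this step, not the method of Gui [46]: a crude cloud
# mask from brightness, a HOT-like haze index and NDVI, followed by a per-band
# linear radiometric adjustment of a near-date clear image, fitted on pixels that
# are clear in both dates and used to fill the masked regions. Thresholds, the
# clear-sky slope in the HOT term and the band order are illustrative assumptions.
import numpy as np


def cloud_mask(img, bright_thr=0.45, hot_thr=0.08, ndvi_thr=0.2):
    """img: (4, H, W) reflectance array with assumed band order blue, green, red, nir."""
    blue, green, red, nir = img
    brightness = img.mean(axis=0)
    hot = blue - 0.5 * red                       # haze-optimised transform, assumed slope 0.5
    ndvi = (nir - red) / (nir + red + 1e-6)
    return (brightness > bright_thr) & (hot > hot_thr) & (ndvi < ndvi_thr)


def temporal_fill(target, reference, mask):
    """Fill masked pixels of `target` from a clear `reference` acquired on a nearby date."""
    filled = target.astype(np.float64)
    clear = ~mask
    for b in range(target.shape[0]):
        x = reference[b][clear].ravel()
        y = target[b][clear].ravel()
        gain, offset = np.polyfit(x, y, 1)       # reference -> target radiometry
        filled[b][mask] = gain * reference[b][mask] + offset
    return filled
```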
Several authors have presented semi- or fully-automatic methods to analyze optical data for damage change detection and assessment [3,48,49,50], but geometric distortion and improper co-registration of pre- and post-disaster images can result in a high false alarm rate in automatic change detection. In this procedure, the new methods OptVM and SIFT-OFM are utilized to improve the accuracy of the preprocessing results. Based on the high-precision preprocessed images, the iteratively re-weighted modification of the multivariate alteration detection (IR-MAD) method proposed by Nielsen [51,52] is used to detect change automatically. IR-MAD is based mainly on canonical correlation analysis (CCA). First, the maximum canonical correlation coefficient after linear transformation is determined for the pre- and post-disaster images [53]. The difference and the variance of the linearly transformed images are then calculated. Finally, pixel change is assessed using the chi-square distribution of the difference normalized by its variance. In order to highlight the change information of the pixels and improve the accuracy of the change detection map, a weighted iteration is used during the transformation. The iterations continue until the absolute difference between the maximum correlation coefficients of two successive iterations is less than the error limit (set to 0.001 in this paper).
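A compact sketch of the IR-MAD iteration is given below: weighted canonical correlation analysis between the two dates, MAD variates formed as differences of the canonical variates, and chi-square no-change probabilities used as weights for the next iteration. It follows the published description [51,52] in simplified form; cloud masking and further numerical safeguards are omitted.

```python
# A compact sketch of the IR-MAD iteration [51,52]: weighted canonical correlation
# analysis between the two dates, MAD variates formed as differences of the
# canonical variates, and chi-square no-change probabilities used as weights for
# the next iteration. Simplified from the published description; cloud masking
# and further numerical safeguards are omitted.
import numpy as np
from scipy.linalg import eigh
from scipy.stats import chi2


def irmad(X, Y, max_iter=30, tol=1e-3):
    """X, Y: (bands, n_pixels) arrays of the co-registered pre- and post-event images."""
    n_bands, n_pix = X.shape
    w = np.ones(n_pix)
    rho_old = np.zeros(n_bands)
    for _ in range(max_iter):
        # weighted means and covariance blocks
        mx = np.average(X, axis=1, weights=w)
        my = np.average(Y, axis=1, weights=w)
        Xc, Yc = X - mx[:, None], Y - my[:, None]
        Z = np.vstack([Xc, Yc]) * np.sqrt(w)
        S = Z @ Z.T / w.sum()
        Sxx, Sxy = S[:n_bands, :n_bands], S[:n_bands, n_bands:]
        Syx, Syy = S[n_bands:, :n_bands], S[n_bands:, n_bands:]
        # canonical correlation analysis via two generalized eigenproblems
        evals, A = eigh(Sxy @ np.linalg.solve(Syy, Syx), Sxx)
        _, B = eigh(Syx @ np.linalg.solve(Sxx, Sxy), Syy)
        rho = np.sqrt(np.clip(evals[::-1], 0.0, 1.0))     # canonical correlations, descending
        A, B = A[:, ::-1], B[:, ::-1]
        # MAD variates and chi-square statistic
        U, V = A.T @ Xc, B.T @ Yc
        V *= np.sign(np.sum(U * V, axis=1))[:, None]      # align signs so corr(U_i, V_i) > 0
        mad = U - V
        sigma2 = 2.0 * (1.0 - rho) + 1e-12                # variances of the MAD variates
        chi = np.sum(mad ** 2 / sigma2[:, None], axis=0)
        w = 1.0 - chi2.cdf(chi, df=n_bands)               # no-change probability as new weight
        if np.max(np.abs(rho - rho_old)) < tol:
            break
        rho_old = rho
    return mad, chi, w
```

Thresholding the chi-square statistic (or the no-change probability) then yields the binary change map used in the rest of the procedure.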
However, the change detection map contains some erroneous detections, because the map generated by IR-MAD can be influenced by the different conditions under which the two images were acquired. Manual checking is essential for removing the errors in the changed regions, and such checking is entirely feasible during the early stages of a disaster, since many people and agencies from other regions or countries are eager to help the victims and local government in relief and reconstruction activities. There are many public community remote sensing platforms, such as TomNod [54] and GIS Corps [55], which can add and update information (not only in vector format but also by means of geotagged pictures and textual information), validate the information conveyed, and actively participate in the map production phase by coordinating the efforts of ordinary people and experts [56]. They can gather every small contribution to provide significant assistance to the victims and decision-making departments in the disaster region [57]. The automatically produced change map can be used on a community remote sensing platform to reduce labor costs and provide a rapid response to the disaster.

3. Experiments Results and Analysis

3.1. Study Area and Data Sets

A powerful earthquake shook Nepal near its capital, Kathmandu, at 11:56 NST on 25 April 2015, killing more than 8000 people and flattening sections of the city’s historic center. Some of the oldest parts of the city, including seven UNESCO World Heritage sites, were severely damaged in the earthquake. This paper uses the Nepal earthquake as an example. The location of the study area, the center of Kathmandu, is shown in Figure 5. Four images from the GF-1 PMS sensor were selected, including a pre-earthquake image on 11 April and post-earthquake images on 27 April, 1 May, and 2 May, as shown in Figure 6. The observation parameters of the data set are given in Table 1, and the parameters of the GF-1 PMS sensor in Table 2. The proposed procedure was run without any human intervention. The intermediate results of image fusion and registration and the final products, the cloudless maps and the change map, are discussed below.

3.2. Image Fusion Results and Analysis

In this section, the image from 27 April is taken as an example to show the accuracy of the image fusion results. Figure 7 shows part of the whole image in order to exhibit the image details clearly; the left image is the multispectral data and the right is the panchromatic data, with the upper-left inset of each view an enlargement of the small red box. Figure 8 shows the results of PCA fusion, GS pan-sharpening fusion, wavelet transform fusion, and the proposed OptVM method. Visually, all four methods enhance the spatial resolution and keep a certain spectral fidelity, but the results of the PCA, GS, and wavelet transforms show artifacts at object edges, the wavelet transform result being the worst. In detail, from the enlarged image of the small red box, the color recovery of OptVM is finer than that of the other three algorithms. For example, observe the red roof in the enlarged image in Figure 8: the OptVM method restores the red color of the roof uniformly, while the color recovery of the other three algorithms deviates from red, with some parts of the roof appearing white.
Six typical surface features, sampled from the fused results (Figure 8), are shown in Figure 9. They are sampled in relatively uniform areas to assess spectral fidelity; the spatial locations of the six points are marked in Figure 9A. These pixels cover a large set of materials in the scene and are uniform within their neighborhoods. The spectra of the original MS image are used as ground truth for the comparison, based on the reasonable assumption that the spectra in such uniform areas are most likely unchanged. The spectra of the image fusion results are shown in Figure 10, and the Euclidean distance difference and the spectral difference, expressed as the spectral angle difference, are shown in Table 3. For uniformity of comparison, the values of the fusion results are normalized to 0–255. From Figure 10 and Table 3, the OptVM method preserves the spectra of these pixels well, whereas the spectra from the PCA, Gram–Schmidt (GS), and wavelet fusion differ greatly from the truth. For example, the red roof of pixel #1 is shifted seriously in the wavelet fusion result. For the ordinary roof (pixel #4) and bare land (pixel #6), the PCA and GS results are lower than the truth, especially in the red band, and the spectral angle difference is larger. In the vegetation area, the colors of the wavelet-based result (Figure 9D) lack fidelity, while our algorithm performs very well in most regions (Figure 9F). In our method, a crucial step is matching the simulated PAN and the original PAN to reduce the distortion caused by the geometric deviation between the original MS and PAN images.
In order to comprehensively assess the quality of the fused images, four objective evaluation metrics are selected: the spectral angle mapper (SAM), the relative dimensionless global error in synthesis (ERGAS), the Q4 index [58], and the quality with no reference (QNR) metric [59]. The equations of the four metrics are presented in the Appendix. SAM and ERGAS measure the global radiometric distortion of the fused images, and smaller values indicate better fusion results. Q4 and QNR reflect the spectral and spatial quality of the fused images, and values closer to 1 denote better results. The results of the four metrics are shown in Figure 11 and Table 4. In regard to the fidelity of the image spectrum and the image clarity, the images fused by GS and by our proposed algorithm appear better than those obtained by the PCA and wavelet fusion methods; the QNR index clearly shows this difference, since replacing the original MS image information leads to a loss of information. Compared with the other fusion methods, the SAM value of the GS fused image is very large, with excessive sharpening in some parts of the result. The values of the four metrics for our method show that it is the best of the four. In fact, most existing image fusion algorithms are based on band-to-band correlation, whereas our method synthetically utilizes the spectral, grayscale, and spatial relationships to produce an image with fewer distortions.

3.3. Image Registration Results and Analysis

Image registration is based on the results of image fusion; two images are selected as the example (pre-earthquake, 11 April 2015, and post-earthquake, 27 April 2015). The pre-earthquake image is the reference image to which the post-seismic image of 27 April is matched. In Figure 12, the middle image is a combined image made up of 100 × 100 blocks taken alternately from the reference image and from the sensed image. Figure 12A–D are detailed enlarged images that show the position offset between the reference image and the sensed image. In Figure 13, Figure 14, Figure 15 and Figure 16, the middle combined image is obtained in the same way as in Figure 12. These figures are based on the reference image and the image matched using the optical flow model algorithm (Figure 13), the SIFT feature algorithm (Figure 14), the ENVI software registration workflow (Figure 15), and our proposed SIFT-OFM method (Figure 16).
From the detailed images in Figure 12, the dislocation between the reference image and the sensed image is very obvious, so the two images must be registered before producing the products. The results in Figure 14, Figure 15 and Figure 16 are good, while Figure 13 shows that the result based only on the optical flow model algorithm is the worst. This is because the optical flow model is not suitable for large displacements between two images, generally not more than two pixels, whereas in this study the displacement between the two images exceeds 10 pixels. Furthermore, comparing the detailed images in Figure 14 with those in Figure 16, the SIFT algorithm performs well in Figure 14A,D, but in Figure 14B the dislocation of the bridges is very clear, and in Figure 14C the road is broken. The SIFT feature algorithm uses the linear correction model of the affine transformation, which leaves some nonlinear distortions uncorrected. The result in Figure 15 is based on the ENVI registration process, in which 34 ground control points (GCPs) were selected manually and the overall error of these GCPs was not more than 0.43 pixels. However, in both Figure 15A,B the discrepancy of the bridges is obvious and serious; this is related to the uneven distribution of the GCPs and the linear calibration model used in the registration process. There is no obvious displacement or deviation in any of the images in Figure 16 produced by the SIFT-OFM method, which restricts the large displacement using the SIFT features and corrects the local linear or nonlinear distortion using the optical flow model.
The structural similarity index measure (SSIM) [60] is the quantitative metric used to evaluate the precision of the registered images. The SSIM maps in Figure 17 are obtained by calculating the SSIM between the reference image and the matched image within a 9 × 9 sliding window. In the SSIM map, red indicates SSIM values close to 1, that is, areas with high registration precision. It can be seen that the precision of the proposed method is higher than that of the SIFT feature method and the ENVI automatic registration process. The presence of clouds and shadows in the sensed images and their absence in the reference image leads to low SSIM values in the cloud and shadow regions; therefore, the overall SSIM values for the whole image in Table 5 appear relatively low.
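For reference, this SSIM evaluation can be reproduced with scikit-image as in the sketch below, using the 9 × 9 window adopted here; the single-band input and the data_range choice are assumptions about the image encoding.

```python
# The SSIM check can be reproduced with scikit-image as below, using the 9 x 9
# window adopted in this section; the single-band input and the data_range choice
# are assumptions about the image encoding.
import numpy as np
from skimage.metrics import structural_similarity


def ssim_map(reference, matched, win=9):
    """Mean SSIM and the per-pixel SSIM map between two co-registered single-band images."""
    data_range = float(reference.max() - reference.min())
    mean_ssim, smap = structural_similarity(reference, matched, win_size=win,
                                            full=True, data_range=data_range)
    return mean_ssim, smap   # values of smap close to 1 mark well-registered areas
```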

3.4. Cloudless Product and Analysis

The generation of cloudless maps includes cloud detection and temporal simulation based on the results of image fusion and image registration. The four images after image fusion and registration are shown in Figure 18. All of the post-event images (27 April, 1 May, and 2 May) include clouds and shadows, which affect the ability of decision makers and analysts to estimate and grasp the overall situation. Therefore, cloud and shadow removal and temporal simulation to produce a cloud-free image are indispensable.
The cloud detection results obtained by the method introduced in Section 2.3 are shown in Figure 19, in which red areas represent cloud regions and green areas represent shadow regions. By visual comparison, most of the clouds and shadows have been detected, and only a small portion of the thin clouds has been missed. The cloud and shadow detection result in Figure 19 is used to produce the cloud-free image by temporal simulation based on the adjacent post-earthquake image. The cloudless image is shown in Figure 20, in which Figure 20A is the image of 1 May, Figure 20B is the image of 2 May, and Figure 20C is the cloudless map. Figure 20C is based mainly on the image in Figure 20B, with the cloud and shadow regions filled from the image in Figure 20A. Figure 21 shows detailed enlargements of the rectangle in Figure 20. In Figure 21, the scenes in the cloud and shadow regions are recovered by the temporal simulation method and remain consistent in color and texture with their neighbors. This provides a high-quality cloudless image and identifies the overall damage and potential cascading effects accurately. Figure 22 shows the cloudless maps of the post-event images of 27 April, 1 May, and 2 May. Compared with the original images in Figure 18, the cloudless maps eliminate the influence of clouds and shadows and can easily be used as backdrop maps or situation maps. The cloud and shadow regions of the image of 27 April (see the upper-right images of Figure 18 and Figure 19) are replaced with data from the pre-event cloudless image of 11 April (see the upper-left image of Figure 18) to obtain a cloudless map (see the left image of Figure 22), as there is no post-event image before 27 April from which to fill the cloud and shadow regions. The left cloudless image of Figure 22 therefore contains some pre-event information, which may confuse users if they rely only on the single cloudless map, so red vectors outlining the cloud and shadow regions are added to distinguish the pre- and post-event data: inside the vectors is pre-event information, while outside is post-event information. This kind of map is used only in the short time after the event when few post-event images are available and they still have overlapping cloud-covered areas.

3.5. Change-Detection Product and Analysis

The change-detection map produced by IR-MAD in this paper is a coarse detection map and needs further manual checking to improve the accuracy of the product. In this section, the four images of Figure 18 are tested. The image in Figure 23I is the change-detection map between 11 April and 27 April, in which clouds and shadows are detected as change, and Figure 23II is the change-detection map after removing the cloud and shadow regions from Figure 23I. Figure 23A–D give detailed enlargements of the red rectangles in the large maps. Figure 23A illustrates the change of open grassland before and after the earthquake, where tents have been put up as temporary refuge. Figure 23B shows that the landmark building of Kathmandu, Bhimsen Tower, completely collapsed after the earthquake. Figure 23C represents an erroneous change detection result: the different imaging angles of the two images result in different positions of tall buildings and their shadows. Though the images are matched by the method proposed in Section 2.2, the differences caused by the viewing angle are very difficult to correct. Figure 23D also shows an erroneous change-detection result, because part of the shadow area was missed during detection. From the above analysis, the change-detection map produced by our proposed procedure is a coarse result that does not distinguish the type and grade of change and contains many erroneous detections, which require manual checking to refine.
The image in Figure 24I is the same as Figure 23II, the change map between 11 April and 27 April. The image in Figure 24II is the change between 11 April and 1 May, and the image in Figure 24III is the change between 27 April and 1 May; all of them eliminate the impact of clouds and shadows. Figure 25 shows detailed enlargements of the red rectangles in Figure 24. Figure 25A shows the Durbar Square of Kathmandu, which was severely damaged in the earthquake, but this change is not discovered in Figure 24I because clouds and shadows cover the area in the image of 27 April (see Figure 25A). In Figure 24III the change is observed, since the image of 1 May is cloudless in this region. For scene B in Figure 24, the change is discovered in map II but not in maps I and III for the same reason, namely the influence of shadow (see the detail in Figure 25B). Figure 25C shows the Kal Mochan Temple of Kathmandu, which also collapsed in the earthquake. In Figure 24I,II the change areas are equally large, but the area is small in Figure 24III. This is because the ground features show no major changes between the post-earthquake images (27 April and 1 May), while they changed obviously between the pre- and post-earthquake images (Figure 24I compares 11 April and 27 April; Figure 24II compares 11 April and 1 May). Only some rocks from the collapsed temple were consolidated between 27 April and 1 May to clear the ruins (shown in Figure 25C); from such comparisons, the progress of the rescue can be learned. The same reasoning applies to the scenes in Figure 25D,E. From these images, we see that many tents were put up in open spaces near the victims’ houses as temporary shelters because of the constant large aftershocks. The temporary shelters were moved or removed by 1 May as the rescue proceeded, so some temporary facilities disappear in the image of 1 May (see Figure 25D,E) and few or no change areas are detected in Figure 24III. The results of this paper mainly provide a quick change detection map to determine urgent requirements, as well as guidance for checking the change regions precisely through a community remote sensing platform by experts or the public, which can reduce labor costs and improve efficiency to the maximum extent. Figure 26 shows the detected change vectors superposed on the cloudless map of 27 April. The vectors in the left-hand map are the final result extracted from map I of Figure 24; the vectors in the right-hand map remove the erroneous regions from this result by visual checking. The right-hand vector map is the combination of visual interpretations by four independent people and can be considered the relatively correct result for comparison. Simple statistics are given in Table 6: the number of change regions in the original vector map (Figure 26, left) is 758, the number in the checked vector map (Figure 26, right) is 556, and the number of erroneously detected change regions is 202. The proportion of true change regions among the originally detected change regions is 73.4%, which can meet the application requirements to a certain degree.
Among the true change regions, most are tents or shelters (shown in Figure 23A, Figure 25D,E, and Figure 27A), some are cars on the street or other temporary features, and a few are collapsed or damaged buildings (shown in Figure 25C and Figure 27B). Among the erroneous change regions, as analyzed above, most errors are caused by the different viewing angles of the pre-event and post-event images, and some are caused by undetected clouds, thin clouds, shadows, or other unknown errors.
Through the change detection maps, the rescue progress can also be observed. An example is shown in Figure 27. Figure 27A shows the difference over four days in the open space of a park in the center of Kathmandu. The image of 27 April, the day after the earthquake, indicates that the situation in this open space was chaotic, with a variety of temporary residences set up. By the sixth day after the earthquake, 1 May, the temporary settlements had become orderly, and this trend is even more obvious by 2 May, when more unified residences and relief tents had been set up. Figure 27B shows the series of changes at Bhimsen Tower: from 27 April to 2 May, the bricks and masonry of the ruins were cleared effectively and the situation changed from chaos to order. These temporal change maps indicate the progress of the disaster rescue, allowing for more objective guidance in formulating the next rescue plan and allocating rescue resources reasonably.

4. Discussion

The goal of this work was to solve the problems related to the use of optical remote sensing data in disaster emergency mapping. The main problems are the impact of clouds on obtaining a clear situation backdrop map after the disaster and the trade-off between the poor precision of change detection maps obtained by automated methods and the relatively labor-intensive process of visual interpretation. Through the series of new methods proposed in this paper, the accuracy of the optical data pre-processing results has been improved greatly, and the produced change detection maps can fulfill the fundamental requirement of disaster S&R activities, namely information on the number, location, distribution, arrangement, extent, and impact of the devastation. These rather coarse change maps can also be used as a guideline and a basis for visual checking of whether the detected changes are correct, which can then generate more refined damage assessment maps and up-to-date maps for evaluation.

4.1. Map Accuracy and Timeliness

Reliability of the results and timeliness of the process are the key factors in emergency mapping. In previous studies, the biggest problem of using optical data for damage mapping with automatic change detection methods was the calibration stage, a time-consuming procedure with poor accuracy. In this paper, OptVM for image fusion and SIFT-OFM for image registration are proposed to produce highly accurate fused and matched images and thus a rather high-precision change detection map. Compared with the results based on traditional image fusion and registration methods, the results based on our proposed methods are the best overall (see the analyses in Section 3.2 and Section 3.3). The false alarm rate of the final change detection, which is partly caused by improper co-registration, can be reduced by relying on the high accuracy of the fused and matched images. Timeliness is also an issue. Processing the images of 11 April, 27 April, and 1 May in the Kathmandu example took only about 46 min to obtain the cloudless map shown in Figure 20 and the change map shown in Figure 24, using a PC with 8 GB RAM and a 64-bit operating system. This is well worth the effort to obtain a cloudless situation backdrop map and a rather coarse change map, and the time can be shortened further if cluster or GPU parallel computing is used.

4.2. Limitations

Although a series of methods and strategies are used in this study to reduce the effects of the limitations of optical imagery, some limitations remain in the maps produced. First, cloudless situation maps are available only when cloud cover is less than 50% of the image taken on the same day; otherwise, the cloudless maps contain a lot of information from previous images. When the cloud coverage of an image is more than 80%, the image is completely unusable, and only SAR data can reveal the situation in the cloud and shadow areas. Another limitation is that the generation of cloudless maps requires at least two post-event images with non-overlapping cloud-covered areas. Post-event data are more challenging to obtain because of cloud cover at the time of the satellite passes; in some cases, obtaining a post-event cloudless map of the entire damaged area requires a great deal of time. However, multiple satellite passes and multiple data sources increase the possibility of obtaining cloudless images. Generally, a single post-event image is available in the first hours after the event. If it has cloud-covered areas, the only option for producing a cloudless map is to replace the cloud areas with the nearest pre-event clear data; our proposed method can produce a cloudless map in this case. It is also useful for rescue teams to learn the situation of the affected areas from cloudless maps before they start search and rescue (S&R) activities within a few hours after the disaster. In parts of the country without internet access, professional rescue teams need to take printed cloudless maps with them. It is therefore important to provide cloudless maps even when cloud-covered areas are replaced with pre-event clear data. In this kind of cloudless map (see the left image of Figure 22), the vectors of the detected cloud and shadow regions are provided to distinguish the pre-event from the post-event data and avoid confusing users.
Second, undetected clouds and shadows, variations in solar illumination, different off-nadir angles, and some temporary objects (e.g., car traffic), which influence the results of change detection, still exist and reduce the accuracy of the change maps in this example (as shown in Figure 23C,D). These errors are difficult to exclude from automatically detected change maps; only manual visual checking can remove them to obtain more precise change maps. That is why we emphasize the importance of manual participation in further checking the change maps in this study. These change maps are only a coarse result that provides the basic information about the affected regions. In addition, the detected changes in the product maps are not divided into categories and degrees, which also relies on manual interpretation.
Third, a main limitation is the data resolution. Owing to data acquisition policy and funding constraints, the highest resolution obtained was only 2 m with GF-1. This is not sufficient for detecting and assessing damaged buildings, except those that are completely collapsed (shown in Figure 27B) or partially collapsed (shown in Figure 25C). The main purpose of the change detection maps in this study is to detect whether a scene changed after the earthquake; as mentioned above, the maps provide guidance on the changed regions for further checking.
Lastly, although image-based mapping has played an important role in rapid post-disaster damage assessment, the detail and accuracy of the assessment have not reached the level of ground-based surveys [9]. The limitation of image-based damage assessment is related not only to the spatial resolution of the sensors but also to the information reflected by the image itself. Generally, most satellite images for disaster mapping are acquired at a vertical angle (or a small offset from vertical), and the information they provide for buildings is the condition of the roofs. Roof information is well suited for identifying extreme damage states, i.e., completely destroyed structures [61], but for buildings that have not collapsed yet show cracks or inclined walls, image-based mapping largely misses the damage, leading to under-assessment of this damage type in the final results. Recently, oblique or multi-perspective data, which provide information on both roofs and facades, have been identified as a potential data source for more comprehensive building damage assessment [62,63].

5. Conclusions and Future Work

Using remote sensing for emergency mapping is now a common operational approach for disaster risk management. Both optical and radar data are the main inputs when disasters occur; however, for damage assessment and analysis purposes, most results presented after each event are based on optical data, which is easy to read and analyze for generating fast maps showing the location, scale, or extent of a disaster or crisis situation. In this study, a rapid procedure for processing optical remote sensing data is proposed to produce cloudless situation backdrop maps and change detection maps for the emergency needs of the early phases of a disaster. On the one hand, the precision and reliability of the products are improved significantly by the new methods, which include OptVM and SIFT-OFM; on the other hand, the automatic processing time is shortened by the proposed procedure. It is useful for providing rapid and actionable information to governmental agencies, multilateral organizations, and NGOs in a timely fashion. Although only GF-1 temporal data were tested in this study, owing to the limitations of data acquisition, the proposed procedure is also applicable to other remote sensing data. It can even provide up-to-date situation awareness, loss estimation, and recovery monitoring maps to support decision making by processing multi-source and multi-temporal optical remote sensing data together.
The accuracy of the results of the automatic methods is obviously not as good as that of manual visual interpretation performed cooperatively by professional image analysts or skilled operators [63], but manual visual interpretation is a rather labor-intensive task. An ideal solution is to minimize the impact of these restrictions and continuously improve the accuracy of the automatic results. Although the automatic results are rather coarse, they can save labor costs to the maximum extent and provide a guideline for further refining the results by manual visual checking; this is the main purpose of this study. International cooperation and volunteer contributions play an important role in mapping for disaster impact assessment at the urban level. International agencies and volunteers should be coordinated properly to ensure that common mapping guidelines and proper communication mechanisms are in place, enabling more effective cooperation among the involved actors, especially during major emergencies.
Remote sensing is an effective tool at the disposal of a disaster manager. It can supply authentic and objective information to various rescue resources and the general public, especially in remote or inaccessible areas during the early period of a disaster. It is strongest when allied with other tools, such as catastrophe risk models or GIS tools [64], to provide powerful guidance for streamlining evacuation, issuing disaster declarations, coordinating S&R, coordinating logistics, conducting damage assessment, distributing relief supplies, and aiding long-term recovery. Emerging technologies and initiatives will provide updated change and geospatial information, as well as resource distribution and demand data, and analyze them in a time frame suitable for emergency response purposes. In the future, integrating different remote sensing data for emergency mapping with other GIS information and analysis tools will be the trend for reducing economic losses, ensuring public safety, and meeting the basic subsistence needs of the people affected in an emergency response context.

Acknowledgments

This work was supported by the civil aerospace pre-research project during the “12th Five-Year Plan”, the satellite special project of the National Development and Reform Commission (the system of remote sensing satellite data real-time active service), and the 135 Strategy Planning of the Institute of Remote Sensing and Digital Earth, CAS.

Author Contributions

Yong Ma developed the methods, carried out the experiments and analyzed the results. Fu Chen and Jianbo Liu supervised the research. Yang He, Jianbo Duan and Xinpeng Li prepared the data. Yong Ma and Fu Chen wrote the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix

The four indices used in this paper are defined as follows:
(1) The Spectral Angle Mapper (SAM): SAM measures the absolute angle between the spectral vectors of the reference and fused images, which reflects the spectral distortion. The best value is 0, and the index is defined as
\mathrm{SAM}\!\left(A(i,j), B(i,j)\right) = \arccos\!\left( \frac{\left\langle A(i,j),\, B(i,j) \right\rangle}{\left\| A(i,j) \right\|_{2} \cdot \left\| B(i,j) \right\|_{2}} \right)
where A(i, j) and B(i, j) are the spectral vectors of pixel (i, j) in the reference and fused images, respectively, and ⟨·,·⟩ denotes their inner product.
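As an illustration only, the following NumPy sketch computes the per-pixel spectral angle for images stored as (bands, rows, cols) arrays; the function name and array layout are assumptions rather than part of the original processing chain.
import numpy as np

def spectral_angle_map(ref, fused, eps=1e-12):
    # ref, fused: float arrays of shape (bands, rows, cols)
    dot = np.sum(ref * fused, axis=0)
    norms = np.linalg.norm(ref, axis=0) * np.linalg.norm(fused, axis=0)
    # Clip to the valid arccos domain to guard against rounding errors
    return np.arccos(np.clip(dot / (norms + eps), -1.0, 1.0))

# A single SAM score is often reported as the mean angle over all pixels, e.g.:
# sam_degrees = np.degrees(spectral_angle_map(ref, fused)).mean()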
(2) The relative dimensionless global error in synthesis (ERGAS): ERGAS is a comprehensive index that gives a global depiction of the radiometric difference between the reference and fused images. The best value is 0, and it is defined as
\mathrm{ERGAS}(A, B) = \frac{100}{\mathrm{ratio}} \sqrt{ \frac{1}{N} \sum_{k=1}^{N} \left( \frac{\mathrm{RMSE}(A_k, B_k)}{\mathrm{Mean}(B_k)} \right)^{2} }
where A is the fused image and B is the reference image, A_k and B_k are their k-th bands, N is the number of bands, ratio is the ratio between the spatial resolutions (pixel sizes) of the MS and PAN images (4 for GF-1 PMS data), RMSE is the root mean square error between A_k and B_k, and Mean is the mean value of the band.
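A minimal sketch of the ERGAS computation under the reconstruction above is given below; the default resolution ratio of 4 (8 m MS vs. 2 m PAN for GF-1 PMS) and the function name are assumptions made for illustration.
import numpy as np

def ergas(fused, ref, ratio=4.0):
    # fused, ref: float arrays of shape (bands, rows, cols); ratio: MS/PAN resolution ratio
    n_bands = ref.shape[0]
    acc = 0.0
    for k in range(n_bands):
        rmse = np.sqrt(np.mean((fused[k] - ref[k]) ** 2))
        acc += (rmse / np.mean(ref[k])) ** 2
    return (100.0 / ratio) * np.sqrt(acc / n_bands)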
(3) The Q4 index: The Q4 index [58] is an extension of the universal quality index to four-band images, formed as the product of the correlation coefficient, the mean bias, and the contrast variation terms. The best value is 1, and it is defined as
Q4_{D \times D} = \frac{\left|\sigma_{z_1 z_2}\right|}{\sigma_{z_1}\,\sigma_{z_2}} \cdot \frac{2\,\sigma_{z_1}\sigma_{z_2}}{\sigma_{z_1}^{2} + \sigma_{z_2}^{2}} \cdot \frac{2\,|\bar{z}_1|\,|\bar{z}_2|}{|\bar{z}_1|^{2} + |\bar{z}_2|^{2}}
z_1 = X_1 + i\,X_2 + j\,X_3 + k\,X_4
z_2 = Y_1 + i\,Y_2 + j\,Y_3 + k\,Y_4
z_1^{*} = X_1 - i\,X_2 - j\,X_3 - k\,X_4
|\bar{z}_1| = \sqrt{\bar{z}_1 \times \bar{z}_1^{*}} = \sqrt{\bar{X}_1^{2} + \bar{X}_2^{2} + \bar{X}_3^{2} + \bar{X}_4^{2}}
where |z̄2| is defined in the same way as |z̄1|; X1 and Y1 are the first bands of images A and B, respectively, and similarly for the other bands. σ_{z1z2} is the covariance of z1 and z2, and σ_{z1} and σ_{z2} are the standard deviations of z1 and z2. The first term of the equation is the modulus of the hyper-complex correlation coefficient (CC) between z1 and z2, and the second and third terms measure the contrast changes and the mean bias over all bands, respectively. D is the block size; in this paper, the block size is 32 × 32.
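For illustration, the block-wise Q4 computation can be sketched as follows for a single D × D block; the quaternion helper functions and names are assumptions, and the modulus of the hyper-complex covariance is used for the correlation term as described above.
import numpy as np

def _qmul(a, b):
    # Hamilton product of quaternions stored as arrays of shape (4, ...), component-wise
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return np.stack([a0*b0 - a1*b1 - a2*b2 - a3*b3,
                     a0*b1 + a1*b0 + a2*b3 - a3*b2,
                     a0*b2 - a1*b3 + a2*b0 + a3*b1,
                     a0*b3 + a1*b2 - a2*b1 + a3*b0])

def _qconj(a):
    # Quaternion conjugate: negate the imaginary components
    return np.stack([a[0], -a[1], -a[2], -a[3]])

def q4_block(x, y):
    # x, y: 4-band blocks of shape (4, D, D) from images A and B
    z1 = x.reshape(4, -1).astype(float)
    z2 = y.reshape(4, -1).astype(float)
    m1, m2 = z1.mean(axis=1), z2.mean(axis=1)
    # Hyper-complex covariance (a quaternion) and quaternion standard deviations
    cov = _qmul(z1, _qconj(z2)).mean(axis=1) - _qmul(m1, _qconj(m2))
    s1 = np.sqrt(np.mean(np.sum((z1 - m1[:, None]) ** 2, axis=0)))
    s2 = np.sqrt(np.mean(np.sum((z2 - m2[:, None]) ** 2, axis=0)))
    mod1, mod2 = np.linalg.norm(m1), np.linalg.norm(m2)
    cc = np.linalg.norm(cov) / (s1 * s2)                    # modulus of hyper-complex CC
    contrast = 2 * s1 * s2 / (s1 ** 2 + s2 ** 2)            # contrast change term
    mean_bias = 2 * mod1 * mod2 / (mod1 ** 2 + mod2 ** 2)   # mean bias term
    return cc * contrast * mean_bias

# The image-level Q4 would then be averaged over non-overlapping 32 x 32 blocks.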
(4) Quality with No Reference (QNR): QNR measures the spectral and spatial distortion of the fused image without requiring a reference image. The best value is 1, and it is defined as
\mathrm{QNR} = \left(1 - D_{\lambda}\right)^{\alpha}\left(1 - D_{s}\right)^{\beta}
D_{\lambda} = \sqrt[p]{\frac{1}{L(L-1)} \sum_{l=1}^{L} \sum_{\substack{r=1 \\ r \neq l}}^{L} \left| Q\!\left(\hat{G}_{l}, \hat{G}_{r}\right) - Q\!\left(\tilde{G}_{l}, \tilde{G}_{r}\right) \right|^{p}}
D_{s} = \sqrt[q]{\frac{1}{L} \sum_{l=1}^{L} \left| Q\!\left(\hat{G}_{l}, P\right) - Q\!\left(\tilde{G}_{l}, \tilde{P}\right) \right|^{q}}
where L is the number of bands, Ĝ_l is the l-th band of the fused high-resolution multispectral (HRM) image, G̃_l is the corresponding band of the low-resolution multispectral (LRM) image, P is the high-resolution panchromatic (HRP) image, P̃ is the panchromatic image after low-pass filtering, and Q(·,·) is the single-band quality index. p and q are integer exponents that amplify the spectral and spatial distortion, respectively; in this paper, p = q = 1 and α = β = 1.
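A simplified sketch of the QNR computation is given below, using a global (non-windowed) version of the quality index Q; the function names, the global-Q simplification, and the assumption that the MS and low-pass PAN images are resampled to the fused grid are illustrative choices, not the exact implementation used in this paper.
import numpy as np

def quality_index(a, b):
    # Global quality index Q between two single-band images of equal shape
    a, b = a.astype(float).ravel(), b.astype(float).ravel()
    ma, mb = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = np.mean((a - ma) * (b - mb))
    return 4 * cov * ma * mb / ((va + vb) * (ma ** 2 + mb ** 2))

def qnr(fused, ms, pan, pan_lp, p=1, q=1, alpha=1, beta=1):
    # fused, ms: (L, rows, cols); pan: HRP image; pan_lp: low-pass filtered PAN,
    # with ms and pan_lp assumed resampled to the fused image grid
    L = fused.shape[0]
    d_lambda = 0.0
    for l in range(L):
        for r in range(L):
            if r != l:
                d_lambda += abs(quality_index(fused[l], fused[r]) - quality_index(ms[l], ms[r])) ** p
    d_lambda = (d_lambda / (L * (L - 1))) ** (1.0 / p)
    d_s = sum(abs(quality_index(fused[l], pan) - quality_index(ms[l], pan_lp)) ** q for l in range(L)) / L
    d_s = d_s ** (1.0 / q)
    return (1 - d_lambda) ** alpha * (1 - d_s) ** beta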

References

  1. Dou, M.; Chen, J.; Chen, D.; Chen, X.; Deng, Z.; Zhang, X.; Xu, K.; Wang, J. Modeling and simulation for natural disaster contingency planning driven by high-resolution remote sensing images. Future Gener. Comput. Syst. 2014, 37, 367–377. [Google Scholar] [CrossRef]
  2. Huyck, C.; Verrucci, E.; Bevington, J. Remote Sensing for Disaster Response: A Rapid, Image-Based Perspective. In Earthquake Hazard, Risk, and Disasters; Academic Press: Cambridge, MA, USA, 2014; pp. 1–24. [Google Scholar]
  3. Joyce, K.E.; Belliss, S.E.; Samsonov, S.V.; McNeill, S.J.; Glassey, P.J. A review of the status of satellite remote sensing and image processing techniques for mapping natural hazards and disasters. Prog. Phys. Geogr. 2009, 33, 183–207. [Google Scholar] [CrossRef]
  4. Voigt, S.; Kemper, T.; Riedlinger, T.; Kiefl, R.; Scholte, K.; Mehl, H. Satellite image analysis for disaster and crisis-management support. IEEE Trans. Geosci. Remote Sens. 2007, 45, 1520–1528. [Google Scholar] [CrossRef]
  5. Boccardo, P.; Tonolo, F.G. Remote Sensing Role in Emergency Mapping for Disaster Response. In Engineering Geology for Society and Territory; Springer International Publishing: New York, NY, USA, 2015; Volume 5, pp. 17–24. [Google Scholar]
  6. Arciniegas, G.A.; Bijker, W.; Kerle, N.; Tolpekin, V.A. Coherence- and amplitude-based analysis of seismogenic damage in Bam, Iran, using ENVISAT ASAR data. IEEE Trans. Geosci. Remote Sens. 2007, 45, 1571. [Google Scholar] [CrossRef]
  7. Voigt, S.; Schneiderhan, T.; Twele, A.; Gähler, M.; Stein, E.; Mehl, H. Rapid damage assessment and situation mapping: Learning from the 2010 Haiti earthquake. Photogramm. Eng. Remote Sens. 2011, 77, 923–931. [Google Scholar] [CrossRef]
  8. Baiocchi, V.; Dominici, D.; Giannone, F.; Zucconi, M. Rapid building damage assessment using EROS B data: The case study of L’Aquila earthquake. Ital. J. Remote Sens. 2012, 44, 153–165. [Google Scholar] [CrossRef]
  9. Kerle, N. Satellite-based damage mapping following the 2006 Indonesia earthquake—How accurate was it? Int. J. Appl. Earth Obs. 2010, 12, 466–476. [Google Scholar] [CrossRef]
  10. Ehrlich, D.; Guo, H.; Molch, K.; Ma, J.; Pesaresi, M. Identifying damage caused by the 2008 Wenchuan earthquake from VHR remote sensing data. Int. J. Digit. Earth 2009, 2, 309–326. [Google Scholar] [CrossRef]
  11. Chini, M. Earthquake Damage Mapping Techniques Using SAR and Optical Remote Sensing Satellite Data; INTECH Open Access Publisher: Rijeka, Croatia, 2009. [Google Scholar]
  12. Dell’Acqua, F.; Bignami, C.; Chini, M.; Lisini, G.; Polli, D.A.; Stramondo, S. Earthquake damages rapid mapping by satellite remote sensing data: L’Aquila 6 April 2009 event. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2011, 4, 935–943. [Google Scholar] [CrossRef]
  13. Yamazaki, F.; Kouchi, K.I.; Kohiyama, M.; Muraoka, N.; Matsuoka, M. Earthquake damage detection using high-resolution satellite images. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Anchorage, Alaska, 20–24 September 2004; IEEE: Piscataway, NJ, USA, 2004; Volume 4, pp. 2280–2283. [Google Scholar]
  14. Ajmar, A.; Boccardo, P.; Disabato, F.; Tonolo, F.G. Rapid Mapping: Geomatics role and research opportunities. Rend. Lincei 2015, 26, 63–73. [Google Scholar] [CrossRef]
  15. Zhang, Y. Understanding image fusion. Photogramm. Eng. Remote Sens. 2004, 70, 657–661. [Google Scholar]
  16. Jiang, C.; Zhang, H.; Shen, H.; Zhang, L. A practical compressed sensing-based pan-sharpening method. IEEE Geosci. Remote Sens. Lett. 2012, 9, 629–633. [Google Scholar] [CrossRef]
  17. Shahdoosti, H.R.; Ghassemian, H. Spatial PCA as a new method for image fusion. In Proceedings of the 16th CSI International Symposium on Artificial Intelligence and Signal Processing (AISP), Shiraz, Iran, 2–3 May 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 90–94. [Google Scholar]
  18. El-Mezouar, M.C.; Taleb, N.; Kpalma, K.; Ronsin, J. An IHS-based fusion for color distortion reduction and vegetation enhancement in IKONOS imagery. IEEE Trans. Geosci. Remote Sens. 2011, 49, 1590–1602. [Google Scholar] [CrossRef]
  19. Zhang, Y. Problems in the Fusion of Commercial High-Resolution Satelitte as well as Landsat 7 Images and Initial Solutions. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2002, 34, 587–592. [Google Scholar]
  20. Pradhan, P.S.; King, R.L.; Younan, N.H.; Holcomb, D.W. Estimation of the number of decomposition levels for a wavelet-based multiresolution multisensor image fusion. IEEE Trans. Geosci. Remote Sens. 2006, 44, 3674–3686. [Google Scholar] [CrossRef]
  21. Zhang, L.; Shen, H.; Gong, W.; Zhang, H. Adjustable model-based fusion method for multispectral and panchromatic images. IEEE Trans. Syst. Man Cybern. B Cybern. 2012, 42, 1693–1704. [Google Scholar] [CrossRef] [PubMed]
  22. Li, S.; Yang, B. A new pan-sharpening method using a compressed sensing technique. IEEE Trans. Geosci. Remote Sens. 2011, 49, 738–746. [Google Scholar] [CrossRef]
  23. Ghahremani, M.; Ghassemian, H. Remote sensing image fusion using ripplet transform and compressed sensing. IEEE Geosci. Remote Sens. Lett. 2015, 12, 502–506. [Google Scholar] [CrossRef]
  24. Ballester, C.; Caselles, V.; Igual, L.; Verdera, J.; Rougé, B. A variational model for P+ XS image fusion. Int. J. Comput. Vis. 2006, 69, 43–58. [Google Scholar] [CrossRef]
  25. Paige, C.C.; Saunders, M.A. LSQR: An algorithm for sparse linear equations and sparse least squares. ACM Trans. Math. Softw. (TOMS) 1982, 8, 43–71. [Google Scholar] [CrossRef]
  26. Le Moigne, J.; Netanyahu, N.S.; Eastman, R.D. Image Registration for Remote Sensing; Cambridge University Press: England, UK, 2011. [Google Scholar]
  27. Avants, B.B.; Epstein, C.L.; Grossman, M.; Gee, J.C. Symmetric diffeomorphic image registration with cross-correlation: Evaluating automated labeling of elderly and neurodegenerative brain. Med. Image Anal. 2008, 12, 26–41. [Google Scholar] [CrossRef] [PubMed]
  28. Dame, A.; Marchand, E. Second-order optimization of mutual information for real-time image registration. IEEE Trans. Image Process. 2012, 21, 4190–4203. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  29. Yu, X.; Chen, X.; Jiang, M. Motion detection in moving background using a novel algorithm based on image features guiding self-adaptive Sequential Similarity Detection Algorithm. Optik-Int. J. Light Electron Opt. 2012, 123, 2031–2037. [Google Scholar] [CrossRef]
  30. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  31. Mikolajczyk, K.; Schmid, C. Scale & affine invariant interest point detectors. Int. J. Comput. Vis. 2004, 60, 63–86. [Google Scholar]
  32. Sima, A.A.; Buckley, S.J. Optimizing SIFT for matching of short wave infrared and visible wavelength images. Remote Sens. 2013, 5, 2037–2056. [Google Scholar] [CrossRef]
  33. Bajcsy, R.; Kovačič, S. Multiresolution elastic matching. Comput. Vis. Graph. Image Process. 1989, 46, 1–21. [Google Scholar] [CrossRef]
  34. Zhang, Z.; Li, J.Z.; Li, D.D. Research of automated image registration technique for infrared images based on optical flow field analysis. J. Infrared Millim. Waves 2003, 22, 307–312. [Google Scholar]
  35. Liu, J.; Yan, H. Phase correlation pixel-to-pixel image co-registration based on optical flow and median shift propagation. Int. J. Remote Sens. 2008, 29, 5943–5956. [Google Scholar] [CrossRef]
  36. Huo, C.; Pan, C.; Huo, L.; Zhou, Z. Multilevel SIFT matching for large-size VHR image registration. IEEE Geosci. Remote Sens. Lett. 2012, 9, 171–175. [Google Scholar] [CrossRef]
  37. Horn, B.K.; Schunck, B.G. Determining optical flow. In 1981 Technical Symposium East; International Society for Optics and Photonics: Bellingham, WA, USA, 1981. [Google Scholar]
  38. Allahviranloo, T. Successive over relaxation iterative method for fuzzy system of linear equations. Appl. Math. Comput. 2005, 162, 189–196. [Google Scholar] [CrossRef]
  39. Shen, H.; Li, H.; Qian, Y.; Zhang, L.; Yuan, Q. An effective thin cloud removal procedure for visible remote sensing images. ISPRS J. Photogramm. Remote Sens. 2014, 96, 224–235. [Google Scholar] [CrossRef]
  40. Lin, C.-H.; Tsai, P.-H.; Lai, K.-H.; Chen, J.-Y. Cloud removal from multitemporal satellite images using information cloning. IEEE Trans. Geosci. Remote Sens. 2013, 51, 232–241. [Google Scholar] [CrossRef]
  41. Chen, J.; Zhu, X.; Vogelmann, J.E.; Gao, F.; Jin, S. A simple and effective method for filling gaps in Landsat ETM+ SLC-off images. Remote Sens. Environ. 2011, 115, 1053–1064. [Google Scholar] [CrossRef]
  42. Zhang, Y.; Guindon, B.; Cihlar, J. An image transform to characterize and compensate for spatial variations in thin cloud contamination of Landsat images. Remote Sens. Environ. 2002, 82, 173–187. [Google Scholar] [CrossRef]
  43. Gao, F.; Masek, J.; Schwaller, M.; Hall, F. On the blending of the Landsat and MODIS surface reflectance: Predicting daily Landsat surface reflectance. IEEE Trans. Geosci. Remote Sens. 2006, 44, 2207–2218. [Google Scholar]
  44. Helmer, E.; Ruefenacht, B. Cloud-free satellite image mosaics with regression trees and histogram matching. Photogramm. Eng. Remote Sens. 2005, 71, 1079–1089. [Google Scholar] [CrossRef]
  45. Cheng, Q.; Shen, H.; Zhang, L.; Yuan, Q.; Zeng, C. Cloud removal for remotely sensed images by similar pixel replacement guided with a spatio-temporal MRF model. ISPRS J. Photogramm. Remote Sens. 2014, 92, 54–68. [Google Scholar] [CrossRef]
  46. Zhengke, G.; Fu, C.; Jin, Y.; Xinpeng, L.; Fangjun, L.; Jing, Z. Automatic cloud and cloud shadow removal method for landsat TM images. In Proceedings of the 10th International Conference on Electronic Measurement & Instruments (ICEMI), Chengdu, China, 16–19 August 2011; IEEE: Piscataway, NJ, USA, 2011; Volume 3, pp. 80–84. [Google Scholar]
  47. Zhu, Z.; Woodcock, C.E. Object-based cloud and cloud shadow detection in Landsat imagery. Remote Sens. Environ. 2012, 118, 83–94. [Google Scholar] [CrossRef]
  48. Chini, M.; Pierdicca, N.; Emery, W.J. Exploiting SAR and VHR optical images to quantify damage caused by the 2003 Bam earthquake. IEEE Trans. Geosci. Remote Sens. 2009, 47, 145–152. [Google Scholar] [CrossRef]
  49. Turker, M.; Sumer, E. Building-based damage detection due to earthquake using the watershed segmentation of the post-event aerial images. Int. J. Remote Sens. 2008, 29, 3073–3089. [Google Scholar] [CrossRef]
  50. Miura, H.; Midorikawa, S.; Kerle, N. Detection of building damage areas of the 2006 central Java, Indonesia, earthquake through digital analysis of optical satellite images. Earthq. Spectra 2013, 29, 453–473. [Google Scholar] [CrossRef]
  51. Canty, M.J.; Nielsen, A.A. Automatic radiometric normalization of multitemporal satellite imagery with the iteratively re-weighted MAD transformation. Remote Sens. Environ. 2008, 112, 1025–1036. [Google Scholar] [CrossRef]
  52. Nielsen, A.A. The regularized iteratively reweighted MAD method for change detection in multi-and hyperspectral data. IEEE Trans. Image Process. 2007, 16, 463–478. [Google Scholar] [CrossRef] [PubMed]
  53. Marpu, P.R.; Gamba, P.; Canty, M.J. Improving change detection results of IR-MAD by eliminating strong changes. IEEE Geosci. Remote Sens. Lett. 2011, 8, 799–803. [Google Scholar] [CrossRef]
  54. Qiang, L.; Yuan, G.; Ying, C. Study on disaster information management system compatible with VGI and crowdsourcing. In Proceedings of the 2014 IEEE Workshop on Advanced Research and Technology in Industry Applications (WARTIA), Ottawa, ON, Canada, 29–30 September 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 464–468. [Google Scholar]
  55. Resor, E.E.L. The Neo-Humanitarians: Assessing the Credibility of Organized Volunteer Crisis Mappers. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 2013. [Google Scholar]
  56. Boccardo, P. New perspectives in emergency mapping. Eur. J. Remote Sens. 2013, 46, 571–582. [Google Scholar] [CrossRef]
  57. Barrington, L.; Ghosh, S.; Greene, M.; Har-Noy, S.; Berger, J.; Gill, S.; Lin, A.Y.-M.; Huyck, C. Crowdsourcing earthquake damage assessment using remote sensing imagery. Ann. Geophys. 2012, 54. [Google Scholar] [CrossRef]
  58. Alparone, L.; Baronti, S.; Garzelli, A.; Nencini, F. A global quality measurement of pan-sharpened multispectral imagery. IEEE Geosci. Remote Sens. Lett. 2004, 1, 313–317. [Google Scholar] [CrossRef]
  59. Khan, M.M.; Alparone, L.; Chanussot, J. Pansharpening quality assessment using the modulation transfer functions of instruments. IEEE Trans. Geosci. Remote Sens. 2009, 47, 3880–3891. [Google Scholar] [CrossRef]
  60. Amintoosi, M.; Fathy, M.; Mozayani, N. Precise image registration with structural similarity error measurement applied to superresolution. EURASIP J. Adv. Signal Process. 2009, 2009, 1–7. [Google Scholar] [CrossRef]
  61. Fernandez Galarreta, J.; Kerle, N.; Gerke, M. UAV-based urban structural damage assessment using object-based image analysis and semantic reasoning. Nat. Hazards Earth Sys. Sci. 2015, 15, 1087–1101. [Google Scholar] [CrossRef]
  62. Vetrivel, A.; Gerke, M.; Kerle, N.; Vosselman, G. Identification of damage in buildings based on gaps in 3D point clouds from very high resolution oblique airborne images. ISPRS J. Photogramm. Remote Sens. 2015, 105, 61–78. [Google Scholar] [CrossRef]
  63. Kerle, N.; Hoffman, R.R. Collaborative damage mapping for emergency response: The role of cognitive systems engineering. Nat. Hazards Earth Syst. Sci. 2013, 13, 97–113. [Google Scholar] [CrossRef]
  64. Eguchi, R.T.; Huyck, C.K.; Ghosh, S.; Adams, B.J.; McMillan, A. Utilizing New Technologies in Managing Hazards and Disasters Geospatial Techniques in Urban Hazard and Disaster Analysis; Springer: Houten, The Netherlands, 2010; pp. 295–323. [Google Scholar]
Figure 1. Procedure for dealing with multispectral and multi-temporal remote sensing images.
Figure 2. Flowchart of image fusion method.
Figure 3. Flowchart of image registration method.
Figure 4. Flowchart of products processing.
Figure 5. Spatial location of the study area (the resolution of the left map is 2 m after fusion).
Figure 6. The four images of the data set (all four images are the original MS images with 8 m resolution).
Figure 7. Multispectral (left) and panchromatic (right) data of the original image acquired on 27 April (the upper-left inset shows the detail of the red rectangle area).
Figure 8. Result of various image fusion methods using (A) PCA; (B) wavelet; (C) GS; and (D) OptVM.
Figure 9. Comparison of the various algorithms on the urban area scene. (A) the PAN image; (B) the MS image after resampling using the nearest neighbor; (C) the PCA result; (D) the wavelet result; (E) the GS result; and (F) the OptVM result.
Figure 10. Spectral comparison between the various approaches in the urban area scene. The pixel sites are marked in Figure 9A.
Figure 11. Line chart of the four metrics comparing fusion methods (ERGAS and SAM values are better when approaching 0, while Q4 and QNR values are better when approaching 1).
Figure 12. The combined image of the reference image and the sensed image (A–D show the details of the red rectangles on the combined image).
Figure 13. The combined image of the reference image and the matched image using only the optical flow model (A–D show the details of the red rectangles on the combined image).
Figure 14. The combined image of the reference image and the matched image using only the SIFT algorithm (A–D show the details of the red rectangles on the combined image).
Figure 15. The combined image of the reference image and the matched image using GCPs in ENVI software (A–D show the details of the red rectangles on the combined image).
Figure 16. The combined image of the reference image and the matched image using SIFT-OFM (A–D show the details of the red rectangles on the combined image).
Figure 17. SSIM maps of the three methods (red areas indicate SSIM values close to 1, i.e., regions with higher registration precision).
Figure 18. The data set of four images after image fusion and registration (compared with Figure 6, the resolution and the registration accuracy have been improved greatly).
Figure 19. The cloud detection products of the four images in Figure 18 (red areas represent cloud regions and green areas represent shadow regions).
Figure 20. The cloudless map of 2 May (I and II are the original images of 1 May and 2 May; III is the cloudless map after removing cloud and shadow; A and B are the shadow and cloud regions in the original image; C and D are the same areas after removing the influence of cloud and shadow corresponding to regions A and B; the details of A–D are shown in Figure 21).
Figure 21. The detailed enlarged images of the rectangle in Figure 20. (A and B are the original images from 2 May, C and D are the simulated cloudless images).
Figure 22. The cloudless maps for the post-event images from 27 April, 1 May, and 2 May.
Figure 23. The change-detection map between 11 April and 27 April (image I contains cloud and shadow; image II removes the impact of cloud and shadow from image I; A–D are detected change scenes, where A and B are correctly detected and C and D are falsely detected).
Figure 24. The change-detection maps among 11 April, 27 April, and 1 May after removing cloud and shadow (I is the change map between 11 April and 27 April, II is the change map between 11 April and 1 May, and III is the change map between 27 April and 1 May; regions A–E are different scenes whose details are shown in Figure 25).
Figure 25. The detailed enlarged images of the red rectangle in Figure 24 (A and B are influenced by clouds and shadows; C is a collapsed temple; D and E are the open spaces where many tents are being set up).
Figure 26. The detected change vectors superposed on the cloudless map of 27 April (the vector layer of the left map is extracted from image I of Figure 24; in the right map, the erroneous regions of the left vector map have been removed by visual checking).
Figure 27. The temporal change detail map of two scenes (A shows a series of changes in Center Park from 27 April to 2 May and B shows the change of a collapsed tower).
Table 1. The specific observation parameters of the data set.

Acquisition Date    Sensor    Viewing Angle
11 April 2015       PMS-1
27 April 2015       PMS-2     −24°
1 May 2015          PMS-1     −23°
2 May 2015          PMS-1     23°
Table 2. Parameters of the GF-1 PMS sensor.

Parameter             PMS-1/PMS-2 Sensor
Spectral range        PAN: 0.45–0.90 μm
                      MS: 0.45–0.52 μm, 0.52–0.59 μm, 0.63–0.69 μm, 0.77–0.89 μm
Spatial resolution    PAN: 2 m
                      MS: 8 m
Table 3. Spectral angle difference and Euclidean distance difference of marked pixels.

              Spectral Angle Difference            Euclidean Distance Difference
Category      PCA      Wavelet   GS       OptVM    PCA      Wavelet   GS       OptVM
Red roof      29.75    25.71     30.10    19.13    0.066    0.108     0.065    0.044
Water         41.33    16.79     42.00    13.04    0.056    0.058     0.055    0.028
Grass         26.65    10.72     28.53    5.48     0.068    0.048     0.075    0.017
Roof          39.01    14.90     37.87    9.49     0.063    0.059     0.060    0.023
Road          26.44    12.37     26.70    7.94     0.051    0.050     0.054    0.030
Bare land     26.44    9.00      26.70    7.75     0.051    0.039     0.054    0.031
Table 4. Comparison of the values of the above four indices.

Index    PCA      Wavelet    GS       OptVM
ERGAS    4.118    6.551      4.214    3.485
SAM      3.566    2.957      3.656    2.492
Q4       0.533    0.527      0.846    0.894
QNR      0.878    0.886      0.86     0.931
Table 5. Comparison of the SSIM value for the whole subset image using three methods.

Index    SIFT      ENVI      SIFT-OFM
SSIM     0.6266    0.6415    0.7442
Table 6. Statistics of the detected change vector map and the visually checked change vector map.

Method                                            Number of Regions    Change Category                                    Number of Regions
Total detected changes                            758
Real changes (visual interpretation checking)     556                  Damaged (collapsed) buildings                      21
                                                                       Tents/shelters                                     396
                                                                       Cars or other temporary features                   139
Error changes (visual interpretation checking)    202                  Changes caused by different view angles            112
                                                                       Changes caused by undetected thin cloud/shadow     34
                                                                       Other falsely detected changes                     56
