Article

Scaling-up Transformation of Multisensor Images with Multiple Resolutions

Key Laboratory of Water Cycle and Related Land Surface Processes, Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Sciences, 11A, Datun Road, Chaoyang District, Beijing 100101, P.R. China
*
Author to whom correspondence should be addressed.
Sensors 2009, 9(3), 1370-1381; https://doi.org/10.3390/s90301370
Submission received: 18 October 2008 / Revised: 27 December 2008 / Accepted: 11 February 2009 / Published: 26 February 2009
(This article belongs to the Section Remote Sensors)

Abstract
For scaling up low resolution multispectral images (LRMIs) with high resolution panchromatic image (HRPI), intensity-hue-saturation (IHS) can produce satisfactory spatial enhancement but usually introduces spectral distortion in the fused high resolution multispectral images (HRMIs). In this paper, to minimize this problem, we present a generalized intensity modulation (GIM) by extending the IHS transform to an arbitrary number of LRMIs, which uses the information of the spectral response functions (SRFs) of the multispectral and panchromatic sensors. Before modulation, the generalized intensity is enhanced by injecting details extracted from the HRPI by means of empirical mode decomposition. After the enhanced generalized intensity is substituted for the old one, the HRMIs are obtained through the GIM. Quickbird images are used to illustrate the superiority of this proposed method. Extensive comparison results based on visual analysis and Wald’s protocol demonstrate that the proposed method is more encouraging for scaling up the LRMIs with the HRPI spectrally and spatially than the tested fusion methods.

1. Introduction

In many remote sensing applications that require both high spatial and high spectral resolution, such as urban mapping, vegetation identification and land use classification, high resolution panchromatic images (HRPIs) and low resolution multispectral images (LRMIs) are fused using fusion methods to produce high resolution multispectral images (HRMIs), not only to increase the ability of humans to interpret the image dataset, but also for improving the accuracy of the classification [1].
Many image fusion methods have been proposed [1–3]. Initial methods mainly focused on intensity modulation for sharpening the LRMI by means of an HRPI. These methods provide visually good HRMIs, but overlook the requirement of high quality synthesis of spectral content, which is very important for most remote sensing applications based on spectral signatures, such as soil and lithology mapping [4]. Another family of methods, such as high pass filtering (HPF) [5] and the gradient pyramid [6], yields HRMIs with much less spectral distortion by injecting high frequency information from the HRPI into the LRMI. However, it was not until the introduction of methods based on multiresolution analysis that HRMI synthesis achieved satisfactory results [7]. Conventional image fusion approaches based on the à trous wavelet transform (AWT) [8] implement a multiresolution decomposition on the HRPI, and the HRMI can then be recovered by performing the inverse AWT (IAWT) from the LRMI and the wavelet planes of the HRPI. However, wavelet based fusion methods do not consider the differences in high frequency information between the HRPI and the LRMIs [9].
The Intensity Hue Saturation (IHS) method can quickly merge massive volumes of data, requiring only resampled LRMIs, aside from its high spatial enhancement capability [10]. Its concept is based on representing the LRMIs in the IHS system and then substituting the HRPI for the low resolution intensity component (LRIC). The inverse IHS transformation then produces the HRMIs. However, the use of such a method for multisensor image fusion often leads to important modifications of the spectral properties of the LRMIs. This is due to the fact that all details contained in the HRPI directly replace the LRIC [10].
A more appropriate use of the IHS method should instead fuse the LRIC with the HRPI through image processing techniques to produce one high resolution intensity component (HRIC). For this purpose, empirical mode decomposition (EMD) is introduced into the fusion of the LRIC with the HRPI. The EMD is a recent method for analyzing nonlinear and nonstationary data, developed by Huang et al. [11]. The final representations of the signal are a finite set of intrinsic mode functions (IMFs) that give not only sharp identification of salient information but also the smooth part of the signal. By manipulating the IMFs, the EMD is very suitable for image fusion [12].
This paper presents a novel scaling up multisensor image fusion method, based on the joint use of generalized intensity modulation (GIM) and the EMD. The GIM is the generalization of the IHS transform, and it incorporates information from the spectral response functions (SRFs) of the LRMI and the HRPI sensors to estimate the LRIC. The EMD is used to extract the spatial details of the HPRI to be injected into the LRIC. As a result, one texture modulated HRIC is produced. Experimental results based on Quickbird images are presented and discussed. Visual analysis and quantitative comparison demonstrate that the new approach provides a satisfactory result, both visually and quantitatively.

2. Combined GIM-EMD Image Fusion Method

2.1. GIM based fusion method

The main advantage of the IHS method lies in the separation of spatial information such as an intensity (I) component from the spectral information represented by the hue (H) and saturation (S) components. One can independently manipulate the I component while maintaining the overall color balance of the original images. Traditionally, the IHS method comprises four steps: 1) transform three LRMIs to IHS components; 2) match the histogram of the HRPI with that of the LRIC; 3) replace the LRIC with the stretched HRPI; and 4) inverse-transform IHS channels to three HRMIs.
Forward transform:
$$\begin{bmatrix} \mathrm{LRIC} \\ H \\ S \end{bmatrix} = \begin{bmatrix} \tfrac{1}{3} & \tfrac{1}{3} & \tfrac{1}{3} \\ \tfrac{1}{\sqrt{6}} & \tfrac{1}{\sqrt{6}} & -\tfrac{2}{\sqrt{6}} \\ \tfrac{1}{\sqrt{6}} & -\tfrac{1}{\sqrt{6}} & 0 \end{bmatrix} \begin{bmatrix} \mathrm{LRMI}_1 \\ \mathrm{LRMI}_2 \\ \mathrm{LRMI}_3 \end{bmatrix} \qquad (1)$$
Backward transform:
$$\begin{bmatrix} \mathrm{HRMI}_1 \\ \mathrm{HRMI}_2 \\ \mathrm{HRMI}_3 \end{bmatrix} = \begin{bmatrix} 1 & \tfrac{1}{\sqrt{6}} & \tfrac{3}{\sqrt{6}} \\ 1 & \tfrac{1}{\sqrt{6}} & -\tfrac{3}{\sqrt{6}} \\ 1 & -\tfrac{2}{\sqrt{6}} & 0 \end{bmatrix} \begin{bmatrix} \mathrm{HRPI} \\ H \\ S \end{bmatrix} = \begin{bmatrix} \mathrm{LRMI}_1 + (\mathrm{HRPI} - \mathrm{LRIC}) \\ \mathrm{LRMI}_2 + (\mathrm{HRPI} - \mathrm{LRIC}) \\ \mathrm{LRMI}_3 + (\mathrm{HRPI} - \mathrm{LRIC}) \end{bmatrix} \qquad (2)$$
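As a sanity check on the two transforms above, the following NumPy sketch (an illustration, not code from the paper) builds the forward and backward matrices and verifies that they are mutual inverses, and that substituting the HRPI for the LRIC is equivalent to adding (HRPI − LRIC) to each band:

```python
import numpy as np

s6 = np.sqrt(6.0)
# Forward IHS matrix: rows produce LRIC, H, and S.
FWD = np.array([[1/3,   1/3,   1/3],
                [1/s6,  1/s6, -2/s6],
                [1/s6, -1/s6,  0.0]])
# Backward IHS matrix: inverse of the forward transform.
BWD = np.array([[1.0,  1/s6,  3/s6],
                [1.0,  1/s6, -3/s6],
                [1.0, -2/s6,  0.0]])

# The two matrices are mutual inverses.
assert np.allclose(BWD @ FWD, np.eye(3))

rng = np.random.default_rng(0)
lrmi = rng.random((3, 100))   # three flattened multispectral bands
hrpi = rng.random(100)        # flattened panchromatic band

lric, h, s = FWD @ lrmi
# Substituting the HRPI for the LRIC before the backward transform
# simply adds the intensity difference to every band.
hrmi = BWD @ np.vstack([hrpi, h, s])
assert np.allclose(hrmi, lrmi + (hrpi - lric))
```

The first column of the backward matrix being all ones is exactly why the substitution reduces to adding (HRPI − LRIC) per band.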
Inspired by (2), a GIM method for one to N fusion can be deduced as:
Forward transform:
$$\mathrm{LRIC} = \frac{\sum_{n=1}^{N} \alpha_n\,\mathrm{LRMI}_n}{N} \qquad (3)$$
Backward transform:
$$\begin{bmatrix} \mathrm{HRMI}_1 \\ \vdots \\ \mathrm{HRMI}_N \end{bmatrix} = \begin{bmatrix} \mathrm{LRMI}_1 + (\mathrm{HRPI} - \mathrm{LRIC}) \\ \vdots \\ \mathrm{LRMI}_N + (\mathrm{HRPI} - \mathrm{LRIC}) \end{bmatrix} \qquad (4)$$
In (3), αn is the weight coefficient of the LRMIn, which is related to the SRFs of the nth multispectral sensor and the panchromatic sensor, and is discussed in the following section.
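The GIM pair above can be sketched directly; the weights below are illustrative placeholders (Section 2.2 derives the real αn from the sensor SRFs):

```python
import numpy as np

def gim_forward(lrmis, alphas):
    """Generalized intensity: weighted average of the N LRMIs (Eq. 3)."""
    alphas = np.asarray(alphas, dtype=float)
    return np.tensordot(alphas, lrmis, axes=1) / len(lrmis)

def gim_backward(lrmis, hrpi, lric):
    """Inject the intensity difference into every band (Eq. 4)."""
    return [band + (hrpi - lric) for band in lrmis]

rng = np.random.default_rng(1)
lrmis = rng.random((4, 64, 64))    # N = 4 multispectral bands
hrpi = rng.random((64, 64))
alphas = [1.1, 0.9, 1.0, 1.0]      # illustrative weights only

lric = gim_forward(lrmis, alphas)
hrmis = gim_backward(lrmis, hrpi, lric)
assert len(hrmis) == 4

# When the "HRPI" equals the LRIC itself, the GIM leaves every band unchanged.
same = gim_backward(lrmis, lric, lric)
assert all(np.allclose(a, b) for a, b in zip(same, lrmis))
```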

2.2. Production of the LRIC based on SRF

The SRF of a sensor defines the probability that the radiation is detected by this sensor. For producing the LRIC from the {LRMIn}1≤nN and the HRPI, the SRF of the panchromatic sensor (φ(υ)) and the SRFs of the N multispectral sensors ({ψn(υ)}1≤nN) are involved. Let the events mn and t be the detection of the radiation by the nth multispectral sensor and the HRPI sensor, respectively. The probabilities of the events mn and t are [7]:
$$P(m_n) = \int \psi_n(\upsilon)\,d\upsilon \qquad (5)$$
$$P(t) = \int \varphi(\upsilon)\,d\upsilon \qquad (6)$$
The probability of the radiation detected by both sensors (event mnt) is:
$$P(m_n t) = \int \min\bigl(\varphi(\upsilon), \psi_n(\upsilon)\bigr)\,d\upsilon \qquad (7)$$
In geometrical terms, P(mnt) can be understood as the area below both φ(υ) and ψn(υ) (Figure 1; http://www.spaceimaging.com/producs/QuickBird/QuickBirdRelativeSpectralResponse.xls, accessed on July 8, 2005).
Given that the radiation is detected by the nth multispectral sensor, the probability that it is also detected by the HRPI sensor is:
$$P(t \mid m_n) = \frac{P(m_n t)}{P(m_n)} \qquad (8)$$
From (8), we can obtain a new LRIC as:
$$\mathrm{LRIC} = \sum_n \alpha_n\,\mathrm{LRMI}_n \qquad (9)$$
$$\alpha_n = \frac{P(t \mid m_n)}{\sum_n P(t \mid m_n)} \qquad (10)$$
where αn is the spectral signature contribution factor of the LRMIn to the LRIC, and preserves the spectral properties of the scanned objects when producing the LRIC. That is, αn is the ratio of the spectral content identified by the HRPI sensor from what the LRMIn records to that identified by the HRPI sensor from all LRMI bands.
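A minimal numerical sketch of the weight derivation above, using synthetic Gaussian SRFs rather than the real QuickBird response curves (the helper names and curve parameters are assumptions for illustration):

```python
import numpy as np

def integrate(y, x):
    # explicit trapezoidal rule (portable across NumPy versions)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def srf_weights(psi, phi, wavelengths):
    """Weight each band by the probability that radiation it records
    is also seen by the panchromatic sensor (Eqs. 5-10)."""
    p_t_given_m = []
    for psi_n in psi:
        p_mn = integrate(psi_n, wavelengths)                     # P(m_n)
        p_mnt = integrate(np.minimum(phi, psi_n), wavelengths)   # P(m_n t)
        p_t_given_m.append(p_mnt / p_mn)                         # P(t | m_n)
    p = np.array(p_t_given_m)
    return p / p.sum()                                           # alpha_n

# Illustrative Gaussian SRFs (NOT the real QuickBird curves).
wl = np.linspace(400, 1000, 601)
gauss = lambda c, s: np.exp(-0.5 * ((wl - c) / s) ** 2)
psi = [gauss(485, 25), gauss(560, 30), gauss(660, 25), gauss(830, 50)]
phi = gauss(675, 160)   # broad panchromatic response

alphas = srf_weights(psi, phi, wl)
assert np.isclose(alphas.sum(), 1.0)   # weights are normalized
assert np.all(alphas > 0)
```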

2.3. Introduction of EMD into the fusion of the LRIC and the HRPI

The IHS method for multisensor image fusion often causes significant spectral distortion in the HRMIs. This is due to the fact that all details contained in the HRPI directly replace the LRIC [10]. A more suitable use of the IHS method should instead fuse the LRIC with the HRPI through an advanced image processing technique to produce a better HRIC. The EMD is a highly efficient and adaptive algorithm for analyzing nonlinear and nonstationary signals [11]. With the EMD, one can expect considerable improvement over the simple substitution scheme.
The EMD can decompose a signal into a finite set of intrinsic mode functions (IMFs) and one residue component. Each IMF represents a simple oscillatory mode embedded in the signal [11]. Hence, the EMD offers higher frequency resolution and more accurate timing of nonlinear and nonstationary signal events than traditional integral transforms, and the sum of all IMFs and the residue reconstructs the original signal perfectly via the inverse EMD (IEMD). For the basic theory of the EMD, interested readers may consult [11] for more details.
For a two dimensional image, the sifting procedure of the EMD is summarized as follows:
  1) Treat the original image I as the initial residue component I0.
  2) Find all the local extrema, then construct two smooth cubic splines connecting all the local maxima and minima along the rows to obtain the upper envelope ur and the lower envelope lr. Similarly, obtain the upper envelope uc and the lower envelope lc along the columns. The mean plane ul is defined as:

    $$ul = (u_r + l_r + u_c + l_c)/4$$

    Then, the difference between I0 and ul is:

    $$\omega_1 = I_0 - ul$$

    This constitutes one sifting iteration. Check whether ω1 is an IMF: if not, treat ω1 as I0 and go to 2); if ω1 is an IMF, treat the following residue component as I0 and go to 2):

    $$I_1 = I_0 - \omega_1$$

    Because the value of ul decreases rapidly during the first several iterations and only slowly thereafter, an appropriate fixed number of iterations is used as the stopping criterion for extracting each IMF.
  3) Treat the residue component as the new input. A series {ωj}1≤j≤J is obtained by repeating 2) until IJ is a monotonic component (J denotes the decomposition level). I can then be recovered using the IEMD:

    $$I = \sum_{j=1}^{J} \omega_j + I_J$$
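The sifting steps above can be sketched as follows. This is a simplified illustration, not a production EMD: envelopes are plain cubic splines with the image borders appended as knots, and a fixed iteration count stands in for a formal IMF test, as the stopping criterion described above suggests.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def _envelopes(sig):
    """Upper/lower cubic-spline envelopes of a 1-D signal; the
    endpoints are included so the splines span the whole line."""
    n = np.arange(sig.size)
    d = np.diff(sig)
    mx = np.where((d[:-1] > 0) & (d[1:] <= 0))[0] + 1   # local maxima
    mn = np.where((d[:-1] < 0) & (d[1:] >= 0))[0] + 1   # local minima
    mx = np.unique(np.r_[0, mx, sig.size - 1])
    mn = np.unique(np.r_[0, mn, sig.size - 1])
    return CubicSpline(mx, sig[mx])(n), CubicSpline(mn, sig[mn])(n)

def _sift(img, n_iter):
    """Extract one IMF using a fixed number of sifting iterations."""
    w = img.copy()
    for _ in range(n_iter):
        ur, lr = np.empty_like(w), np.empty_like(w)
        uc, lc = np.empty_like(w), np.empty_like(w)
        for i in range(w.shape[0]):            # row-wise envelopes
            ur[i], lr[i] = _envelopes(w[i])
        for j in range(w.shape[1]):            # column-wise envelopes
            uc[:, j], lc[:, j] = _envelopes(w[:, j])
        w -= (ur + lr + uc + lc) / 4.0         # subtract the mean plane ul
    return w

def emd2d(img, levels=2, n_iter=6):
    """Decompose an image into `levels` IMFs plus a residue."""
    imfs, res = [], img.astype(float)
    for _ in range(levels):
        imf = _sift(res, n_iter)
        imfs.append(imf)
        res = res - imf
    return imfs, res

rng = np.random.default_rng(2)
img = rng.random((32, 32))
imfs, res = emd2d(img)
# The IEMD is exact by construction: IMFs plus residue give back the image.
assert np.allclose(sum(imfs) + res, img)
```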
Figure 2 shows one example of the EMD. The original image was downloaded from http://www.inrialpes.fr/is2/people/pgoncalv (accessed in April 2007). Comparing the images before and after the EMD, the original image contains three kinds of patterns, and the two IMFs and the residue component provide very useful information on a series of pattern structures varying in scale from the smallest to the largest. Hence, the framework of the EMD is suitable for fusing multisensor images by managing the IMFs.

2.4. Combined GIM-EMD scaling-up transformation method

The fusion of the LRIC and the HRPI based on the EMD can be considered as constructing one HRIC with the same spectral response as the LRIC and the same spatial response as the HRPI. With the EMD, we expect much room for improvement over the traditional IHS fuser. The proposed procedure takes the following steps (Figure 3):
  1) Obtain the LRIC using formula (3).
  2) Match the histogram of the HRPI to that of the LRIC.
  3) Decompose the HRPI with the EMD to J levels, resulting in one residue component (PJ) and J detail subbands ({ωj(P)}1≤j≤J). Decompose the LRIC with the EMD to J levels, resulting in a residue component (LJ) and J IMF planes ({ωj(L)}1≤j≤J).
  4) Synthesize the HRIC using LJ and the J detail subbands ({ωj(P)}1≤j≤J) of the HRPI as:

    $$\mathrm{HRIC} = L_J + \sum_{j=1}^{J} \omega_j(P)$$

  5) Replace the LRIC with the HRIC, and obtain the N HRMIs as:

    $$\mathrm{HRMI}_n = \mathrm{LRMI}_n + \mathrm{HRIC} - \mathrm{LRIC}$$
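The five steps can be outlined in code. Note that this sketch substitutes a Gaussian low-pass residue for the EMD decomposition and a mean/standard-deviation match for full histogram matching (assumptions made purely to keep the example short and self-contained); the structure of steps 1–5 is otherwise as described:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def match_stats(src, ref):
    """Simplified histogram matching: align mean and standard deviation."""
    return (src - src.mean()) / src.std() * ref.std() + ref.mean()

def fuse(lrmis, hrpi, alphas, sigma=2.0):
    """Steps 1-5 with a Gaussian low-pass standing in for the EMD:
    details = image minus its low-pass approximation."""
    alphas = np.asarray(alphas, dtype=float)
    lric = np.tensordot(alphas, lrmis, axes=1) / len(lrmis)   # step 1
    pan = match_stats(hrpi, lric)                              # step 2
    pan_details = pan - gaussian_filter(pan, sigma)            # step 3 (details of P)
    hric = gaussian_filter(lric, sigma) + pan_details          # step 4 (HRIC synthesis)
    return [b + hric - lric for b in lrmis]                    # step 5 (modulation)

rng = np.random.default_rng(3)
lrmis = rng.random((4, 64, 64))
alphas = [1.0, 1.0, 1.0, 1.0]
lric = np.tensordot(np.asarray(alphas, float), lrmis, axes=1) / 4

# Sanity check: "fusing" with the LRIC itself leaves each band unchanged.
hrmis = fuse(lrmis, lric, alphas)
assert all(np.allclose(h, b, atol=1e-8) for h, b in zip(hrmis, lrmis))
```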

3. Experiments

The raw images were downloaded from http://studio.gge.unb.ca/UNB/images. These QuickBird images cover the Pyramids area of Egypt and were acquired in 2002. Test images of size 1024 by 1024 at a resolution of 0.7 m were cut from the raw images. The panchromatic band (450–900 nm) of 0.7 m resolution and the blue (450–520 nm), green (520–600 nm), red (630–690 nm), and near infrared (760–900 nm) bands of 2.8 m resolution are used as the HRPI and LRMIs, respectively. Figure 4(a) displays the LRMIs as a color image by mapping the red, green, and blue bands into the RGB color space. Figure 4(b) shows the HRPI. Before the image fusion, the LRMIs were co-registered to the HRPI.
For comparison purposes, the IHS, AWT, Brovey Transform (BT), Dyadic Wavelet Transform (DWT), HPF, and High Pass Modulation (HPM) based fusion methods were also applied. Figures 4(c)–(i) show the HRMIs obtained by fusing Figure 4(a) with Figure 4(b) by the seven methods. For better evaluation, Figure 5 shows subscenes of size 200×200 from the LRMIs and the corresponding HRMIs.
The qualities of the HRMIs are estimated both qualitatively and quantitatively. Visual inspection is used for qualitative estimation since visual inspection is an effective tool for analyzing local as well as global variations of spatial structures and spectral information of the HRMIs. Wald’s protocol is used to assess the qualities of the HRMIs quantitatively.

3.1. Visual inspection

Visual inspection provides an overall impression of image clarity and of the similarity between the original and fused images. Visual analysis shows that the spatial resolution of the HRMIs is much higher than that of the LRMIs. Except for those of the IHS method, the HRMIs present more details, such as edges and slopes, without noticeable spectral distortion. Many textures and details, such as edges and lines, can be identified individually in each of the HRMIs. This means that all of the fusion methods improve the spatial quality of the LRMIs through the fusion procedure.
From Figures 4(c)–(i), it can be seen that the HRMIs [Figures 4(c) and 4(e)] produced by the IHS and BT methods show obvious spectral distortion; the HRMIs [Figures 4(d) and (f)–(h)] produced by the AWT, DWT, HPF, and HPM methods show slight color distortion; and the HRMIs [Figure 4(i)] produced by the proposed method show the least spectral distortion. It can be concluded from Figure 4 that the HRMIs [Figure 4(i)] produced by the proposed method appear the best among the HRMIs, and their integration of spatial features and color is natural. This effect can be seen clearly in Figure 5 by enlarging a region of interest. For the IHS and BT methods, the distortion is due to the fact that all details contained in the HRPI are directly injected into the LRMIs [10]. For additive methods, such as AWT, HPF, and HPM, it is probably due to over enhancement along edge areas, because these methods do not consider the differences in high frequency information between the HRPI and the LRMIs [4]. For the DWT method, the critically sampled multiresolution analysis does not preserve translation invariance [3].

3.2. Quantitative comparison

In addition to visual analysis, the performance of each method is further quantitatively analyzed by checking Wald’s protocol [13] using the following quantitative indexes.
  • Correlation coefficient (CC) between each band of the original LRMIs and the HRMIs.
  • Root mean square error (RMSE) between the LRMI and the HRMI, computed using the following equation:
    $$\mathrm{RMSE}^2 = \mathrm{bias}^2 + \mathrm{SDD}^2$$
    where the bias is the difference between the mean values of the LRMI and the HRMI and SDD the standard deviation of the difference image. RMSE should be as close to 0 as possible.
  • Spectral angle mapper (SAM) is defined as:
    $$\mathrm{SAM} = \arccos\left( \frac{\sum_i u_i v_i}{\sqrt{\sum_i u_i^2}\,\sqrt{\sum_i v_i^2}} \right)$$
    where {ui} and {vi} denote the spectral vectors of images U and V, respectively. It should be as close to 0 as possible.
  • Relative average spectral error (RASE) characterizes the average performance of image fusion method in the spectral bands considered [13]:
    $$\mathrm{RASE} = \frac{100}{M} \sqrt{ \frac{1}{N} \sum_{i=1}^{N} \mathrm{RMSE}^2(B_i) }$$
    where M is the mean radiance of the N LRMI bands (Bi). RASE should be as close to 0 as possible.
  • Q4, defined as [14]:
    $$Q4 = \frac{4\,\bigl|E[\mathbf{x}\mathbf{y}^*] - \bar{\mathbf{x}}\bar{\mathbf{y}}^*\bigr|}{E[\|\mathbf{x}\|^2] - \|\bar{\mathbf{x}}\|^2 + E[\|\mathbf{y}\|^2] - \|\bar{\mathbf{y}}\|^2} \cdot \frac{\|\bar{\mathbf{x}}\|\,\|\bar{\mathbf{y}}\|}{\|\bar{\mathbf{x}}\|^2 + \|\bar{\mathbf{y}}\|^2}$$
    where x and y, which denote the four band LRMIs and the HRMIs, respectively, are both expressed as quaternions (e.g. x = x1 + i·x2 + j·x3 + k·x4). E[·] denotes the expected value, x̄ is the quaternion obtained by averaging the four LRMIs, and ||x|| is the magnitude of the quaternion. It should be as close to 1 as possible.
  • Erreur relative globale adimensionnelle de synthèse (ERGAS) [13] is given by:
    $$\mathrm{ERGAS} = 100\,\frac{h}{l} \sqrt{ \frac{1}{N} \sum_{i=1}^{N} \frac{\mathrm{bias}_i^2 + \mathrm{SDD}_i^2}{M_i^2} }$$
    where h is the resolution of the HRPI, l the resolution of the LRMI, N the number of HRMIs, and Mi the mean of the HRMIi. The bias is the difference between the means of the LRMI and the HRMI, and SDD the standard deviation of the difference image between each band of the LRMIs and the HRMIs.
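Most of the indexes above reduce to a few lines of NumPy; the sketch below implements RMSE, SAM, RASE, and ERGAS (Q4 is omitted here because it requires quaternion block statistics) and checks that a perfect fusion attains the ideal scores:

```python
import numpy as np

def rmse(ref, fus):
    return float(np.sqrt(np.mean((ref - fus) ** 2)))

def sam_deg(ref, fus):
    """Average per-pixel spectral angle in degrees; bands on axis 0."""
    num = np.sum(ref * fus, axis=0)
    den = np.sqrt(np.sum(ref**2, axis=0) * np.sum(fus**2, axis=0))
    return float(np.degrees(np.mean(np.arccos(np.clip(num / den, -1, 1)))))

def rase(ref, fus):
    """Relative average spectral error; M is the mean radiance of ref."""
    m = ref.mean()
    return 100.0 / m * np.sqrt(np.mean([rmse(r, f)**2 for r, f in zip(ref, fus)]))

def ergas(ref, fus, h_over_l):
    """ERGAS with resolution ratio h/l (0.7/2.8 = 0.25 for QuickBird)."""
    terms = [(rmse(r, f) / r.mean()) ** 2 for r, f in zip(ref, fus)]
    return 100.0 * h_over_l * np.sqrt(np.mean(terms))

rng = np.random.default_rng(4)
ref = rng.random((4, 32, 32)) + 0.5   # keep spectral vectors away from zero

# A perfect fusion scores the ideal values listed above.
assert rmse(ref, ref) == 0.0
assert np.isclose(sam_deg(ref, ref), 0.0)
assert np.isclose(rase(ref, ref), 0.0)
assert np.isclose(ergas(ref, ref, 0.25), 0.0)
```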
Three criteria based on the Wald’s protocol were employed to test the degree of spectral distortion caused by the fusion methods [14]: (1) In order to test the first property of Wald’s protocol, the HRMIs are spatially degraded to the resolution level of the original LRMIs (2.8 m) by cubic interpolation. Then, the degraded HRMIs (DHRMIs) are compared with the original LRMIs. Table 1 shows the results. (2) In order to test the second and third properties of Wald’s protocol, the fusion results (LHRMIs) of the degraded HRPI and LRMIs (4 times degraded in resolution by cubic convolution) are also compared with the LRMIs. Table 2 shows the results. In Tables 1 and 2, B1, B2, B3 and B4 denote the red, green, blue, and near infrared bands, respectively, and the last column reflects the ideal situation that should be reached after the fusion process.
It can be seen from Tables 1 and 2 that all fusion methods yield high scores for the DHRMIs and LHRMIs. In general, the proposed method produces less spectral distortion than the other fusion methods. Hence, the proposed method transfers more of the texture information of the HRPI while preserving the spectral content of the LRMIs. The proposed method outperforms the other fusion methods in fusing the LRMIs with the HRPI because the fusion model takes into account both detail injection, through the EMD based fuser, and spectral signature, through the GIM based on the SRFs of the sensors. These two aspects allow the proposed method to produce HRMIs closer than the other methods to the real HRMIs that the QuickBird multispectral sensor would acquire at the spatial resolution of the HRPI.
In order to estimate the spatial quality of the HRMIs, we follow the procedure proposed by Zhou [15]. First, the spatial detail information present in the two images to be compared is extracted using the Laplacian filter below. Second, the spatial correlation coefficient (SCC) between these two filtered images is calculated. The SCC indicates how much of the detail information of one image is present in the other. A high SCC shows that most spatial information of the HRPI has been incorporated into the LRMI during the fusion process:
$$\begin{bmatrix} -1 & -1 & -1 \\ -1 & 8 & -1 \\ -1 & -1 & -1 \end{bmatrix}$$
Because a fusion method injects a different amount of detail into each band of the LRMIs, the average SCC (SCCavg) is used as a global spatial quality index for the HRMIs in order to evaluate the detail injection performance comprehensively. A good fusion method must inject into each band of the LRMIs the details the multispectral sensor would capture if it worked at a spatial resolution similar to that of the panchromatic sensor. Hence, the higher the SCCavg value, the higher the spatial quality of the HRMIs. Table 3 shows the results.
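A sketch of the SCC computation with the Laplacian kernel above (the function names are ours, not from the paper):

```python
import numpy as np
from scipy.ndimage import convolve

LAPLACIAN = np.array([[-1, -1, -1],
                      [-1,  8, -1],
                      [-1, -1, -1]], dtype=float)

def scc(a, b):
    """Correlation between the Laplacian-filtered (high-frequency)
    components of two images."""
    ha = convolve(a.astype(float), LAPLACIAN)
    hb = convolve(b.astype(float), LAPLACIAN)
    return float(np.corrcoef(ha.ravel(), hb.ravel())[0, 1])

def scc_avg(hrmis, hrpi):
    """Average SCC over all fused bands: the global spatial index."""
    return float(np.mean([scc(band, hrpi) for band in hrmis]))

rng = np.random.default_rng(5)
pan = rng.random((64, 64))
# A band carrying exactly the pan details has SCC of 1 with the pan image.
assert np.isclose(scc(pan, pan), 1.0)
```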
The proposed method outperforms the AWT, BT, DWT, HPF, and HPM fusion methods in incorporating the spatial details of the HRPI into the LRMIs by taking into account the separation of spatial information from spectral information through the EMD decomposition, although the IHS method scores highest on this index. This injection model allows producing HRMIs closer to the real HRMIs that the multispectral sensor would acquire at the spatial resolution of the HRPI. Visual inspection and quantitative comparison show that the proposed method combines the advantages of many traditional methods in fusing the LRMIs with the HRPI.

4. Conclusions

In this paper, we combine SRF based GIM and the EMD for fusing the LRMIs with the HRPI of the same scene in order to obtain the HRMIs. The LRIC used in the GIM is obtained by weighted averaging of the LRMIs based on the SRFs of the multispectral and panchromatic sensors, separating the low resolution spatial intensity from the spectral information, while the EMD is introduced to alleviate the spectral distortion caused by the IHS approach. The LRIC is then replaced with the produced HRIC. Finally, the HRMIs are produced by performing the GIM.
QuickBird LRMIs and HRPI are used to demonstrate the advantage of the proposed method over the traditional fusion approaches in terms of preserving the spectral properties of the LRMIs. The experimental results are compared with those of six fusion methods by visual inspection and quantitative comparison. The comparison results confirm the spectral preservation property of the proposed method. All these results are encouraging, and they show that the proposed method can achieve better spectral preservation together with spatial enhancement.

Acknowledgments

The authors would like to thank the anonymous reviewers for their sincere suggestions, which helped to improve the paper. The authors gratefully acknowledge the support of the State Key Development Program for Basic Research of China (grant number 2009CB421305), the K. C. Wong Education Foundation, Hong Kong, and the 43rd batch of the postdoctoral foundation. This work was supported jointly by the Program of “One Hundred Talented People” and Key Topics in the Knowledge Innovation Program (Grant No. kzcx2-yw-326-1) of the Chinese Academy of Sciences (CAS), the National Key Project of Scientific and Technical Supporting Programs funded by the Ministry of Science & Technology of China (No. 2006BAC08B0407), and the Project of the Natural Science Fund of China (40871170).

References and Notes

  1. Munechika, C.K.; Warnick, J.S.; Salvaggio, C.; Schott, J.R. Resolution enhancement of multispectral image data to improve classification accuracy. Photogramm. Eng. Remote Sens. 1993, 59, 67–72.
  2. Pohl, C.; Van Genderen, J.L. Multi-sensor image fusion in remote sensing: Concepts, methods, and applications. Int. J. Remote Sens. 1998, 5, 823–854.
  3. Chen, S.H.; Su, H.B.; Zhang, R.H. Feature space and measure metric for fusing multisensor images. Int. J. Remote Sens. 2008, 29, 3257–3270.
  4. Wang, Z.J.; Ziou, D.; Armenakis, C.; Li, D.R.; Li, Q.Q. A comparative analysis of image fusion methods. IEEE Trans. Geosci. Remote Sens. 2005, 43, 1391–1402.
  5. Schowengerdt, R.A. Reconstruction of multi-spatial, multi-spectral image data using spatial frequency content. Photogramm. Eng. Remote Sens. 1980, 46, 1325–1334.
  6. Burt, P.J.; Kolczynski, R.J. Enhanced image capture through fusion. In Proc. Int. Conf. Computer Vision 1993, 173–182.
  7. Otazu, X.; González-Audícana, M.; Fors, O.; Núñez, J. Introduction of sensor spectral response into image fusion methods. Application to wavelet-based methods. IEEE Trans. Geosci. Remote Sens. 2005, 43, 2376–2385.
  8. Dutilleux, P. An implementation of the “algorithme à trous” to compute the wavelet transform. In Wavelets: Time-Frequency Methods and Phase Space; Combes, J.M., Grossman, A., Tchamitchian, P., Eds.; Springer-Verlag: Berlin, Germany, 1989; pp. 298–304.
  9. Choi, M. A new intensity-hue-saturation fusion approach to image fusion with a tradeoff parameter. IEEE Trans. Geosci. Remote Sens. 2006, 44, 1672–1682.
  10. Chibani, Y.; Houacine, A. The joint use of IHS transform and redundant wavelet decomposition for fusing multispectral and panchromatic images. Int. J. Remote Sens. 2002, 23, 3821–3833.
  11. Huang, N.E.; Shen, Z.; Long, S.R. The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis. Proc. R. Soc. Lond. 1998, A454, 903–995.
  12. Peng, Z.K.; Tse, P.W.; Chu, F.L. A comparison study of improved Hilbert-Huang transform and wavelet transform: Application to fault diagnosis for rolling bearing. Mech. Syst. Signal Process. 2005, 19, 974–988.
  13. Wald, L. Quality of high resolution synthesized images: Is there a simple criterion? In Proc. Int. Conf. Fusion of Earth Data, Nice, France, 2000; 1, 99–105.
  14. Alparone, L.; Baronti, S.; Garzelli, A.; Nencini, F. A global quality measurement of pan-sharpened multispectral imagery. IEEE Geosci. Remote Sens. Lett. 2004, 4, 313–317.
  15. Zhou, J.; Civco, D.L.; Silander, J.A. A wavelet transform method to merge Landsat TM and SPOT panchromatic data. Int. J. Remote Sens. 1998, 4, 743–757.
Figure 1. Spectral response functions for QuickBird-2 bands.
Figure 2. (a) The original image; (b) IMF1; (c) IMF2; (d) the residue component.
Figure 3. Schematic flowchart of the fusion of the LRIC and the HRPI.
Figure 4. (a) the original LRMIs; (b) the HRPI; (c)–(i) the HRMIs from the IHS, AWT, BT, DWT, HPF, HPM, and the proposed method, respectively.
Figure 5. Subscenes of the original LRMIs and the HRMIs produced by different methods. (Left to right sequence) Original LRMIs, IHS, AWT, BT, DWT, HPF, HPM, and the proposed method.
Table 1. Values of the six indexes analyzed to evaluate the qualities of the DHRMIs.

| Index |    | IHS    | AWT    | BT     | DWT    | HPF    | HPM    | Proposed | Ideal |
|-------|----|--------|--------|--------|--------|--------|--------|----------|-------|
| CC    | B1 | 0.9144 | 0.9808 | 0.9649 | 0.9634 | 0.9774 | 0.9765 | 0.9853   | 1     |
|       | B2 | 0.9177 | 0.9798 | 0.9665 | 0.9689 | 0.9763 | 0.9776 | 0.9867   | 1     |
|       | B3 | 0.9214 | 0.9797 | 0.9625 | 0.9713 | 0.9762 | 0.9772 | 0.9869   | 1     |
|       | B4 | 0.8909 | 0.9410 | 0.8011 | 0.9118 | 0.9321 | 0.9353 | 0.9820   | 1     |
| RMSE  | B1 | 39.451 | 21.211 | 25.794 | 23.635 | 21.740 | 19.412 | 15.313   | 0     |
|       | B2 | 38.134 | 21.666 | 26.277 | 22.263 | 21.313 | 18.774 | 14.452   | 0     |
|       | B3 | 36.265 | 21.486 | 27.339 | 21.336 | 20.449 | 18.876 | 14.314   | 0     |
|       | B4 | 42.942 | 28.575 | 55.273 | 30.160 | 30.333 | 29.730 | 16.757   | 0     |
| SAM   |    | 12.574 | 6.8855 | 10.452 | 8.6793 | 7.9877 | 7.8121 | 5.1365   | 0     |
| Q4    |    | 0.8948 | 0.9615 | 0.9083 | 0.9439 | 0.9562 | 0.9602 | 0.9821   | 1     |
| RASE  |    | 28.248 | 16.986 | 23.695 | 17.915 | 17.540 | 16.756 | 11.678   | 0     |
| ERGAS |    | 5.1954 | 2.6837 | 4.4131 | 3.5957 | 3.2456 | 3.2083 | 2.0846   | 0     |
Table 2. Values of the six indexes analyzed to evaluate the qualities of the LHRMIs.

| Index |    | IHS    | AWT    | BT     | DWT    | HPF    | HPM    | Proposed | Ideal |
|-------|----|--------|--------|--------|--------|--------|--------|----------|-------|
| CC    | B1 | 0.8660 | 0.9620 | 0.9588 | 0.9534 | 0.9587 | 0.9610 | 0.9758   | 1     |
|       | B2 | 0.8669 | 0.9697 | 0.9539 | 0.9545 | 0.9663 | 0.9691 | 0.9754   | 1     |
|       | B3 | 0.8697 | 0.9622 | 0.9446 | 0.9546 | 0.9607 | 0.9609 | 0.9772   | 1     |
|       | B4 | 0.8470 | 0.9642 | 0.7208 | 0.9097 | 0.9561 | 0.9620 | 0.9697   | 1     |
| RMSE  | B1 | 45.698 | 27.537 | 27.102 | 26.016 | 29.216 | 25.343 | 25.267   | 0     |
|       | B2 | 45.034 | 24.862 | 28.656 | 25.817 | 26.745 | 22.335 | 21.903   | 0     |
|       | B3 | 44.423 | 27.137 | 31.488 | 26.005 | 28.064 | 27.861 | 20.476   | 0     |
|       | B4 | 46.874 | 22.007 | 55.211 | 41.703 | 33.649 | 32.918 | 21.268   | 0     |
| SAM   |    | 15.593 | 8.3426 | 12.588 | 9.8584 | 9.4756 | 8.5456 | 7.0861   | 0     |
| Q4    |    | 0.8487 | 0.9627 | 0.8799 | 0.9398 | 0.9576 | 0.9569 | 0.9673   | 1     |
| RASE  |    | 30.587 | 16.693 | 31.492 | 20.730 | 23.251 | 18.181 | 17.147   | 0     |
| ERGAS |    | 5.7825 | 3.0859 | 5.1949 | 3.6972 | 3.7452 | 3.5916 | 2.8375   | 0     |
Table 3. The SCCavg comparison between the spatial detail of the average HRMI and the HRPI.

|        | IHS    | AWT    | BT     | DWT    | HPF    | HPM    | Proposed | Ideal |
|--------|--------|--------|--------|--------|--------|--------|----------|-------|
| SCCavg | 0.9960 | 0.9714 | 0.9505 | 0.7012 | 0.9714 | 0.8688 | 0.9809   | 1     |

Chen, S.; Zhang, R.; Su, H.; Tian, J.; Xia, J. Scaling-up Transformation of Multisensor Images with Multiple Resolutions. Sensors 2009, 9, 1370-1381. https://doi.org/10.3390/s90301370