Article

Comparison of Image Fusion Techniques Using Satellite Pour l’Observation de la Terre (SPOT) 6 Satellite Imagery

by
Paidamwoyo Mhangara
1,2,*,
Willard Mapurisa
1 and
Naledzani Mudau
1
1
South African National Space Agency, Innovation Hub, Pretoria 0087, Gauteng, South Africa
2
School of Geography, Archaeology and Environmental Studies, University of the Witwatersrand—Johannesburg, Wits 2050, Johannesburg, South Africa
*
Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(5), 1881; https://doi.org/10.3390/app10051881
Submission received: 27 January 2020 / Revised: 17 February 2020 / Accepted: 18 February 2020 / Published: 10 March 2020
(This article belongs to the Special Issue Advances in Image Processing, Analysis and Recognition Technology)

Featured Application

High-resolution pansharpened images are used for detailed land use and land cover mapping.

Abstract

Preservation of spectral and spatial information is an important requirement for most quantitative remote sensing applications. In this study, we use image quality metrics to evaluate the performance of several image fusion techniques and to assess the spectral and spatial quality of the resulting pansharpened images. We evaluated eleven pansharpening algorithms; the local mean and variance matching (LMVM) algorithm was the best in terms of spectral consistency and synthesis, followed by the ratio component substitution (RCS) algorithm. Whereas the LMVM and RCS image fusion techniques showed better results than the other pansharpening methods, it is pertinent to highlight that our study also showed the credibility of the other pansharpening algorithms in terms of spatial and spectral consistency, as shown by the high correlation coefficients achieved by all methods. We noted that the algorithms that ranked higher in terms of spectral consistency and synthesis were outperformed by competing algorithms in terms of spatial consistency. The study therefore concludes that the selection of an image fusion technique is driven by the requirements of the remote sensing application, and that a careful trade-off is necessary to account for the impact of scene radiometry, image sharpness, spatial and spectral consistency, and computational overhead.

1. Introduction

High spatial resolution satellite imagery is increasingly adopted globally to support spatial planning and monitoring of the built-up environment, as evidenced by the proliferation of high-resolution commercial satellite sensors such as Pleiades, Worldview 1–4, Satellite Pour l’Observation de la Terre (SPOT) 6 and 7, and Superview, and by the wide range of high-resolution services and products derived from these sensors. Most modern satellite sensors carry spectral bands of different spatial resolutions and spectral frequencies. In most instances, satellite sensors have narrow multispectral bands of relatively coarser spatial resolution and a wide panchromatic band with higher spatial resolution. To facilitate better image visualization, interpretation, feature extraction, and land cover classification, an image fusion technique called pansharpening is used to merge the visible multispectral bands (red, blue, and green) with the panchromatic band to produce color images with higher spatial resolution [1,2,3,4,5,6,7]. The panchromatic band has wide spectral coverage in the visible and near-infrared wavelength regions. Pansharpening aims to produce a synthesized multispectral image with an enhanced spatial resolution equivalent to that of the panchromatic band [8,9,10,11,12,13].
Remote sensing using high-resolution satellites is now accepted as an indispensable tool with the potential to support decision making in a wide range of social benefit areas, such as infrastructure and transportation management, sustainable urban development, disaster resilience, sustainable precision agriculture, and energy and water resources management. The demand for services and products that require users to discern features at high spatial and spectral precision has led most Earth observation service providers to develop geospatial products that use pansharpened satellite imagery, produced by fusing the high spatial resolution panchromatic band with the lower resolution multispectral bands [14,15,16].
Many studies have demonstrated the value of pansharpened imagery for discerning geometric features, cartography, geometric rectification, change detection, and improving land cover classification accuracies [17,18,19,20]. Many pansharpening techniques have been developed over time to enable users to fully exploit the spatial and spectral characteristics available on most satellite systems. Pansharpening techniques aim to simultaneously increase spatial resolution while preserving the spectral content of the multispectral bands [11,20,21,22].
Pansharpening methods are classified into three broad categories: component substitution (CS)-based methods, multiresolution analysis (MRA)-based methods, and variational optimization (VO)-based methods. A new generation of pansharpening methods based on deep learning has also been evolving in recent years. Component substitution methods rely on the application of a color decorrelation transform to convert the upsampled lower-resolution multispectral bands into a new color system that separates the spatial and spectral details; fusion occurs by partially or wholly substituting the component that contains the spatial geometry with the panchromatic band and reversing the transformation [23]. Most studies report that while component substitution methods produce pansharpened products of good spatial quality, the products suffer spectral distortions. Component substitution is considered more computationally efficient and robust in dealing with mismatches between the multispectral and panchromatic bands [10,23,24]. Typical examples of component substitution methods include the principal component analysis (PCA) transform, the Brovey transform, band-dependent spatial detail (BDSD), partial replacement adaptive CS (PRACS), Gram–Schmidt (GS) orthonormalization, and the intensity-hue-saturation (IHS) transform.

Multiresolution analysis-based methods fuse the high frequencies inherent in the panchromatic band into the upsampled multispectral components through a multiresolution decomposition [23]. In contrast to component substitution methods, pansharpened products generated from multiresolution analysis are considered to have superior spectral quality but are prone to spatial distortions, particularly when the multispectral bands are misaligned with the panchromatic band [9,10]. This is especially the case in multiresolution analysis techniques that apply transformations that are not shift-invariant. Examples of multiresolution methods include high-pass modulation (HPM), the Laplacian pyramid, the discrete wavelet transform, and the contourlet transform [23].

In recent years, a plethora of novel pansharpening methods have been developed to address the deficiencies of traditional image fusion algorithms. Most of the new pansharpening techniques are broadly clustered into generic categories such as component substitution (CS), multiresolution analysis (MRA), Bayesian, model-based optimization (MBO), sparse reconstruction (SR), and variational optimization (VO)-based methods [8,9,23,25].
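To make the component-substitution idea concrete, the following minimal Python/NumPy sketch uses the band mean as the intensity component and injects a statistically matched panchromatic band into every multispectral band. It is illustrative only: the function name and the mean/standard-deviation matching step are our assumptions, not an implementation from any of the packages evaluated in this study.

```python
import numpy as np

def component_substitution(ms_up, pan):
    """Generic component-substitution fusion (sketch).

    ms_up : float array (bands, H, W), multispectral bands upsampled to the PAN grid.
    pan   : float array (H, W), panchromatic band.
    """
    # The "intensity" component carrying the spatial geometry (simplest choice).
    intensity = ms_up.mean(axis=0)
    # Match PAN statistics to the intensity component to limit spectral shift.
    pan_matched = (pan - pan.mean()) * (intensity.std() / pan.std()) + intensity.mean()
    # Substitute: add the PAN-derived spatial detail to every band (broadcast).
    return ms_up + (pan_matched - intensity)
```

Real CS methods differ mainly in how the substituted component is computed (PCA, Gram–Schmidt, IHS) and in how the detail is weighted per band.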
The spectral, radiometric, and spatial integrity of pansharpened imagery is critical for several quantitative remote sensing applications. To ascertain the spectral and spatial quality of pansharpened images, many quality metrics have been developed. Preservation of spectral content is measured by statistical indicators such as the correlation coefficient (CC), root mean square error (RMSE), relative-shift mean (RM), universal image quality index, structure similarity index (SSIM), and spectral angle mapper (SAM). A few quantitative measures have also been developed to assess the spatial consistency of pansharpened imagery; these include the spatial correlation coefficient (SCC) and the spatial RMSE [10].
Pansharpened SPOT 6/7 and SPOT 5 imagery distributed by the South African National Space Agency (SANSA) is extensively used by government departments, municipalities, and public entities in South Africa to support spatial planning, crop monitoring, and natural resource monitoring. SANSA has distributed pansharpened orthobundles, an annual wall-to-wall national 2.5 m SPOT 5 mosaic from 2005 to 2012, and a biannual 1.5 m SPOT 6/7 mosaic from 2013 to 2018. While these pansharpened products have been successfully exploited by users, quality assessment of the products was limited to visual inspection. In most cases, users of pansharpened imagery require products that retain the spectral content of the multispectral image while enhancing its spatial detail. The objectives of this study are therefore to compare different pansharpening techniques using quantitative image quality metrics and to recommend the method with minimum spectral and spatial distortions for the operational production of the SPOT 6 mosaic.

2. Materials and Methods

The SPOT 6/7 multispectral and panchromatic dataset over Pretoria, South Africa was used for the study. SPOT 6 and SPOT 7 are identical sun-synchronous optical satellites launched on 12 September 2012 and 30 June 2014, respectively, that co-orbit at an altitude of 694 km, phased at 180 degrees (Airbus, Toulouse, France, 2018). The spectral configuration of the satellites consists of blue (450–520 nm), green (530–590 nm), red (625–695 nm), and near-infrared (760–890 nm) multispectral bands with a spatial resolution of 6 m, and a panchromatic band (450–745 nm) with a spatial resolution of 1.5 m and a dynamic range of 12 bits per pixel. SPOT 6/7 can acquire contiguous image segments of more than 120 km × 120 km or 60 km × 180 km in a single pass along one orbit.
To meet the operational needs of generating a national wall-to-wall mosaic of South Africa, we selected eleven established pansharpening methods for quantitative quality assessment: the Bayesian (BAY), Brovey transform (BRO), color normalized spectral (CNS) sharpening, Ehlers fusion (EHLERS), Gram–Schmidt (GRS), local mean and variance matching (LMVM), modified intensity hue saturation (MIHS), Pansharp algorithm (PANSHARP), principal component analysis (PCA), ratio component substitution (RCS), and wavelet resolution merge (WAVELET) techniques.
The PANSHARP algorithm available in the PCI Geomatica software is a statistics-based fusion technique aimed at maximizing spatial detail while minimizing color distortions [26]. It attempts to preserve the spectral characteristics of the data. Developed by Zhang [27], the algorithm uses the least-squares method to approximate the grey value relationship between the original multispectral, panchromatic, and fused images to achieve the best color representation.

The modified intensity hue saturation (MIHS) fusion technique merges high-resolution panchromatic data with lower resolution multispectral data to produce a pansharpened image that retains sharp spatial detail and a realistic resemblance to the original multispectral scene colors. This approach assesses the spectral overlap between each multispectral band and the high-resolution panchromatic band and weights the merge based on these relative wavelengths. The MIHS method was developed to address a shortcoming of the intensity-hue-saturation (IHS) transformation, where color distortions occur because of discrepancies in spectral characteristics between the panchromatic and multispectral bands. The IHS fusion transforms the RGB (red, green, and blue) space into the IHS color space, replaces the intensity band with the high-resolution panchromatic image, and then performs a reverse IHS transformation.

The Ehlers (EHLERS) fusion technique uses an IHS transform coupled with Fourier domain filtering and aims to maintain the spectral characteristics of the fused image [22]. This is achieved by using the high-resolution panchromatic image to sharpen the multispectral image while avoiding adding new grey level information to its spectral components, by first separating the color and spatial information. The spatial information content is then embedded as an adaptive enhancement to the images using a combination of color and Fourier transforms [22].

The Brovey transform (BRO) algorithm applies a ratio algorithm to combine the images: each multispectral band is first multiplied by the high-resolution panchromatic band, and each product is then divided by the sum of the multispectral bands. The method is known to preserve the relative spectral contributions of each pixel but substitutes scene brightness with that of the high-resolution panchromatic (PAN) image [28].

The principal component analysis (PCA) transform converts intercorrelated multispectral (MS) bands into a new set of uncorrelated components. The first component, which resembles a high-frequency band, is replaced by the high-resolution panchromatic band, and the panchromatic detail is fused into the low-resolution multispectral channels by performing a reverse PCA transform. A high-resolution fused image is generated after the reverse PCA transformation [29].

The color normalized spectral sharpening (CNS) algorithm implemented in the Environment for Visualizing Images (ENVI) software simultaneously sharpens any defined number of bands and retains the characteristics of the original bands in terms of data type and dynamic range. The higher resolution band is used to sharpen the lower resolution bands, and in the ENVI implementation the lower resolution multispectral bands are expected to fall in the same spectral range as the high-resolution panchromatic channel [30,31]. The multispectral bands are clustered into spectral segments defined by the spectral range of the high-resolution panchromatic sharpening band.
The pansharpened image is generated by multiplying each lower resolution multispectral band by the high-resolution panchromatic band and normalizing each product by the sum of the input spectral channels in its segment.
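The Brovey ratio described above reduces to a few lines of NumPy. This is a minimal sketch under the assumption that the multispectral bands have already been upsampled to the panchromatic grid; the function name and the epsilon guard against division by zero are ours.

```python
import numpy as np

def brovey(ms_up, pan, eps=1e-6):
    """Brovey transform (sketch).

    ms_up : float array (bands, H, W), MS bands upsampled to the PAN grid.
    pan   : float array (H, W), panchromatic band.
    Each band is multiplied by PAN and divided by the sum of the MS bands,
    so relative spectral contributions are kept while brightness follows PAN.
    """
    total = ms_up.sum(axis=0) + eps  # eps avoids division by zero
    return ms_up * pan / total
```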
The wavelet resolution merge (WAVELET) fusion approach sharpens the low-resolution multispectral bands using a matching high-resolution panchromatic band. The panchromatic band is first decomposed into a set of lower-resolution approximations with corresponding wavelet coefficients (spatial details) at each level; the high-resolution spatial detail is then infused into each multispectral band by performing a reverse wavelet transform on each MS band together with the corresponding wavelet coefficients. In a sense, wavelet-based processing is akin to Fourier analysis, except that Fourier analysis uses long continuous (sine and cosine) waves, whereas wavelet analysis applies short, discrete wavelets [32,33,34,35].

The Gram–Schmidt (GRS) pansharpening algorithm available in ENVI fuses the high-resolution panchromatic band with the lower resolution multispectral bands by first simulating a panchromatic band as the average of the multispectral bands. A Gram–Schmidt transformation is computed from the simulated panchromatic band and the multispectral bands, with the simulated panchromatic band used as the first band. The high spatial resolution panchromatic band is then substituted for the first Gram–Schmidt band before an inverse Gram–Schmidt transformation is applied to generate the pansharpened multispectral bands [36,37].
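A substitutive wavelet merge of the kind described above can be sketched with PyWavelets. This is one simple variant under our own assumptions, not the specific wavelet resolution merge implementation evaluated in this study: the MS band keeps its low-frequency approximation (spectral content), and the PAN detail coefficients (spatial content) are injected before reconstruction.

```python
import pywt  # PyWavelets

def wavelet_merge(ms_band_up, pan, wavelet="haar", level=2):
    """Substitutive wavelet merge for a single band (sketch).

    ms_band_up : float array (H, W), MS band upsampled to the PAN grid.
    pan        : float array (H, W), panchromatic band on the same grid.
    """
    c_ms = pywt.wavedec2(ms_band_up, wavelet, level=level)
    c_pan = pywt.wavedec2(pan, wavelet, level=level)
    # Keep the MS approximation, take the PAN detail coefficients, reconstruct.
    fused = [c_ms[0]] + list(c_pan[1:])
    return pywt.waverec2(fused, wavelet)
```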
The ratio component substitution (RCS) pansharpening algorithm implemented in Orfeo ToolBox (OTB) [38] fuses orthorectified panchromatic (PAN) and multispectral (XS) images by means of a low-pass sharpening filter, as shown below:

$$XS_{\mathrm{fused}} = XS \cdot \frac{PAN}{\mathrm{Filtered}(PAN)}$$

where Filtered(PAN) denotes the low-pass-filtered panchromatic band.
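Read as a per-pixel operation, the RCS ratio can be sketched as follows. The choice of a uniform (box) low-pass filter and its window size are our assumptions, since the smoothing filter is a user-supplied parameter.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def rcs(ms_up, pan, size=7, eps=1e-6):
    """Ratio component substitution (sketch).

    Each upsampled MS band is multiplied by the ratio of the PAN band to a
    low-pass (here, box-filtered) version of itself, so that only the
    high-frequency PAN content is injected into the spectral bands.
    """
    pan_lp = uniform_filter(pan, size=size) + eps  # low-pass PAN
    return ms_up * (pan / pan_lp)
```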
The Bayesian fusion (BAY) algorithm applies elementary calculus in the fusion of the panchromatic and multispectral images to generate a pansharpened image [38]. This fusion approach uses the statistical relationships between the spectral bands and the panchromatic band. Bayesian pansharpening uses three images: the panchromatic band, the multispectral image, and the multispectral image resampled to the spatial resolution of the panchromatic band. The panchromatic band is weighted relative to the multispectral bands. A thorough mathematical description of the Bayesian pansharpening algorithm implemented in Orfeo ToolBox is provided by [39]. The technique rests on the notion that the variables of interest, expressed as a vector Z, are not directly observable and are related to the observable variables Y through an error-like equation:
$$Y = g(Z) + E$$

where g(Z) is a set of functionals and E is a vector of random errors that is stochastically independent of Z [39].
The LMVM pansharpening algorithm implemented in OTB uses an LMVM filter that applies a normalization function at a local scale within the images to equate the local mean and variance values of the high spatial resolution panchromatic band with those of the lower resolution multispectral image [38,40]. The remaining small residual differences are then considered to arise from the high-resolution panchromatic band [40]. Al-Rubiey [40] further notes that this form of filtering improves the correlation between the pansharpened image and the original multispectral image. The LMVM filter is given below:
$$F_{i,j} = \frac{\left(H_{i,j} - \bar{H}_{i,j(w,h)}\right) \cdot s(L)_{i,j(w,h)}}{s(H)_{i,j(w,h)}} + \bar{L}_{i,j(w,h)}$$
where $F_{i,j}$ is the fused image and $H_{i,j}$ and $L_{i,j}$ denote the high and low spatial resolution images, respectively, at pixel coordinates $i,j$; $\bar{H}_{i,j(w,h)}$ and $\bar{L}_{i,j(w,h)}$ are the local means calculated inside a window of size $(w,h)$; and $s$ denotes the local standard deviation.
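The filter maps directly onto local-statistics primitives, as the following sketch shows; the local means and standard deviations are computed with box filters, and the window size and epsilon guard are our assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lmvm(pan, ms_band_up, w=7, eps=1e-6):
    """Local mean and variance matching (LMVM) sketch for one band.

    Normalizes the high-resolution band H (PAN) so that its local mean and
    standard deviation inside a w x w window match those of the upsampled
    low-resolution band L (MS), per the equation above.
    """
    mean_h = uniform_filter(pan, size=w)
    mean_l = uniform_filter(ms_band_up, size=w)
    # Local standard deviations via E[x^2] - (E[x])^2, clipped for stability.
    std_h = np.sqrt(np.maximum(uniform_filter(pan * pan, size=w) - mean_h**2, 0.0))
    std_l = np.sqrt(np.maximum(uniform_filter(ms_band_up * ms_band_up, size=w) - mean_l**2, 0.0))
    return (pan - mean_h) * std_l / (std_h + eps) + mean_l
```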

Spectral and Spatial Quality Evaluation of Pansharpened Images

Using Wald's three-property criteria, we tested the spectral synthesis and consistency properties of the pansharpened images using image quality indices. According to Wald [41], the first property stipulates that the pansharpened image, once degraded to its original resolution, should be as identical as possible to the original image. Secondly, the pansharpened image should be as identical as possible to the image that a matching sensor would observe at the highest resolution. Lastly, the multispectral pansharpened image should be as identical as possible to the multispectral set of images that the matching sensor would observe at the highest resolution. For assessment purposes, these three properties are condensed into two: consistency and synthesis. The Wald protocol for the quality assessment of pansharpened imagery stipulates that consistency can be tested by downsampling the merged image from the higher spatial resolution to its original spatial resolution. The nearest neighbor resampling method was used in the downsampling process to ensure minimum transformation of the pixel values. To validate the synthesis property, the original high spatial resolution panchromatic band and the lower spatial resolution multispectral bands were downsampled to lower resolutions.
To validate the synthesis property, we first degraded both the multispectral images and the panchromatic band by a factor of 4. This downsampling changed the spatial resolution of the multispectral images from 6 m to 24 m and that of the panchromatic band from 1.5 m to 6 m. The degraded multispectral and panchromatic images were then fused, and the resulting pansharpened image was compared to the original multispectral images for quality assessment. To verify the consistency property, we first pansharpened the native multispectral and panchromatic images to create a fused image, which we then downsampled by a factor of 4, changing the spatial resolution of the pansharpened image from 1.5 m to 6 m. We subsequently compared the downsampled pansharpened image to the original 6 m multispectral image. This process was applied to all eleven pansharpening techniques assessed in this paper.
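The two checks can be expressed compactly, as in the sketch below. Here `fuse(ms, pan)` stands for any hypothetical wrapper around the pansharpening methods above that internally upsamples the MS bands to the PAN grid; nearest-neighbour decimation is used for downsampling, as in this study, and the factor of 4 follows the 1.5 m/6 m SPOT 6 resolution ratio.

```python
def degrade(img, factor=4):
    """Nearest-neighbour decimation by an integer factor (last two axes)."""
    return img[..., ::factor, ::factor]

def synthesis_check(fuse, ms, pan, factor=4):
    """Wald synthesis test: fuse degraded inputs, compare to the original MS."""
    fused_low = fuse(degrade(ms, factor), degrade(pan, factor))
    return fused_low, ms  # pass this pair to the quality metrics

def consistency_check(fuse, ms, pan, factor=4):
    """Wald consistency test: fuse native inputs, degrade the result, compare to MS."""
    fused = fuse(ms, pan)
    return degrade(fused, factor), ms
```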
To quantitatively assess the spectral consistency of the pansharpened results, the following statistical measures were used: correlation coefficient (CC), Erreur Relative Globale Adimensionnelle de Synthèse (ERGAS), difference in variance (DIV), bias, root mean square error (RMSE), relative average spectral error (RASE), and universal image quality index (UIQI). The quality of the synthesis is an important property in pansharpening, and we used the ERGAS index, computed with the original multispectral and panchromatic bands as reference, to assess it. When applied in the spatial and spectral dimensions, ERGAS indicates the amount of spatial and spectral distortion, respectively. The spatial consistency of the pansharpened results was assessed using a spatial metric that computes the spatial correlation coefficient (SCC) between the high-frequency components of the fusion product and the original PAN. We used a 3 × 3 Laplacian edge detection convolution filter to filter the bands of the pansharpened images and the original panchromatic band before computing the correlation coefficients between them.
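The spatial check reduces to filtering both images with a Laplacian kernel and correlating the results. The 4-neighbour 3 × 3 kernel below is one common choice; the study does not specify which 3 × 3 Laplacian variant was used, so this is an assumption.

```python
import numpy as np
from scipy.ndimage import convolve

# A common 3 x 3 Laplacian edge-detection kernel (assumed variant).
LAPLACIAN_3X3 = np.array([[ 0.0, -1.0,  0.0],
                          [-1.0,  4.0, -1.0],
                          [ 0.0, -1.0,  0.0]])

def spatial_cc(fused_band, pan):
    """Spatial correlation coefficient (SCC): correlate the high-frequency
    (Laplacian-filtered) components of a fused band and the PAN band."""
    hf_band = convolve(fused_band, LAPLACIAN_3X3)
    hf_pan = convolve(pan, LAPLACIAN_3X3)
    return np.corrcoef(hf_band.ravel(), hf_pan.ravel())[0, 1]
```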
The CC is one of the most widely used statistical measures of the strength and direction of the linear relationship between two images [37] and is used to determine how much spectral content is preserved between them. The CC between each band of the reference image and the pansharpened image indicates the spectral integrity of the pansharpened image; the best fusion has a value close to +1. RMSE measures the similarity between each band of the original and fused images by measuring the change in radiance of the pixel values. It is a very good indicator of spectral quality when considered over homogeneous regions, and the best fusion has a value close to zero [42]. RASE characterizes the average performance of a method over the considered spectral bands; it is expressed as a percentage and decreases as quality increases. UIQI measures the difference in spectral information between each band of the merged and reference images to estimate the global spectral quality of the merged images. It models distortion as a combination of three factors: loss of correlation, luminance distortion, and contrast distortion; the best fusion has a value close to +1. ERGAS is indicative of the synthesizing quality of the pansharpened image. It is a global quality index that is sensitive to mean shifting and dynamic range change and measures the amount of spectral distortion in the image; the best fusion has a low value, generally less than the number of bands [43]. Bias reveals the error and spectral accuracy of the pansharpened image, with ideal values close to zero. The difference in variance (DIV) measures the quality of the image fusion by calculating the mean difference in variance between the pansharpened image and the original multispectral image; pansharpening quality is considered ideal when the values are close to zero.
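Sketches of the main reference-based metrics follow, assuming `ref` and `fused` are float arrays of shape (bands, H, W) on the same grid. The ERGAS ratio 1.5/6 reflects the SPOT 6 PAN/MS resolutions; the exact normalizations (e.g., for bias) vary slightly across the literature, so these formulas are one common convention rather than this study's exact code.

```python
import numpy as np

def cc(ref, fused):
    """Correlation coefficient (ideal: close to +1)."""
    return np.corrcoef(ref.ravel(), fused.ravel())[0, 1]

def rmse(ref, fused):
    """Root mean square error (ideal: close to 0)."""
    return np.sqrt(np.mean((ref - fused) ** 2))

def bias(ref, fused):
    """Relative difference of the means (ideal: close to 0)."""
    return abs(ref.mean() - fused.mean()) / ref.mean()

def ergas(ref, fused, ratio=1.5 / 6.0):
    """ERGAS = 100 * (h/l) * sqrt(mean_k (RMSE_k / mean_k)^2); lower is better."""
    terms = [(rmse(r, f) / r.mean()) ** 2 for r, f in zip(ref, fused)]
    return 100.0 * ratio * np.sqrt(np.mean(terms))

def uiqi(ref, fused):
    """Universal image quality index (Wang and Bovik), computed globally here;
    it is often averaged over sliding windows instead (ideal: close to +1)."""
    mr, mf = ref.mean(), fused.mean()
    vr, vf = ref.var(), fused.var()
    cov = ((ref - mr) * (fused - mf)).mean()
    return 4.0 * cov * mr * mf / ((vr + vf) * (mr**2 + mf**2))
```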

3. Results and Discussion

The results of this study are presented and discussed in this section. Spatial consistency, spectral consistency, and spectral synthesis are presented in Table 1, Table 2, Table 3, Table 4, Table 5, Table 6, Table 7, Table 8 and Table 9.

3.1. Spatial Consistency Quality Assessment

The spatial consistency results are highlighted in Table 1 below.
The results reflect the correlation between the Laplacian-filtered bands of the pansharpened image and the Laplacian-filtered panchromatic band. Values range from −1 to +1, and the ideal value is 1. The best spatial consistency was produced by the Bayesian pansharpening method, with Gram–Schmidt in second place and CNS in third. The wavelet pansharpening technique produced the worst spatial consistency results.

3.2. Spectral Consistency

The results for the spectral consistency evaluation are outlined in Table 2, Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8 below.
The CC results indicate the spectral similarity between the fused image and the original multispectral image. Values range from −1 to +1, and values close to 1 are ideal. While this metric is popular, one of its disadvantages is that it is insensitive to a constant gain and bias between two images and cannot distinguish subtle fusion artifacts. The results indicate that the LMVM method produced the best results, followed by the RCS method; the worst results were produced by the Brovey method.
The ERGAS results indicate the spectral distortions in the fused image and give an indication of its general quality at a global level. Lower values are more ideal, and values range from zero to infinity. The best results were produced by the LMVM pansharpening method, with RCS second; the Brovey method performed poorly.
The UIQI results show the spectral and spatial distortions in the fused image. Results of this similarity index point to correlation losses as well as distortions in luminance and contrast. Values range from −1 to 1, and the ideal value is 1. The LMVM pansharpening algorithm produced the best results, with the RCS method in second place; the worst results were produced by the Brovey method.
The RASE results show the average performance of each fusion algorithm over the spectral bands; ideal values are as small as possible. The LMVM fusion method produced the best results, followed by the RCS method; the PCA method produced the worst results.
The RMSE results reflect the average spectral distortion arising from the image fusion and are indicative of spectral quality in homogeneous zones of the image. RMSE values range from zero to infinity, and values close to zero are ideal. The best results were produced by the LMVM method, followed by the RCS method; the worst results were produced by the PCA and Brovey methods.
The DIV results indicate the fusion quality over the whole image by showing the difference in variance relative to the original image. The metric reveals a decrease or increase in information content as a result of the pansharpening process: a positive value indicates a decrease in information content, a negative value an increase, and the ideal value is close to 0. The LMVM pansharpening method produced the best results, the Brovey transform method ranked second, and PCA had the worst performance.
The bias results reflect the difference between the original image and the fused image; the ideal value is as small as possible. The LMVM method showed the best results, followed by the RCS method; the Brovey transform method showed the worst performance.

3.3. Spectral Synthesis

The spectral synthesis results are shown in Table 9 below.
The best result is indicated by the smallest value. The results show that the LMVM pansharpening method produced the best spectral synthesis, followed by the RCS method; the Brovey method produced the worst synthesis.
The LMVM method achieved the best spectral synthesis with an ERGAS of 3.921, followed by RCS (6.180), BAY (6.426), EHLERS (6.846), and GRS (7.069).
The LMVM algorithm produced the best pansharpening results in terms of spectral consistency and synthesis, as revealed by the CC, bias, DIV, ERGAS, UIQI, RASE, and RMSE results. In terms of spectral consistency, one of the properties tested under Wald's criteria, the results of this study also show that the LMVM pansharpening technique had an average correlation coefficient of 0.969 in the visible bands, the highest among the fusion algorithms tested. The performance of the LMVM algorithm is further shown by the fact that it had the lowest bias and DIV values, 0.004 and 0.614, respectively. The superiority of the LMVM algorithm is further attested by a very high UIQI value of 0.972; such a high UIQI value demonstrates high spectral consistency, as it accounts for loss of correlation, luminance distortion, and contrast distortion. The LMVM algorithm had the best RMSE, RASE, and ERGAS values of 11.674, 3.741, and 1.062, respectively, the lowest among the tested pansharpening methods. The pansharpened image maintains almost the same natural color as the original multispectral images and the same level of spatial detail as the original panchromatic image. The assessment also revealed that the LMVM algorithm had the best synthesis, as shown by an ERGAS of 3.921, the lowest in the analysis, indicating that the fused image had minimal distortions and is very similar to the reference image.
The RCS algorithm ranked second in the assessment and showed good results in terms of spectral consistency and synthesis. Its ability to retain spectral information is shown by a correlation coefficient of 0.855, a bias of 0.007, a DIV of 1.270, an ERGAS of 2.296, a UIQI of 0.856, a RASE of 8.239, and an RMSE of 26.009. The other pansharpening methods that performed comparatively well in terms of spectral consistency were the wavelet principal components, MIHS, and PANSHARP methods. The PCA and Brovey methods produced consistently poor results in terms of spectral consistency, as shown by the CC, bias, DIV, ERGAS, UIQI, RASE, and RMSE results.
Spectral synthesis is one of the properties that must be analyzed under Wald's three-property criteria. As pointed out earlier, our results indicate that the LMVM algorithm produced the best spectral synthesis, as shown by a very low ERGAS value of 3.921. Once again, the RCS algorithm ranked second, with an ERGAS value of 6.180. Good spectral synthesis results were also obtained by the BAY, EHLERS, GRS, PANSHARP, and MIHS fusion techniques. The spectral synthesis results also revealed the poor performance of the Brovey, CNS, and PCA methods, as shown by ERGAS values of 27.034, 25.122, and 19.177, respectively.
The third property evaluated in this study under Wald's three-property criteria relates to spatial consistency. The correlation coefficient results ranked the BAY, GRS, CNS, PANSHARP, RCS, PCA, and MIHS algorithms among the top-performing fusion techniques in terms of spatial consistency. While the Bayesian algorithm was the best in terms of spatial consistency, most of the algorithms showed high spatial correlation, with correlation coefficients above 0.8, the wavelet principal component method having the lowest value of 0.542. In contrast to the spectral consistency and synthesis results, the LMVM algorithm did not feature among the top-performing algorithms, although it still had a high correlation coefficient of 0.784. This result suggests a trade-off between spectral consistency and synthesis on the one hand and spatial consistency on the other.
While the LMVM and RCS pansharpening methods showed superior performance, the results of this study also demonstrate the credibility of the other fusion methods, such as the PANSHARP, MIHS, GRS, wavelet transform, Bayesian, and EHLERS pansharpening techniques, in terms of preserving spectral and spatial information. When selecting a pansharpening method for practical applications, a trade-off is required among factors such as the retention of scene radiometry, image sharpness, spatial and spectral consistency, and computational overhead.
Color distortion due to pansharpening can be attributed to the broadening of the panchromatic band into the near-infrared wavelength region in some modern sensors [26]. In the case of SPOT 6/7, the panchromatic band has a spectral range of 450 nm to 745 nm, clearly overshooting the bands in the visible spectrum and encroaching into the near-infrared region that starts at the nominal red edge of 700 nm. This spectral coverage spans the visible spectrum that contains the blue (450–520 nm), green (530–590 nm), and red (625–695 nm) spectral channels. The extension of the panchromatic band affects the grey values of the panchromatic channel, rendering some traditional pansharpening techniques less effective. The PANSHARP algorithm, for instance, is resilient to this challenge in that it is a statistics-based technique that uses the least-squares method to determine the best fit between the grey level values of the spectral bands being merged and adjusts the contribution of each band to the pansharpening result to minimize color distortions. Zhang [26,27] also highlights that the statistics-based approach in the PANSHARP algorithm lessens the influence of dataset discrepancies and automates the pansharpening process. This assertion is supported in this study, as shown by the superior performance of the LMVM, RCS, and Bayesian fusion techniques. The high performance of the LMVM image fusion algorithm has been confirmed in similar studies. Witharana et al. [45] reported that the LMVM algorithm produced some of the best fusion results when compared to a range of pansharpening algorithms evaluated using CC, RMSE, deviation index (DI), SD, and DIV metrics. Nikolakopoulos and Oikonomidis [44] compared fusion techniques and confirmed that the LMVM algorithm produced the best spectral consistency and synthesis when applied to Worldview-2 data. As in our case, other techniques that produced favorable spectral consistency and synthesis results included the PANSHARP, MIHS, EHLERS, GRS, and wavelet principal components techniques [44,45].
The shortcomings of traditional fusion techniques such as PCA, the Brovey transform, and wavelet fusion are well described by Zhang [26]. To improve the quality of traditional pansharpening methods, recommended propositions include stretching the principal components in PCA pansharpening to give them a spherical distribution; alternatively, the first principal component can be discarded. Modifications of traditional pansharpening techniques are necessary to deal with some of the limitations encountered with new satellite sensors. In general, the quality of the geometric and radiometric rectifications performed before pansharpening directly impacts the quality of the pansharpening results for all image fusion techniques.
Lastly, the spectral integrity of pansharpened images is an important requirement for most quantitative remote sensing applications. While this study used an array of reference-based metrics to assess the image quality of various pansharpened images in terms of spectral consistency, spatial consistency, and image synthesis, the information content within the images was not quantified. The use of image information metrics such as Shannon entropy and Boltzmann entropy [46,47,48,49,50] enables the quantification of the average amount of information in the fused images and could be used to effectively assess the efficacy of various pansharpening methods in terms of the ability to retain or enhance both spectral and spatial information.

4. Conclusions

Pansharpening is increasingly becoming an important procedure for meeting the ever-increasing demand for high-resolution satellite imagery. Preservation of spectral and spatial information is an important requirement for most quantitative remote sensing applications. In this study, image quality metrics were used to evaluate the performance of eleven image fusion techniques. Of the eleven pansharpening algorithms assessed, the LMVM algorithm was the best in terms of spectral consistency and synthesis, followed by the RCS algorithm. Although the LMVM and RCS image fusion techniques showed better results than the other pansharpening methods, it is pertinent to highlight that our study also showed the credibility of the other pansharpening algorithms in terms of spatial and spectral consistency, as shown by the high correlation coefficients achieved by all methods. The spatial and spectral quality of the pansharpening could therefore be improved by modifying the traditional pansharpening techniques to deal with the discrepancy that arises from the broadened panchromatic band extending into the near-infrared region. The use of statistics-based techniques such as the LMVM, PANSHARP, and Bayesian algorithms used in this study could address this shortcoming. In terms of spatial consistency, the BAY, GRS, CNS, PANSHARP, RCS, PCA, and MIHS algorithms performed very well, as shown by their high spatial correlation coefficients. The study noted that the algorithms that ranked higher in terms of spectral consistency were outperformed by other competing algorithms in terms of spatial consistency. We therefore conclude that the selection of an image fusion technique is driven by the requirements of the remote sensing application, and a careful trade-off is necessary to account for the impact of scene radiometry, image sharpness, spatial and spectral consistency, and computational overhead.

Author Contributions

Conceptualization, P.M.; methodology, P.M.; validation, P.M., W.M., and N.M.; formal analysis, P.M.; investigation, P.M., W.M., and N.M.; resources, P.M.; writing—original draft preparation, P.M.; writing—review and editing, P.M., W.M., and N.M.; project administration, P.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Pohl, C.; Van Genderen, J.L. Review Article Multisensor Image Fusion in Remote Sensing: Concepts, Methods and Applications. Int. J. Remote Sens. 1998, 19, 823–854. [Google Scholar] [CrossRef] [Green Version]
  2. Ranchin, T.; Wald, L. Fusion of high spatial and spectral resolution images: The ARSIS concept and its implementation. Photogramm. Eng. Remote Sens. 2000, 66, 49–61. [Google Scholar]
  3. Siddiqui, Y. The modified IHS method for fusing satellite imagery. In Proceedings of the ASPRS 2003 Annual Conference Proceedings, Anchorage, Alaska, 5–9 May 2003; pp. 5–9. [Google Scholar]
  4. Thomas, C.; Wald, L. Comparing distances for quality assessment of fused images. EARSEL Symp. 2007, 101–111. [Google Scholar]
  5. Thomas, C.; Wald, L. A MTF-Based Distance for the Assessment of Geometrical Quality of Fused Products. In Proceedings of the 9th IEEE International Conference on Information Fusion, Florence, Italy, 10–13 July 2006; pp. 1–7. [Google Scholar]
  6. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [Green Version]
  7. Yuhendra; Alimuddin, I.; Sumantyo, J.T.S.; Kuze, H. Assessment of pan-sharpening methods applied to image fusion of remotely sensed multi-band data. Int. J. Appl. Earth Obs. Geoinf. 2012, 18, 165–175. [Google Scholar] [CrossRef]
  8. Alparone, L.; Wald, L.; Chanussot, J.; Thomas, C.; Gamba, P.; Bruce, L.M. Comparison of pansharpening algorithms: Outcome of the 2006 GRS-S data-fusion contest. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3012–3021. [Google Scholar] [CrossRef] [Green Version]
  9. Amro, I.; Mateos, J.; Vega, M.; Molina, R.; Katsaggelos, A.K. A survey of classical methods and new trends in pansharpening of multispectral images. EURASIP J. Adv. Signal Process. 2011, 2011, 1–22. [Google Scholar] [CrossRef] [Green Version]
  10. Xu, Q.; Zhang, Y.; Li, B. Recent advances in pansharpening and key problems in applications. Int. J. Image Data Fusion 2014, 3, 175–195. [Google Scholar] [CrossRef]
  11. De Béthune, S.; Muller, F.; Donnay, J.-P. Fusion of multispectral and panchromatic images by local mean and variance matching filtering techniques. In Proceedings of the Second International Conference en Fusion of Earth Data, Sophia Antipolis, France, 28–30 January 1998; pp. 31–36. [Google Scholar]
  12. Chen, Y.; Zhang, G. A Pan-Sharpening Method Based on Evolutionary Optimization and IHS Transformation. Math. Probl. Eng. 2017, 2017. [Google Scholar] [CrossRef] [Green Version]
  13. Ehlersa, M.; Klonusa, S.; Åstrandb, P.J.; Rossoa, P. Multi-sensor image fusion for pansharpening in remote sensing. Int. J. Image Data Fusion 2010, 1, 25–45. [Google Scholar] [CrossRef]
  14. Ghassemian, H. A review of remote sensing image fusion methods. Inf. Fusion 2016, 32, 75–89. [Google Scholar] [CrossRef]
  15. Strait, M.; Rahmani, S.; Markurjev, D.; Advisor, F.; Wittman, T. Evaluation of Pan-Sharpening Methods. 2008. Available online: https://pdfs.semanticscholar.org/a67f/0678c147df99c275f2064ea4b0d78d290528.pdf (accessed on 10 March 2020).
  16. Wald, L.; Ranchin, T.; Mangolini, M. Fusion of satellite images of different spatial resolutions: Assessing the quality of resulting images. Photogramm. Eng. Remote Sens. 1997, 63, 691–699. [Google Scholar]
  17. Blaschke, T. Object based image analysis: A new paradigm in remote sensing? In Proceedings of the American Society for Photogrammetry and Remote Sensing Annual Conference, ASPRS 2013, Baltimore, MD, USA, 26–28 March 2013; Volume 24, pp. 36–43. [Google Scholar]
  18. Cheng, Y.; Pedersen, M.; Chen, G. Evaluation of image quality metrics for sharpness enhancement. Int. Symp. Image Signal Process. Anal. ISPA 2017, 18, 115–120. [Google Scholar]
  19. Drăguţ, L.; Csillik, O.; Eisank, C.; Tiede, D. Automated parameterisation for multi-scale image segmentation on multiple layers. ISPRS J. Photogramm. Remote Sens. 2014, 88, 119–127. [Google Scholar]
  20. Ghosh, A.; Joshi, P.K. Assessment of pan-sharpened very high-resolution WorldView-2 images. Int. J. Remote Sens. 2013, 34, 8336–8359. [Google Scholar] [CrossRef]
  21. Cakir, H.I.; Khorram, S. Pixel level fusion of panchromatic and multispectral images based on correspondence analysis. Photogramm. Eng. Remote Sens. 2008, 74, 183–192. [Google Scholar] [CrossRef]
  22. Ehlers, M. Multisensor image fusion techniques in remote sensing. ISPRS J. Photogramm. Remote Sens. 1991, 46, 19–30. [Google Scholar] [CrossRef] [Green Version]
  23. Duran, J.; Buades, A.; Coll, B.; Sbert, C.; Blanchet, G. A survey of pansharpening methods with a new band-decoupled variational model. ISPRS J. Photogramm. Remote Sens. 2017, 125, 78–105. [Google Scholar] [CrossRef] [Green Version]
  24. Li, H.; Jing, L.; Tang, Y. Assessment of pansharpening methods applied to worldview-2 imagery fusion. Sensors 2017, 17, 89. [Google Scholar] [CrossRef]
  25. Meng, X.; Shen, H.; Li, H.; Zhang, L.; Fu, R. Review of the pansharpening methods for remote sensing images based on the idea of meta-analysis: Practical discussion and challenges. Inf. Fusion 2019, 46, 102–113. [Google Scholar] [CrossRef]
  26. Zhang, Y. Understanding image fusion. Photogramm. Eng. Remote Sens. 2004, 70, 657–661. [Google Scholar]
  27. Zhang, Y. Problems in the fusion of commercial high-resolution satellite as well as Landsat 7 images and initial solutions. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2002, 34, 587–592. [Google Scholar]
  28. Kalpoma, K.A.; Kudoh, J. Image fusion processing for IKONOS 1-m color imagery. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3075–3086. [Google Scholar] [CrossRef]
  29. Zhang, Y.; He, B.; Li, X. A Pan-sharpening method appropriate to vegetation applications. Chin. Opt. Lett. 2009, 7, 781–783. [Google Scholar] [CrossRef]
  30. Vrabel, J.; Doraiswamy, P.; Stern, A. Application of hyperspectral imagery resolution improvement for site-specific farming. In Proceedings of the ASPRS 2002 Conference Proceedings, Washington, DC, USA, 19–26 April 2002. [Google Scholar]
  31. Vrabel, J.C.; Doraiswamy, P.; McMurtrey, J.E., III; Stern, A. Demonstration of the accuracy of improved-resolution hyperspectral imagery. In Proceedings of the Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery VIII, Anaheim, CA, USA, 11–13 April 2017; International Society for Optics and Photonics: Washington, DC, USA, 2002; Volume 4725, pp. 556–567. [Google Scholar]
  32. King, R.L.; Wang, J. A wavelet based algorithm for pan sharpening Landsat 7 imagery. In Proceedings of the IGARSS 2001. Scanning the Present and Resolving the Future. Proceedings. IEEE 2001 International Geoscience and Remote Sensing Symposium (Cat. No. 01CH37217), Sydney, Australia, 9–13 July 2001; Volume 2, pp. 849–851. [Google Scholar]
  33. Lemeshewsky, G.P. Multispectral image sharpening using a shift-invariant wavelet transform and adaptive processing of multiresolution edges. In Proceedings of the Visual Information Processing XI; International Society for Optics and Photonics: Washington, DC, USA, 2002; Volume 4736, pp. 189–200. [Google Scholar]
  34. Lemeshewsky, G.P. Multispectral multisensor image fusion using wavelet transforms. In Proceedings of the Visual Information Processing VIII; International Society for Optics and Photonics: Washington, DC, USA, 1999; Volume 3716, pp. 214–222. [Google Scholar]
  35. Strang, G.; Nguyen, T. Wavelets and Filter Banks; SIAM; Wellesley-Cambridge Press: Cambridge, MA, USA, 1996. [Google Scholar]
  36. Laben, C.A.; Brower, B.V. Process for Enhancing the Spatial Resolution of Multispectral Imagery Using Pan-Sharpening. U.S. Patent No. 6,011,875, 4 January 2000. [Google Scholar]
  37. Sarp, G. Spectral and spatial quality analysis of pan-sharpening algorithms: A case study in Istanbul. Eur. J. Remote Sens. 2014, 47, 19–28. [Google Scholar] [CrossRef] [Green Version]
  38. CNES OTB CookBook. 2018. Available online: https://www.orfeo-toolbox.org/tag/cookbook/ (accessed on 14 January 2020).
  39. Fasbender, D.; Radoux, J.; Bogaert, P. Bayesian data fusion for adaptable image pansharpening. IEEE Trans. Geosci. Remote Sens. 2008, 46, 1847–1857. [Google Scholar] [CrossRef]
  40. Al-Rubiey, I.J. Increase the Intelligibility of Multispectral Image Using Pan-Sharpening Techniques for Many Remotely Sensed Images. IBN Al-Haitham J. Pure Appl. Sci. 2017, 28, 29–41. [Google Scholar]
  41. Wald, L. Data Fusion: Definitions and Architectures: Fusion of Images of Different Spatial Resolutions; Presses des MINES: Paris, France, 2002. [Google Scholar]
  42. Zoran, L.F. Quality evaluation of multiresolution remote sensing images fusion. UPB Sci. Bull. Ser. C 2009, 71, 38–52. [Google Scholar]
  43. Du, Q.; Younan, N.H.; King, R.; Shah, V.P. On the performance evaluation of pan-sharpening techniques. IEEE Geosci. Remote Sens. Lett. 2007, 4, 518–522. [Google Scholar] [CrossRef]
  44. Nikolakopoulos, K.; Oikonomidis, D. Quality assessment of ten fusion techniques applied on worldview-2. Eur. J. Remote Sens. 2015, 48, 141–167. [Google Scholar] [CrossRef]
  45. Witharana, C.; Civco, D.L.; Meyer, T.H. Evaluation of pansharpening algorithms in support of earth observation based rapid-mapping workflows. Appl. Geogr. 2013, 37, 63–87. [Google Scholar] [CrossRef]
  46. Jagalingam, P.; Hegde, A.V. A Review of Quality Metrics for Fused Image. Aquat. Procedia 2015, 4, 133–142. [Google Scholar]
  47. Price, J.C. Comparison of the Information Content of Data from the LANDSAT-4 Thematic Mapper and the Multispectral Scanner. IEEE Trans. Geosci. Remote Sens. 1984, 22, 272–281. [Google Scholar] [CrossRef] [Green Version]
  48. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef] [Green Version]
  49. Verde, C.N.; Mallinis, G.; Tsakiri-Strati, M.; Georgiadis, C.; Patias, P. Assessment of radiometric resolution impact on remote sensing data classification accuracy. Remote Sens. 2018, 10, 1267. [Google Scholar] [CrossRef] [Green Version]
  50. Roberts, J.W.; van Aardt, J.A.; Ahmed, F.B. Assessment of image fusion procedures using entropy, image quality, and multispectral classification. J. Appl. Remote Sens. 2008, 2, 023522. [Google Scholar]
Table 1. Spatial consistency: correlation coefficient (CC) after Laplacian filtering. Abbreviations: Bayesian fusion (BAY); Brovey transform (BRO); color normalized spectral sharpening (CNS); Ehlers fusion (EHLERS); Gram–Schmidt (GRS); local mean and variance matching (LMVM); modified intensity hue saturation (MIHS); Pansharp algorithm (PANSHARP); principal component analysis (PCA); ratio component substitution (RCS); wavelet resolution merge (WAVELET).

BAND #   BAY    BRO    CNS    EHLERS  GRS    LMVM   MIHS   PANSHARP  PCA    RCS    WAVELET
1        0.854  0.761  0.853  0.801   0.853  0.785  0.826  0.845     0.846  0.844  0.542
2        0.854  0.834  0.852  0.801   0.853  0.784  0.825  0.845     0.845  0.845  0.542
3        0.854  0.847  0.851  0.801   0.853  0.783  0.824  0.845     0.837  0.845  0.543
AVERAGE  0.854  0.814  0.852  0.801   0.853  0.784  0.825  0.845     0.843  0.844  0.542
Table 2. Spectral consistency: correlation coefficient (CC).

BAND #   BAY    BRO    CNS    EHLERS  GRS    LMVM   MIHS   PANSHARP  PCA    RCS    WAVELET
1        0.655  0.499  0.690  0.714   0.586  0.970  0.721  0.620     0.579  0.913  0.645
2        0.587  0.407  0.560  0.584   0.498  0.969  0.585  0.556     0.533  0.866  0.870
3        0.540  0.546  0.452  0.410   0.509  0.968  0.415  0.512     0.543  0.786  0.907
AVERAGE  0.594  0.484  0.567  0.570   0.531  0.969  0.574  0.562     0.552  0.855  0.808
Table 3. Spectral consistency: Erreur Relative Globale Adimensionnelle de Synthèse (ERGAS).

BAND #   BAY    BRO     CNS     EHLERS  GRS    LMVM   MIHS   PANSHARP  PCA     RCS    WAVELET
1        7.788  27.518  25.830  6.591   7.322  1.332  5.638  6.808     41.299  2.361  12.158
2        6.737  26.775  24.689  10.511  6.204  1.025  5.448  5.744     34.792  2.288  3.406
3        5.089  25.571  23.650  9.370   4.454  0.736  5.239  4.328     24.757  2.220  2.614
AVERAGE  6.647  26.699  24.801  8.993   6.122  1.062  5.457  5.730     34.374  2.296  7.450
Table 4. Spectral consistency: universal image quality index (UIQI).

BAND #   BAY    BRO    CNS    EHLERS  GRS    LMVM   MIHS   PANSHARP  PCA    RCS    WAVELET
1        0.610  0.008  0.049  0.719   0.569  0.973  0.721  0.614     0.307  0.921  0.548
2        0.524  0.004  0.046  0.532   0.472  0.972  0.569  0.539     0.278  0.869  0.786
3        0.473  0.009  0.046  0.345   0.475  0.972  0.375  0.491     0.308  0.776  0.866
AVERAGE  0.536  0.007  0.047  0.532   0.505  0.972  0.555  0.548     0.298  0.856  0.733
Table 5. Spectral consistency: relative average spectral error (RASE).

BAND #   BAY     BRO     CNS     EHLERS  GRS     LMVM   MIHS    PANSHARP  PCA      RCS    WAVELET
1        28.002  96.989  91.462  22.001  26.309  4.681  19.458  24.632    156.307  8.240  52.354
2        24.484  97.909  90.393  38.481  22.539  3.587  19.456  20.789    132.507  8.229  11.533
3        18.752  96.851  89.575  34.914  16.500  2.599  19.432  15.842    94.849   8.214  9.395
AVERAGE  23.918  97.573  90.696  32.745  22.025  3.741  19.515  20.606    129.356  8.239  31.080
Table 6. Spectral consistency: root mean square error (RMSE).

BAND #   BAY     BRO      CNS      EHLERS   GRS     LMVM    MIHS    PANSHARP  PCA      RCS     WAVELET
1        86.162  304.456  285.781  72.925   81.012  14.740  62.375  75.319    456.934  26.118  134.517
2        77.854  309.398  285.298  121.456  71.688  11.840  62.955  66.371    402.044  26.444  39.359
3        58.370  293.318  271.289  107.487  51.095  8.442   60.092  49.640    283.980  25.465  29.982
AVERAGE  74.129  302.390  280.789  100.623  67.932  11.674  61.807  63.776    380.986  26.009  67.953
Table 7. Spectral consistency: difference in variance (DIV).

BAND #   BAY    BRO    CNS    EHLERS  GRS    LMVM   MIHS   PANSHARP  PCA     RCS    WAVELET
1        7.538  1.286  1.475  2.983   6.768  0.676  4.921  5.468     13.377  1.053  7.142
2        6.743  1.120  1.342  2.892   5.500  0.660  4.737  4.612     10.110  1.439  2.243
3        5.604  1.114  1.479  3.438   5.125  0.506  5.371  4.037     9.236   1.319  2.494
AVERAGE  6.629  1.173  1.432  3.104   5.798  0.614  5.010  4.706     10.908  1.270  3.959
Table 8. Spectral consistency: bias.

BAND #   BAY    BRO    CNS    EHLERS  GRS    LMVM   MIHS   PANSHARP  PCA    RCS    WAVELET
1        0.108  0.937  0.885  0.131   0.116  0.005  0.086  0.115     1.485  0.006  0.459
2        0.093  0.958  0.884  0.340   0.099  0.004  0.086  0.096     1.259  0.007  0.069
3        0.072  0.956  0.884  0.299   0.073  0.003  0.086  0.072     0.900  0.007  0.058
AVERAGE  0.091  0.950  0.884  0.257   0.096  0.004  0.086  0.094     1.215  0.007  0.195
Table 9. Spectral synthesis: ERGAS.

BAND #   BAY    BRO     CNS     EHLERS  GRS    LMVM   MIHS   PANSHARP  PCA     RCS    WAVELET
1        7.654  28.079  26.385  7.049   8.553  4.988  8.951  8.655     23.266  6.617  13.176
2        6.432  26.975  24.875  6.819   7.092  3.726  5.808  7.139     19.217  6.113  4.341
3        4.793  25.658  23.720  6.576   5.034  2.649  6.354  5.317     13.579  5.698  4.257
AVERAGE  6.426  27.034  25.122  6.846   7.069  3.921  7.202  7.196     19.177  6.180  8.398
