Article

Conditional Random Field-Guided Multi-Focus Image Fusion

by Odysseas Bouzos *, Ioannis Andreadis and Nikolaos Mitianoudis *
Department of Electrical and Computer Engineering, Democritus University of Thrace, 67100 Xanthi, Greece
* Authors to whom correspondence should be addressed.
J. Imaging 2022, 8(9), 240; https://doi.org/10.3390/jimaging8090240
Submission received: 23 July 2022 / Revised: 21 August 2022 / Accepted: 2 September 2022 / Published: 5 September 2022
(This article belongs to the Special Issue The Present and the Future of Imaging)

Abstract:
Multi-focus image fusion is of great importance in order to cope with the limited Depth-of-Field of optical lenses. Since input images contain noise, multi-focus image fusion methods that support denoising are important. Transform-domain methods have been applied to image fusion; however, they are likely to produce artifacts. In order to cope with these issues, we introduce the Conditional Random Field (CRF)-Guided fusion method. A novel Edge Aware Centering method is proposed and employed to extract the low and high frequencies of the input images. The Independent Component Analysis (ICA) transform is applied to the high-frequency components, and a CRF model is created from the low frequencies and the transform coefficients. The CRF model is solved efficiently with the α-expansion method. The estimated labels are used to guide the fusion of the low-frequency components and the transform coefficients. Inverse ICA is then applied to the fused transform coefficients. Finally, the fused image is the sum of the fused low frequency and the fused high frequency. CRF-Guided fusion does not introduce artifacts during fusion and supports image denoising during fusion by applying transform-domain coefficient shrinkage. Quantitative and qualitative evaluations demonstrate the superior performance of CRF-Guided fusion compared to state-of-the-art multi-focus image fusion methods.

1. Introduction

The limited Depth-of-Field of optical lenses allows only the parts of the scene that lie within a certain distance from the camera sensor to be captured well-focused, while the remaining parts of the scene stay out-of-focus or blurred. Multi-focus image fusion algorithms are thus of vital importance in order to cope with this limitation. Multi-focus image fusion methods merge multiple input images captured with different focus settings into a single image with extended Depth-of-Field. More precisely, the well-focused pixels of the input images are preserved in the fused image, while the out-of-focus pixels are discarded. Consequently, the fused image should have extended Depth-of-Field, and thus more information than any of the input images, and should not contain artifacts introduced during fusion.
The problem of multi-focus image fusion has been explored widely in the literature, and a number of multi-focus image fusion methods have been proposed lately. Liu et al. [1] classified multi-focus image fusion methods into four categories: spatial-domain methods, transform-domain methods, combined methods and deep learning methods. In spatial-domain methods, the fused image is estimated as a weighted average of the input images. Spatial-domain methods are further classified as block-based, region-based and pixel-based. In block-based methods, the image is decomposed into blocks of fixed size, and the activity level is estimated individually for each block.
However, since blocks are likely to contain both well-focused and out-of-focus pixels, block-based methods tend to produce blocking artifacts near the boundaries between well-focused and out-of-focus pixels, and the fused image has lower quality near these boundaries. Region-based methods use a whole region of irregular shape in order to estimate the saliency of the included pixels. Although region-based methods provide higher flexibility than block-based methods, a region may also simultaneously contain both well-focused and out-of-focus pixels. As a result, region-based methods also produce artifacts and yield lower fused image quality near the boundaries of well-focused and out-of-focus pixels. In order to overcome these issues, pixel-based methods have lately gained more popularity. In these methods, activity-level estimation is carried out at the pixel level. Pixel-based methods do not suffer from blocking artifacts and have better accuracy near the boundary of well-focused and out-of-focus pixels; however, they are likely to produce noisy weight maps, which also lead to fused images of lower quality. Popular spatial-domain multi-focus image fusion methods include: Quadtree [2], Boundary Finding [3], Dense SIFT [4], guided filtering [5], PCNN [6] and Image Matting [7]. Singh et al. [8] used the Arithmetic Optimization Algorithm (AOA) in order to estimate the weight maps for image fusion, which were refined with weighted least-squares (WLS) optimization; the fused image is extracted through pixel-wise weighted-average fusion. In [9], the fusion method FNMRA was presented, which used the modified naked mole-rat algorithm (mNMRA) in order to generate the weight maps, again refined with weighted least-squares optimization. Pixel-wise single-scale composition was used in order to create the fused image.
In transform-domain methods, a forward transform is first applied to the input images. A fusion rule is then applied in order to combine the transform coefficients. Finally, an inverse transform is applied to the fused coefficients in order to return the fused image to the spatial domain. An advantage of dictionary-based transform-domain methods is the support of image denoising during fusion, by applying shrinkage methods, such as [10], which can be used to remove the noisy transform-domain coefficients. An issue of transform-domain methods lies in the imperfect forward-backward transforms, which result in visible artifacts due to the Gibbs phenomenon. Since both the selection of the transform domain and the manual design of the fusion rule highly impact the quality of the fused image, a number of transform-domain multi-focus image fusion methods have been introduced. Typical transform-domain multi-focus image fusion methods include: ICA [11], ASR [12], CSR [13], NSCT [14], NSCT-SR [15], MWGF [16] and DCHWT [17]. Qin et al. [18] proposed a new image fusion method combining the discrete wavelet transform (DWT) and sparse representation (SR). Jagtap et al. [19] introduced information-preservation-based guided filtering in order to decompose the input images into base and detail images; low-rank representation was used in order to estimate the focus map and perform the fusion of the detail images. In [20], the authors used weight maps based on local contrast, and the fused image was estimated with multi-scale weighted-average fusion based on pyramid decomposition.
The methods in the combined category employ the merits of both spatial-domain and transform-domain methods, although each method uses a different combination of domains. Bouzos et al. [21] combined the advantages of the ICA domain and the spatial domain. Chai et al. [22] combined the advantages of multi-scale decomposition and the spatial domain. He et al. [23] combined the Meanshift algorithm and the NSCT domain. An issue with the aforementioned methods is that they do not support image denoising during fusion. Singh et al. [24] proposed the Discrete Wavelet Transform-bilateral filter (DWTBF) method, which combined the Discrete Wavelet Transform (DWT) and the bilateral filter. In [25], the authors combined a multi-resolution pyramid and the bilateral filter in order to predict the fused image.
Lately, deep learning-based methods have gained more popularity. According to the study in [26], deep learning-based methods are classified into decision-map-based methods and end-to-end methods. In decision-map-based methods, the deep learning networks predict a decision map with a classification-based architecture. Post-processing steps, including morphological operations, are usually employed to refine the decision map. The decision map is later used to guide the fusion of the input images, by selecting the respective pixels from the input images. Typical decision-map-based deep learning multi-focus image fusion methods include: CNNFusion [27], ECNN [28] and p-CNN [29]. On the other hand, end-to-end deep learning networks directly predict the fused image without the intermediate step of the decision map. Typical end-to-end deep learning networks for multi-focus image fusion include: IFCNN [30] and DenseFuse [31]. Ma et al. [32] introduced a multi-focus image fusion method based on an end-to-end multi-scale generative adversarial network (MsGAN). Wei et al. [33] combined the advantages of sparse representation and CNN networks in order to estimate the fusion weights for the multi-focus image fusion problem. Since the sensitivity of the aforementioned deep learning-based methods to noise was not studied, these methods are likely to be sensitive to noise. In addition, these deep learning-based multi-focus image fusion methods do not support image denoising during fusion.
In this manuscript, we introduce CRF-Guided fusion, a novel transform-domain method that uses a Conditional Random Field model in order to guide the fusion of the transform-domain ICA method. Due to various sources, input images are likely to contain noise; thus, multi-focus methods that are robust to noise and support denoising during fusion are of great importance. Since CRF-Guided fusion is a dictionary-based (ICA) method, it is robust to Gaussian noise and supports image denoising during fusion by applying the coefficient shrinkage method [10]. A novel Edge Aware Centering (EAC) method is also introduced and used instead of the typical centering method, alleviating artifacts caused by the centering procedure. The combination of EAC and the proposed CRF-Guided fusion method produces fused images of high quality, without introducing artifacts, for both clean images and images that contain Gaussian noise, while also supporting denoising during fusion.
The main contributions of this manuscript and improvements over our previous method [21] are:
  • the development of the novel EAC method, which is used instead of the typical centering method and preserves the strong edges of the input images.
  • the design of a novel framework, based on a CRF model, that is suitable for transform-domain image fusion.
  • the design of a novel transform-domain fusion method that produces fused images of high visual quality, preserves, via CRF optimization, the boundary between well-focused and out-of-focus pixels, and does not introduce artifacts during fusion.
  • the introduction of a novel transform-domain fusion rule, based on the labels extracted from the CRF model, that produces fused images of higher image quality without the transform-domain artifacts.
  • the robustness of the proposed method against Gaussian noise and the support of denoising during fusion, by applying the transform-domain coefficient shrinkage method [10].

2. Proposed Method Description

The proposed framework of CRF-Guided fusion is summarized in Figure 1. An outline of the method is now provided. Firstly, Edge Aware Centering is applied to the input images in order to extract the low- and high-frequency components. The forward ICA transform is then applied to the high frequencies of the input images. Then, the low frequencies and the ICA coefficients are used to compute the unary U and smoothness V potentials and thus construct the CRF model. Consequently, the CRF model is solved efficiently with the α-expansion method based on GraphCuts [34]. The predicted labels are then employed to fuse the low frequencies, leading to the fused low-frequency image. In addition, they are also used to guide the fusion of the transform-domain ICA coefficients. Lastly, the inverse ICA transform is applied to the fused transform coefficients in order to return the fused high-frequency component. Finally, the fused image F is estimated by the addition of the fused low-frequency and the fused high-frequency components. More details of the aforementioned steps of the proposed framework are given in the following subsections. Figure 2 includes two source input images for multi-focus image fusion that will be used throughout the steps of CRF-Guided fusion.
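As a roadmap for the subsections that follow, the Python sketch below outlines the pipeline under simplifying assumptions. The helper callables (edge_aware_centering, forward_ica, inverse_ica, solve_crf) are hypothetical placeholders for the steps detailed below, and per-pixel storage of the ICA coefficients is assumed for brevity.

```python
import numpy as np

def crf_guided_fusion(x1, x2, edge_aware_centering, forward_ica, inverse_ica, solve_crf):
    """High-level sketch of the CRF-Guided fusion pipeline (helpers are assumed)."""
    # 1. Edge Aware Centering: split each input into low- and high-frequency parts.
    low1, high1 = edge_aware_centering(x1)
    low2, high2 = edge_aware_centering(x2)

    # 2. Forward ICA transform of the high-frequency components.
    c1, c2 = forward_ica(high1), forward_ica(high2)

    # 3. Build the CRF from the low frequencies and ICA coefficients and solve it
    #    (the paper uses alpha-expansion based on GraphCuts [34]).
    labels = solve_crf(low1, low2, c1, c2)                # binary map: 0 -> x1, 1 -> x2

    # 4. Label-guided fusion of the low frequencies and the transform coefficients.
    low_fused = np.where(labels == 0, low1, low2)
    c_fused = np.where(labels[..., None] == 0, c1, c2)    # per-pixel coefficients assumed

    # 5. Inverse ICA and final addition of the two fused components.
    high_fused = inverse_ica(c_fused)
    return low_fused + high_fused
```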

2.1. Edge Aware Centering

In this section, we introduce the Edge Aware Centering (EAC) method, which is used instead of the typical centering method, in order to estimate the low frequency of the multi-focus input images. EAC consists of a spatially varying Gaussian filter that preserves the strong edges of the input images. More precisely,
$$w_{i,j} = \exp\!\left( -\frac{\left( x_{i,j} - \mu_{i,j} \right)^2}{2 \left\langle \left( x_{m,n} - \mu_{i,j} \right)^2 \right\rangle} \right)$$
where $w_{i,j}$ is the weight at spatial location $(i,j)$, $\mu_{i,j}$ is the mean value of a $7 \times 7$ block centered at $(i,j)$, $x$ is the input image, $m \in [i-3, i+3]$ and $n \in [j-3, j+3]$. In addition, the $\langle \cdot \rangle$ operator denotes averaging over all $(m,n)$ values. Finally, the filtered image $f$ at spatial location $(i,j)$ is estimated as:
$$f_{i,j} = \frac{\sum_{m,n} w_{m,n}\, x_{m,n}}{\sum_{m,n} w_{m,n}}$$
EAC is applied to both input images in order to estimate the low frequency of each image. Figure 3 includes the low-frequency images, as computed by applying the proposed EAC to the input images of Figure 2. It is evident that the EAC preserves accurately the strong edges of the input images.
By subtracting the low-frequency images from the input images, we extract the high-frequency images, as demonstrated in Figure 4. The forward ICA transform is then applied to the high-frequency images in order to obtain the transform-domain coefficients. For more information on the estimation of the ICA transform and its application to image fusion, please refer to [11].
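A minimal NumPy sketch of EAC is given below. It assumes that the per-pixel weights of Eq. (1) are computed with each pixel's own 7 × 7 local mean and variance and are then averaged over the same window as in Eq. (2); this window indexing is our interpretation of the equations above, not code from the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def edge_aware_centering(x, size=7, eps=1e-8):
    """Sketch of Edge Aware Centering: a spatially varying Gaussian-weighted local mean.

    Pixels that deviate strongly from their 7x7 local mean (i.e. strong edges)
    receive small weights, so the low-frequency estimate does not smear across edges.
    """
    x = x.astype(np.float64)
    mu = uniform_filter(x, size=size)                   # local 7x7 mean
    var = uniform_filter(x ** 2, size=size) - mu ** 2   # local estimate of <(x - mu)^2>
    w = np.exp(-(x - mu) ** 2 / (2.0 * var + eps))      # per-pixel weights, Eq. (1)

    # Weighted local average over the 7x7 window, Eq. (2), and the residual.
    low = uniform_filter(w * x, size=size) / (uniform_filter(w, size=size) + eps)
    high = x - low
    return low, high
```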

2.2. Energy Minimization

In order to model the multi-focus image fusion problem and solve it efficiently, we construct an energy minimization formulation. Since graph-cut solvers can reach a global or close-to-global optimum, we formulate the energy minimization problem of multi-focus image fusion as a graph-cut problem. More precisely, we introduce a Conditional Random Field (CRF) formulation that describes our multi-focus image fusion problem and is solved efficiently with the α-expansion inference method, reaching a global or close-to-global optimum. The solution of the proposed energy minimization yields the optimal labels of the decision map that is used to guide the fusion of the low frequencies and the transform coefficients.
In order to guide the fusion of the low frequency and the transform coefficients, we formulate the Conditional Random Field (CRF) equation, as follows:
$$\hat{\ell} = \arg\min_{\ell} \sum_{i=1}^{N} U_i(\ell_i) + \sum_{(m,n) \in C} V_{m,n}(\ell_m, \ell_n)$$
where $\ell$ denotes the estimated labels, $U$ is the unary potential function, $V$ is the pairwise potential function, $i$ indexes spatial locations, and $(m,n)$ are adjacent pixels in $C$, the N8 neighborhood. The energy minimization problem is optimized using the α-expansion method, based on GraphCuts [34].
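To make the energy in (3) concrete, the sketch below approximately minimizes a binary Potts-style version of it with a simple iterated-conditional-modes (ICM) sweep. This is only a stand-in for the α-expansion/GraphCuts solver [34] used in the paper, and a 4-neighborhood with wrap-around borders replaces the full N8 neighborhood for brevity.

```python
import numpy as np

def minimize_binary_crf(U, V, n_iters=20):
    """Approximately minimize E(l) = sum_i U[i, l_i] + sum_{(p,q)} V[p,q] * [l_p != l_q].

    U : (H, W, 2) unary costs per label.
    V : (H, W)    pairwise (smoothness) weight attached to each pixel's edges.
    """
    labels = np.argmin(U, axis=-1)               # initialization from the unary term alone
    shifts = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # 4-neighborhood (N8 in the paper)

    for _ in range(n_iters):
        cost = np.empty_like(U)
        for cand in (0, 1):
            c = U[..., cand].copy()
            for dy, dx in shifts:
                nb = np.roll(labels, shift=(dy, dx), axis=(0, 1))
                c += V * (nb != cand)            # Potts penalty for disagreeing neighbors
            cost[..., cand] = c
        new_labels = np.argmin(cost, axis=-1)
        if np.array_equal(new_labels, labels):   # converged
            break
        labels = new_labels
    return labels
```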

2.3. Inference α -Expansion Method

In the α-expansion method, the optimization problem is divided into a sequence of binary-valued minimization problems. Given a current label configuration $h$ and a fixed label $\alpha \in \mathcal{U}$, with $\mathcal{U}$ being the set of all label values, in an α-expansion move each pixel $i$ makes a binary decision: either retain its old label or change it to label $\alpha$. The expansion-move algorithm starts with an initial set of labels $h_0$ and then, in some order, computes the optimal α-expansion moves for the labels $\alpha$. Only the moves that decrease the objective function are accepted.

2.4. Unary Potential Estimation

Let us assume that $x_1$, $x_2$ are the input images, $P_L$ is the probability derived from the low frequencies, $P_H$ the probability derived from the high frequencies, $P$ the combined probability of the input images, and $U$ the unary potential function. Figure 5 depicts the estimation of the unary potential. More precisely, EAC is first applied to the input images to extract the low and high frequencies. The second Laplacian is applied to both low-frequency components, and the probability of the low frequency $P_L$ is estimated by:
$$P_L(n) = \begin{cases} \dfrac{S_0}{S_0 + S_1}, & n = 0 \\[4pt] \dfrac{S_1}{S_0 + S_1}, & n = 1 \end{cases}$$
where $S_0$, $S_1$ are the second Laplacians of the low frequencies of the first and the second image, respectively.
The probability of the high frequency $P_H$ is extracted from the ICA coefficients and is estimated as follows:
$$P_H(n) = \begin{cases} \dfrac{C_0}{C_0 + C_1}, & n = 0 \\[4pt] \dfrac{C_1}{C_0 + C_1}, & n = 1 \end{cases}$$
where $C_0$ is the L2 norm of the ICA coefficients of the first image and $C_1$ is the L2 norm of the ICA coefficients of the second image. In order to determine the probability that each input image should contribute to spatial location $n$ of the guidance map, we compute the combined probability of the high and low frequencies for each image. We call this the probability of the input image that corresponds to label $\ell$. Thus, the probability $P(n)$ of each input image is estimated as follows:
$$P(n) = P_H(n)\, P_L(n)$$
Finally, the unary potential function $U$ is estimated as the negative log-likelihood of the predicted probabilities:
$$U(n) = -\log P(n)$$
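A sketch of the unary-term computation follows. It assumes the ICA coefficients are available as a per-pixel coefficient vector (H × W × K), interprets "the second Laplacian" as a second-order Laplacian activity measure whose absolute value is taken so that the ratios behave as probabilities, and adds a small constant to avoid division by zero and log of zero; these are our assumptions, not details from the paper.

```python
import numpy as np
from scipy.ndimage import laplace

def unary_potential(low0, low1, ica0, ica1, eps=1e-12):
    """Sketch of the unary term: U(n) = -log( P_H(n) * P_L(n) )."""
    # Low-frequency activity via the (second-order) Laplacian of each low-frequency image.
    S0 = np.abs(laplace(low0.astype(np.float64)))
    S1 = np.abs(laplace(low1.astype(np.float64)))
    PL = np.stack([S0, S1], axis=-1) / (S0 + S1 + eps)[..., None]

    # High-frequency activity via the L2 norm of the ICA coefficient vectors.
    C0 = np.linalg.norm(ica0, axis=-1)
    C1 = np.linalg.norm(ica1, axis=-1)
    PH = np.stack([C0, C1], axis=-1) / (C0 + C1 + eps)[..., None]

    P = PH * PL                     # combined probability of each label
    return -np.log(P + eps)         # unary cost, shape (H, W, 2)
```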

2.5. Smoothness Term

The smoothness potential function V is estimated from the low-frequency images, as follows:
$$V_{p,q} = \frac{\left| l_0(p) - l_1(q) \right| + \left| l_1(p) - l_0(q) \right|}{\left| l_0(p) - l_0(q) \right| + \left| l_1(p) - l_1(q) \right|}$$
where $p$, $q$ are adjacent pixels in the N8 neighborhood and $l_0$, $l_1$ are the first and second low-frequency images, respectively. Finally, the labels of the CRF model in (3) are estimated efficiently using the α-expansion method [34].
Figure 6 shows the labels estimated from the direct minimization of the unary term U and the labels estimated from the CRF minimization in (3). The predicted labels $\ell$ are then used to fuse the low frequencies of the input images:
$$L_F(i) = (1 - \ell_i)\, L_0(i) + \ell_i\, L_1(i)$$
where $L_F$ is the fused low-frequency image, $i$ is the spatial location, $\ell_i$ is the estimated label at location $i$, $L_0$ is the low frequency of the first image and $L_1$ is the low frequency of the second image.
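Since the labels are binary, this fusion step amounts to a per-pixel selection; a minimal sketch:

```python
import numpy as np

def fuse_low_frequency(low0, low1, labels):
    """Label-guided low-frequency fusion: L_F(i) = (1 - l_i) * L_0(i) + l_i * L_1(i)."""
    l = labels.astype(low0.dtype)
    return (1.0 - l) * low0 + l * low1
```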

2.6. Transform-Domain CRF Fusion Rule

A sliding window of size 7 × 7 is applied to the predicted decision map. The transform coefficients that correspond to each 7 × 7 block are then fused according to the label of the central pixel of the block, by selecting the respective coefficients from the input image that corresponds to that label. Inverse ICA is then applied to the fused transform coefficients in order to return the fused high frequency. Figure 7 depicts the fused low-frequency and fused high-frequency components.
Finally, the fused image is estimated by the addition of the fused low- and high-frequency components. Figure 8 demonstrates the final fused image.
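A sketch of the fusion rule and the final reconstruction is shown below. It assumes the ICA coefficients are stored per central pixel of each 7 × 7 window (H × W × K), so the rule reduces to selecting the coefficient vector indicated by the label of that central pixel; inverse_ica is a hypothetical helper for the inverse transform.

```python
import numpy as np

def fuse_ica_coefficients(c0, c1, labels):
    """Select, for every window position, the coefficient vector of the image
    indicated by the label of the window's central pixel."""
    sel = labels[..., None].astype(bool)        # True -> take the coefficients of image 1
    return np.where(sel, c1, c0)

def reconstruct_fused_image(low_fused, c_fused, inverse_ica):
    """Final step: inverse ICA of the fused coefficients plus the fused low frequency."""
    high_fused = inverse_ica(c_fused)           # hypothetical inverse-transform helper
    return low_fused + high_fused
```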

3. Fusion and Denoising

A major advantage of the proposed CRF-Guided fusion is the robustness against Gaussian noise and the support of denoising during fusion. In the case of Gaussian noise, the coefficient shrinkage method [10] is applied to the transform coefficients of both input images. More precisely,
$$C(k) = 0, \quad \text{if } \left| C(k) \right| < 1.95\, \sigma_n$$
where $C(k)$ is the k-th transform coefficient in the ICA domain and $\sigma_n$ is the standard deviation of the noise, which is estimated from areas of the image with low activity. Low-activity areas contain no strong edges and therefore contain mostly noise, so they can be used to estimate the noise standard deviation $\sigma_n$. The denoised transform coefficients are then employed to estimate $P_H$ for both input images. Consequently, guided fusion from the CRF labels is performed on the denoised transform coefficients. Then, the inverse ICA transform is used to return the denoised high-frequency image. Lastly, the final denoised fused image is formed by the addition of the denoised high-frequency and the fused low-frequency images.
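A sketch of the shrinkage step follows. The hard threshold 1.95 σ_n is taken from the equation above, while the estimation of σ_n from low-activity coefficient blocks is illustrated with an arbitrary 20% energy quantile; that quantile is our assumption, not a value from the paper.

```python
import numpy as np

def estimate_noise_std(coeffs, activity_quantile=0.2):
    """Estimate sigma_n from low-activity blocks (small coefficient energy, no strong
    edges), which are assumed to contain mostly noise."""
    energy = np.linalg.norm(coeffs, axis=-1)                     # per-block L2 energy
    low_activity = energy <= np.quantile(energy, activity_quantile)
    return float(np.std(coeffs[low_activity]))

def shrink_coefficients(coeffs, sigma_n):
    """Hard thresholding: zero every coefficient with |C(k)| < 1.95 * sigma_n."""
    out = coeffs.copy()
    out[np.abs(out) < 1.95 * sigma_n] = 0.0
    return out
```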
Figure 9 includes the input images corrupted with Gaussian noise $N(0, \sigma^2)$, $\sigma = 5$, and the denoised fused image F. The fused image F is successfully denoised during fusion, as demonstrated in Figure 9c.
Figure 10 includes the input images corrupted with Gaussian noise $N(0, \sigma^2)$, $\sigma = 10$, and the denoised fused image F. The proposed CRF-Guided fusion framework successfully produces the denoised fused image of Figure 10c, with denoising performed during fusion.

4. Experimental Results

The proposed CRF-Guided fusion method is compared to 13 state-of-the-art image fusion methods on two public datasets: the Lytro dataset [35], which consists of 20 color input image pairs, and the grayscale dataset [3], which consists of 17 grayscale input image pairs. The compared state-of-the-art methods are: GBM [36], NSCT [14], ICA [11], DCHWT [17], ASR [12], IFCNN [30], DenseFuse [31], acof [37], CFL [38], ConvCFL [39], DTNP [40], MLCF [41] and Joint [42]. Both quantitative and qualitative results are included in order to evaluate the performance of CRF-Guided fusion and the compared multi-focus image fusion methods.

4.1. Quantitative Evaluation

In [43,44], Singh et al. reviewed multiple image fusion algorithms along with image fusion performance metrics. In order to assess the quality of the fused images of the compared multi-focus image fusion methods, eight metrics are used. More precisely, the metrics used are: Mutual Information ($MI$) [45], $Q^{AB/F}$ [46], $Q_G$ [47], $Q_Y$ [48], $C_B$ [49], $SSIM$ [50], $NIQE$ [51] and Entropy.

4.1.1. Mutual Information—MI

Mutual Information (MI) is an information-theoretic metric that objectively measures the mutual dependence of two random variables. For two discrete random variables U and V, $MI$ is defined as follows:
$$MI(U;V) = \sum_{v \in V} \sum_{u \in U} p(u,v) \log_2 \frac{p(u,v)}{p(u)\, p(v)}$$
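As a worked example, MI can be computed from the joint gray-level histogram of two 8-bit images; the fusion score reported in the tables is typically the sum MI(A, F) + MI(B, F), i.e. the information each source shares with the fused image F.

```python
import numpy as np

def mutual_information(a, b, bins=256):
    """MI of two 8-bit images from their joint gray-level histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins, range=[[0, 256], [0, 256]])
    p_uv = joint / joint.sum()
    p_u = p_uv.sum(axis=1, keepdims=True)       # marginal of the first image
    p_v = p_uv.sum(axis=0, keepdims=True)       # marginal of the second image
    nz = p_uv > 0                               # skip empty bins so log2 is defined
    return float(np.sum(p_uv[nz] * np.log2(p_uv[nz] / (p_u @ p_v)[nz])))

def mi_fusion_metric(a, b, f):
    """Fusion MI: information the fused image F shares with both sources."""
    return mutual_information(a, f) + mutual_information(b, f)
```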

4.1.2. Yang’s Metric Qy

Yang et al. [48] proposed the image structural similarity-based metric Q Y . For input images A , B and fused image F, it is defined as follows:
$$Q_Y = \begin{cases} \lambda_w\, SSIM(A, F \mid w) + (1 - \lambda_w)\, SSIM(B, F \mid w), & SSIM(A, B \mid w) \geq 0.75 \\ \max\left\{ SSIM(A, F \mid w),\, SSIM(B, F \mid w) \right\}, & SSIM(A, B \mid w) < 0.75 \end{cases}$$
$$\lambda_w = \frac{s(A \mid w)}{s(A \mid w) + s(B \mid w)}$$
where $s(A \mid w)$ is a local salience measure of image A within a window w. A higher value of $Q_Y$ indicates better fused image quality and higher structural similarity between the fused image and the input images.

4.1.3. Chen-Blum Metric—$C_B$

The Chen-Blum metric $C_B$ [49] is a human-perception-inspired fusion metric that involves the following five steps:
  • Contrast sensitivity filtering: the filtered image is $\tilde{I}_A(m,n) = I_A(m,n)\, S(r)$, where $S(r)$ is the CSF filter in polar form and $r = \sqrt{m^2 + n^2}$.
  • Local contrast computation:
    $$C(i,j) = \frac{\phi_k(i,j) \ast I(i,j)}{\phi_{k+1}(i,j) \ast I(i,j)} - 1$$
    $$\phi_k(x,y) = \frac{1}{2 \pi \sigma_k^2}\, e^{-\frac{x^2 + y^2}{2 \sigma_k^2}}$$
    where $\sigma_k = 2$.
  • Contrast preservation calculation: for input image $I_A$, the masked contrast map is estimated as:
    $$C'_A = \frac{t\, (C_A)^p}{h\, (C_A)^q + Z}$$
    where $t, h, p, q, Z$ are real scalar parameters that determine the shape of the nonlinearity of the masking function [49].
  • Generation of the saliency map: the saliency map for image $I_A$ is:
    $$\lambda_A = \frac{C_A^2}{C_A^2 + C_B^2}$$
    The value of information preservation is:
    $$Q^{AF} = \begin{cases} \dfrac{C_A}{C_F}, & \text{if } C_A < C_F \\[4pt] \dfrac{C_F}{C_A}, & \text{otherwise} \end{cases}$$
  • The global quality map is defined as:
    $$Q_{GQM}(i,j) = \lambda_A(i,j)\, Q^{AF}(i,j) + \lambda_B(i,j)\, Q^{BF}(i,j)$$
    The value of the metric $C_B$ is the average of the global quality map:
    $$C_B = \operatorname{mean}_{i,j}\, Q_{GQM}(i,j)$$

4.1.4. Gradient-Based Metrics—$Q_G$, $Q^{AB/F}$

Xydeas et al. [47] proposed a metric that measures the amount of edge information transferred from the source images to the fused image. $Q_G$ is a gradient-based metric. Firstly, a Sobel operator is applied to input image A in order to extract the edge strength $g_A(i,j)$ and orientation $\alpha_A(i,j)$:
$$g_A(i,j) = \sqrt{s_A^x(i,j)^2 + s_A^y(i,j)^2}$$
$$\alpha_A(i,j) = \tan^{-1}\!\left( \frac{s_A^y(i,j)}{s_A^x(i,j)} \right)$$
where $s_A^x$, $s_A^y$ are the outputs of convolving image A with the horizontal and vertical Sobel templates, respectively. The relative edge strength between input image A and fused image F is:
$$G^{AF}(i,j) = \begin{cases} \dfrac{g_F(i,j)}{g_A(i,j)}, & \text{if } g_A(i,j) > g_F(i,j) \\[4pt] \dfrac{g_A(i,j)}{g_F(i,j)}, & \text{otherwise} \end{cases}$$
The relative orientation values $\Delta^{AF}$ between input image A and fused image F are:
$$\Delta^{AF}(i,j) = 1 - \frac{\left| \alpha_A(i,j) - \alpha_F(i,j) \right|}{\pi / 2}$$
The edge strength preservation value is estimated as:
$$Q_g^{AF}(i,j) = \frac{\Gamma_g}{1 + e^{k_g \left( G^{AF}(i,j) - \sigma_g \right)}}$$
The orientation preservation value is estimated as:
$$Q_\alpha^{AF}(i,j) = \frac{\Gamma_\alpha}{1 + e^{k_\alpha \left( \Delta^{AF}(i,j) - \sigma_\alpha \right)}}$$
The constants $\Gamma_g, k_g, \sigma_g$ and $\Gamma_\alpha, k_\alpha, \sigma_\alpha$ define the shape of the sigmoid functions used for the edge strength and orientation preservation values [47]. The overall metric is:
$$Q^{AB/F} = \frac{\sum_{n=1}^{N} \sum_{m=1}^{M} \left( Q^{AF}(n,m)\, w^A(n,m) + Q^{BF}(n,m)\, w^B(n,m) \right)}{\sum_{n=1}^{N} \sum_{m=1}^{M} \left( w^A(n,m) + w^B(n,m) \right)}$$
with
$$Q^{AF}(i,j) = Q_g^{AF}(i,j)\, Q_\alpha^{AF}(i,j)$$
where $Q^{AF}(i,j)$ denotes the edge similarity at position $(i,j)$ between input image A and fused image F, $Q_g^{AF}$ the edge strength similarity, $Q_\alpha^{AF}$ the orientation similarity, and $w^A$, $w^B$ are weights reflecting the edge strength of the source images [46].

4.1.5. Structural Similarity Index—SSIM [50]

The structural similarity index ($SSIM$) for two images A, B is defined as:
$$SSIM(A,B) = \frac{\left( 2 \mu_A \mu_B + C_1 \right)\left( 2 \sigma_{AB} + C_2 \right)}{\left( \mu_A^2 + \mu_B^2 + C_1 \right)\left( \sigma_A^2 + \sigma_B^2 + C_2 \right)}$$
where $\mu_A$, $\mu_B$ are the mean intensity values of images A, B, $\sigma_A$, $\sigma_B$ are the standard deviations of images A, B, $\sigma_{AB}$ is the covariance of A and B, and $C_1$, $C_2$ are constants. Due to the lack of a ground-truth image, the $SSIM$ for input images A, B and fused image F is defined in the experiments as follows:
$$SSIM = \frac{SSIM(A,F) + SSIM(B,F)}{2}$$
where A and B are the two input images and F is the fused image.
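A short sketch of this fusion SSIM, using scikit-image's structural_similarity for the pairwise SSIM (whose default window and constants may differ slightly from [50]):

```python
from skimage.metrics import structural_similarity as ssim

def ssim_fusion_metric(a, b, f, data_range=255):
    """Fusion SSIM: average of SSIM(A, F) and SSIM(B, F) for 8-bit grayscale images."""
    return 0.5 * (ssim(a, f, data_range=data_range) + ssim(b, f, data_range=data_range))
```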

4.1.6. NIQE [51]

$NIQE$ is a blind image quality metric based on a Multivariate Gaussian (MVG) model. The quality of the fused image is defined as the distance between the quality-aware natural scene statistics (NSS) model and the MVG model fitted to features extracted from the distorted image:
$$D\left( \nu_1, \nu_2, \Sigma_1, \Sigma_2 \right) = \sqrt{ \left( \nu_1 - \nu_2 \right)^{T} \left( \frac{\Sigma_1 + \Sigma_2}{2} \right)^{-1} \left( \nu_1 - \nu_2 \right) }$$
where $\nu_1$, $\nu_2$ and $\Sigma_1$, $\Sigma_2$ are the mean vectors and covariance matrices of the natural MVG model [51] and of the MVG model fitted to the fused image.
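The distance itself is straightforward to evaluate once the two MVG fits are available; a sketch is given below (the NSS feature extraction and model fitting of [51] are not reproduced here).

```python
import numpy as np

def niqe_distance(v1, v2, sigma1, sigma2):
    """Mahalanobis-like distance between two multivariate Gaussian fits."""
    d = np.asarray(v1) - np.asarray(v2)
    pooled_inv = np.linalg.inv((np.asarray(sigma1) + np.asarray(sigma2)) / 2.0)
    return float(np.sqrt(d @ pooled_inv @ d))
```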

4.1.7. Entropy

The entropy of an image I is defined as:
$$E(I) = -\sum_{j=1}^{2^{L}-1} p(s_j)\, \log_2 p(s_j)$$
where L is the number of gray levels, p s j is the probability of occurrence of gray level s j in image I.
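A minimal sketch for an 8-bit image:

```python
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy of an image from its gray-level histogram."""
    hist, _ = np.histogram(img.ravel(), bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                         # ignore empty bins so log2 is defined
    return float(-np.sum(p * np.log2(p)))
```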
Table 1 includes the objective evaluation of the compared methods for the Lytro dataset [35].
For the Lytro dataset [35], the proposed CRF-Guided fusion method has the highest value for the metrics $MI$, $Q_G$, $Q^{AB/F}$, $Q_Y$ and $C_B$, the lowest value for the $NIQE$ metric, and the second highest score for $SSIM$ and Entropy. These results indicate that the quality of the proposed fused image is better than that of the compared state-of-the-art methods. Since CRF-Guided has the highest Mutual Information [45], the proposed method best preserves the information of the input images. In addition, CRF-Guided has the highest $Q_G$ [47] and $Q^{AB/F}$ [46] values, which indicate that the proposed method best preserves the edge information transferred from the input images to the fused image. In order to assess the structural similarity of the fused images, Yang's metric $Q_Y$ [48] and the structural similarity index measure $SSIM$ [50] are employed. The proposed method has the highest $Q_Y$ value and the second highest $SSIM$, which indicates high fused image quality in terms of structural similarity; DenseFuse [31] has the highest $SSIM$ value for the Lytro dataset. The proposed CRF-Guided method has the highest value of the human-perception-inspired fusion metric $C_B$ [49], which implies that the results produced by the method are perceptually the most pleasing to the human eye. According to the blind image quality metric $NIQE$ [51], CRF-Guided has the lowest value and thus the best fused image quality. Lastly, for the blind image quality metric Entropy, GBM [36] has the highest score and CRF-Guided the second highest. Overall, for the Lytro dataset [35] of perfectly registered color input images, the proposed CRF-Guided method outperforms the compared state-of-the-art image fusion methods in most metrics.
Table 2 includes the quantitative evaluation of the compared methods for the grayscale dataset [3]. The CRF-Guided fusion method outperforms the compared state-of-the-art methods in terms of the metrics $MI$ [45], $Q_G$ [47], $Q^{AB/F}$ [46], $Q_Y$ [48], $C_B$ [49] and $SSIM$ [50], and has the second lowest score for the $NIQE$ [51] metric and the second highest Entropy value. More precisely, since CRF-Guided has the highest Mutual Information [45], it preserves the original information better than the other methods. The highest values of CRF-Guided in $Q_G$ [47] and $Q^{AB/F}$ [46] indicate that the proposed method preserves the edges of the input images better than the state-of-the-art methods. Moreover, the structural information of the original images is best preserved by the CRF-Guided method, since both $Q_Y$ [48] and $SSIM$ [50] attain their highest values for the proposed method. According to the human-perception-inspired fusion metric $C_B$ [49], CRF-Guided has the best fused image quality. For the $NIQE$ [51] metric, the method DCHWT [17] has the lowest score and the proposed method has the second lowest value. The method GBM [36] has the highest Entropy value for the grayscale dataset. Overall, the proposed method has the highest fused image quality compared to the state-of-the-art methods for the grayscale dataset [3].
In summary, according to the eight metrics used for quantitative evaluation, the proposed CRF-Guided method has the best performance compared to 13 state-of-the-art image fusion methods on both public datasets: the Lytro dataset [35] and the grayscale dataset [3].

4.2. Qualitative Evaluation

In this section, we perform a visual comparison between the tested methods. Figure 11 includes the fused results of the compared methods for the scene ‘Lab’ of the grayscale dataset [3]. The compared methods GBM [36], NSCT [14], ICA [11], DCHWT [17], ASR [12], IFCNN [30], DenseFuse [31], acof [37], CFL [38], ConvCFL [39], DTNP [40], MLCF [41] and Joint [42] all feature visible artifacts in the area of the head. Moreover, these methods cannot accurately preserve the boundary of the clock in the red rectangle. MLCF cannot accurately capture the boundaries of the well-focused and out-of-focus pixels. NSCT [14], ICA [11], IFCNN [30], DenseFuse [31], acof [37], CFL [38] and ConvCFL [39] also feature artifacts around the arm, included in the red rectangle area. The proposed CRF-Guided method has the highest fused image quality in the area of the head, without introducing artifacts during fusion. Furthermore, the boundary of the clock is best preserved by the CRF-Guided fusion method, compared to the state-of-the-art methods. Moreover, the CRF-Guided fusion method does not introduce artifacts in the area of the red rectangle around the arm. Overall, the proposed CRF-Guided method does not introduce artifacts during fusion and has the highest visual quality for the ‘Lab’ scene.
Figure 12 includes the resulting fused images of the proposed and the compared methods for the scene ‘Temple’ of the grayscale dataset [3]. Two regions are magnified for qualitative assessment. GBM [36], NSCT [14], ICA [11], DCHWT [17], ASR [12], IFCNN [30], DenseFuse [31], acof [37], CFL [38], ConvCFL [39], DTNP [40], MLCF [41] and Joint [42] all have visible artifacts in both the red and the blue rectangle regions. Moreover, they cannot accurately preserve the boundary between the well-focused and out-of-focus pixels. The proposed CRF-Guided method accurately preserves the boundaries between the well-focused and out-of-focus pixels in both regions without introducing artifacts, in contrast to the other multi-focus image fusion methods, and features the best fused image quality for the scene ‘Temple’. The qualitative evaluation indicates that the proposed CRF-Guided method has the best visual fused image quality, without introducing artifacts during fusion, compared to the 13 state-of-the-art methods.
Figure 13 includes the qualitative evaluation of the compared methods for the scene ‘Golfer’ of the Lytro dataset [35]. CFL [38] and ConvCFL [39] produce artifacts around the boundary of well-focused and out-of-focus pixels in both regions. The boundary of the well-focused pixels is not well preserved by GBM [36], NSCT [14], ICA [11], DCHWT [17], ASR [12], IFCNN [30], DenseFuse [31], acof [37], CFL [38], ConvCFL [39], DTNP [40], MLCF [41] and Joint [42], while the proposed CRF-Guided method preserves it better in the fused image. The methods acof [37] and MLCF [41] cannot accurately capture the boundary of well-focused and out-of-focus pixels in both regions. NSCT [14], DenseFuse [31], acof [37], DTNP [40] and MLCF [41] cannot accurately preserve the boundaries between the well-focused and out-of-focus pixels in the area of the red rectangle. The proposed CRF-Guided method has the highest visual quality in both regions of the ‘Golfer’ scene of the Lytro dataset [35], preserving best the boundary of well-focused and out-of-focus pixels, without introducing artifacts during fusion.
Figure 14 features the qualitative evaluation for the ‘Volley’ scene of the Lytro dataset [35]. Two regions were selected for magnification. For the blue region, the boundaries of well-focused and out-of-focus pixels are not accurately preserved by GBM [36], NSCT [14], ICA [11], DCHWT [17], IFCNN [30], DenseFuse [31], acof [37], CFL [38], ConvCFL [39], DTNP [40] and MLCF [41]. The methods acof [37] and MLCF [41] cannot accurately preserve the boundaries of well-focused and out-of-focus pixels in both regions. For the red region, Joint [42] produces color degradation and lower contrast. Moreover, GBM [36], NSCT [14], ICA [11], DCHWT [17], ASR [12], IFCNN [30], DenseFuse [31], acof [37], CFL [38], ConvCFL [39], DTNP [40] and MLCF [41] cannot preserve well the boundary of well-focused and out-of-focus pixels, and the back shoe is not well-focused. The proposed CRF-Guided method preserves best the boundary between well-focused and out-of-focus pixels in both regions of the ‘Volley’ scene of the Lytro dataset [35], resulting in a fused image of high quality without artifacts introduced during fusion.
According to the previous qualitative evaluation, the proposed CRF-Guided fusion method produces fused images of high quality, preserving best the boundary of well-focused and out-of-focus pixels without introducing artifacts during fusion.

4.3. Complexity

We analyzed the computational complexity of the proposed and the compared image fusion methods. The average execution times of the compared methods on the Lytro dataset are included in Table 3. The reported times were measured on an Intel® Core™ i9 2.9 GHz processor with 16 GB RAM and a 64-bit operating system. IFCNN [30] and DenseFuse [31] were executed on an NVIDIA GeForce RTX 2080 with Max-Q Design.
The two deep learning-based approaches, IFCNN and DenseFuse, have very small execution times due to their parallel implementation on a GPU. The remaining methods were implemented in MATLAB v2021b. The proposed CRF-Guided method was implemented in MATLAB without any code optimization. Nonetheless, its average execution time of 31 s compares favorably with the fastest methods: it is the 6th fastest method (excluding IFCNN and DenseFuse), while achieving the best overall qualitative performance. Thus, the best qualitative performance comes at a moderate computational cost.

5. Conclusions

A novel transform-domain multi-focus image fusion method is introduced in this paper. The proposed CRF-Guided fusion takes advantage of CRF minimization, whose labels are used to guide the fusion of both the low frequency and the ICA transform coefficients, and thus the high frequency. CRF-Guided fusion supports image denoising during fusion, by applying coefficient shrinkage. Quantitative and qualitative evaluations demonstrate that CRF-Guided fusion outperforms state-of-the-art multi-focus image fusion methods. Limitations of the proposed CRF-Guided fusion method include the selection of the transform domain and the hand-crafted design of the unary and smoothness potential functions for the energy minimization problem. Future work includes the application of CRF-Guided fusion in different transform domains and learning the unary and smoothness potential functions with deep learning networks.

Author Contributions

Conceptualization, O.B.; methodology, O.B.; software, O.B.; validation and writing, O.B.; reviewing and editing, I.A. and N.M.; supervision, I.A. and N.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Liu, Y.; Chen, X.; Wang, Z.; Wang, Z.J.; Ward, R.K.; Wang, X. Deep learning for pixel-level image fusion: Recent advances and future prospects. Inf. Fusion 2018, 42, 158–173. [Google Scholar] [CrossRef]
  2. Bai, X.; Zhang, Y.; Zhou, F.; Xue, B. Quadtree-based multi-focus image fusion using a weighted focus-measure. Inf. Fusion 2015, 22, 105–118. [Google Scholar] [CrossRef]
  3. Zhang, Y.; Bai, X.; Wang, T. Boundary finding based multi-focus image fusion through multi-scale morphological focus-measure. Inf. Fusion 2017, 35, 81–2535. [Google Scholar] [CrossRef]
  4. Liu, Y.; Liu, S.; Wang, Z. Multi-focus image fusion with dense SIFT. Inf. Fusion 2015, 23, 139–155. [Google Scholar] [CrossRef]
  5. Qiu, X.; Li, M.; Zhang, L.; Yuan, X. Guided filter-based multi-focus image fusion through focus region detection. Signal Process. Image Commun. 2019, 72, 35–46. [Google Scholar] [CrossRef]
  6. Li, M.; Cai, W.; Tan, Z. A region-based multi-sensor image fusion scheme using pulse-coupled neural network. Pattern Recognit. Lett. 2006, 27, 1948–1956. [Google Scholar] [CrossRef]
  7. Li, S.; Kang, X.; Hu, J.; Yang, B. Image matting for fusion of multi-focus images in dynamic scenes. Inf. Fusion 2013, 14, 147–162. [Google Scholar] [CrossRef]
  8. Singh, S.; Singh, H.; Mittal, N.; Hussien, A.G.; Sroubek, F. A feature level image fusion for Night-Vision context enhancement using Arithmetic optimization algorithm based image segmentation. Expert Syst. Appl. 2022, 209, 118272. [Google Scholar] [CrossRef]
  9. Singh, S.; Mittal, N.; Singh, H. A feature level image fusion for IR and visible image using mNMRA based segmentation. Neural Comput. Appl. 2022, 34, 8137–8154. [Google Scholar] [CrossRef]
  10. Hyvärinen, A.; Hurri, J.; Hoyer, P.O. Independent Component Analysis. In Natural Image Statistics: A Probabilistic Approach to Early Computational Vision; Springer: London, UK, 2009; pp. 151–175. [Google Scholar]
  11. Mitianoudis, N.; Stathaki, T. Pixel-based and region-based image fusion schemes using ICA bases. Inf. Fusion 2007, 8, 131–142. [Google Scholar] [CrossRef] [Green Version]
  12. Liu, Y.; Wang, Z. Simultaneous image fusion and denoising with adaptive sparse representation. IET Image Process. 2015, 9, 347–357. [Google Scholar] [CrossRef]
  13. Liu, Y.; Chen, X.; Ward, R.K.; Wang, Z.J. Image Fusion with convolutional sparse representation. IEEE Signal Process. Lett. 2016, 23, 1882–1886. [Google Scholar] [CrossRef]
  14. Zhang, Q.; Guo, B.l. Multifocus image fusion using the nonsubsampled contourlet transform. Signal Process. 2009, 89, 1334–1346. [Google Scholar] [CrossRef]
  15. Liu, Y.; Liu, S.; Wang, Z. A general framework for image fusion based on multi-scale transform and sparse representation. Inf. Fusion 2015, 24, 147–164. [Google Scholar] [CrossRef]
  16. Zhou, Z.; Li, S.; Wang, B. Multi-scale weighted gradient-based fusion for multi-focus images. Inf. Fusion 2014, 20, 60–72. [Google Scholar] [CrossRef]
  17. Shreyamsha Kumar, B.K. Multifocus and multispectral image fusion based on pixel significance using discrete cosine harmonic wavelet transform. Signal Image Video Process. 2013, 7, 1125–1143. [Google Scholar] [CrossRef]
  18. Qin, X.; Ban, Y.; Wu, P.; Yang, B.; Liu, S.; Yin, L.; Liu, M.; Zheng, W. Improved Image Fusion Method Based on Sparse Decomposition. Electronics 2022, 11, 2321. [Google Scholar] [CrossRef]
  19. Jagtap, N.S.; Thepade, S.D. High-quality image multi-focus fusion to address ringing and blurring artifacts without loss of information. Vis. Comput. 2021, 37, 1–9. [Google Scholar] [CrossRef]
  20. Singh, H.; Cristobal, G.; Bueno, G.; Blanco, S.; Singh, S.; Hrisheekesha, P.N.; Mittal, N. Multi-exposure microscopic image fusion-based detail enhancement algorithm. Ultramicroscopy 2022, 236, 113499. [Google Scholar] [CrossRef]
  21. Bouzos, O.; Andreadis, I.; Mitianoudis, N. Conditional random field model for robust multi-focus image fusion. IEEE Trans. Image Process. 2019, 28, 5636–5648. [Google Scholar] [CrossRef]
  22. Chai, Y.; Li, H.; Li, Z. Multifocus image fusion scheme using focused region detection and multiresolution. Opt. Commun. 2011, 284, 4376–4389. [Google Scholar] [CrossRef]
  23. He, K.; Zhou, D.; Zhang, X.; Nie, R. Multi-focus: Focused region finding and multi-scale transform for image fusion. Neurocomputing 2018, 320, 157–170. [Google Scholar] [CrossRef]
  24. Singh, S.; Singh, H.; Gehlot, A.; Kaur, J.; Gagandeep, A. IR and visible image fusion using DWT and bilateral filter. Microsyst. Technol. 2022, 28, 1–11. [Google Scholar] [CrossRef]
  25. Singh, S.; Mittal, N.; Singh, H. Multifocus image fusion based on multiresolution pyramid and bilateral filter. IETE J. Res. 2020, 68, 2476–2487. [Google Scholar] [CrossRef]
  26. Zhang, X. Deep learning-based Multi-focus image fusion: A survey and a comparative study. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 4819–4838. [Google Scholar] [CrossRef]
  27. Liu, Y.; Chen, X.; Peng, H.; Wang, Z. Multi-focus image fusion with a deep convolutional neural network. Inf. Fusion 2017, 36, 191–207. [Google Scholar] [CrossRef]
  28. Amin-Naji, M.; Aghagolzadeh, A.; Ezoji, M. Ensemble of CNN for multi-focus image fusion. Inf. Fusion 2019, 51, 201–214. [Google Scholar] [CrossRef]
  29. Tang, H.; Xiao, B.; Li, W.; Wang, G. Pixel convolutional neural network for multi-focus image fusion. Inf. Sci. 2018, 433–434, 125–141. [Google Scholar] [CrossRef]
  30. Zhang, Y.; Liu, Y.; Sun, P.; Yan, H.; Zhao, X.; Zhang, L. IFCNN: A general image fusion framework based on convolutional neural network. Inf. Fusion 2020, 54, 99–118. [Google Scholar] [CrossRef]
  31. Li, H.; Wu, X.J. DenseFuse: A fusion approach to infrared and visible images. IEEE Trans. Image Process. 2019, 28, 2614–2623. [Google Scholar] [CrossRef] [Green Version]
  32. Ma, X.; Wang, Z.; Hu, S.; Kan, S. Multi-focus image fusion based on multi-scale generative adversarial network. Entropy 2022, 24, 582. [Google Scholar] [CrossRef] [PubMed]
  33. Wei, B.; Feng, X.; Wang, K.; Gao, B. The multi-focus-image-fusion method based on convolutional neural network and sparse representation. Entropy 2021, 23, 827. [Google Scholar] [CrossRef] [PubMed]
  34. Boykov, Y.; Veksler, O.; Zabih, R. Fast approximate energy minimization via graph cuts. IEEE Trans. Pattern Anal. Mach. Intell. 2001, 23, 1222–1239. [Google Scholar] [CrossRef]
  35. Nejati, M.; Samavi, S.; Shirani, S. Multi-focus image fusion using dictionary-based sparse representation. Inf. Fusion 2015, 25, 72–84. [Google Scholar] [CrossRef]
  36. Paul, S.; Sevcenco, I.S.; Agathoklis, P. Multi-exposure and multi-focus image fusion in gradient domain. J. Circuits Syst. Comput. 2016, 25, 1650123. [Google Scholar] [CrossRef]
  37. Zhu, R.; Li, X.; Huang, S.; Zhang, X. Multimodal medical image fusion using adaptive co-occurrence filter-based decomposition optimization model. Bioinformatics 2021, 38, 818–826. [Google Scholar] [CrossRef]
  38. Veshki, F.G.; Ouzir, N.; Vorobyov, S.A.; Ollila, E. Multimodal image fusion via coupled feature learning. Signal Process. 2022, 200, 108637. [Google Scholar] [CrossRef]
  39. Veshki, F.G.; Vorobyov, S.A. Coupled Feature Learning Via Structured Convolutional Sparse Coding for Multimodal Image Fusion. In Proceedings of the ICASSP 2022–2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Singapore, 22–27 May 2022; pp. 2500–2504. [Google Scholar]
  40. Li, B.; Peng, H.; Wang, J. A novel fusion method based on dynamic threshold neural P systems and nonsubsampled contourlet transform for multi-modality medical images. Signal Process. 2021, 178, 107793. [Google Scholar] [CrossRef]
  41. Tan, W.; Thitøn, W.; Xiang, P.; Zhou, H. Multi-modal brain image fusion based on multi-level edge-preserving filtering. Biomed. Signal Process. Control. 2021, 64, 102280. [Google Scholar] [CrossRef]
  42. Li, X.; Zhou, F.; Tan, H. Joint image fusion and denoising via three-layer decomposition and sparse representation. Knowl.-Based Syst. 2021, 224, 107087. [Google Scholar] [CrossRef]
  43. Singh, S.; Mittal, N.; Singh, H. Review of various image fusion algorithms and image fusion performance metric. Arch. Comput. Methods Eng. 2021, 28, 3645–3659. [Google Scholar] [CrossRef]
  44. Singh, S.; Mittal, N.; Singh, H. Classification of various image fusion algorithms and their performance evaluation metrics. In Computational Intelligence for Machine Learning and Healthcare Informatics; De Gruyter: Berlin, Germany, 2020; pp. 179–198. [Google Scholar]
  45. Hossny, M.; Nahavandi, S.; Creighton, D. Comments on ’Information measure for performance of image fusion’. Electron. Lett. 2008, 44, 1066–1067. [Google Scholar] [CrossRef]
  46. Xydeas, C.S.; Petrovic, V. Objective image fusion performance measure. Electron. Lett. 2000, 36, 308–309. [Google Scholar] [CrossRef]
  47. Xydeas, C.S.; Petrovic, V.S. Objective pixel-level image fusion performance measure. In Proceedings of the Sensor Fusion: Architectures, Algorithms, and Applications IV, Orlando, FL, USA, 24–28 April 2000; SPIE: Bellingham, DC, USA, 2000; Volume 4051, pp. 89–98. [Google Scholar] [CrossRef]
  48. Yang, C.; Zhang, J.Q.; Wang, X.R.; Liu, X. A novel similarity based quality metric for image fusion. Inf. Fusion 2008, 9, 156–160. [Google Scholar] [CrossRef]
  49. Chen, Y.; Blum, R.S. A new automated quality assessment algorithm for image fusion. Image Vis. Comput. 2009, 27, 1421–1432. [Google Scholar] [CrossRef]
  50. Wang, Z.; Bovik, A.; Sheikh, H.; Simoncelli, E. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef]
  51. Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a “Completely Blind” image quality analyzer. IEEE Signal Process. Lett. 2013, 20, 209–212. [Google Scholar] [CrossRef]
Figure 1. The CRF-Guided fusion framework for input images $x_1$, $x_2$: the labels $\ell$ are estimated from the CRF minimization, and the fused image F is constructed by the addition of the low-frequency and high-frequency fusion results.
Figure 2. Source input images: (a) Near focused image, (b) Far focused image.
Figure 3. Low frequency of input images using the EAC: (a) Low frequency of near focused image, (b) Low frequency of far focused image. It is evident that the EAC preserves the strong image edges.
Figure 4. High frequency of input images: (a) High frequency of near focused image, (b) High frequency of far focused image.
Figure 5. Unary potential estimation for CRF-Guided method.
Figure 6. Predicted labels: black pixels correspond to $\ell = 0$, white pixels correspond to $\ell = 1$. (a) $\ell = \arg\min U$, (b) $\ell = \arg\min CRF$.
Figure 7. (a) Fused low frequency, (b) Fused high frequency.
Figure 8. Final fused image by the proposed method.
Figure 9. (a) Near-focused image with Gaussian noise σ n = 5 , (b) Far-focused image with Gaussian noise σ n = 5 , (c) Denoised fused image.
Figure 10. (a) Near focused image with Gaussian noise σ = 10 , (b) Far focused image with Gaussian noise σ = 10 , (c) Denoised fused image.
Figure 11. Fused results for the scene ‘Lab’ of the grayscale dataset [3]. (a) Source 1, (b) Source 2, (c) GBM [36], (d) NSCT [14], (e) ICA [11], (f) DCHWT [17], (g) ASR [12], (h) IFCNN [30], (i) DenseFuse [31], (j) acof [37], (k) CFL [38], (l) ConvCFL [39], (m) DTNP [40], (n) MLCF [41], (o) Joint [42], (p) CRFGuided.
Figure 12. Fused results for the scene ‘Temple’ of the grayscale dataset [3]. (a) Source 1, (b) Source 2, (c) GBM [36], (d) NSCT [14], (e) ICA [11], (f) DCHWT [17], (g) ASR [12], (h) IFCNN [30], (i) DenseFuse [31], (j) acof [37], (k) CFL [38], (l) ConvCFL [39], (m) DTNP [40], (n) MLCF [41], (o) Joint [42], (p) CRFGuided.
Figure 13. Fused results for the scene ‘Golfer’ of the Lytro dataset [35]. (a) Source 1, (b) Source 2, (c) GBM [36], (d) NSCT [14], (e) ICA [11], (f) DCHWT [17], (g) ASR [12], (h) IFCNN [30], (i) DenseFuse [31], (j) acof [37], (k) CFL [38], (l) ConvCFL [39], (m) DTNP [40], (n) MLCF [41], (o) Joint [42], (p) CRFGuided.
Figure 14. Fused results for the scene ‘Volley’ of the Lytro dataset [35]. (a) Source 1, (b) Source 2, (c) GBM [36], (d) NSCT [14], (e) ICA [11], (f) DCHWT [17], (g) ASR [12], (h) IFCNN [30], (i) DenseFuse [31], (j) acof [37], (k) CFL [38], (l) ConvCFL [39], (m) DTNP [40], (n) MLCF [41], (o) Joint [42], (p) CRFGuided.
Table 1. Objective evaluation for the Lytro dataset [35]. Lower values of $NIQE$ indicate better fused image quality, while for the remaining metrics higher values indicate better fused image quality.
| Methods | MI [45] | Qg [47] | QAB/F [46] | Qy [48] | CB [49] | SSIM [50] | NIQE [51] | Entropy |
|---|---|---|---|---|---|---|---|---|
| ASR [12] | 7.1310 | 0.7510 | 0.7013 | 0.9691 | 0.7264 | 0.8437 | 3.4591 | 7.5217 |
| NSCT [14] | 7.1986 | 0.7502 | 0.6960 | 0.9649 | 0.7527 | 0.8432 | 3.4479 | 7.5309 |
| GBM [36] | 3.8813 | 0.7172 | 0.6202 | 0.8554 | 0.6159 | 0.7932 | 3.0434 | 7.5684 |
| ICA [11] | 6.8769 | 0.7393 | 0.6741 | 0.9512 | 0.7088 | 0.8534 | 3.3915 | 7.5267 |
| IFCNN [30] | 7.0400 | 0.7337 | 0.6628 | 0.9522 | 0.7292 | 0.8440 | 3.4623 | 7.5319 |
| DenseFuse [31] | 6.2048 | 0.5532 | 0.4694 | 0.8141 | 0.6037 | 0.8651 | 3.3953 | 7.4681 |
| dchwt [17] | 6.7298 | 0.7184 | 0.6078 | 0.9202 | 0.6924 | 0.8526 | 3.2976 | 7.5205 |
| acof [37] | 7.2675 | 0.5287 | 0.5112 | 0.9475 | 0.6387 | 0.8260 | 4.6501 | 7.4901 |
| cfl [38] | 5.6254 | 0.6576 | 0.5746 | 0.8827 | 0.6323 | 0.8158 | 3.4033 | 7.5734 |
| ConvCFL [39] | 5.9742 | 0.6916 | 0.5864 | 0.8869 | 0.6643 | 0.8396 | 3.7099 | 7.5581 |
| DTNP [40] | 6.7854 | 0.7431 | 0.6779 | 0.9566 | 0.7347 | 0.8390 | 3.4198 | 7.5298 |
| mlcf [41] | 6.4414 | 0.5377 | 0.5147 | 0.8593 | 0.6259 | 0.8564 | 3.8699 | 7.4906 |
| joint [42] | 6.9991 | 0.7435 | 0.6970 | 0.9621 | 0.7176 | 0.8426 | 3.3935 | 7.5200 |
| CRFGuided | 7.3639 | 0.7534 | 0.7143 | 0.9851 | 0.7557 | 0.8601 | 3.0336 | 7.5697 |
Table 2. Objective evaluation for the grayscale dataset [3]. Lower values of $NIQE$ indicate better fused image quality, while for the remaining metrics higher values indicate better fused image quality.
| Methods | MI [45] | Qg [47] | QAB/F [46] | Qy [48] | CB [49] | SSIM [50] | NIQE [51] | Entropy |
|---|---|---|---|---|---|---|---|---|
| ASR [12] | 6.3790 | 0.7192 | 0.6721 | 0.9541 | 0.7057 | 0.8150 | 5.5111 | 7.3262 |
| NSCT [14] | 6.2947 | 0.7074 | 0.6593 | 0.9439 | 0.7284 | 0.8161 | 5.3080 | 7.3451 |
| GBM [36] | 3.5292 | 0.6729 | 0.5826 | 0.8275 | 0.6005 | 0.7503 | 5.0053 | 7.5298 |
| ICA [11] | 6.0174 | 0.6945 | 0.6507 | 0.9313 | 0.6996 | 0.8302 | 5.2144 | 7.3449 |
| IFCNN [30] | 5.9641 | 0.6743 | 0.6074 | 0.9118 | 0.6725 | 0.8230 | 5.4436 | 7.3435 |
| DenseFuse [31] | 6.0467 | 0.6139 | 0.5798 | 0.8517 | 0.6275 | 0.8351 | 5.2584 | 7.3739 |
| dchwt [17] | 5.9965 | 0.6781 | 0.5810 | 0.8997 | 0.6752 | 0.8244 | 4.9713 | 7.3396 |
| acof [37] | 6.5748 | 0.5594 | 0.5543 | 0.8691 | 0.6183 | 0.8098 | 5.1625 | 7.3088 |
| cfl [38] | 4.8158 | 0.5985 | 0.5327 | 0.8548 | 0.6138 | 0.7966 | 5.5156 | 7.4403 |
| ConvCFL [39] | 5.3014 | 0.6510 | 0.5619 | 0.8640 | 0.6558 | 0.8234 | 5.5023 | 7.3895 |
| DTNP [40] | 6.0911 | 0.6966 | 0.6357 | 0.9296 | 0.7056 | 0.8119 | 5.2817 | 7.3496 |
| mlcf [41] | 6.3294 | 0.5912 | 0.5890 | 0.9274 | 0.6594 | 0.8040 | 5.2670 | 7.3176 |
| joint [42] | 6.6541 | 0.7212 | 0.6775 | 0.9553 | 0.7234 | 0.8102 | 5.4543 | 7.3239 |
| CRFGuided | 6.6740 | 0.7290 | 0.6903 | 0.9798 | 0.7337 | 0.8356 | 5.0001 | 7.3928 |
Table 3. Average running time of compared methods for input image pairs of size 520 × 520 .
| Methods | Time (s) |
|---|---|
| GBM [36] | 2.43 |
| NSCT [14] | 87.27 |
| ICA [11] | 24.02 |
| DCHWT [17] | 18.59 |
| ASR [12] | 1204.92 |
| IFCNN [30] | 0.22 |
| DenseFuse [31] | 0.41 |
| acof [37] | 9.91 |
| CFL [38] | 23.69 |
| ConvCFL [39] | 138.42 |
| DTNP [40] | 420.00 |
| MLCF [41] | 53.11 |
| Joint [42] | 83.09 |
| CRF-Guided | 31.00 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
