Article

Infrared and Visible Image Fusion via Feature-Oriented Dual-Module Complementary

Division of Computer Science and Engineering, CAIIT, Jeonbuk National University, Jeonju 54896, Republic of Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(5), 2907; https://doi.org/10.3390/app13052907
Submission received: 5 February 2023 / Revised: 20 February 2023 / Accepted: 21 February 2023 / Published: 24 February 2023
(This article belongs to the Special Issue Image Enhancement and Restoration Based on Deep Learning Technology)

Abstract

Driven by the industrial demand for multi-sensor image fusion, infrared and visible image fusion (IVIF) technology is flourishing. In recent years, scale decomposition methods have led the trend in feature extraction. Such methods, however, have low time efficiency. To address this issue, this paper proposes a simple yet effective IVIF approach based on a feature-oriented dual-module complementary strategy. Specifically, we comprehensively analyze five classical operators and construct the spatial gradient capture module (SGCM) and the infrared brightness supplement module (IBSM). In the SGCM, three kinds of feature maps are obtained by introducing principal component analysis and saliency operators and by proposing a contrast estimation operator that considers the relative differences in the contrast information of the input images. These maps are then reconstructed through pyramidal transformation to obtain the predicted image. The IBSM is subsequently proposed to refine the infrared thermal information missing from the predicted image. In it, we improve two measurement operators applied to the exposure modality, namely, the gradient of the grayscale image (2D gradient) and well-exposedness. The former is responsible for extracting fine details, and the latter for locating bright regions. Experiments performed on public datasets demonstrate that the proposed method outperforms nine state-of-the-art methods in terms of both subjective visual quality and objective indicators.

1. Introduction

The purpose of multi-sensor image fusion is to extract and retain features from source images to generate a comprehensive image with abundant information. As one of the components of multi-sensor image pairs, infrared images reflect the thermal radiation information sensed from the scene but lack scene details; visible images contain considerably more detailed information, but the localization of targets in them is severely hampered under harsh environments [1,2,3]. As one of the classic fusion approaches, infrared and visible image fusion (IVIF) has historically played a vital role. Fused images produced by this technique greatly benefit subsequent advanced computer vision tasks, such as object detection [4,5,6], semantic segmentation [7,8], and pedestrian re-identification [9,10].
Over the past few years, numerous IVIF algorithms have been proposed; they can be divided into two categories: scale–transformation-based and deep-learning-based methods [1]. The methods in the first category generally fall into four groups: pyramid transforms, wavelet transforms, edge-preserving filters, and hybrid multiscale filter decompositions. Bulanon et al. [11] proposed an IVIF method based on Laplacian pyramid transformation and fuzzy logic to detect fruits and obtained improved detection results. Zhan et al. [12] proposed a discrete-wavelet-transform-based IVIF method built on two fusion rules to obtain better performance. Meng et al. [13] proposed an IVIF method based on the non-subsampled contourlet transform (NSCT) and object region detection; this method first locates the infrared target region and then combines the sub-images decomposed by the NSCT to obtain the final fused image with good retention of targets and details. Hu et al. [14] proposed an IVIF method that uses a guided filter to decompose the input images into two sub-layers; this method also combines the cumulative distributions of the gray levels and entropy to adaptively preserve the infrared targets and visible textures. Although these methods produce better subjective effects and higher fusion efficiencies, they use only a single filter to decompose the source image, which results in a loss of image features to a certain extent. Yang et al. [15] proposed a multi-scale decomposition method based on a rolling guidance filter and a fast bilateral filter to decompose the input images into sublayers; sparse representation and a detail injection model are then used to obtain a fused result with abundant information. Chen et al. [16] combined a guided filter with multi-directional filter banks, where the filter separates the source image into base and detail layers while the filter bank fuses the base layers; this combination was shown to achieve better fusion performance. Luo et al. [17] proposed an IVIF scheme based on visibility enhancement and hybrid multiscale decomposition to obtain the base and detail layers, and the weights of a visual saliency illumination map and a convolutional neural network (CNN) were used to process the corresponding sublayers. Compared to a single filter, a hybrid filter obtains finer details and brighter infrared targets, but the fused image comes at the expense of algorithmic efficiency.
The second category includes deep learning (DL)-based methods, which have advanced substantially and achieved remarkable research results [3,4,9,18,19,20]. Liu et al. [18] first applied a deep CNN to IVIF to extract features and calculate activity levels for generating feature maps, thereby obtaining fused images. Li et al. [19] constructed an “encoder–fusion-strategy–decoder” framework to achieve high-quality results. Ma et al. [20] employed a generative adversarial network (GAN) to guide IVIF for the first time; the GAN plays an adversarial game between the input images and the generated image, with the loss function continuously adjusting the weights to obtain a near-perfect fused image.
To summarize, scale–transform-based methods can obtain sub-layers at different scales with the help of a decomposition algorithm, after which suitable fusion rules related to the image context information are designed to guide the fusion of these sub-layers. However, these methods are inherently limited: they rely on manual choices, such as determining the optimal number of decomposition layers and selecting the fusion rules, and the decomposition processing is time-consuming, leading to poor real-time performance. Although DL-based methods have powerful feature extraction capabilities, they need sufficient raw data and strong computing resources to train the models. These methods also lack convincing theoretical explanations for evaluating the pros and cons of the networks.
Motivated by the above discussion, we propose a feature-oriented dual-module complementary IVIF method. Specifically, we analyzed five classical operators as replacements for scale decomposition filters, avoiding their potential pitfalls while extracting features according to the original image characteristics, and we constructed two feature extraction modules, namely, the spatial gradient capture module (SGCM) and the infrared brightness supplement module (IBSM). As the names suggest, the former focuses on preserving the spatial gradient information of the original images and is built from principal component analysis (PCA), saliency, and contrast estimation operators. The latter compensates for the resulting feature loss by improving two exposure metrics that are closely related to image intensity. Extensive experiments performed on public datasets show that the proposed method achieves better fusion performance in terms of overall contrast and feature preservation than existing state-of-the-art fusion methods.
The main contributions of this paper can be summarized as follows:
  • We propose an IVIF method based on a feature-oriented dual-module complementary strategy. Based on the varying characteristics of the input images, we analyzed five classical operators as replacements for scale decomposition filters, avoiding their potential limitations, and constructed two modules, the SGCM and IBSM. Owing to the complementarity of these two modules, the fused image shows good performance with adequate contrast and high efficiency.
  • We design a contrast estimator to adaptively transfer useful details from the original image, which helps to obtain predicted images with good information saturation. Based on the predicted image, a complementary module is proposed to preserve the color of the visible image while injecting infrared information to generate a realistic fused image.
  • We introduce and improve two exposure metrics, namely, the gradient of the grayscale image (2D gradient), which is responsible for extracting fine details, and well-exposedness, which locates the bright regions. Using these, the infrared information is extracted from the source image and injected into the fused image to highlight the infrared target.
The remainder of this paper is organized as follows. Section 2 briefly introduces the related works. Section 3 describes the proposed IVIF method in detail. Section 4 provides the experimental settings and results analysis, followed by the conclusions in Section 5.

2. Related Works

2.1. Fast Guided Filter

Assume that $q$ is a linear transform of the guidance image $I$ in a window $\omega_k$ centered at pixel $k$:
$$q_i = a_k I_i + b_k, \quad \forall i \in \omega_k,$$
where $(a_k, b_k)$ are linear coefficients assumed to be constant in $\omega_k$. This local linear model ensures that $q$ has an edge only if $I$ has an edge, because $\nabla q = a \nabla I$.
To determine the linear coefficients, we seek a solution to (1) that minimizes the difference between $q$ and the filter input $p$. Specifically, we minimize the following cost function in the window:
$$E(a_k, b_k) = \sum_{i \in \omega_k} \left[ \left( a_k I_i + b_k - p_i \right)^2 + \epsilon a_k^2 \right],$$
where $\epsilon$ is a regularization parameter that prevents $a_k$ from becoming too large. The solution to (2) is given by linear regression [21] as
$$a_k = \frac{\frac{1}{|\omega|}\sum_{i \in \omega_k} I_i p_i - \mu_k \bar{p}_k}{\sigma_k^2 + \epsilon},$$
$$b_k = \bar{p}_k - a_k \mu_k,$$
where $\mu_k$ and $\sigma_k^2$ are the mean and variance of $I$ in $\omega_k$, respectively, $|\omega|$ is the number of pixels in $\omega_k$, and $\bar{p}_k = \frac{1}{|\omega|}\sum_{i \in \omega_k} p_i$ is the mean of $p$ in $\omega_k$.
After computing $(a_k, b_k)$ for all windows $\omega_k$ in the image, the filter output is computed as
$$q_i = \frac{1}{|\omega|}\sum_{k:\, i \in \omega_k} \left( a_k I_i + b_k \right) = \bar{a}_i I_i + \bar{b}_i,$$
where $\bar{a}_i = \frac{1}{|\omega|}\sum_{k \in \omega_i} a_k$ and $\bar{b}_i = \frac{1}{|\omega|}\sum_{k \in \omega_i} b_k$.
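For reference, the guided filter of (1)–(5) reduces to a handful of box-mean operations. The following Python sketch (using NumPy and SciPy, with the radius and regularization defaults taken from Table 1) implements the plain guided filter; the fast variant used in this paper additionally subsamples the computation by the ratio s, which is omitted here for brevity.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, radius=8, eps=0.1):
    """Plain (non-subsampled) guided filter; I is the guidance image and p the
    filter input, both float arrays scaled to [0, 1]."""
    size = 2 * radius + 1
    mean = lambda x: uniform_filter(x, size=size)   # box mean over the window
    mu_I, mu_p = mean(I), mean(p)
    cov_Ip = mean(I * p) - mu_I * mu_p              # covariance of I and p in the window
    var_I = mean(I * I) - mu_I * mu_I               # variance of I in the window
    a = cov_Ip / (var_I + eps)                      # linear coefficient a_k
    b = mu_p - a * mu_I                             # linear coefficient b_k
    return mean(a) * I + mean(b)                    # output: a_bar * I + b_bar
```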

2.2. Well-Exposedness Metric

The well-exposedness feature $E$ was originally introduced in the multi-exposure image fusion (MEIF) work of Mertens et al. [22] to preserve well-exposed regions in the input images, i.e., to neglect under- and over-exposed pixel intensities. Each channel feature is extracted in the form of a Gaussian curve, defined as follows:
$$E_i = \exp\left( -\frac{\left( I_i - 0.5 \right)^2}{2\sigma^2} \right), \quad i = R, G, B,$$
$$E = E_R \circ E_G \circ E_B,$$
where “$\circ$” represents pixel-wise multiplication, $\sigma$ is equal to 0.2, and $R$, $G$, $B$ are the channels of the exposed image.
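As a minimal illustration, the metric of (6) and (7) can be computed with a few lines of NumPy; the input is assumed to be an RGB image scaled to [0, 1], and σ defaults to 0.2 as above.

```python
import numpy as np

def well_exposedness(rgb, sigma=0.2):
    """Per-channel Gaussian weight around 0.5, multiplied pixel-wise over R, G, B."""
    w = np.exp(-((rgb - 0.5) ** 2) / (2 * sigma ** 2))
    return w[..., 0] * w[..., 1] * w[..., 2]
```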

3. The Proposed Method

This section describes the algorithm framework depicted in Figure 1. First, the input images are fed into the SGCM to obtain abundant spatial feature information. In this module, three operators are proposed for specific roles: PCA is introduced to estimate the overall contour instead of the usual dimensionality reduction, saliency is used to highlight the region of interest, and the contrast estimation operator is proposed by constructing coefficient equations between the source images to adaptively preserve the gradient texture, followed by obtaining the capture maps. Then, a Gaussian–Laplacian pyramid algorithm is used to obtain the predicted image. By comparing the image features between the predicted and raw images, we find that infrared thermal information is lost, and thus propose the IBSM. In the IBSM, two operators applied to the exposure modality are improved by focusing on the image intensity. Then, the supplementary maps are obtained by multiplying the corresponding gradient map and intensity maps estimated via the Sobel gradient operator and Gaussian curve with the given weights. Finally, the fused image is obtained by adding the predicted image to the refined image, calculated by weighting the supplementary maps and source images. The specific steps are as follows.

3.1. Spatial Gradient Capture Module

3.1.1. PCA Operator

Generally, the aim of PCA is to reduce the dimensionality of large datasets by exploiting the underlying correlations between the variables efficiently while preserving most of the information [23]. However, in this paper, PCA is used as a feature extractor to estimate the weight maps. To the best of our knowledge, PCA has not been utilized for IVIF, but it has already been adopted in MEIF [24].
First, the gray-scale images $\{I_n\}_{n=1}^{N}$ are vectorized into column vectors of size $\tilde{r}c \times 1$, where $\tilde{r}$ and $c$ denote the numbers of rows and columns of the image, respectively. Then, all these column vectors are combined into a data matrix of size $\tilde{r}c \times N$ consisting of $\tilde{r}c$ objects having $N$ variables each. After calculating the PCA scores of all objects, each object–variable vector is reshaped into an $\tilde{r} \times c$ image matrix. Next, a Gaussian filter is used to eliminate noise and discontinuities while smoothing the sharp changeovers at the transition regions. Lastly, a sum-to-one normalization is performed at each spatial position $(\tilde{r}, c)$ over all images, and the final PCA weight maps $\{PCA_n\}_{n=1}^{N}$ are obtained for the fusion operation.
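The description above leaves some freedom in how the PCA scores become per-image weight maps. The sketch below is one plausible reading rather than the authors' exact implementation: it projects the pixel-wise object vectors onto the principal axes, takes the per-image score magnitudes, smooths them with a Gaussian filter, and normalizes them to sum to one at each pixel; the smoothing width is an assumed value.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pca_weight_maps(images, sigma=2.0):
    """One possible reading of the PCA weight-map construction in Section 3.1.1.
    images: list of N equally sized grayscale arrays; returns N weight maps."""
    N = len(images)
    rows, cols = images[0].shape
    X = np.stack([im.ravel() for im in images], axis=1)   # (rows*cols) x N data matrix
    Xc = X - X.mean(axis=0, keepdims=True)                # center the N variables
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)     # principal axes (rows of Vt)
    scores = Xc @ Vt.T                                    # PCA scores of all objects
    maps = [gaussian_filter(np.abs(scores[:, n]).reshape(rows, cols), sigma)
            for n in range(N)]                            # smooth per-image score maps
    total = np.sum(maps, axis=0) + 1e-12
    return [m / total for m in maps]                      # sum-to-one normalization
```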

3.1.2. Saliency Operator

In image processing, constructing saliency maps helps simulate the human visual system and improve fusion performance. Inspired by Hou et al. [25], an image signature descriptor (ISD) based on the discrete cosine transform (DCT) is applied to obtain the saliency maps. Given an image $I$, we can approximately isolate the support of the image foreground signal by taking the sign of the DCT of the mixed signal $I$ in the transformed domain and then computing the inverse DCT (IDCT) back into the spatial domain to obtain the reconstructed image $\bar{I}$. The ISD is defined as
$$ISD(\hat{I}) = \mathrm{sign}\left( DCT(I) \right),$$
$$\bar{I} = IDCT\left( ISD(\hat{I}) \right),$$
where $\mathrm{sign}(\cdot)$ denotes the entry-wise sign operator.
Subsequently, the saliency maps $SAL_n$ are obtained by smoothing the squared reconstructed images; these maps overlap strongly with the regions of human overt attentional interest and can be taken as the salient points of the input images. They are defined as follows:
$$SAL_n = g * \left( \bar{I} \circ \bar{I} \right),$$
where $\circ$ and $g$ denote the Hadamard product operator and a Gaussian blur used to suppress the noise introduced by the sign quantization, respectively.
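A compact Python version of Equations (8)–(10) is given below; the blur width of the final smoothing and the rescaling to [0, 1] are assumed choices.

```python
import numpy as np
from scipy.fft import dctn, idctn
from scipy.ndimage import gaussian_filter

def signature_saliency(img, sigma=3.0):
    """Image-signature saliency: IDCT of the sign of the DCT, squared and blurred."""
    recon = idctn(np.sign(dctn(img, norm='ortho')), norm='ortho')
    sal = gaussian_filter(recon * recon, sigma)     # g * (I_bar ∘ I_bar)
    return sal / (sal.max() + 1e-12)                # optional rescaling to [0, 1]
```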

3.1.3. Adaptive Contrast Estimation Operator

In general, image contrast reflects the differences in luminance levels between the brightest white and darkest black in the light and dark areas of an image; it is also one of the important elements for measuring the structural details of the image. The magnitude of the image gradient has a low value in the blurred image because the gray-level change at the object edge is not evident. Numerically, the image contrast has a lower value in smooth regions because there are fewer high-frequency components where the grayscale values have large variations. By observing the input images, we found that the visible image often contributes to the contrast distribution of the fused image. However, relying only on the structural details of the visible image and ignoring those in the infrared image inevitably leads to low contrast and poor visual quality of the fused image. To solve this, we propose an adaptive contrast estimation operator to (i) maximize the extraction of the spatial structure details from the infrared image and (ii) preserve the spatial structure of the visible image and the target in the infrared image well, resulting in a contrast map. The specific steps are as follows.
First, the contrast difference map (CDM) between the source images is calculated as follows:
$$CDM(x,y) = \frac{\max\left( AC_{IR}(x,y) - AC_{VIS}(x,y),\ 0 \right)}{AC_{IR}(x,y)},$$
where $IR$ and $VIS$ denote the input images, and $AC_I(x,y)$ is the contrast of image $I$ at the coordinate position $(x,y)$, constructed from the original contrast and gradient of the source image [26] as follows:
$$AC_I(x,y) = (1-\alpha)\left( \max_{(x',y') \in N(x,y)} I(x',y') - \min_{(x',y') \in N(x,y)} I(x',y') \right) + \alpha \max_{(x',y') \in N(x,y)} \left| \nabla I(x',y') \right|,$$
where $(x',y')$ denotes a neighborhood pixel of $(x,y)$ within the window $N(x,y)$, $\nabla$ denotes the gradient operator, and $\alpha$ is a constant with a value of 0.5.
Thus, by employing $AC$ in $CDM$ as in Equation (11), the contrast map $CDM$ has large values in regions where $IR$ contains better spatial details than $VIS$ and low values (or zeros) in the remaining regions, where the spatial details of $VIS$ are better. Hence, the adaptive contrast equation fulfills the goal for which it was designed. Note that the $CDM$ involves simple yet effective calculations for assessing the spatial details of an image and does not require image decomposition via filter banks or frequency decomposition.
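The sketch below follows our reconstruction of the AC and CDM definitions above, using local maximum/minimum filters for the contrast term and a Sobel gradient for the gradient term; the neighborhood size is an assumed value.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter, sobel

def local_contrast(img, win=7, alpha=0.5):
    """AC: blend of the local intensity range and the local gradient maximum."""
    rng = maximum_filter(img, win) - minimum_filter(img, win)
    grad = np.hypot(sobel(img, axis=0), sobel(img, axis=1))
    return (1 - alpha) * rng + alpha * maximum_filter(grad, win)

def contrast_difference_map(ir, vis, eps=1e-12):
    """CDM of Equation (11): positive where the IR image has better spatial detail."""
    ac_ir, ac_vis = local_contrast(ir), local_contrast(vis)
    return np.maximum(ac_ir - ac_vis, 0) / (ac_ir + eps)
```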
After the three different feature maps are extracted from the input images, a fast guided filter ($FGF$) is used to combine them into the captured map ($CM$) as follows:
$$CM_n = FGF\left( PCA_n \cdot SAL_n \cdot CDM,\ \gamma,\ r,\ \varepsilon,\ s \right),$$
where $r$, $\varepsilon$, and $s$ denote the local window radius, regularization parameter, and subsampling ratio, respectively.
To avoid the appearance of visual artifacts and to combine the different scales, a Gaussian pyramid is constructed for the captured maps as follows:
$$G_n^i = d_{g2}\left( CM_n^i \right), \quad n = 1, \ldots, N;\ i = 1, 2, \ldots, j,$$
where $d_{g2}$ corresponds to an operator that convolves an image with a Gaussian kernel and then downsamples it to half of its original dimensions; $j$ is the sampling number, and its value is calculated as $j = \lfloor \log\left( \min(\tilde{r}, c) \right) / \log 2 \rfloor$. A set of progressively smaller and smoother weight maps $G_n^1, G_n^2, \ldots, G_n^j$ is thereby produced.
Similarly, a Gaussian pyramid is built for each input image $I_n$, and a Laplacian pyramid is constructed for each $I_n$ through the following recursive formula:
$$L_n^i = I_n^i - u_{g2}\left( d_{g2}\left( I_n^i \right) \right),$$
where $u_{g2}$ is an operator used to upsample an image to twice its original size.
Since $L_n^i$ captures the frequency content of the original image at scale $i$, a multi-scale combination over all images and scales gives
$$R_n^i = G_1^1 L_1^1 + \cdots + G_N^1 L_N^1 + G_1^2 L_1^2 + \cdots + G_N^2 L_N^2 + \cdots + G_1^j L_1^j + \cdots + G_N^j L_N^j = \sum_{i=1}^{j}\sum_{n=1}^{N} G_n^i L_n^i.$$
The image $R_n^i$ is then reconstructed by upsampling from the coarser scales to the finer scales to obtain the predicted image $P(x,y)$ as follows:
$$P(x,y) = \sum_{n=1}^{N}\sum_{i=1}^{j} u_{g2}\left( R_n^i \right).$$
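The weighted pyramid blending and collapse described above can be sketched with OpenCV's pyrDown/pyrUp, which play the roles of d_g2 and u_g2. The sketch assumes float inputs of identical size with weight maps already normalized across the inputs, and it keeps the coarsest low-pass band as the pyramid residual, a common Burt-Adelson choice that the equations above do not spell out.

```python
import cv2
import numpy as np

def pyramid_fuse(images, weight_maps, levels):
    """Blend Laplacian pyramids of the inputs with Gaussian pyramids of their
    captured maps, then collapse the result into the predicted image P."""
    fused = None
    for img, w in zip(images, weight_maps):
        gp_w, gp_i = [w], [img]
        for _ in range(levels):                     # Gaussian pyramids (d_g2)
            gp_w.append(cv2.pyrDown(gp_w[-1]))
            gp_i.append(cv2.pyrDown(gp_i[-1]))
        lp = [gp_i[i] - cv2.pyrUp(gp_i[i + 1], dstsize=gp_i[i].shape[::-1])
              for i in range(levels)] + [gp_i[-1]]  # Laplacian bands + low-pass residual
        blended = [gw * band for gw, band in zip(gp_w, lp)]
        fused = blended if fused is None else [f + b for f, b in zip(fused, blended)]
    out = fused[-1]
    for band in reversed(fused[:-1]):               # collapse: upsample (u_g2) and add
        out = cv2.pyrUp(out, dstsize=band.shape[::-1]) + band
    return out
```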

3.2. Infrared Brightness Supplement Module

As illustrated in Figure 2, the predicted image has better visual information. Compared with the infrared source image in Figure 2a,b, however, it is observed from Figure 2c,d that the infrared target is dim, which means that the infrared thermal information is not sufficiently extracted by the previous module. To address this, we propose an IBSM focused on image intensity.

3.2.1. Gradient Intensity Operator

Given a continuous grayscale image $I$, the intensity values are calculated in the horizontal and vertical directions. Let $x$ and $y$ be spatial coordinates such that the two intensity components can be denoted by $H(x,y)$ and $V(x,y)$; the grayscale image may then be written as $f = (H(x,y), V(x,y))$. The following notations are adopted: $(x, y) = (x_1, x_2) = x$, $f = (H, V) = (f_1, f_2)$, $y = f(x) = (f_1(x), f_2(x))$, and $x \in \mathbb{R}^2$, where $\mathbb{R}$ is the set of real numbers.
For $i, j = 1, 2$, we assume that the rank of the Jacobian matrix $J = [\partial f_j / \partial x_i]$ is two everywhere in $\mathbb{R}^2$. Let $f_i(x) = (\partial f_1 / \partial x_i,\ \partial f_2 / \partial x_i)$. According to this definition, $f_i(x)$ is a two-tuple of real numbers. Moreover, we postulate that the $f_i(x)$ and their first derivatives are continuous. For $i, k = 1, 2$, we set
$$g_{ik}(x) = f_i(x) \cdot f_k(x),$$
where “$\cdot$” denotes the dot product. According to the above notations, $f_i(x)$ can be written as follows:
$$p = \frac{\partial H}{\partial x}\,\mathbf{h} + \frac{\partial V}{\partial x}\,\mathbf{v},$$
$$q = \frac{\partial H}{\partial y}\,\mathbf{h} + \frac{\partial V}{\partial y}\,\mathbf{v},$$
where $\mathbf{h}$ and $\mathbf{v}$ are the unit vectors associated with $H$ and $V$, respectively. During the calculation of the partial derivatives, two Sobel operators are used as follows:
$$S_x = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}, \qquad S_y = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}.$$
Similarly, $g_{ik}(x)$, $i, k = 1, 2$, can be represented by
$$g_{xx} = p \cdot p = \left| \frac{\partial H}{\partial x} \right|^2 + \left| \frac{\partial V}{\partial x} \right|^2,$$
$$g_{yy} = q \cdot q = \left| \frac{\partial H}{\partial y} \right|^2 + \left| \frac{\partial V}{\partial y} \right|^2,$$
$$g_{xy} = g_{yx} = p \cdot q = \frac{\partial H}{\partial x} \frac{\partial H}{\partial y} + \frac{\partial V}{\partial x} \frac{\partial V}{\partial y}.$$
In image processing, we are often interested in two quantities [27] that are computed locally at each spatial coordinate $(x,y)$: (i) the direction through $(x,y)$ along which $f$ has the maximum rate of change; and (ii) the absolute value of this maximum rate of change. Therefore, we aim to maximize the quadratic form
$$(df)^2 = g_{xx}\,dx\,dx + g_{yy}\,dy\,dy + g_{xy}\,dx\,dy + g_{yx}\,dy\,dx,$$
under the condition
$$dx\,dx + dy\,dy = 1.$$
The above problem can also be formulated as finding a value of $\theta$ that maximizes the following expression:
$$\arg\max_{\theta}\ \left( g_{xx}\cos^2\theta + 2 g_{xy}\cos\theta\sin\theta + g_{yy}\sin^2\theta \right).$$
Let
$$F(\theta) = g_{xx}\cos^2\theta + 2 g_{xy}\cos\theta\sin\theta + g_{yy}\sin^2\theta.$$
Using the common trigonometric identities
$$\sin^2\theta = \tfrac{1}{2}\left( 1 - \cos 2\theta \right),$$
$$\cos^2\theta = \tfrac{1}{2}\left( 1 + \cos 2\theta \right),$$
$$\sin\theta\cos\theta = \tfrac{1}{2}\sin 2\theta,$$
$F(\theta)$ can be written as
$$F(\theta) = \tfrac{1}{2}\left[ g_{xx}\left( 1 + \cos 2\theta \right) + 2 g_{xy}\sin 2\theta + g_{yy}\left( 1 - \cos 2\theta \right) \right] = \tfrac{1}{2}\left[ g_{xx} + g_{yy} + \left( g_{xx} - g_{yy} \right)\cos 2\theta + 2 g_{xy}\sin 2\theta \right].$$
Letting $dF / d\theta = 0$, we obtain
$$\theta(x,y) = \frac{1}{2}\arctan\left( \frac{2 g_{xy}}{g_{xx} - g_{yy}} \right).$$
Here, $\theta(x,y)$ is the angle that determines the direction through $(x,y)$ along which $f$ has the maximum rate of change. If $\theta_0$ is a solution to this equation, then so is $\theta_0 \pm \pi/2$. Since $F(\theta) = F(\theta + \pi)$ on the basis of $\tan\theta = \tan(\theta \pm \pi)$, we may confine the values of $\theta$ to the interval $(0, \pi]$. Thus, Equation (31) provides two values that are $\pi/2$ apart at each $(x,y)$, which means that a pair of orthogonal directions is involved; along one of them, $f$ attains its maximum rate of change, while the minimum is attained along the other. Therefore, the absolute value of this maximum rate of change is given by
$$G_{\theta}(x,y) = \left\{ \tfrac{1}{2}\left[ g_{xx} + g_{yy} + \left( g_{xx} - g_{yy} \right)\cos 2\theta(x,y) + 2 g_{xy}\sin 2\theta(x,y) \right] \right\}^{1/2},$$
where $G_{\theta}(x,y)$ denotes the gradient intensity at $(x,y)$.
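The derivation above reduces to a few array operations. The sketch below computes this Di Zenzo-style gradient intensity from the two intensity components H and V with OpenCV Sobel kernels; passing the same grayscale image for both components reduces it to the classical Sobel gradient magnitude up to a constant factor.

```python
import cv2
import numpy as np

def gradient_intensity(H, V):
    """Maximum local rate of change of f = (H, V), following the derivation above."""
    Hx = cv2.Sobel(H, cv2.CV_64F, 1, 0, ksize=3)
    Hy = cv2.Sobel(H, cv2.CV_64F, 0, 1, ksize=3)
    Vx = cv2.Sobel(V, cv2.CV_64F, 1, 0, ksize=3)
    Vy = cv2.Sobel(V, cv2.CV_64F, 0, 1, ksize=3)
    gxx = Hx ** 2 + Vx ** 2
    gyy = Hy ** 2 + Vy ** 2
    gxy = Hx * Hy + Vx * Vy
    theta = 0.5 * np.arctan2(2 * gxy, gxx - gyy)    # direction of maximum change
    F_max = 0.5 * (gxx + gyy + (gxx - gyy) * np.cos(2 * theta)
                   + 2 * gxy * np.sin(2 * theta))
    return np.sqrt(np.maximum(F_max, 0))            # gradient intensity G_theta
```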

3.2.2. Exposedness Intensity Operator

Exposedness features are often extracted in MEIF tasks because they localize well-exposed regions effectively. Since infrared images resemble exposure images when brightness information is considered, this work introduces exposedness to IVIF. However, we also notice that there are two constants (0.5 and 0.2) in Equation (6), which means that the equation does not account for differences among the source images. This design is similar to the commonly used "weighted average" fusion rule: all pixel values are pushed toward 0.5 regardless of the image distribution, and regions where the source pixel values are 0 or 1 are ignored, resulting in a loss of structural details and infrared brightness in the fused image. To address this defect, we improve the exposedness intensity $A_n(x,y)$ by replacing the constants in Equation (6) with the mean and standard deviation of each input, giving
$$A_n(x,y) = \exp\left( -\frac{\left( I_n(x,y) - \mu_{I_n} \right)^2}{2 \sigma_{I_n}^2} \right),$$
where $\mu_{I_n}$ and $\sigma_{I_n}$ are the mean and standard deviation of the pixel intensities in $I_n$, respectively. In this way, the exposedness is determined by each input individually, making it an adaptive operator.
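In code, the adaptive operator is a one-liner around the image statistics; the input is assumed to be scaled to [0, 1].

```python
import numpy as np

def adaptive_exposedness(img):
    """Improved exposedness: the fixed 0.5 and 0.2 of Equation (6) are replaced
    by the image's own mean and standard deviation."""
    mu, sigma = img.mean(), img.std() + 1e-12
    return np.exp(-((img - mu) ** 2) / (2 * sigma ** 2))
```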
Next, the supplementary map $S_n(x,y)$ is obtained from the two feature maps as
$$S_n(x,y) = G_n^{\omega_1}(x,y) \cdot A_n^{\omega_2}(x,y),$$
where $\omega_1$ and $\omega_2$ are weights that determine the ratio of the two feature maps injected into the fused image.
Then, a Gaussian filter is used to smooth $S_n(x,y)$ and reduce noise artifacts:
$$R_n(x,y) = \mathrm{Gaussian}\left( S_n(x,y),\ \sigma_r \right),$$
where $\sigma_r$ denotes the standard deviation of the Gaussian kernel.
Finally, the fused image is obtained as
$$F = P(x,y) + \sum_{i=1}^{n} I_i(x,y) \cdot R_i(x,y).$$
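A sketch of the final IBSM steps is given below. Treating ω1 and ω2 as exponents on the gradient and exposedness maps follows our reading of the supplementary-map definition, the normalization of the gradient map is an added assumption, and the defaults follow Table 1.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ibsm_fuse(pred, sources, grad_maps, expo_maps, w1=1.0, w2=2.8, sigma_r=3):
    """Build the supplementary maps, smooth them, and inject the weighted source
    intensities into the predicted image P."""
    fused = pred.astype(np.float64).copy()
    for I, G, A in zip(sources, grad_maps, expo_maps):
        G = G / (G.max() + 1e-12)               # assumed rescaling of the gradient map
        S = (G ** w1) * (A ** w2)               # supplementary map S_n
        R = gaussian_filter(S, sigma_r)         # refined map R_n (noise suppression)
        fused += I * R                          # inject the weighted source intensity
    return np.clip(fused, 0, 1)
```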
Compared with the predicted image in Figure 2c,d, the fused image in Figure 2e achieves better visual results and maintains a color of the infrared target similar to that in the infrared input image.

4. Experimental Setting and Results Analysis

4.1. Experimental Setting

To verify the effectiveness of the proposed method, a large number of experiments are performed on the TNO [28] and RoadScene [29] datasets. Meanwhile, nine state-of-the-art fusion methods are compared with the proposed method: VGG-19 with multi-layer fusion (VggML) [30], ResNet with zero-phase component analysis (Resnet50) [31], Bayesian fusion (BayF) [32], algorithm unrolling image fusion (AUIF) [33], classification-saliency-based fusion (CSF) [34], the dual-discriminator conditional generative adversarial network (DDcGAN) [35], the semantic-aware real-time fusion network (SeAFusion) [36], visibility enhancement and hybrid multiscale decomposition (VEHMD) [17], and the Y-shape dynamic transformer (YDTR) [37]. Six indicators are measured for each method: the edge-based metric (Qabf) [38], structure-based metric (SSIM) [39], multiscale-feature-based metric (Qm) [40], phase-congruency-based metric (QP) [41], and the mutual information of wavelet (FMIw) and discrete cosine (FMIdct) features [42]. Higher values of these indicators represent better fusion results.

4.2. Parameter Discussion

Several parameters need to be discussed to find their optimal values, as listed in Table 1. In this subsection, the six objective indicators are averaged over eight sets of images from the TNO dataset and two groups of images from the RoadScene dataset; the parameter setting that yields the largest average values is considered optimal.
The role of the FGF is to eliminate possible discontinuities and noise in the combined maps. As shown in (13), to determine the optimal parameters, we vary one parameter while keeping the others fixed. Table 2 shows the average values of the six indicators for different $\varepsilon$; when $\varepsilon$ is 0.1, the averages are the largest. Similarly, the results in Table 3 show that the maximum values occur when $r$ is 8 and $s$ is 2.
As for the parameters $\omega_1$ and $\omega_2$, we know from the above discussion that these determine the importance of the gradient intensity and exposedness intensity in the fused image, and the more important features are assigned higher weight ratios. In the IBSM, the focus is on extracting the infrared heat information missing from the results of the SGCM from the source images and injecting it into the fused images. In other words, we need to ensure that the gradient strength in the fused image remains unchanged when injecting the infrared heat information, so as to realize effective complementation of the two modules. We thus set $\omega_1$ to 1.0, and $\omega_2$ is assigned a higher weight. Through extensive experiments on the eight sets of source images and evaluation of the six indicators, the trend of $\omega_2$ with $\omega_1$ fixed at 1.0 is shown in Table 4. From the table, we find that the averages of five of the indicators reach their maximum when $\omega_2$ is 2.8, the exception being Qm; moreover, its difference from the highest value is 0.001, which is a relatively small error. After a final comparison, we assign $\omega_1$ and $\omega_2$ as 1.0 and 2.8, respectively. As observed from Table 5, the average values of the six indicators decrease with continuous increase in $\sigma_r$, which shows that $\sigma_r$ equal to 3 is the most effective.

4.3. Subjective Comparisons

Figure 3 shows the results obtained by the ten fusion methods on images from the TNO dataset. In terms of preserving the infrared brightness information, the six methods based on VggML, Resnet50, BayF, AUIF, CSF, and YDTR yield relatively dim infrared targets, e.g., the persons and wheels in multiple scenes. The fusion results of the DDcGAN, SeAFusion, and VEHMD methods show that they can compensate for the abovementioned shortcomings; however, some problems must be noted: DDcGAN over-enhances the contrast of the source images, resulting in over-sharpened infrared targets and texture details, and the visible details of SeAFusion are partially lost, e.g., the branches of the trees appear intertwined rather than separated, as can be observed from the magnified blue regions. VEHMD generates undesirable noise and artifacts that alter the image characteristics. In contrast, the proposed method not only maintains the infrared target brightness well but also finely preserves the visible details without noticeable artifacts.
For further comparison, two groups of image pairs from the RoadScene dataset are examined, with the fused results shown in Figure 4 and Figure 5. In Figure 4, the overall contrast of VggML in Figure 4c, Resnet50 in Figure 4d, BayF in Figure 4e, and VEHMD in Figure 4j is dim, making it difficult to distinguish between infrared brightness information and texture details. The CSF and SeAFusion methods are extreme cases: one is too dim while the other is too bright, and both results look unnatural; the image information of the three lights in the distance is also lost, as shown in the blue enlarged boxes in Figure 4g,i. The DDcGAN in Figure 4h destroys the original texture structure of the source image, resulting in serious artifacts and noise. The AUIF and YDTR methods improve considerably in terms of infrared brightness extraction; detail information, however, is lost, e.g., the three lights in the blue magnified area in Figure 4f as well as the lightness and texture of the tree trunk in the red magnified area in Figure 4k. By contrast, our method has higher image quality owing to the well-preserved details of the infrared targets and visible features.
Similar conclusions can be drawn from Figure 5. The overall contrast of the VggML, Resnet50, and BayF methods is low, rendering the letters on the ground unrecognizable. Although the contrast of AUIF, SeAFusion, and YDTR has improved considerably, the over-enhanced contrast makes the fused image look too bright, causing the texture on the wheel to be lost, as can be verified from the blue enlarged areas in Figure 5f,i,k. In addition, the methods based on CSF, DDcGAN, and VEHMD produce artifacts and noise, e.g., around the outline of the tree within the blue magnified area. On the contrary, the fused image generated by the proposed algorithm in Figure 5l looks natural and preserves the finer details of the source images.

4.4. Objective Evaluation

To evaluate the proposed method more comprehensively, we use the six indicators to test the fusion performance on the TNO and RoadScene datasets, each containing 30 sets of images; these results are shown in Figure 6 and Figure 7. From Figure 6, we clearly observe that our method achieves the best average values for all six indicators. Figure 7 demonstrates that our method also performs well there, achieving four of the highest averages and two top-three averages among the six metrics. Judging from the overall trends of the ten methods on the two datasets, our method has a high probability of exceeding the performance of the other state-of-the-art methods.

4.5. Algorithm Effectiveness Analysis

After the dual verification of the subjective effects and objective indicators, we confirm that the proposed method is effective. To reiterate, the proposed method builds two different yet complementary feature extraction modules based on five typical operators, each of which plays a different role. In this section, therefore, we present a detailed analysis of the subjective visual maps generated by the mutual promotion of the five operators in the two modules. As shown in Figure 8, the maps in Figure 8(a1,b1) control the overall contour, and Figure 8(c1) reflects that the details are extracted according to scale from small to large and from coarse to fine. Their corresponding heatmaps in Figure 8(a2–c2) also represent these feature changes. After the multiplication operation, the captured maps in Figure 8(d1) show the integration of the individual feature maps, leading to a more uniform image gradient distribution, i.e., the infrared thermal radiation and gradient information are nearly similar to the information distribution of the source images, as verified in Figure 8(d2).
Similarly, the feature variations of the other module are shown in Figure 9. Unlike the SGCM in Figure 8, which focuses on the spatial gradient features and ignores infrared information, the design of the IBSM in Figure 9 follows two principles: (i) it keeps the overall outline and spatial gradient of the previous module unchanged; and (ii) it extracts infrared information from the source images into the predicted image. Inspired by two metrics related to image exposure, we introduce and improve them in this module to achieve good cross-modal fusion. From Figure 9(a1), the gradient intensity is seen to have clear object outlines and rich infrared brightness, although the visual effect is dim because it is obtained from the horizontal and vertical directions of the source images with the help of the Sobel operator, which has strong edge extraction capability. Subsequently, the exposedness intensity is calculated; as seen in Figure 9(b1), the brightness features are rich. However, maintaining a balance between the two intensities when one is too dim and the other too bright is a critical concern. In this case, the two weight coefficients $\omega_1$ and $\omega_2$ are used, followed by obtaining the supplementary maps. These act like a classifier responsible for separating the infrared brightness features from the gradient texture features, as shown in Figure 9(c1,c2). To reduce the noise artifacts, Gaussian smoothing is applied, resulting in the refined maps. Finally, the fused images are generated with abundant gradient textures and bright infrared targets.
Through the above analysis, we demonstrate the motivation for each step of the algorithm. It is also seen from the combination of the subjective visual image and objective indicators that the dual-module complementary strategy is successful.

4.6. Computational Efficiency

To compare the computational efficiency of the proposed method with the other methods, the average running time over a total of 60 images from the two datasets is calculated and presented in Table 6. As shown in Table 6, our method runs faster than CSF, AUIF, and VEHMD. The reason is that our method applies five typical operators directly to the pixels, whereas AUIF and VEHMD rely on the time-consuming operation of scale decomposition to obtain various sublayers, resulting in a substantial increase in running time. A few methods require less running time than the proposed method, primarily because their models are pre-trained, as is the case for VggML and Resnet50. Despite the effectiveness of these methods, the proposed method yields better results in terms of subjective and objective metrics.

5. Conclusions

This paper proposes an effective feature-oriented dual-module complementary IVIF strategy. Unlike existing multiscale fusion methods with carefully designed decomposition filters for feature extraction, we focus on the cross-modality introduction and improvement of several classic operators to build a fusion framework. First, PCA, saliency, and contrast estimation operators are used to jointly construct a module aimed at obtaining three kinds of feature maps, which are later reconstructed through pyramidal transformation to obtain the predicted image. Then, the IBSM is proposed to compensate for the missing infrared information in the predicted image by improving the gradient of the grayscale image and well-exposedness, two measurement operators applied to exposure modalities. The experimental results show that the proposed method has better fusion performance and outperforms other existing mainstream fusion methods. However, the proposed method also has limitations: the reconstruction step uses pyramid transformation, and the number of transformation layers changes adaptively with image resolution, which may increase the running time of the algorithm; this will be improved in future work.

Author Contributions

Y.Z. designed and developed the proposed method, conducted the experiments and wrote the manuscript. H.J.L. designed the new concept, provided the conceptual idea and insightful suggestions to refine it further, and reviewed the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the Basic Science Research Program through the National Research Foundation of Korea (NRF), by the Ministry of Education under Grant 2019R1D1A3A03103736, and in part by the project for Joint Demand Technology R&D of Regional SMEs funded by the Korea Ministry of SMEs and Startups in 2023 (Project No. RS-2023-00207672).

Informed Consent Statement

Not applicable.

Data Availability Statement

Unavailable due to further research.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ma, J.; Ma, Y.; Li, C. Infrared and visible image fusion methods and applications: A survey. Inf. Fusion 2019, 45, 153–178.
  2. Li, Q.; Han, G.; Liu, P.; Yang, H.; Chen, D.; Sun, X.; Wu, J.; Liu, D. A multilevel hybrid transmission network for infrared and visible image fusion. IEEE Trans. Instrum. Meas. 2022, 71, 1–14.
  3. Wang, Z.; Wu, Y.; Wang, J.; Xu, J.; Shao, W. ResFusion: Infrared and visible image fusion based on dense res2net and double nonlocal attention models. IEEE Trans. Instrum. Meas. 2022, 71, 1–12.
  4. Zhang, Q.; Xiao, T.; Huang, N.; Zhang, D.; Han, J. Revisiting feature fusion for RGB-T salient object detection. IEEE Trans. Circuits Syst. Video Technol. 2021, 31, 1804–1818.
  5. Zhou, W.; Zhu, Y.; Lei, J.; Wan, J.; Yu, L. CCAFNet: Crossflow and cross-scale adaptive fusion network for detecting salient objects in RGB-D images. IEEE Trans. Multimedia 2022, 24, 2192–2204.
  6. Wang, Y.; Xiao, Y.; Lu, J.; Tan, B.; Cao, Z.; Zhang, Z.; Zhou, J.T. Discriminative multi-view dynamic image fusion for cross-view 3-d action recognition. IEEE Trans. Neural Netw. Learn. Syst. 2021, 33, 5332–5345.
  7. Zeng, Z.; Wang, T.; Ma, F.; Zhang, L.; Shen, P.; Shah, S.; Bennamoun, M. Probability-based framework to fuse temporal consistency and semantic information for background segmentation. IEEE Trans. Multimedia 2021, 24, 740–754.
  8. Chen, L.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 834–848.
  9. Zhang, Q.; Huang, N.; Yao, L.; Zhang, D.; Shan, C.; Han, J. RGB-T salient object detection via fusing multi-level CNN features. IEEE Trans. Image Process. 2019, 29, 3321–3335.
  10. Mou, L.; Zhou, C.; Xie, P.; Zhao, P.; Jain, R.; Gao, W.; Yin, B. Isotropic self-supervised learning for driver drowsiness detection with attention-based multimodal fusion. IEEE Trans. Multimedia 2021, 25, 529–542.
  11. Bulanon, D.; Burks, T.; Alchanatis, V. Image fusion of visible and thermal images for fruit detection. Biosyst. Eng. 2009, 103, 12–22.
  12. Zhan, L.; Zhuang, Y.; Huang, L. Infrared and visible images fusion method based on discrete wavelet transform. J. Comput. 2017, 28, 57–71.
  13. Meng, F.; Song, M.; Guo, B.; Shi, R.; Shan, D. Image fusion based on object region detection and non-subsampled contourlet transform. Comput. Electr. Eng. 2016, 62, 375–383.
  14. Hu, H.; Wu, J.; Li, B.; Guo, Q.; Zheng, J. An adaptive fusion algorithm for visible and infrared videos based on entropy and the cumulative distribution of gray levels. IEEE Trans. Multimedia 2017, 19, 2706–2719.
  15. Yang, Y.; Zhang, Y.; Huang, S.; Zuo, Y.; Sun, J. Infrared and visible image fusion using visual saliency sparse representation and detail injection model. IEEE Trans. Instrum. Meas. 2021, 70, 1–15.
  16. Chen, L.; Yang, X.; Lu, L.; Liu, K.; Jeon, G.; Wu, W. An image fusion algorithm of infrared and visible imaging sensors for cyber-physical systems. J. Intell. Fuzzy Syst. 2019, 36, 4277–4291.
  17. Luo, Y.; He, K.; Xu, D.; Yin, W.; Liu, W. Infrared and visible image fusion based on visibility enhancement and hybrid multiscale decomposition. Optik 2022, 258, 168914.
  18. Liu, Y.; Chen, X.; Cheng, J.; Peng, H.; Wang, Z. Infrared and visible image fusion with convolutional neural networks. Int. J. Wavelets Multiresolut. Inf. Process. 2018, 16, 1–20.
  19. Li, H.; Wu, X. DenseFuse: A fusion approach to infrared and visible images. IEEE Trans. Image Process. 2019, 28, 2614–2623.
  20. Ma, J.; Yu, W.; Liang, P.; Li, C.; Jiang, J. FusionGAN: A generative adversarial network for infrared and visible image fusion. Inf. Fusion 2019, 48, 11–26.
  21. He, K.; Sun, J.; Tang, X. Guided image filtering. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 35, 1397–1409.
  22. Mertens, T.; Kautz, J.; Reeth, F.V. Exposure fusion: A simple and practical alternative to high dynamic range photography. Comput. Graph. Forum 2009, 28, 161–171.
  23. Wold, S.; Esbensen, K.; Geladi, P. Principal component analysis. Chemom. Intell. Lab. Syst. 1987, 2, 37–52.
  24. Ulucan, O.; Ulucan, D.; Turkan, M. Ghosting-free multi-exposure image fusion for static and dynamic scenes. Signal Process. 2023, 202, 108774.
  25. Hou, X.; Harel, J.; Koch, C. Image signature: Highlighting sparse salient regions. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 194–201.
  26. Tai, Y.W.; Brown, M.S. Single image defocus map estimation using local contrast prior. In Proceedings of the IEEE International Conference Image Processing, Cairo, Egypt, 7–10 November 2009; pp. 1797–1800.
  27. Di Zenzo, S. A note on the gradient of a multi-image. Comput. Vis. Graph. Image Process. 1986, 33, 116–125.
  28. Toet, A. TNO Image Fusion Dataset. 26 April 2014. Available online: https://figshare.com/articles/dataset/TNO_Image_Fusion_Dataset/1008029 (accessed on 26 April 2014).
  29. Xu, H.; Ma, J.; Jiang, J.; Guo, X.; Ling, H. U2Fusion: A unified unsupervised image fusion network. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 41, 502–518.
  30. Li, H.; Wu, X.; Kittler, J. Infrared and visible image fusion using a deep learning framework. In Proceedings of the IEEE Computer Vision Pattern Recognition (ICPR), Beijing, China, 20–24 August 2018; pp. 2705–2710.
  31. Li, H.; Wu, X.; Durrani, T.S. Infrared and visible image fusion with ResNet and zero-phase component analysis. Infrared Phys. Technol. 2019, 102, 103039.
  32. Zhao, Z.; Xu, S.; Zhang, C.; Liu, J.; Zhang, J. Bayesian fusion for infrared and visible images. Signal Process. 2020, 177, 107734.
  33. Zhao, Z.; Xu, S.; Zhang, J.; Liang, C.; Zhang, C.; Liu, J. Efficient and model-based infrared and visible image fusion via algorithm unrolling. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 1186–1196.
  34. Xu, H.; Zhang, H.; Ma, J. Classification saliency-based rule for visible and infrared image fusion. IEEE Trans. Comput. Imaging 2021, 7, 824–836.
  35. Ma, J.; Xu, H.; Jiang, J.; Mei, X.; Zhang, X.-P. DDcGAN: A dual-discriminator conditional generative adversarial network for multi-resolution image fusion. IEEE Trans. Image Process. 2020, 29, 4980–4995.
  36. Tang, L.; Yuan, J.; Ma, J. Image fusion in the loop of high-level vision tasks: A semantic-aware real-time infrared and visible image fusion network. Inf. Fusion 2022, 82, 28–42.
  37. Tang, W.; He, F.; Liu, Y. YDTR: Infrared and visible image fusion via Y-shape dynamic transformer. IEEE Trans. Multimedia 2022, 1–16.
  38. Xydeas, C.; Petrovíc, V. Objective image fusion performance measure. Electron. Lett. 2000, 36, 308–309.
  39. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
  40. Wang, P.; Liu, B. A novel image fusion metric based on multi-scale analysis. In Proceedings of the IEEE 9th International Conference Signal Processing, Beijing, China, 26–29 October 2008; pp. 965–968.
  41. Zhao, J.; Laganiere, R.; Liu, Z. Performance assessment of combinative pixel-level image fusion based on an absolute feature measurement. Int. J. Innov. Comput. Inf. Control 2007, 3, 1433–1447.
  42. Haghighat, M.; Razian, M.A. Fast-FMI: Non-reference image fusion metric. In Proceedings of the IEEE 8th International Conference on Application of Information and Communication Technologies (AICT), Astana, Kazakhstan, 15–17 October 2014; pp. 1–3.
Figure 1. Flowchart of the IVIF method via feature-oriented dual-module complementary. Different line formats represent different sections. A, B, and F represent the infrared image, visible image, and fused image, respectively.
Figure 2. Infrared input image, predicted image, fused image, and their corresponding heatmaps.
Figure 3. Subjective result images of the ten fusion methods. From top to bottom in order: infrared images (IR), visible images (VIS), VggML, Resnet50, BayF, AUIF, CSF, DDcGAN, SeAFusion, VEHMD, YDTR, and our method (Ours). From left to right in order: “Camp”, “Octec”, “Kaptein 1654”, “Jeep”, “Sand-path”, and “Kaptein 1123”. The red and blue boxes show the enlarged local regions.
Figure 4. Fused images using ten different methods when the source image is “FLIR_video_00018”.
Figure 5. Fused images using ten different methods when the source image is “FLIR_video_05245”.
Figure 6. Average values of the six objective indicators for 30 sets of images from the TNO dataset. Here, V1, R, B, D, C, Y, S, A, V2, and O denote VggML, Resnet50, BayF, DDcGAN, CSF, YDTR, SeAFusion, AUIF, VEHMD, and Ours, respectively.
Figure 7. Average values of the six objective indicators for 30 sets of images from the RoadScene dataset. Here, V1, R, B, D, C, Y, S, A, V2, and O denote VggML, Resnet50, BayF, DDcGAN, CSF, YDTR, SeAFusion, AUIF, VEHMD, and Ours, respectively.
Figure 8. The visual maps and their corresponding heatmaps of SGCM in the fusion framework (Figure 1).
Figure 9. The visual maps and their corresponding heatmaps of IBSM in the fusion framework (Figure 1).
Table 1. Parameters in the proposed method.
Parameters    | FGF: r | FGF: ε | FGF: s | S_n: ω1 | S_n: ω2 | R_n: σr
Optimal value | 8      | 0.1    | 2      | 1.0     | 2.8     | 3
Table 2. Averages of the six metrics for different ε on ten pairs of source images from two public datasets. Numbers in bold font represent the best value.
Metrics | ε = 10^-1 | ε = 10^-2 | ε = 10^-3 | ε = 10^-4
Qabf    | 0.4894    | 0.4816    | 0.4456    | 0.4091
SSIM    | 0.8349    | 0.8270    | 0.7854    | 0.7428
Qm      | 0.7102    | 0.6878    | 0.6246    | 0.5845
Qp      | 0.3974    | 0.3866    | 0.3361    | 0.2891
FMIdct  | 0.8924    | 0.8916    | 0.8870    | 0.8821
FMIw    | 0.4146    | 0.4098    | 0.3887    | 0.3713
Table 3. Averages of the six metrics for different r and s values on ten pairs of source images from two public datasets. Numbers in bold font represent the best value.
Metrics | r = 2, s = 0.5 | r = 2, s = 2 | r = 4, s = 1 | r = 4, s = 4 | r = 8, s = 2 | r = 8, s = 8
Qabf    | 0.4757         | 0.4710       | 0.4844       | 0.4766       | 0.4920       | 0.4774
SSIM    | 0.8270         | 0.8237       | 0.8323       | 0.8269       | 0.8363       | 0.8266
Qm      | 0.6922         | 0.6837       | 0.7049       | 0.6917       | 0.7092       | 0.6858
Qp      | 0.3877         | 0.3851       | 0.3939       | 0.3897       | 0.3994       | 0.3909
FMIdct  | 0.8922         | 0.8919       | 0.8924       | 0.8910       | 0.8927       | 0.8909
FMIw    | 0.4096         | 0.4085       | 0.4127       | 0.4115       | 0.4154       | 0.4119
Table 4. Averages of the six metrics for different ω 2 on ten pairs of source images from two public datasets. Numbers in bold font represent the best value.
Metrics            | Qabf   | SSIM   | Qm     | Qp     | FMIdct | FMIw
ω1 = 1.0, ω2 = 1.0 | 0.4897 | 0.8336 | 0.7080 | 0.3931 | 0.8922 | 0.4133
ω1 = 1.0, ω2 = 1.3 | 0.4908 | 0.8347 | 0.7090 | 0.3958 | 0.8923 | 0.4140
ω1 = 1.0, ω2 = 1.5 | 0.4912 | 0.8351 | 0.7096 | 0.3969 | 0.8925 | 0.4144
ω1 = 1.0, ω2 = 1.8 | 0.4916 | 0.8357 | 0.7095 | 0.3984 | 0.8926 | 0.4149
ω1 = 1.0, ω2 = 2.0 | 0.4906 | 0.8326 | 0.7059 | 0.3886 | 0.8885 | 0.4134
ω1 = 1.0, ω2 = 2.3 | 0.4919 | 0.8362 | 0.7093 | 0.3994 | 0.8926 | 0.4154
ω1 = 1.0, ω2 = 2.5 | 0.4920 | 0.8363 | 0.7085 | 0.3998 | 0.8926 | 0.4155
ω1 = 1.0, ω2 = 2.8 | 0.4921 | 0.8365 | 0.7086 | 0.4005 | 0.8926 | 0.4158
ω1 = 1.0, ω2 = 3.0 | 0.4920 | 0.8365 | 0.7088 | 0.4005 | 0.8926 | 0.4158
Table 5. Averages of the six metrics for different σ r on ten pairs of source images from two public datasets. Numbers in bold font represent the best value.
Metrics | σr = 3 | σr = 5 | σr = 7 | σr = 9
Qabf    | 0.4947 | 0.4879 | 0.4835 | 0.4806
SSIM    | 0.8381 | 0.8338 | 0.8310 | 0.8287
Qm      | 0.7106 | 0.7025 | 0.6947 | 0.6900
Qp      | 0.4011 | 0.3987 | 0.3967 | 0.3951
FMIdct  | 0.8927 | 0.8921 | 0.8921 | 0.8920
FMIw    | 0.4160 | 0.4152 | 0.4145 | 0.4138
Table 6. Average running time on two public datasets.
Methods   | Average Running Time on TNO Dataset (s) | Average Running Time on RoadScene Dataset (s)
VggML     | 5.9308                                  | 3.5316
Resnet50  | 3.7758                                  | 2.7426
BayF      | 1.1611                                  | 0.8939
DDcGAN    | 3.2797                                  | 1.3940
YDTR      | 2.1307                                  | 1.8370
CSF       | 14.8467                                 | 7.7464
VEHMD     | 78.2178                                 | 48.2212
SeAFusion | 0.0033                                  | 0.0029
AUIF      | 12.4612                                 | 7.1988
Ours      | 7.6707                                  | 7.0148