Effective Three-Stage Demosaicking Method for RGBW CFA Images Using The Iterative Error-Compensation Based Approach

Department of Computer Science and Information Engineering, National Taiwan University of Science and Technology, Taipei 10672, Taiwan
*
Author to whom correspondence should be addressed.
Sensors 2020, 20(14), 3908; https://doi.org/10.3390/s20143908
Submission received: 1 June 2020 / Revised: 6 July 2020 / Accepted: 10 July 2020 / Published: 14 July 2020

Abstract

As color filter array (CFA) 2.0, the RGBW CFA pattern, in which each CFA pixel contains only one R, G, B, or W color value, provides more luminance information than the Bayer CFA pattern. Demosaicking RGBW CFA images $I_{RGBW}$ is necessary in order to provide high-quality RGB full-color images as the target images for human perception. In this letter, we propose a three-stage demosaicking method for $I_{RGBW}$. In the first stage, a cross shape-based color difference approach is proposed to interpolate the missing W color pixels in the W color plane of $I_{RGBW}$. In the second stage, an iterative error compensation-based demosaicking process is proposed to improve the quality of the demosaiced RGB full-color image. In the third stage, taking the input image $I_{RGBW}$ as the ground truth RGBW CFA image, an $I_{RGBW}$-based refinement process is proposed to refine the quality of the demosaiced image obtained in the second stage. Based on testing RGBW images collected from the Kodak and IMAX datasets, comprehensive experimental results illustrate that the proposed three-stage demosaicking method achieves substantial quality and perceptual improvement relative to the previous method by Hamilton and Compton and two state-of-the-art methods, Kwan et al.'s pansharpening-based method and Kwan and Chou's deep learning-based method.

1. Introduction

Most modern digital color cameras are equipped with a single sensor covered with a Bayer color filter array (CFA) pattern [1] such that each Bayer CFA pixel contains only one red (R), green (G), or blue (B) color value. The captured Bayer CFA image $I_{Bayer}$ consists of 25% R, 50% G, and 25% B color values. Besides the low hardware cost advantage, $I_{Bayer}$ can usually be demosaicked to a high-quality RGB full-color image using the existing demosaicking methods for $I_{Bayer}$ [2,3,4,5,6,7]. However, under low illumination, the thermal noise side effect of $I_{Bayer}$ leads to low quality of the reconstructed RGB full-color image [8]. Therefore, to receive more luminance than the Bayer CFA pattern, a single sensor covered with an RGBW CFA pattern [9], in which each pixel contains only one R, G, B, or white (W) color value, has been incorporated into some digital color cameras and mobile phones, such as the Huawei Ascend series, Huawei G8, Huawei P8, Huawei Mate S, and Oppo R7 Plus. The image captured by such a device is called the RGBW CFA image $I_{RGBW}$.
The three commonly used 4 × 4 RGBW-Kodak CFA patterns [10] are depicted in Figure 1a–c. From practical experience, Compton and Hamilton [10] found that the W color photoresponse is three to four times more sensitive to wide-spectrum light than the R, G, or B color photoresponse. To balance the sensitivity of the R, G, and B pixels against that of the W pixels, they suggested increasing the size of the R, G, and B color pixels; this is the reason why the RGBW CFA patterns were proposed. For convenience, we take the RGBW-Kodak-1 CFA pattern as the representative pattern in our discussion, although the discussion is also applicable to the other two RGBW-Kodak CFA patterns.
Before providing an RGB full-color image as the target image for human perception, demosaicking the RGBW CFA image $I_{RGBW}$ is a necessary step. Because the RGBW CFA patterns differ from the Bayer CFA pattern, the existing demosaicking methods for $I_{Bayer}$ cannot be directly used to demosaic $I_{RGBW}$. In what follows, we introduce the related demosaicking methods for $I_{RGBW}$.

1.1. Related Work

Given an $I_{RGBW}$, Hamilton and Compton [11] first partition each 4 × 4 RGBW CFA block (see Figure 1a) into two disjoint 4 × 4 CFA blocks: one contains only eight W color values and is called the W-CFA block; the other contains two R color values, four G color values, and two B color values and is called the RGB-CFA block. For the whole RGB-CFA image, they average each pair of same-color values to produce a quarter-sized Bayer CFA image $I_{q,Bayer}$. Next, $I_{q,Bayer}$ is demosaicked to an RGB full-color image $I_{q,RGB}$. For the whole W-CFA image, a bilinear interpolation is first applied to recover all the missing W color values, producing a W color image $I_W$. Then, an averaging-based downsampling is applied to $I_W$ to produce a quarter-sized W image $I_{q,W}$. Furthermore, the quarter-sized color difference images $I_{q,RW} = I_{q,R} - I_{q,W}$, $I_{q,GW} = I_{q,G} - I_{q,W}$, and $I_{q,BW} = I_{q,B} - I_{q,W}$ are formed, and bilinear interpolation is applied to upsample them, producing the three color difference images $I_{RW}$, $I_{GW}$, and $I_{BW}$. Finally, adding $I_W$ to $I_{RW}$, $I_{GW}$, and $I_{BW}$, respectively, produces the required R, G, and B color planes of the demosaiced RGB full-color image $I_{RGB}$.
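The color-difference pipeline above can be sketched in a few lines. The sketch below is illustrative only: it assumes the W plane has already been fully interpolated and the quarter-sized Bayer image already demosaicked, and it substitutes nearest-neighbor upsampling (via `np.kron`) for the bilinear upsampling used in the paper; the function name `hc_demosaick_sketch` is our own.

```python
import numpy as np

def hc_demosaick_sketch(I_W, I_q_RGB):
    """Sketch of the HC color-difference reconstruction step: given the
    full-resolution W plane I_W (H x W) and the quarter-sized demosaiced
    RGB image I_q_RGB (H/2 x W/2 x 3), form quarter-sized color-difference
    planes, upsample them, and add I_W back to obtain the RGB output."""
    H, W = I_W.shape
    # quarter-sized W plane by 2x2 block averaging (averaging-based downsampling)
    I_q_W = I_W.reshape(H // 2, 2, W // 2, 2).mean(axis=(1, 3))
    out = np.empty((H, W, 3))
    for c in range(3):
        diff_q = I_q_RGB[..., c] - I_q_W         # e.g. I_q_RW = I_q_R - I_q_W
        diff = np.kron(diff_q, np.ones((2, 2)))  # nearest-neighbor upsampling stand-in
        out[..., c] = diff + I_W                 # add W back: R = (R - W) + W
    return out
```

On flat regions this reproduces the quarter-sized colors exactly; on edges, the quality depends on how well the color-difference planes are upsampled, which is why the paper's later stages target exactly this residual error.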
Condat [12] proposed a variational approach to demosaick $I_{RGBW}$ (see Figure 1(III) in [12], which is exactly the RGBW-Kodak-1 pattern in Figure 1) such that the demosaiced RGB full-color image has maximal smoothness under the constraint of consistency with the measurements. For convenience, his variational approach is called the VA method. In the VA method, Condat considered the orthonormal basis corresponding to the luminance, red-green chrominance, and blue-yellow chrominance. Furthermore, he transformed the problem of demosaicking $I_{RGBW}$ into a minimization problem with a regularization function. Finally, an iterative numerical process was delivered to solve the problem.
Kwan et al. [13] observed that the downsampling process on the interpolated W image $I_W$ may lose useful information in $I_W$. To recover as much of the lost information in $I_W$ as possible, they employed a hybrid color mapping-based pansharpening technique [14] within Hamilton and Compton's method in order to reconstruct a better demosaiced RGB full-color image. First, they demosaicked $I_{q,Bayer}$ to obtain $I_{q,RGB}$ using Zhang et al.'s method [7], and then they stacked the R and G planes of $I_{q,RGB}$ onto $I_{q,W}$, obtaining $I_{q,RGW}$. Secondly, by solving a minimization problem with a regularization term, they derived a matrix T to transform $I_{q,RGW}$ to $I_{q,RGB}$. Thirdly, they stacked the upsampled versions of $I_{q,R}$ and $I_{q,G}$ onto $I_W$, obtaining $I_{RGW}$. Finally, with the help of T, $I_{RGW}$ is transformed to the RGB full-color image $I_{RGB}$.
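The matrix T at the heart of the hybrid color mapping can be estimated by regularized least squares over collocated pixels. The sketch below is a generic reconstruction of that idea, not the authors' code: `lam` is an assumed regularization weight, and the per-pixel linear model (a single 3 × 3 matrix mapping (R, G, W) vectors to (R, G, B) vectors) is the simplest variant of hybrid color mapping [14].

```python
import numpy as np

def fit_color_mapping(I_q_RGW, I_q_RGB, lam=1e-3):
    """Fit a matrix T mapping stacked (R, G, W) pixel vectors to
    (R, G, B) pixel vectors by ridge-regularized least squares.
    `lam` is an assumed regularization weight, not a value from the paper."""
    X = I_q_RGW.reshape(-1, 3)  # each row: one (R, G, W) pixel
    Y = I_q_RGB.reshape(-1, 3)  # each row: one (R, G, B) pixel
    # T solves min ||X T - Y||^2 + lam ||T||^2 via the normal equations
    T = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ Y)
    return T
```

Once T is fitted on the quarter-sized images, applying it row-wise to the full-resolution $I_{RGW}$ yields the pansharpened RGB estimate.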
Based on the convolutional neural network-based framework "DEMONET" [15], Kwan and Chou [16] applied the trained version of DEMONET to demosaic $I_{q,Bayer}$, which was obtained using Hamilton and Compton's method, producing the demosaiced quarter-sized RGB full-color image $I_{q,RGB}$. Next, they created a 2 × 2 fictitious Bayer pattern in which the W color pixel in $I_W$ is treated as the G color pixel, while the R color pixel and the B color pixel come from $I_{q,RGB}$. They then applied DEMONET to the fictitious Bayer CFA image again, producing an RBW full-color image $I_{DEMONET}^{RBW}$, and extracted the W color image $I_{DEMONET}^{W}$ from $I_{DEMONET}^{RBW}$. With the help of the same matrix T [13], they performed the pansharpening technique [14] on $I_{q,RGB}$ and $I_{DEMONET}^{W}$ twice, and then the demosaiced RGB full-color image is obtained.
Based on new inter-pixel chrominance capture and an optimal demosaicking transformation, Zhang et al. [17] proposed an effective universal demosaicking method, and the experimental results demonstrated the significant quality superiority of the demosaiced images for six kinds of CFA images, including the RGBW CFA pattern in Figure 1a. In [18], Amba et al. pointed out the time-consuming problem of [17].
Among the above-mentioned six related methods, considering the availability of codes, the first four methods are included in the comparative methods. For convenience, the methods by Hamilton and Compton [11], Kwan et al. [13], and Kwan and Chou [16] are called the HC method, the pansharpening-based method, and the deep learning-based method, respectively.

1.2. Contributions

For $I_{RGBW}$, this letter proposes an effective three-stage demosaicking method. In the first stage, the proposed cross shape-based color difference technique is used to reconstruct the missing W color pixels in the W color plane of $I_{RGBW}$ more effectively, and the reconstructed W color plane has a good edge-preserving effect. In the second stage, based on the interpolated W color pixels and $I_{RGBW}$, an iterative error compensation-based demosaicking process is proposed in order to reduce the demosaicking error in the R, G, and B color planes of the demosaiced RGB full-color image. In the third stage, taking $I_{RGBW}$ as the ground truth RGBW CFA image, an $I_{RGBW}$-based refinement approach is proposed to improve the result obtained in the second stage, achieving a better demosaiced RGB full-color image.
Based on the testing RGBW CFA images collected from the Kodak and IMAX datasets, the comprehensive experimental results demonstrated that our three-stage demosaicking method achieves substantial quality improvement of the demosaiced RGB full-color images when compared with the HC method [11], the VA method [12], the pansharpening-based method [13], and the deep learning-based method [16]. In addition, the perceptual effect merit of our three-stage method is illustrated relative to the four comparative methods.
The rest of this letter is organized as follows. In Section 2, the proposed three-stage demosaicking method for $I_{RGBW}$ is presented. In Section 3, comprehensive experiments are carried out to demonstrate the quality and perceptual effect merits of our three-stage method. In Section 4, some concluding remarks are addressed.

2. The Proposed Three-Stage Demosaicking Method For RGBW CFA Images

The proposed demosaicking method for $I_{RGBW}$ consists of three new stages: (1) the cross shape-based color difference approach to reconstruct the missing W color pixels in the W color plane of $I_{RGBW}$, (2) the iterative error compensation process to minimize the error in the R, G, and B color planes of the demosaiced RGB full-color image, and (3) the $I_{RGBW}$-based refinement approach to improve the result obtained by the second stage, achieving a better demosaiced RGB full-color image.

2.1. The First Stage: The Cross Shape-Based Color Difference Approach To Construct The Missing W Color Pixels

In this stage, we propose a simple and fast cross shape-based color difference approach to reconstruct the missing W color pixels in the W color plane of $I_{RGBW}$. First, we put a cross shape centered at the B color pixel in Figure 1a, as shown in Figure 2. We only discuss how to reconstruct the W color value at the B color pixel $I_B^{RGBW}$, but the discussion is also applicable to reconstructing the W color values at the R and G color pixels. Beyond the regular cross-shaped color difference approach, also considering the other B pixels in the 9 × 9 window, which lie off the cross, may improve the estimation of the W color value, although it takes more time. In the same way, our discussion is also applicable to the other two RGBW-Kodak CFA patterns presented in Figure 1b,c; there, we simply apply an X-shaped color difference approach by considering the diagonal pixel positions.
In the proposed cross shape-based color difference approach, to reconstruct the W color value at the B color pixel $I_B^{RGBW}(i,j)$, as shown in Figure 2, we consider the four neighboring W color pixels of $I_B^{RGBW}(i,j)$, which are located in the set $W_W(i,j) = \{(i \pm 1, j), (i, j \pm 1)\}$. Simultaneously, we also consider the four neighboring B color pixels of $I_B^{RGBW}(i,j)$, which are located in the set $W_B(i,j) = \{(i \pm 4, j), (i, j \pm 4)\}$. Based on our cross shape-based color difference approach, the reconstructed W color value at $I_B^{RGBW}(i,j)$, denoted by $I_W(i,j)$, can be calculated by

$$I_W(i,j) = I_B^{RGBW}(i,j) + \frac{\sum_{(x,y) \in W_W(i,j)} \omega_{D_{WB}}(x,y)\, D_{WB}(x,y)}{\sum_{(x,y) \in W_W(i,j)} \omega_{D_{WB}}(x,y)} \qquad (1)$$

where $D_{WB}(x,y)$ denotes the color difference value at location $(x,y)$. To reconstruct $I_W(i,j)$, we must consider four such color difference values. For simplicity, we only explain how to calculate the color difference value at location $(i+1,j)$; the other three locations are handled analogously. The color difference value $D_{WB}(i+1,j)$ is calculated by

$$D_{WB}(i+1,j) = I_W^{RGBW}(i+1,j) - I_B^{RGBW}(i+4,j) \qquad (2)$$

The weight $\omega_{D_{WB}}(x,y)$ of $D_{WB}(x,y)$ can be calculated by

$$\omega_{D_{WB}}(x,y) = \left[\, 1 + \sum_{(p,q) \in W_W(i,j)} \left| D_{WB}(x,y) - D_{WB}(p,q) \right| \,\right]^{-1} \qquad (3)$$
Furthermore, we apply a fusion strategy to achieve better reconstructed W color values. If the standard deviation of the four neighboring W color pixels in the domain $W_W(i,j)$ is larger than or equal to a threshold T (empirically, T = 10), Equation (1) is adopted to reconstruct the W color value $I_W(i,j)$ at location $(i,j)$; otherwise, the refined value of $I_W(i,j)$ is fused by

$$I_W(i,j) = \alpha \times W_{mean} + (1 - \alpha) \times I_W(i,j) \qquad (4)$$

where $W_{mean}$ denotes the mean value of the four W color values in the domain $W_W(i,j)$, and the value of $I_W(i,j)$ on the right-hand side of Equation (4) is the one obtained by Equation (1). Empirically, the best choice of $\alpha$ is 0.6, determined after examining the candidate values 0.1, 0.2, ..., 0.9.
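Under the stated definitions, the first stage reduces to a small per-pixel computation. The following sketch implements Equations (1)–(4) at a single B position; border handling, the R and G cases, and the X-shaped variant for Figure 1b,c are omitted, and the function name is ours.

```python
import numpy as np

T_STD = 10    # threshold on the std of the four W neighbors (Section 2.1)
ALPHA = 0.6   # empirically chosen fusion weight (Section 2.1)

def reconstruct_w_at_b(I, i, j):
    """Cross shape-based estimate of the missing W value at a B pixel
    (i, j) of the RGBW CFA image `I`, following Equations (1)-(4)."""
    offsets = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    # color differences D_WB: each W neighbor minus the B pixel 4 steps away
    D = [I[i + di, j + dj] - I[i + 4 * di, j + 4 * dj] for di, dj in offsets]
    # weights: inverse of (1 + sum of absolute differences to the others), Eq. (3)
    w = [1.0 / (1.0 + sum(abs(d - e) for e in D)) for d in D]
    W_est = I[i, j] + sum(wk * dk for wk, dk in zip(w, D)) / sum(w)   # Eq. (1)
    # fusion: if the four W neighbors vary little, blend with their mean, Eq. (4)
    W_neighbors = [I[i + di, j + dj] for di, dj in offsets]
    if np.std(W_neighbors) < T_STD:
        W_est = ALPHA * np.mean(W_neighbors) + (1 - ALPHA) * W_est
    return W_est
```

On a locally flat region the color differences vanish and the estimate collapses to the fused neighbor mean, which is the intended edge-preserving behavior.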

2.2. The Second Stage: The Iterative Error Compensation Approach To Minimize The R, G, B Errors

This stage consists of two steps. In the first step, we modify the HC method [11] to produce a rough demosaiced RGB full-color image $I_{rough,RGB}$. In the second step, we propose a novel iterative error compensation-based approach in order to substantially enhance the quality of $I_{rough,RGB}$.
(1) Producing the rough demosaiced image $I_{rough,RGB}$: first, we perform an averaging-based downsampling process on $I_W$, which has been obtained in the first stage, to produce a quarter-sized W image $I_{q,W}$. Next, we perform the averaging-based downsampling process on the RGB-CFA image, as mentioned in the first paragraph of Section 1.1, to obtain the quarter-sized Bayer CFA image $I_{q,Bayer}$. Subsequently, $I_{q,Bayer}$ is demosaicked to a full-color image $I_{q,RGB}$, and $I_{q,RGB}$ is decomposed into three color images, namely $I_{q,R}$, $I_{q,G}$, and $I_{q,B}$.
Furthermore, three quarter-sized color difference images, $I_{q,RW}$ (= $I_{q,R} - I_{q,W}$), $I_{q,GW}$ (= $I_{q,G} - I_{q,W}$), and $I_{q,BW}$ (= $I_{q,B} - I_{q,W}$), are created, and then a bilinear-based upsampling process is performed on the three quarter-sized color difference images, yielding the three color difference images $I_{RW}$, $I_{GW}$, and $I_{BW}$. Finally, we add $I_W$ back to $I_{RW}$, $I_{GW}$, and $I_{BW}$, respectively, to produce the three color images $I_R$, $I_G$, and $I_B$; combining the three color images gives the rough demosaiced RGB full-color image $I_{rough,RGB}$. Owing to the application of our cross shape-based color difference approach to reconstruct a better W color plane $I_W$, $I_{rough,RGB}$ has better quality than that of Hamilton and Compton's method [11].
However, as described below, there is distortion between the input RGBW CFA image $I_{RGBW}$ and the distorted RGBW CFA image, denoted by $I_{distorted}^{RGBW}$, which is extracted from $I_{rough,RGB}$. The distortion in $I_{distorted}^{RGBW}$ prompts us to develop a new iterative error compensation-based approach to further enhance the quality of $I_{rough,RGB}$.
(2) Iterative error compensation-based approach to enhance the quality of the rough demosaiced image $I_{rough,RGB}$: we first define the error RGBW CFA image, denoted by $I_{err}^{RGBW}$, as $I_{distorted}^{RGBW} - I_{RGBW}$; in practice, $I_{distorted}^{RGBW}$ is often somewhat different from $I_{RGBW}$. We then propose an iterative error compensation-based approach to further enhance the quality of $I_{rough,RGB}$.
We take a real 4 × 4 block to explain how to construct the error RGBW CFA block $B_{err}^{RGBW}$. Figure 3a depicts the original 4 × 4 RGBW CFA block $B^{RGBW}$. Figure 3b shows the corresponding rough demosaiced RGB full-color block $B_{rough,RGB}$ produced by the method described in the first step of Section 2.2. By the equation W = (R + G + B)/3, the W color value at the top-left corner of $B_{rough,RGB}$ in Figure 3b is 54, and so is the collocated W color value of the distorted RGBW CFA block $B_{distorted}^{RGBW}$ in Figure 3c. However, the collocated W color value of $B^{RGBW}$ in Figure 3a is 43. Therefore, the W color value at the top-left corner of $B_{err}^{RGBW}$ in Figure 3d is 11 (= 54 − 43). Similarly, the G color value at the top-right corner of Figure 3b is 41, but the collocated G color value of Figure 3a is 46; therefore, the G color value at the top-right corner of the error RGBW CFA block in Figure 3d is −5 (= 41 − 46). Consequently, Figure 3d illustrates the error RGBW CFA block $B_{err}^{RGBW}$.
From the observation of the error RGBW CFA block in Figure 3d, to enhance the quality of $I_{rough,RGB}$, we propose an iterative error compensation-based approach to minimize the R, G, and B pixel errors in $I_{rough,RGB}$. Analogously to the demosaicking of $I_{RGBW}$ that yields $I_{rough,RGB}$, we first demosaic the error RGBW CFA image $I_{err}^{RGBW}$ to obtain the demosaiced error RGB full-color image $I_{err}^{RGB}$.

Subsequently, $I_{rough,RGB}$ is improved by performing the following error-compensation process:

$$I_{improved,RGB} := I_{rough,RGB} - I_{err}^{RGB} \qquad (5)$$

Based on the Kodak and IMAX datasets, the iteration number of the above error-compensation process is capped at 10; the stop criterion depends on the average absolute difference per pixel between two consecutive error RGBW maps, and the iteration terminates when this average absolute difference falls below the threshold value 0.5. Let the resultant demosaiced RGB full-color image be denoted by $I_{improved,RGB}$. After running our iterative error compensation-based approach on Figure 4a, as marked by the red ellipses in Figure 4c, the quality of several demosaiced R, G, and B color values in $B_{rough,RGB}$ has been improved.
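The second-stage loop can be summarized as follows, with the rough demosaicking process and the CFA re-sampling abstracted as callables. `demosaick` and `extract_cfa` are placeholder names of ours; the cap of 10 iterations and the 0.5 threshold follow the text.

```python
import numpy as np

def iterative_error_compensation(I_RGBW, demosaick, extract_cfa,
                                 max_iters=10, tol=0.5):
    """Sketch of the second-stage loop: repeatedly extract the RGBW CFA
    image implied by the current demosaiced result, form the error CFA
    image against the input, demosaick that error, and subtract it
    (Equation (5)).  Stops when the mean absolute difference per pixel
    between consecutive error maps drops below `tol`."""
    I_rgb = demosaick(I_RGBW)          # rough demosaiced image
    prev_err = None
    for _ in range(max_iters):
        err_cfa = extract_cfa(I_rgb) - I_RGBW       # I_err^RGBW
        I_rgb = I_rgb - demosaick(err_cfa)          # Equation (5)
        if prev_err is not None and np.mean(np.abs(err_cfa - prev_err)) < tol:
            break
        prev_err = err_cfa
    return I_rgb
```

The key property is that the same demosaicking operator is applied to the error image as to the data, so the compensation step cancels the systematic part of the reconstruction error.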

2.3. The Third Stage: The $I_{RGBW}$-Based Refinement Process To Zeroize $I_{err}^{improved,RGBW}$

In this stage, utilizing the input RGBW CFA image $I_{RGBW}$ as the ground truth RGBW CFA reference, we propose an $I_{RGBW}$-based refinement process to further improve the quality of $I_{improved,RGB}$, which has been obtained in the second stage. The main idea of the third stage is to zeroize the error RGBW CFA image of $I_{improved,RGB}$, namely $I_{err}^{improved,RGBW} = I_{distorted}^{improved,RGBW} - I_{RGBW}$, where $I_{distorted}^{improved,RGBW}$ is extracted from $I_{improved,RGB}$.
This stage consists of two steps. In the first step, we correct each c color pixel of $I_{improved,RGB}(x,y)$, $c \in \{R, G, B\}$, that is collocated with a same-color pixel of $I_{RGBW}(x,y)$; the c color value is corrected by performing the assignment operation $c := I_{RGBW}(x,y)$. After that, every c color pixel of $I_{improved,RGB}(x,y)$, $c \in \{R, G, B\}$, collocated with a same-color pixel of $I_{RGBW}(x,y)$ has been corrected. For convenience, the refined version of $I_{improved,RGB}(x,y)$ is denoted by $I_{refined,RGB}(x,y)$.
In the second step, we refine the R, G, and B color values of $I_{improved,RGB}(x,y)$, say $c_r$, $c_g$, and $c_b$, respectively, that are collocated with a W color pixel of $I_{RGBW}(x,y)$. Note that the location set $\{(x,y)\}$ considered in the second step and the location set considered in the first step are disjoint. The refinement operation in the second step is performed by $I_{refined,RGB}(x,y) := I_{improved,RGB}(x,y) + k$, where $k = W - \frac{1}{3}(c_r + c_g + c_b)$. After that, at location $(x,y)$, since the sum of the refined R, G, and B color values equals 3W, i.e., $(c_r + c_g + c_b) + 3W - (c_r + c_g + c_b) = 3W$, we conclude that the W color value derived from $I_{refined,RGB}(x,y)$, which is collocated with the W color value in $I_{RGBW}(x,y)$, has been corrected.
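The two refinement steps admit a direct sketch. Here `cfa_mask` is a hypothetical array of ours, marking which sample (R = 0, G = 1, B = 2, or W = 3) each CFA position carries; the paper itself works directly from the pattern layout.

```python
import numpy as np

def refine_with_cfa(I_improved, cfa_mask, I_RGBW):
    """Third-stage refinement sketch.  At R/G/B CFA positions the
    collocated channel is overwritten with the CFA sample; at W
    positions a common offset k = W - (R + G + B)/3 is added to all
    three channels so the derived W value matches the CFA sample."""
    out = I_improved.copy()
    h, w = cfa_mask.shape
    for y in range(h):
        for x in range(w):
            c = cfa_mask[y, x]
            if c < 3:                                   # an R, G, or B CFA pixel
                out[y, x, c] = I_RGBW[y, x]
            else:                                       # a W CFA pixel
                k = I_RGBW[y, x] - out[y, x].sum() / 3.0
                out[y, x] += k
    return out
```

Because every CFA position is touched exactly once, re-extracting the RGBW CFA image from the refined result reproduces the input exactly, i.e., the error image is zeroized.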
Consequently, all of the error entries in $I_{err}^{improved,RGBW}$ have been corrected, i.e., zeroized. After performing the third stage on Figure 4d, the error RGBW CFA block of $B_{refined,RGB}$ is a 4 × 4 zero block, as shown in Figure 4e.
As marked by the yellow ellipses in Figure 4d, we observe that most color pixels in the refined 4 × 4 demosaiced RGB full-color block $B_{refined,RGB}$ are equal or closer to those in the 4 × 4 ground truth RGB full-color block $B_{RGB}$ in Figure 4a, relative to $B_{improved,RGB}$ in Figure 4c, indicating the quality improvement merit of our $I_{RGBW}$-based refinement process achieved by zeroizing the error RGBW image $I_{err}^{improved,RGBW}$.

3. Experimental Results

For fairness in comparison with the two state-of-the-art demosaicking methods [13,16], whose codes are unavailable, in the first set of experiments the same 18 RGBW CFA images and 12 RGBW CFA images collected from the IMAX dataset [19] and the Kodak dataset [20], as shown in Figure 5a,b, respectively, are adopted to demonstrate the quality merit of our method. For completeness, in the second set of experiments, the Kodak dataset with 24 images and the IMAX dataset are used to demonstrate the quality merit of our method. In the next paragraph, the implementation details for the considered methods are described.
In the HC method, as a subroutine to convert a quarter-sized Bayer CFA image $I_{q,Bayer}$ to a quarter-sized RGB full-color image $I_{q,RGB}$, we have tried the bicubic interpolation-based demosaicking process; this version of the HC method is called the HC$_{bicubic}$ method. Besides the bicubic interpolation-based demosaicking process, we have also tried Kiku et al.'s demosaicking process [3] in HC, and this second version of the HC method is called the HC$_{Kiku}$ method. The executable codes of the HC$_{bicubic}$ and HC$_{Kiku}$ methods [11] can be accessed from the website [21]. Thanks to the available code for the VA method [22], the VA method is included in the comparative methods. Since the complete codes for the pansharpening-based method [13] and the deep learning-based method [16] are unavailable, we adopt the experimental data reported in their papers.
Similarly, as a subroutine in our method, we have tried Kiku et al.'s method [3] and Zhang et al.'s method [7] to demosaick $I_{q,Bayer}$ to $I_{q,RGB}$, respectively; however, the average CPSNR values of our method with either of them are similar. The executable code of our method can be accessed from the website [23].
All of the experiments are implemented on a computer with an Intel Core i7-8700 CPU at 3.2 GHz and 32 GB RAM. The operating system is 64-bit Microsoft Windows 10, and the program development environment is Visual C++ 2017.

3.1. Objective Quality Merit of Our Method

Let $P = \{(x,y) \mid 1 \le x \le H,\ 1 \le y \le W\}$ denote the set of pixel coordinates in one RGB full-color image of size $W \times H$. The CPSNR (color peak signal-to-noise ratio) of one demosaiced RGB full-color image is expressed as

$$\mathrm{CPSNR} = 10 \log_{10} \frac{255^2}{\mathrm{CMSE}} \qquad (6)$$

with

$$\mathrm{CMSE} = \frac{1}{3WH} \sum_{p \in P} \sum_{c \in \{R,G,B\}} \left[ I_{n,c}^{ori,RGB}(p) - I_{n,c}^{rec,RGB}(p) \right]^2 \qquad (7)$$

where $I_{n,c}^{ori,RGB}(p)$ and $I_{n,c}^{rec,RGB}(p)$ denote the c ($\in \{R,G,B\}$) color values of the pixels at position p in the nth original and the nth demosaiced RGB full-color images, respectively.
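Equations (6) and (7) translate directly into code; the small function below computes the CPSNR of one demosaiced image (for n test images, the per-image values are averaged as in Table 1).

```python
import numpy as np

def cpsnr(I_ori, I_rec):
    """CPSNR between a ground-truth RGB image and its demosaiced
    reconstruction, following Equations (6)-(7): the squared error is
    averaged over all pixels and all three color channels (CMSE)."""
    cmse = np.mean((I_ori.astype(np.float64) - I_rec.astype(np.float64)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / cmse)
```

Note that identical images give an infinite CPSNR, so implementations typically guard against a zero CMSE.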
In the first set of experiments, for the six considered demosaicking methods, Table 1 tabulates the CPSNR values for IMAX and Kodak separately; in addition, the related average CPSNR values and the average CPSNR gains are listed. In Table 1, we observe that the proposed three-stage method, abbreviated as "Proposed", has the best Kodak CPSNR and average CPSNR values (in boldface) among the considered methods. The average CPSNR gain of our method over HC$_{bicubic}$ is 5.992 (= 35.072 − 29.08) dB. In addition, the average CPSNR gains of our method are 3.472 (= 35.072 − 31.60) dB, 1.572 (= 35.072 − 33.50) dB, 1.652 (= 35.072 − 33.42) dB, and 0.652 (= 35.072 − 34.42) dB when compared with the HC$_{Kiku}$ method, the VA method, the pansharpening-based method [13], and the deep learning-based method [16], respectively.
For completeness, in the second set of experiments, based on the Kodak dataset with 24 images and the IMAX dataset, Table 2 indicates the objective quality merit of our method relative to the other three comparative methods in terms of CPSNR, SSIM (structural similarity index), and the average ΔE. Here, SSIM measures the joint luminance, contrast, and structure similarity between the ground-truth RGB full-color image and the reconstructed one; we refer readers to [24] for the detailed definition of SSIM. The average ΔE denotes the average CIELAB error per pixel between the ground-truth LAB image, which is converted from the ground-truth RGB full-color image, and the reconstructed one; we refer readers to [17] for the definition of ΔE.

3.2. Perceptual Effect Merit of Our Method

Besides illustrating the CPSNR merit of our method, this subsection demonstrates the perceptual effect merit of our method relative to the comparative methods [11,12,13,16].
We take the magnified subimage cropped from the testing image "IMAX 1", as shown in Figure 6a, to show the perceptual effect merit of our method. After performing the considered demosaicking methods on the RGBW CFA image of Figure 6a, Figure 6b–g demonstrate the six demosaiced RGB full-color subimages, respectively. When compared with HC$_{bicubic}$ [11], HC$_{Kiku}$ [11], and VA [12], as highlighted by the red ellipses, our method has the best perceptual effect (also refer to the SSIM gain of our method in Table 2) and the least color-shifting side effect (also refer to the average ΔE gain of our method in Table 2). As shown in Figure 6e,g, our method has a better perceptual effect relative to the pansharpening-based method [13]. As shown in Figure 6f,g, the perceptual effect of our method is quite competitive with the deep learning-based method [16]; in detail, our method has better edges inside the ellipses, but is noisier inside the circle and the star.
We also take the magnified subimage cropped from the testing image "Kodak 8", as shown in Figure 7a, to show the perceptual effect merit of our method. After performing the considered demosaicking methods on the RGBW CFA image of Figure 7a, Figure 7b–g demonstrate the six demosaiced RGB full-color subimages, respectively. As shown in the fences, our three-stage method has the best perceptual effect and the least rainbow side effect.

3.3. Actual Time Cost of Our Method

In this subsection, the actual time cost of our method is reported. For $I_{RGBW}$, two image resolutions, 1280 × 720 and 1440 × 1080, are created from the Kodak dataset; 1280 × 720 is often used for HD DVD and Blu-ray DVD, and 1440 × 1080 is often used for HDV. The experimental data indicate that the average execution time is 11.58 s for one 1280 × 720 image and 19.92 s for one 1440 × 1080 image. At these two image resolutions, the proposed method can only process video frames offline.

4. Conclusions

We have presented our three-stage method for demosaicking RGBW CFA images. In the first stage, we propose a cross shape-based color difference approach to reconstruct the missing W color pixels in the W color plane of $I_{RGBW}$. In the second stage, based on $I_{RGBW}$ and the reconstructed W color plane, the modified version of the HC method is first performed on $I_{RGBW}$ in order to obtain a rough demosaiced image $I_{rough,RGB}$; secondly, we propose an error compensation-based demosaicking method to reduce the error of the R, G, and B color values in $I_{rough,RGB}$, obtaining the improved demosaiced RGB full-color image $I_{improved,RGB}$. In the third stage, an $I_{RGBW}$-based refinement process is proposed to zeroize the error RGBW CFA image of $I_{improved,RGB}$, achieving a better demosaiced RGB full-color image. Based on the testing RGBW CFA images collected from the Kodak and IMAX datasets, the comprehensive experimental data have justified the CPSNR and perceptual effect merits of our method relative to the HC$_{bicubic}$ and HC$_{Kiku}$ methods [11], the pansharpening-based method [13], and the deep learning-based method [16].
Our future work is to adjust our demosaicking method to tackle the SNR limitation problem when considering the effect of read noise and limited full well charge of small pixels in more modern sensors.

Author Contributions

Conceptualization, K.-L.C.; methodology, K.-L.C., T.-H.C., and S.-N.C.; software, T.-H.C. and S.-N.C.; validation, K.-L.C., T.-H.C., and S.-N.C.; formal analysis, K.-L.C., T.-H.C., and S.-N.C.; investigation, K.-L.C., T.-H.C., and S.-N.C.; resources, K.-L.C.; data curation, T.-H.C. and S.-N.C.; writing—original draft preparation, K.-L.C.; writing—review and editing, K.-L.C., T.-H.C., and S.-N.C.; visualization, T.-H.C. and S.-N.C.; supervision, K.-L.C.; project administration, K.-L.C.; funding acquisition, K.-L.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by Grant MOST 107-2221-E-011-108-MY3 from the Ministry of Science and Technology, Taiwan, R. O. C.

Acknowledgments

The authors appreciate the valuable comments of the three anonymous reviewers and the proofreading help of Ms. C. Harrington to improve the manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Bayer, B.E. Color Imaging Array. U.S. Patent 3,971,065, 20 July 1976.
  2. Chung, K.L.; Yang, W.J.; Yan, W.M.; Wang, C.C. Demosaicing of color filter array captured images using gradient edge detection masks and adaptive heterogeneity-projection. IEEE Trans. Image Process. 2008, 17, 2356–2367.
  3. Kiku, D.; Monno, Y.; Tanaka, M.; Okutomi, M. Residual interpolation for color image demosaicking. In Proceedings of the IEEE International Conference on Image Processing, Melbourne, Australia, 15–18 September 2013; pp. 2304–2308.
  4. Li, X.; Gunturk, B.; Zhang, L. Image demosaicing: A systematic survey. In Proceedings of the SPIE-IS&T Electronic Imaging, Visual Communications and Image Processing, San Jose, CA, USA, 27–31 January 2008; Volume 6822, pp. 68221J–68221J15.
  5. Menon, D.; Calvagno, G. Color image demosaicking: An overview. Signal Process. Image Commun. 2011, 26, 518–533.
  6. Pei, S.C.; Tam, I.K. Effective color interpolation in CCD color filter arrays using signal correlation. IEEE Trans. Circuits Syst. Video Technol. 2003, 13, 503–513.
  7. Zhang, L.; Wu, X.; Buades, A.; Li, X. Color demosaicking by local directional interpolation and nonlocal adaptive thresholding. J. Electron. Imaging 2011, 20, 023016.
  8. Lee, S.H.; Oh, P.; Kang, M.G. Three dimensional colorization based image/video reconstruction from white-dominant RGBW pattern images. Digit. Signal Process. 2019, 93, 87–101.
  9. Ono, S. Image-Capturing Apparatus. U.S. Patent Application No. 10/166,271, 30 January 2003.
  10. Compton, J.T.; Hamilton, J.F. Image Sensor with Improved Light Sensitivity. U.S. Patent 8,139,130, 20 March 2012.
  11. Hamilton, J.F.; Compton, J.T. Processing Color and Panchromatic Pixels. U.S. Patent 0,024,879 A1, 25 September 2012.
  12. Condat, L. A generic variational approach for demosaicking from an arbitrary color filter array. In Proceedings of the IEEE International Conference on Image Processing, Cairo, Egypt, 7–10 November 2009; pp. 1625–1628.
  13. Kwan, C.; Chou, B.; Kwan, L.M.; Budavari, B. Debayering RGBW color filter arrays: A pansharpening approach. In Proceedings of the IEEE Ubiquitous Computing, Electronics & Mobile Communication Conference, New York, NY, USA, 19–21 October 2017; pp. 94–100.
  14. Zhou, J.; Kwan, C.; Budavari, B. Hyperspectral image superresolution: A hybrid color mapping approach. J. Appl. Remote Sens. 2016, 10, 035024.
  15. Gharbi, M.; Chaurasia, G.; Paris, S.; Durand, F. Deep joint demosaicking and denoising. ACM Trans. Graph. 2016, 35, 1–2.
  16. Kwan, C.; Chou, B. Further improvement of debayering performance of RGBW color filter arrays using deep learning and pansharpening techniques. J. Imaging 2019, 5, 68.
  17. Zhang, C.; Yan, L.; Wang, J.; Hao, P. Universal demosaicking of color filter arrays. IEEE Trans. Image Process. 2016, 25, 5173–5186.
  18. Amba, P.; Alleysson, D.; Mermillod, M. Demosaicing using dual layer feedforward neural network. In Proceedings of the Twenty-Sixth Color and Imaging Conference, Vancouver, BC, Canada, 12–16 November 2018; pp. 211–218.
  19. IMAX True Color Image Collection. Available online: https://www.comp.polyu.edu.hk/~cslzhang/CDM_Dataset.htm (accessed on 30 June 2020).
  20. Kodak True Color Image Collection. Available online: http://www.math.purdue.edu/~lucier/PHOTO_CD/BMP_IMAGES/ (accessed on 30 June 2020).
  21. Execution Codes of HCbicubic and HCKiku. Available online: ftp://140.118.175.164/HC/ (accessed on 30 June 2020).
  22. Execution Code of VA Method. Available online: https://lcondat.github.io/publications.html (accessed on 20 June 2020).
  23. Execution Code of Our Method. Available online: ftp://140.118.175.164/ours/ (accessed on 30 June 2020).
  24. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
Figure 1. The three commonly used RGBW-Kodak CFA patterns. (a) RGBW-Kodak-1. (b) RGBW-Kodak-2. (c) RGBW-Kodak-3.
Figure 2. The mask used in our cross shape-based color difference approach to reconstruct the missing W color pixels in the W color plane of I R G B W .
Figure 3. The construction of the error RGBW CFA block B e r r R G B W . (a) The original 4 × 4 RGBW CFA block B R G B W . (b) The rough demosaiced block B r o u g h , R G B . (c) The distorted RGBW CFA block B d i s t o r t e d R G B W . (d) The error RGBW CFA block B e r r R G B W .
Figure 4. The quality improvement achieved by the second and third stages of our three-stage demosaicking method. (a) The original 4 × 4 RGB full-color block B R G B . (b) The rough demosaiced block B r o u g h , R G B . (c) As marked by red ellipses, the improved demosaiced R, G, and B color values in B i m p r o v e d , R G B obtained by our iterative error compensation-based approach. (d) As marked by yellow ellipses, the improved demosaiced R, G, and B color values in B r e f i n e d , R G B obtained by our I R G B W -based refinement process. (e) The error RGBW CFA block of B r e f i n e d , R G B has been zeroized.
Figure 5. The testing IMAX and Kodak datasets. (a) The 18 testing images in the IMAX dataset. (b) The 12 testing images in the Kodak dataset.
Figure 6. The perceptual effect merit of the proposed method for the testing image “IMAX 1”. (a) The magnified subimage cropped from the ground truth image. (b) The HC-bicubic method [11]. (c) The HC-Kiku method [11]. (d) The VA method [12]. (e) The pansharpening-based method [13]. (f) The deep learning-based method [16]. (g) Our three-stage method.
Figure 7. The perceptual effect merit of the proposed method for the testing image “Kodak 8”. (a) The magnified subimage cropped from the ground truth image. (b) The HC-bicubic method [11]. (c) The HC-Kiku method [11]. (d) The VA method [12]. (e) The pansharpening-based method [13]. (f) The deep learning-based method [16]. (g) Our three-stage method.
Table 1. CPSNR Merit of the Proposed Three-Stage Method.

                     HC-bicubic  HC-Kiku  VA        Pansharpening-     Deep Learning-     Proposed
                     [11]        [11]     [12]      Based Method [13]  Based Method [16]
CPSNR  IMAX          29.32595    31.4849  31.05877  33.26039           33.98561           33.47046
CPSNR  Kodak         28.84192    31.7211  35.95515  33.58083           34.85267           36.67467
Average CPSNR        29.08       31.60    33.50     33.42              34.42              35.072
Average CPSNR gain   5.988635    3.46957  1.56561   1.65196            0.65343
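As a reminder of how the CPSNR values above are computed: CPSNR is the PSNR evaluated jointly over the R, G, and B channels, i.e., 10 log10(255² / CMSE), where CMSE is the mean squared error taken over all pixels and all three channels. A minimal NumPy sketch (the function name `cpsnr` is ours):

```python
import numpy as np

def cpsnr(ref, test):
    # CPSNR: PSNR with the MSE averaged jointly over all three channels.
    ref = ref.astype(np.float64)
    test = test.astype(np.float64)
    mse = np.mean((ref - test) ** 2)
    if mse == 0:
        return float('inf')  # identical images
    return 10.0 * np.log10(255.0 ** 2 / mse)
```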
Table 2. CPSNR, SSIM, and ΔE Merits of the Proposed Three-Stage Method.

                     HC-bicubic [11]  HC-Kiku [11]  VA [12]    Proposed
CPSNR  IMAX          29.32595         31.4849       31.05877   33.47046
CPSNR  Kodak         29.26757         31.85273      35.73768   36.57367
Average CPSNR        29.29676         31.66882      33.39823   35.02207
Average CPSNR gain   5.72531          3.353255      1.623845
SSIM   IMAX          0.8921           0.915778      0.898775   0.933092
SSIM   Kodak         0.895423         0.933169      0.973259   0.975752
Average SSIM         0.8937615        0.924474      0.936017   0.954422
Average SSIM gain    0.0606605        0.029949      0.018405
ΔE     IMAX          3.531127         2.777121      2.966827   2.392574
ΔE     Kodak         3.708511         2.598007      2.122344   2.173272
Average ΔE           3.619819         2.687564      2.544586   2.282923
Average ΔE gain      -1.336896        -0.40464      -0.26166
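The ΔE rows report the mean CIELAB color difference between each demosaiced image and its ground truth; lower is better, so the negative gains indicate the proposed method's improvement. Since the text does not spell out the exact variant, the sketch below uses the classic ΔE*ab (CIE76) formula with the standard sRGB-to-Lab conversion under a D65 white point; treat it as illustrative rather than the evaluation code used here.

```python
import numpy as np

def srgb_to_lab(img):
    # 8-bit sRGB -> linear RGB -> XYZ (D65) -> CIELAB.
    c = img.astype(np.float64) / 255.0
    lin = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)
    M = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = lin @ M.T
    t = xyz / np.array([0.95047, 1.0, 1.08883])  # D65 white point
    f = np.where(t > (6 / 29) ** 3, np.cbrt(t),
                 t / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116.0 * f[..., 1] - 16.0
    a = 500.0 * (f[..., 0] - f[..., 1])
    b = 200.0 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def mean_delta_e76(ref, test):
    # Mean per-pixel Euclidean distance in CIELAB space.
    d = srgb_to_lab(ref) - srgb_to_lab(test)
    return float(np.mean(np.linalg.norm(d, axis=-1)))
```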

Share and Cite

MDPI and ACS Style

Chung, K.-L.; Chan, T.-H.; Chen, S.-N. Effective Three-Stage Demosaicking Method for RGBW CFA Images Using The Iterative Error-Compensation Based Approach. Sensors 2020, 20, 3908. https://doi.org/10.3390/s20143908
