Author Contributions
Conceptualization, M.H. and Y.C.; methodology, M.H.; software, M.H.; validation, M.H. and Y.C.; formal analysis, M.H.; investigation, M.H.; resources, M.H.; data curation, M.H.; writing—original draft preparation, M.H.; writing—review and editing, Y.C.; visualization, M.H.; supervision, Y.C.; project administration, Y.C.; funding acquisition, Y.C.
Figure 1. Blurred and sharp images from the GOPRO dataset.
Figure 2. Similarity maps of (a) the content loss function and (b) the style loss function.
Figure 3. Architecture of the GAN.
Figure 4. Distributions of the discriminator (blue), generator (green), and real data (black) during the learning process.
Figure 5. Example of two probability distributions.
Figure 6. Graph of Equation (5).
Figure 7. Feature responses of layers in the CNN. The first row shows the original image and the conv_2 and conv_4 layers; the second row shows the conv_6, conv_9, and conv_12 layers.
Figure 8. Example images of WGAN-GP with the content loss function: (a) sharp image; (b) blurred image; (c) reconstructed image.
Figure 9. Enlarged regions of the example images.
Figure 10. Architecture of the VGG16 network.
Figure 11. Generated image using low-level features.
Figure 12. Architecture of the generator in WGAN-GP.
Figure 13. Architecture of the discriminator in WGAN-GP.
Figure 14. Configuration diagram for the entire network.
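The critic (discriminator) in Figures 13 and 14 is trained with the WGAN-GP objective. For readers unfamiliar with it, the sketch below shows the standard gradient penalty term in PyTorch; the critic interface and the penalty coefficient of 10 are assumptions for illustration, not details taken from this paper.

```python
# Minimal sketch of the WGAN-GP gradient penalty (illustrative, not the paper's code).
import torch

def gradient_penalty(critic, real, fake, gp_weight=10.0):
    """Penalize deviations of the critic's gradient norm from 1 on interpolated samples."""
    batch = real.size(0)
    eps = torch.rand(batch, 1, 1, 1, device=real.device)           # per-sample mixing factor
    interp = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    scores = critic(interp)                                          # critic outputs on interpolates
    grads = torch.autograd.grad(
        outputs=scores, inputs=interp,
        grad_outputs=torch.ones_like(scores),
        create_graph=True,
    )[0]
    grad_norm = grads.view(batch, -1).norm(2, dim=1)
    return gp_weight * ((grad_norm - 1.0) ** 2).mean()
```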
Figure 15. Twelve different blur kernels in the Kohler dataset.
Figure 16. Object images comparing the proposed method with filter-based methods: (a) Wiener filter; (b) bilateral filter; (c) proposed method; (d) original sharp image.
Figure 17. Background images comparing the proposed method with filter-based methods: (a) Wiener filter; (b) bilateral filter; (c) proposed method; (d) original sharp image.
Figure 18. Output images comparing the proposed method with filter-based methods on the Kohler dataset: (a) Wiener filter; (b) bilateral filter; (c) proposed method; (d) original sharp image.
Figure 19. Object images comparing the proposed method with WGAN-GP using the content loss method: (a) WGAN-GP with content loss; (b) proposed method; (c) original sharp image.
Figure 20. Background images comparing the proposed method with WGAN-GP using the content loss method: (a) WGAN-GP with content loss; (b) proposed method; (c) original sharp image.
Figure 21. Generated images comparing the proposed method with WGAN-GP using the content loss method on the Kohler dataset: (a) WGAN-GP with content loss; (b) proposed method; (c) original sharp image.
Figure 22. Generated images from extracting different layers in VGG16. The first row uses low layers, the second row combines low and high layers, and the last row uses high layers.
Figure 23. Generated images from extracting different layers in VGG16 on the Kohler dataset. The first row uses low layers, the second row combines low and high layers, and the last row uses high layers.
Figure 24. Generated object images for different λ values: (a) λ = 10; (b) λ = 100; (c) λ = 1000.
Figure 25. Generated background images for different λ values: (a) λ = 10; (b) λ = 100; (c) λ = 1000.
Figure 26. Generated images for different λ values on the Kohler dataset: (a) λ = 10; (b) λ = 100; (c) λ = 1000.
Table 1. PSNR and SSIM on the GOPRO Large dataset: the proposed method compared with filter-based methods.
| Method | PSNR | SSIM |
|---|---|---|
| Bilateral filter [1] | 26.67 | 0.93 |
| Wiener filter [2] | 28.58 | 0.92 |
| Proposed method | 33.29 | 0.98 |
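For reference, the sketch below shows one common way to compute the PSNR and SSIM values reported in Tables 1–8, assuming scikit-image is available; the file names are hypothetical placeholders, not files from the paper.

```python
# Minimal sketch of computing PSNR and SSIM between a restored image and the
# ground-truth sharp image; "sharp.png" and "deblurred.png" are placeholder names.
from skimage import io
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

sharp = io.imread("sharp.png")         # ground-truth sharp image (uint8, H x W x 3)
restored = io.imread("deblurred.png")  # output of the deblurring network

# PSNR in dB; data_range is the dynamic range of the pixel values (255 for uint8).
psnr = peak_signal_noise_ratio(sharp, restored, data_range=255)

# Mean SSIM; channel_axis=-1 treats the last axis as color (scikit-image >= 0.19).
ssim = structural_similarity(sharp, restored, data_range=255, channel_axis=-1)

print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.2f}")
```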
Table 2. PSNR and SSIM on the Kohler dataset: the proposed method compared with filter-based methods.
| Method | PSNR | SSIM |
|---|---|---|
| Bilateral filter [1] | 25.24 | 0.84 |
| Wiener filter [2] | 24.51 | 0.84 |
| Proposed method | 23.29 | 0.86 |
Table 3. PSNR and SSIM on the GOPRO Large dataset: the proposed method compared with WGAN-GP with content loss.
| Method | PSNR | SSIM |
|---|---|---|
| WGAN-GP with content loss [5] | 32.96 | 0.97 |
| Proposed method | 33.29 | 0.98 |
Table 4. PSNR and SSIM on the Kohler dataset: the proposed method compared with WGAN-GP with content loss.
| Method | PSNR | SSIM |
|---|---|---|
| WGAN-GP with content loss [5] | 23.24 | 0.80 |
| Proposed method | 23.29 | 0.86 |
Table 5. PSNR and SSIM for different VGG16 layers on the GOPRO Large dataset.
| Metric | Low Layers | Combined Layers | High Layers |
|---|---|---|---|
| PSNR | 29.69 | 30.02 | 33.29 |
| SSIM | 0.94 | 0.96 | 0.98 |
Table 6. PSNR and SSIM for different VGG16 layers on the Kohler dataset.
| Metric | Low Layers | Combined Layers | High Layers |
|---|---|---|---|
| PSNR | 22.45 | 21.59 | 23.29 |
| SSIM | 0.77 | 0.78 | 0.86 |
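Tables 5 and 6 compare features taken from low and high VGG16 layers. The sketch below illustrates how such feature maps can be extracted with torchvision; the specific layer indices (conv1_2 and conv4_3) are assumptions chosen for illustration, not the layers reported in the paper, and the `weights=` argument requires torchvision 0.13 or newer.

```python
# Minimal sketch of pulling feature maps from a low and a high VGG16 layer.
import torch
import torchvision.models as models

vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()

LOW_LAYER = 2    # conv1_2: low-level edges and textures (assumed index)
HIGH_LAYER = 21  # conv4_3: higher-level structure (assumed index)

def extract_features(x, layer_idx):
    """Run x through vgg16.features up to and including layer_idx."""
    with torch.no_grad():
        for i, layer in enumerate(vgg):
            x = layer(x)
            if i == layer_idx:
                return x

img = torch.randn(1, 3, 224, 224)               # placeholder for a normalized input image
print(extract_features(img, LOW_LAYER).shape)   # torch.Size([1, 64, 224, 224])
print(extract_features(img, HIGH_LAYER).shape)  # torch.Size([1, 512, 28, 28])
```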
Table 7. PSNR and SSIM for different λ values on the GOPRO Large dataset.
| Metric | λ = 10 | λ = 100 | λ = 1000 |
|---|---|---|---|
| PSNR | 31.38 | 33.29 | 28.96 |
| SSIM | 0.97 | 0.98 | 0.94 |
Table 8. PSNR and SSIM for different λ values on the Kohler dataset.
| Metric | λ = 10 | λ = 100 | λ = 1000 |
|---|---|---|---|
| PSNR | 23.31 | 23.29 | 22.45 |
| SSIM | 0.76 | 0.86 | 0.75 |
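Tables 7 and 8 vary the weight λ that balances the content loss against the adversarial term. The sketch below shows one common way such a weighted generator loss is formed, assuming a DeblurGAN-style objective L_total = L_adv + λ·L_content; this is an assumption about the setup, not the paper's exact implementation.

```python
# Minimal sketch of a λ-weighted generator loss (assumed form, for illustration).
import torch
import torch.nn.functional as F

def generator_loss(critic_fake_scores, fake_features, real_features, lam=100.0):
    """
    critic_fake_scores: critic outputs for generated (deblurred) images.
    fake_features / real_features: VGG16 feature maps of generated and sharp images.
    lam: weight of the content loss relative to the adversarial term.
    """
    adv_loss = -critic_fake_scores.mean()                     # WGAN generator term
    content_loss = F.mse_loss(fake_features, real_features)   # L2 distance in feature space
    return adv_loss + lam * content_loss

# Hypothetical usage with placeholder tensors:
scores = torch.randn(8, 1)
feat_fake = torch.randn(8, 512, 28, 28)
feat_real = torch.randn(8, 512, 28, 28)
loss = generator_loss(scores, feat_fake, feat_real, lam=100.0)
```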