Figure 1.
Three-dimensional reconstruction of an object: (a) affected by quasi-periodic noise, and (b) the original object. The image shows the surface deformation caused by the noise present in images acquired by three-step fringe projection.
Figure 2.
Images from the database with quasi-periodic noise at different frequencies: (a) quasi-periodic noise at 4 frequencies, (b) quasi-periodic noise at 8 frequencies, (c) quasi-periodic noise at 16 frequencies, (d) quasi-periodic noise at 32 frequencies.
Figure 3.
Three-dimensional models acquired from the TurboSquid platform.
Figure 4.
Set of images obtained from a single scene with a 3D model. (a) Ground truth, (b) original 3D model, (c) region of interest, (d) 3D model with background, (e–g) images of the object with a projected 120°-shifted pattern composed of 4 frequencies, (h–j) reference images with a 4-frequency composite pattern, (k–m) images of the object with a projected 120°-shifted pattern composed of 8 frequencies, (n–p) reference images with an 8-frequency composite pattern, (q–s) images of the object with a projected 120°-shifted pattern composed of 16 frequencies, (t–v) reference images with a 16-frequency composite pattern, (w–y) images of the object with a projected 120°-shifted pattern composed of 32 frequencies, (z,aa,ab) reference images with a 32-frequency composite pattern.
Figure 5.
The methodology used to generate a database of images with quasi-periodic noise.
Figure 6.
Images from the database created with the Blender software: (a,c,e,g) images affected by quasi-periodic noise at different frequencies, (b,d,f,h) the corresponding ground-truth images.
Figure 7.
Architecture of the convolutional neural network model developed and implemented.
Figure 8.
Evolution of training and validation loss for models trained with noisy images affected by different frequencies due to the different projected patterns. (a) Images with 4 frequencies, (b) images with 8 frequencies, (c) images with 16 frequencies, (d) images with 32 frequencies, and (e) images with multifrequencies (4, 8, 16, and 32).
Figure 9.
Two-dimensional representation of the object Cat. (a) Image with quasi-periodic noise produced by the projection of a 4-frequency pattern; inference obtained with models trained with (b) 4 frequencies, (c) 8 frequencies, (d) 16 frequencies, (e) 32 frequencies, and (f) multifrequencies. (g) Ground-truth image, and (h) original object.
Figure 10.
Three-dimensional representation of the object Cat. (a) Image with quasi-periodic noise produced by the projection of a 4-frequency pattern; inference obtained with models trained with (b) 4 frequencies, (c) 8 frequencies, (d) 16 frequencies, (e) 32 frequencies, and (f) multifrequencies. (g) Ground-truth image, and (h) original object.
Figure 11.
Profile comparison of 3D objects.
Figure 12.
Two-dimensional representation of the object Cat. (a) Image with quasi-periodic noise produced by the projection of an 8-frequency pattern; inference obtained with models trained with (b) 4 frequencies, (c) 8 frequencies, (d) 16 frequencies, (e) 32 frequencies, and (f) multifrequencies. (g) Ground-truth image, and (h) original object.
Figure 13.
Three-dimensional representation of the object Cat. (a) Image with quasi-periodic noise produced by the projection of an 8-frequency pattern; inference obtained with models trained with (b) 4 frequencies, (c) 8 frequencies, (d) 16 frequencies, (e) 32 frequencies, and (f) multifrequencies. (g) Ground-truth image, and (h) original object.
Figure 14.
Profile comparison of 3D objects.
Figure 15.
Two-dimensional representation of the object Cat. (a) Image with quasi-periodic noise produced by the projection of a 16-frequency pattern; inference obtained with models trained with (b) 4 frequencies, (c) 8 frequencies, (d) 16 frequencies, (e) 32 frequencies, and (f) multifrequencies. (g) Ground-truth image, and (h) original object.
Figure 16.
Three-dimensional representation of the object Cat. (a) Image with quasi-periodic noise produced by the projection of a 16-frequency pattern; inference obtained with models trained with (b) 4 frequencies, (c) 8 frequencies, (d) 16 frequencies, (e) 32 frequencies, and (f) multifrequencies. (g) Ground-truth image, and (h) original object.
Figure 17.
Profile comparison of 3D objects.
Figure 18.
Two-dimensional representation of the object Cat. (a) Image with quasi-periodic noise produced by the projection of a 32-frequency pattern; inference obtained with models trained with (b) 4 frequencies, (c) 8 frequencies, (d) 16 frequencies, (e) 32 frequencies, and (f) multifrequencies. (g) Ground-truth image, and (h) original object.
Figure 19.
Three-dimensional representation of the object Cat. (a) Image with quasi-periodic noise produced by the projection of a 32-frequency pattern; inference obtained with models trained with (b) 4 frequencies, (c) 8 frequencies, (d) 16 frequencies, (e) 32 frequencies, and (f) multifrequencies. (g) Ground-truth image, and (h) original object.
Figure 20.
Profile comparison of 3D objects.
Table 1.
Parameters used during network training for comparison, trained with images affected by quasi-periodic noise at four different patterns (4, 8, 16, and 32 frequencies), as seen in Figure 2.
| Parameter | Pattern 1 (Number of Fringes 4) | Pattern 2 (Number of Fringes 8) | Pattern 3 (Number of Fringes 16) | Pattern 4 (Number of Fringes 32) | Pattern 5 (Multifrequency Pattern) |
|---|---|---|---|---|---|
| Batch size | 4 | 4 | 4 | 4 | 4 |
| Initial weights | Gaussian random (mean = 0.0, standard deviation = 0.01) | Gaussian random (mean = 0.0, standard deviation = 0.01) | Gaussian random (mean = 0.0, standard deviation = 0.01) | Gaussian random (mean = 0.0, standard deviation = 0.01) | Gaussian random (mean = 0.0, standard deviation = 0.01) |
| Bias | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| Learning rate | 0.007 | 0.007 | 0.007 | 0.007 | 0.007 |
| Optimizer | Adam() | Adam() | Adam() | Adam() | Adam() |
| Training loss | MSELoss() | MSELoss() | MSELoss() | MSELoss() | MSELoss() |
| Validation loss | MSELoss() | MSELoss() | MSELoss() | MSELoss() | MSELoss() |
| Data split (train, val) | 90%, 10% | 90%, 10% | 90%, 10% | 90%, 10% | 90%, 10% |
| Image size (width, height) | pixels | pixels | pixels | pixels | pixels |
| Training images | 1050 | 1050 | 1050 | 1050 | 4200 |
| Validation images | 105 | 105 | 105 | 105 | 420 |
| Test images | 300 | 300 | 300 | 300 | 300 |
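The hyperparameters of Table 1 map directly onto a PyTorch training configuration. The sketch below is illustrative only, under the assumption that the networks were built in PyTorch (suggested by the `Adam()`/`MSELoss()` notation in the table); the small `nn.Sequential` model is a hypothetical stand-in for the actual architecture of Figure 7.

```python
import torch
import torch.nn as nn

def init_weights(m):
    # Gaussian random initial weights (mean = 0.0, std = 0.01) and
    # zero bias, as listed in Table 1.
    if isinstance(m, (nn.Conv2d, nn.Linear)):
        nn.init.normal_(m.weight, mean=0.0, std=0.01)
        if m.bias is not None:
            nn.init.zeros_(m.bias)

# Hypothetical stand-in for the CNN of Figure 7 (the real architecture
# is not reproduced here).
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),
)
model.apply(init_weights)

optimizer = torch.optim.Adam(model.parameters(), lr=0.007)  # Table 1
criterion = nn.MSELoss()  # used for both training and validation loss
batch_size = 4
```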
Table 2.
Time required to perform each training, and the training and validation losses reached during network training, for comparison using images with four different patterns (4, 8, 16, and 32 frequencies), as seen in Figure 2.
| | Pattern 1 (Number of Fringes 4) | Pattern 2 (Number of Fringes 8) | Pattern 3 (Number of Fringes 16) | Pattern 4 (Number of Fringes 32) | Pattern 5 (Multifrequency Pattern) |
|---|---|---|---|---|---|
| Training loss | 0.10275 | 0.11939 | 0.09801 | 0.08825 | 0.12041 |
| Validation loss | 0.11187 | 0.10390 | 0.10042 | 0.09749 | 0.10443 |
| Training time (HH:MM:SS) | 0:59:37 | 1:08:49 | 0:58:12 | 1:00:16 | 5:20:22 |
Table 3.
Measures obtained with the model trained with images affected by noise at 4 frequencies.
| Inference | IMMSE | SSIM | PSNR (dB) | MSE (Profile) |
|---|---|---|---|---|
| 1 | 0.022 | 0.871 | 64.676 | 0.064 |
| 2 | 0.017 | 0.879 | 65.767 | 0.048 |
| 3 | 0.033 | 0.828 | 62.900 | 0.089 |
| 4 | 0.046 | 0.793 | 61.547 | 0.124 |
| 5 | 0.012 | 0.873 | 67.263 | 0.034 |
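The quality measures reported in Tables 3–6 are standard full-reference metrics: IMMSE is the per-pixel mean squared error (the quantity computed by MATLAB's `immse`), PSNR is derived from it, and SSIM needs a windowed implementation (e.g., `skimage.metrics.structural_similarity`). Below is a minimal NumPy sketch of the first two, under the assumption of 8-bit images with a data range of 255; the toy arrays are illustrative, not data from the paper.

```python
import numpy as np

def immse(a, b):
    """Mean squared error between two images (MATLAB immse equivalent)."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    return np.mean((a - b) ** 2)

def psnr(a, b, data_range=255.0):
    """Peak signal-to-noise ratio in dB for images with the given data range."""
    mse = immse(a, b)
    if mse == 0:
        return np.inf  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

# Toy example: a constant image and a copy with every pixel off by 1.
gt = np.full((4, 4), 100.0)
den = gt + 1.0
print(immse(gt, den))  # 1.0
print(psnr(gt, den))   # 10*log10(255^2 / 1) ≈ 48.13 dB
```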
Table 4.
Measures obtained with the model trained with images affected by noise at 8 frequencies.
| Inference | IMMSE | SSIM | PSNR (dB) | MSE (Profile) |
|---|---|---|---|---|
| 1 | 0.017 | 0.882 | 65.838 | 0.048 |
| 2 | 0.012 | 0.889 | 67.488 | 0.031 |
| 3 | 0.025 | 0.846 | 64.224 | 0.063 |
| 4 | 0.036 | 0.813 | 62.561 | 0.095 |
| 5 | 0.007 | 0.878 | 69.646 | 0.018 |
Table 5.
Measures obtained with the model trained with images affected by noise at 16 frequencies.
| Inference | IMMSE | SSIM | PSNR (dB) | MSE (Profile) |
|---|---|---|---|---|
| 1 | 0.014 | 0.886 | 66.517 | 0.043 |
| 2 | 0.009 | 0.903 | 68.549 | 0.025 |
| 3 | 0.017 | 0.897 | 65.771 | 0.050 |
| 4 | 0.028 | 0.872 | 63.609 | 0.082 |
| 5 | 0.005 | 0.914 | 71.465 | 0.011 |
Table 6.
Measures obtained with the model trained with images affected by noise at 32 frequencies.
| Inference | IMMSE | SSIM | PSNR (dB) | MSE (Profile) |
|---|---|---|---|---|
| 1 | 0.010 | 0.905 | 68.307 | 0.027 |
| 2 | 0.005 | 0.923 | 71.543 | 0.011 |
| 3 | 0.010 | 0.922 | 68.098 | 0.028 |
| 4 | 0.019 | 0.901 | 65.273 | 0.054 |
| 5 | 0.002 | 0.927 | 75.116 | 0.002 |