A New Filtering System for Using a Consumer Depth Camera at Close Range
Abstract
1. Introduction
2. Noise Characterization
- Outlier noise. These are points with WD that lie away from the realistic surface. They are usually randomly distributed in the depth image, both spatially and temporally, and typically impair pass-through filtering [30].
- Edge noise. This kind of noise exists regionally. It can be an unrealistic surface composed of noise points that surrounds the realistic surface, or it can be part of the edge of a realistic surface with WD. The closer a point is to the blank area, the greater its depth gradient, which eventually points in the positive or negative direction of the depth value.
- Plaque noise. This is residual noise: the filtering system may miss some plaque areas after filtering the first two kinds of noise. Most of the residual plaque areas are isolated, and a few are connected to realistic surfaces.
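Since outlier noise is said to impair pass-through filtering, it helps to see what such a filter does and does not remove. A minimal sketch of a pass-through depth filter (the function name and range thresholds are illustrative, not from the paper); note that an outlier whose depth happens to fall inside the valid range survives, which is why outliers need dedicated treatment:

```python
import numpy as np

def pass_through(depth, z_min=400.0, z_max=1000.0):
    """Zero out pixels whose depth (in mm) lies outside [z_min, z_max].

    A zero value marks an invalid pixel, as is common for consumer
    depth cameras. Outliers inside the valid range pass untouched.
    """
    out = depth.copy()
    out[(out < z_min) | (out > z_max)] = 0.0
    return out

depth = np.array([[450.0, 0.0],
                  [1200.0, 700.0]])  # one hole, one far outlier
filtered = pass_through(depth)
```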
3. Proposed Filtering System
3.1. Improved Dixon Test
Algorithm 1 Outlier Detection
Input: δ(p)
Output: Stat(tid), true for a noise point
1: count NZPs: nNZP ← counter(δ(p))
2: if nNZP < k then
3:   return: Stat(tid) ← true
4: end if
5: ascending ordering: H ← sorter(δ(p))
6: nremain ← nNZP
7: do
8:   remove outliers: DixonTest(H, nremain)
9: while k < nremain < n
10: if p ∉ H then
11:   return: Stat(tid) ← true
12: end if
13: calculate SD: SD ← deviation(H)
14: if SD > SDmax then
15:   return: Stat(tid) ← true
16: else
17:   return: Stat(tid) ← false
18: end if
- nNZP < k: for any p-centred window, when the number of NZPs is below k, only a handful of NZPs exist besides p. As shown in Figure 5, such a p, as an outlier, lies on the jagged edges or in the blank area, and it can be directly identified as a noise point.
- k ≤ nNZP: the loop body (lines 7–9) repeatedly eliminates outliers from the sample H until no further point can be identified as an outlier. If p is eliminated, it is defined as an outlier.
- Finally, the NZPs remaining in H are sent to an SD calculator for a macro evaluation of dispersion. p is marked as an outlier if the SD exceeds SDmax.
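The iterative elimination step (lines 7–9) can be sketched in Python. The r10 (Dixon Q) statistic, the α = 0.05 critical values (a commonly published subset; the Böhrer reference tabulates values up to n = 30), and the defaults for k and SDmax are illustrative placeholders rather than the paper's tuned parameters:

```python
import statistics

# Dixon r10 critical values at alpha = 0.05 (commonly published subset)
Q_CRIT = {3: 0.970, 4: 0.829, 5: 0.710, 6: 0.625, 7: 0.568, 8: 0.526, 9: 0.493}

def dixon_filter(sample, k=3, sd_max=5.0):
    """Iteratively strip Dixon outliers from a depth sample.

    Returns (clean, removed, noisy): the surviving sorted values, the
    eliminated outliers, and whether the residual SD exceeds sd_max.
    """
    h = sorted(sample)
    removed = []
    while len(h) > k:
        crit = Q_CRIT.get(len(h))
        if crit is None:
            break
        span = h[-1] - h[0]
        if span == 0:
            break  # all values identical: nothing to reject
        q_low = (h[1] - h[0]) / span      # suspect smallest value
        q_high = (h[-1] - h[-2]) / span   # suspect largest value
        if q_low >= q_high and q_low > crit:
            removed.append(h.pop(0))
        elif q_high > crit:
            removed.append(h.pop())
        else:
            break  # no remaining point qualifies as an outlier
    noisy = statistics.stdev(h) > sd_max if len(h) > 1 else False
    return h, removed, noisy

clean, removed, noisy = dixon_filter([502, 501, 700, 500, 503])
```

A point p would then be flagged as outlier noise if it appears in `removed`, or if `noisy` is true for its window.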
3.2. Edge Noise Filtering Approach
- Case 1: when p lies on the jagged edges, p is defined as a noise point.
- Case 2: if the NZPs are arranged as a straight line, p is defined as a noise point.
- Case 3: otherwise, a plane is fitted to the neighbourhood by the least-squares method, using the converted 3D global coordinates of the NZPs. Let SD be the standard deviation of the vertical distances from the points to the fitted plane, and let θ be the angle between the fitted plane and the sight plane. If SD or θ exceeds its threshold, p is defined as a noise point.
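Case 3 can be sketched as a least-squares fit of z = ax + by + c over the neighbourhood points. In this sketch the plane's tilt is measured against the optical axis rather than the paper's sight plane, and the function name and thresholds are illustrative assumptions:

```python
import numpy as np

def plane_check(points, sd_max=1.0, tilt_max_deg=75.0):
    """Fit z = a*x + b*y + c to 3D neighbourhood points by least squares.

    Returns (noisy, sd, tilt): whether the centre pixel should be flagged,
    the residual standard deviation, and the plane tilt in degrees
    (0 deg = plane faces the camera).
    """
    pts = np.asarray(points, dtype=float)
    A = np.c_[pts[:, 0], pts[:, 1], np.ones(len(pts))]
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    residuals = pts[:, 2] - A @ coeffs
    sd = residuals.std()
    # Plane normal is (a, b, -1); compare it to the optical axis (0, 0, 1)
    normal = np.array([coeffs[0], coeffs[1], -1.0])
    cos_t = np.clip(abs(normal[2]) / np.linalg.norm(normal), 0.0, 1.0)
    tilt = float(np.degrees(np.arccos(cos_t)))
    return bool(sd > sd_max or tilt > tilt_max_deg), sd, tilt

# A flat patch facing the camera: small SD, small tilt, not flagged
pts = [(0, 0, 500), (1, 0, 500), (0, 1, 500), (1, 1, 500), (0.5, 0.5, 500)]
noisy, sd, tilt = plane_check(pts)
```

A steeply tilted fit signals a surface seen nearly edge-on (typical of edge noise), while a large residual SD signals points that do not form a coherent surface at all.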
3.3. Plaque Noise Filtering Approach
Algorithm 2 Plaque Noise Area Detection
Input: ID, npxmin, npymin, mpmin
Output: Stat(tid), true for a noise point
1: row search: Xsearcher(npx, ID)
2: col search: Xsearcher(npy, ID)
3: mp = npx × npy
4: if npx < npxmin || npy < npymin || mp < mpmin then
5:   return: Stat(tid) ← true
6: else
7:   return: Stat(tid) ← false
8: end if
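Algorithm 2 amounts to a bounding-box size test per connected region. A sketch, assuming the depth image has already been segmented into labelled regions (e.g., via contour extraction); the thresholds below are illustrative defaults, since the paper tunes them per working distance:

```python
import numpy as np

def plaque_is_noise(labels, region_id, npx_min=4, npy_min=4, mp_min=16):
    """Flag a connected region as plaque noise if its bounding box is too small.

    labels is an integer image where each connected region carries an ID;
    npx/npy are the row/column extents of the region's bounding box and
    mp = npx * npy its area, mirroring Algorithm 2.
    """
    ys, xs = np.nonzero(labels == region_id)
    if xs.size == 0:
        return True  # empty region: nothing real to keep
    npx = int(xs.max() - xs.min()) + 1  # row-search extent
    npy = int(ys.max() - ys.min()) + 1  # column-search extent
    mp = npx * npy
    return bool(npx < npx_min or npy < npy_min or mp < mp_min)

labels = np.zeros((10, 10), dtype=int)
labels[0:2, 0:2] = 1  # tiny isolated plaque
labels[3:8, 3:8] = 2  # plausibly realistic surface patch
```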
4. Experiments
5. Conclusions
Future Works
Supplementary Materials
Author Contributions
Funding
Conflicts of Interest
References
1. Hansard, M.; Horaud, R.; Amat, M.; Evangelidis, G. Automatic detection of calibration grids in time-of-flight images. Comput. Vision Image Understanding 2014, 121, 108–118.
2. Grzegorzek, M.; Theobalt, C.; Koch, R.; Kolb, A. Time-of-Flight and Depth Imaging. Sensors, Algorithms, and Applications. Lect. Notes Comput. Sci. 2013, 8200, 354–360.
3. Draelos, M.; Deshpande, N.; Grant, E. The Kinect up close: Adaptations for short-range imaging. In Proceedings of the Multisensor Fusion & Integration for Intelligent Systems, Hamburg, Germany, 13–15 September 2012.
4. Han, X.-F.; Jin, J.S.; Wang, M.-J.; Jiang, W. Guided 3D point cloud filtering. Multimedia Tools Appl. 2018, 77, 17397–17411.
5. Buttazzo, G.; Lipari, G.; Abeni, L.; Caccamo, M. Soft Real-Time Systems: Predictability vs. Efficiency (Series in Computer Science); Plenum Publishing Co.: Pavia, Italy, 2005.
6. Gogouvitis, S.; Konstanteli, K.; Waldschmidt, S.; Kousiouris, G.; Katsaros, G.; Menychtas, A.; Kyriazis, D.; Varvarigou, T. Workflow management for soft real-time interactive applications in virtualized environments. Future Gener. Comput. Syst. 2012, 28, 193–209.
7. Ma, Z.; Wu, E. Real-time and robust hand tracking with a single depth camera. Vis. Comput. 2014, 30, 1133–1144.
8. Novak-Marcincin, J.; Torok, J. Advanced Methods of Three Dimensional Data Obtaining for Virtual and Augmented Reality. Adv. Mater. Res. 2014, 1025, 1168–1172.
9. Nguyen, B.P.; Tay, W.L.; Chui, C.K. Robust Biometric Recognition From Palm Depth Images for Gloved Hands. IEEE Trans. Hum.-Mach. Syst. 2015, 45, 799–804.
10. Deng, X.; Yang, S.; Zhang, Y.; Tan, P.; Chang, L.; Wang, H. Hand3D: Hand Pose Estimation using 3D Neural Network. arXiv 2017, arXiv:1704.02224.
11. Sun, X.; Wei, Y.; Liang, S.; Tang, X.; Sun, J. Cascaded hand pose regression. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 824–832.
12. Qian, C.; Sun, X.; Wei, Y.; Tang, X.; Sun, J. Realtime and Robust Hand Tracking from Depth. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 24–27 June 2014; pp. 1106–1113.
13. Morana, M. 3D Scene Reconstruction Using Kinect. In Advances onto the Internet of Things: How Ontologies Make the Internet of Things Meaningful; Gaglio, S., Lo Re, G., Eds.; Springer International Publishing: Cham, Switzerland, 2014.
14. Maimone, A.; Bidwell, J.; Peng, K.; Fuchs, H. Enhanced personal autostereoscopic telepresence system using commodity depth cameras. Comput. Graph. 2012, 36, 791–807.
15. Buades, A.; Coll, B.; Morel, J. A non-local algorithm for image denoising. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), San Diego, CA, USA, 20–25 June 2005; pp. 60–65.
16. Zhang, B.; Allebach, J.P. Adaptive Bilateral Filter for Sharpness Enhancement and Noise Removal. IEEE Trans. Image Process. 2008, 17, 664–678.
17. Petschnigg, G.; Szeliski, R.; Agrawala, M.; Cohen, M.; Hoppe, H.; Toyama, K. Digital photography with flash and no-flash image pairs. ACM Trans. Graph. 2004, 23, 664–672.
18. Essmaeel, K.; Gallo, L.; Damiani, E.; De Pietro, G.; Dipanda, A. Comparative evaluation of methods for filtering Kinect depth data. Multimed. Tools Appl. 2015, 74, 7331–7354.
19. Lo, K.-H.; Wang, Y.-C.F.; Hua, K.-L. Edge-Preserving Depth Map Upsampling by Joint Trilateral Filter. IEEE Trans. Cybern. 2018, 48, 371–384.
20. Yuan, L.; Sun, J.; Quan, L.; Shum, H.Y. Image deblurring with blurred/noisy image pairs. ACM Trans. Graph. 2007, 26, 1.
21. Le, A.V.; Jung, S.W.; Won, C.S. Directional Joint Bilateral Filter for Depth Images. Sensors 2014, 14, 11362–11378.
22. Ran, L.; Li, B.; Huang, Z.; Cao, D.; Tan, Y.; Deng, Z.; Miao, X.; Jia, R.; Tan, W. Hole filling using joint bilateral filtering for moving object segmentation. J. Electron. Imaging 2014, 23, 063021.
23. Cai, Z.; Han, J.; Liu, L.; Shao, L. RGB-D datasets using microsoft kinect or similar sensors: A survey. Multimedia Tools Appl. 2017, 76, 4313–4355.
24. Camplani, M.; Mantecon, T.; Salgado, L. Depth-Color Fusion Strategy for 3-D Scene Modeling With Kinect. IEEE Trans. Cybern. 2013, 43, 1560–1571.
25. Reshetyuk, Y. Terrestrial Laser Scanning: Error Sources, Self-Calibration and Direct Georeferencing; VDM Verlag: Stockholm, Sweden, 2009.
26. International Organization for Standardization. ISO/IEC/IEEE 60559:2011, Information Technology—Microprocessor Systems—Floating-Point Arithmetic. 2011. Available online: https://www.iso.org/standard/57469.html (accessed on 7 August 2019).
27. Nazir, S.; Rihana, S.; Visvikis, D.; Fayad, H. Technical Note: Kinect V2 surface filtering during gantry motion for radiotherapy applications. Med. Phys. 2018, 45, 1400–1407.
28. Yu, Y.; Song, Y.; Zhang, Y.; Wen, S. A Shadow Repair Approach for Kinect Depth Maps. In Computer Vision–ACCV 2012, Proceedings of the 11th Asian Conference on Computer Vision, Daejeon, Korea, 5–9 November 2012.
29. Accuracy (Trueness and Precision) of Measurement Methods and Results—Part 1: General Principles and Definitions. Available online: https://www.iso.org/standard/11833.html (accessed on 31 May 2019).
30. Filtering a PointCloud Using a PassThrough Filter. Available online: http://pointclouds.org/documentation/tutorials/passthrough.php (accessed on 20 May 2019).
31. Böhrer, A. One-sided and Two-sided Critical Values for Dixon's Outlier Test for Sample Sizes up to n = 30. Econ. Qual. Control 2008, 23, 5–13.
32. Kim, S.D.; Lee, J.H.; Kim, J.K. A new chain-coding algorithm for binary images using run-length codes. Comput. Vis. Graph. Image Process. 1988, 41, 114–128.
33. Structural Analysis and Shape Descriptors. Available online: https://docs.opencv.org/2.4/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?highlight=findcon#cv2.findContours (accessed on 20 May 2019).
34. Khan, R.; Hanbury, A.; Stöttinger, J.; Bais, A. Color based skin classification. Pattern Recognit. Lett. 2012, 33, 157–163.
35. Kang, S.I.; Roh, A.; Hong, H. Using depth and skin color for hand gesture classification. In Proceedings of the IEEE International Conference on Consumer Electronics, Las Vegas, NV, USA, 9–12 January 2011.
| Dis.\Para. | | | | | | | | | |
|---|---|---|---|---|---|---|---|---|---|
| **Kinect v2** | | | | | | | | | |
| 500 mm | 3 | 20 | 0.1000 | 4.3 | 7 | 7 | 25 | 3 | 75 |
| 600 mm | 3 | 20 | 0.1000 | 4.55 | 5 | 5 | 25 | 3 | 75 |
| 700 mm | 3 | 15 | 0.1000 | 4.8 | 5 | 5 | 25 | 3 | 75 |
| 800 mm | 3 | 15 | 0.1000 | 5.05 | 4 | 4 | 16 | 3 | 75 |
| 900 mm | 3 | 10 | 0.1000 | 5.3 | 4 | 4 | 16 | 3 | 75 |
| 1000 mm | 3 | 10 | 0.1000 | 5.55 | 4 | 4 | 16 | 3 | 75 |
| **SR300** | | | | | | | | | |
| 400 mm | 3 | 10 | 0.1000 | 2.5 | 5 | 5 | 25 | 1.5 | 70 |
| 500 mm | 3 | 10 | 0.1000 | 2.5 | 5 | 5 | 25 | 1.5 | 70 |
| 600 mm | 3 | 10 | 0.1000 | 2.3 | 5 | 5 | 25 | 1.2 | 70 |
| 700 mm | 3 | 10 | 0.1000 | 2.2 | 4 | 4 | 16 | 1 | 70 |
| 800 mm | 3 | 10 | 0.1000 | 2.2 | 4 | 4 | 16 | 1 | 70 |
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Dai, Y.; Fu, Y.; Li, B.; Zhang, X.; Yu, T.; Wang, W. A New Filtering System for Using a Consumer Depth Camera at Close Range. Sensors 2019, 19, 3460. https://doi.org/10.3390/s19163460