An Experimental Assessment of Depth Estimation in Transparent and Translucent Scenes for Intel RealSense D415, SR305 and L515
Abstract
1. Introduction
- (1) We explain and review the operating principles of three different state-of-the-art RealSense™ RGB-D cameras;
- (2) We present an experimental framework specifically designed to acquire depth in a scenario where a transparent aquarium fills the entire field of view and translucent liquids are added to it;
- (3) We assess the depth estimation performance based on the number of points per image where the cameras failed to estimate depth;
- (4) We evaluate the depth estimation precision in terms of point-to-plane orthogonal distances and the statistical dispersion of the plane normals;
- (5) We qualitatively compare the depth distributions of the three cameras for the seven different experiments.
2. Related Work
3. Fundamentals
3.1. Operating Principles and RealSense Cameras
3.1.1. Structured Light
Intel® RealSense™ SR305
3.1.2. Active Stereoscopy
Intel® RealSense™ D415
3.1.3. Time of Flight
Intel® RealSense™ L515
3.2. Light Interaction with Transparent and Translucent Objects
4. Materials and Methods
4.1. Experimental Setup
- A glass aquarium with dimensions 0.84 × 0.22 × 0.58 m (with glass about 6 mm thick);
- A table to support and elevate the aquarium so that the floor does not appear in the visual field;
- A flat white wall behind the aquarium that is partially covered with a black cardboard (Figure 9);
- A RealSense™ depth camera mounted on a tripod. The device is pointed at the aquarium, with its optical axis approximately perpendicular to the wall (a minimal acquisition sketch follows this list).
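The acquisitions themselves rely on the Intel RealSense SDK. As a rough illustration of how 100 depth frames per test could be captured, a minimal Python sketch using pyrealsense2 and NumPy follows; the stream resolution, frame rate, and the helper name `capture_depth_frames` are our assumptions, not the exact settings used in the experiments.

```python
import numpy as np
import pyrealsense2 as rs

def capture_depth_frames(n_frames=100, width=640, height=480, fps=30):
    """Capture n_frames raw depth images (16-bit, depth-unit scaled) from the
    first connected RealSense device and return them as 2D NumPy arrays."""
    pipeline = rs.pipeline()
    config = rs.config()
    config.enable_stream(rs.stream.depth, width, height, rs.format.z16, fps)
    pipeline.start(config)
    frames_out = []
    try:
        while len(frames_out) < n_frames:
            frames = pipeline.wait_for_frames()
            depth = frames.get_depth_frame()
            if not depth:
                continue  # skip composite frames without a depth frame
            frames_out.append(np.asanyarray(depth.get_data()).copy())
    finally:
        pipeline.stop()
    return frames_out
```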
- Wall: This is a zero (baseline) test where we acquire depth images directly from the wall, that is, without any transparent object between the camera and the wall. These depth measurements serve as a reference for the following tests, since we do not have ground truth for the depth.
- Empty: In this test, the aquarium is inserted between the camera and the wall. The aim is to analyze the influence of the two transparent glass walls of the aquarium (with air in between) on the depth estimation.
- Water full: This test introduces another challenging scenario regarding transparency. The aquarium is filled with water (about 95 L). Then, between the camera and the wall, we have a first glass wall, water, a second glass wall, and air.
- Water milk1/2/3/4: A set of tests where the water is dyed with milk to produce different levels of transmittance (transparency, translucency and opacity). Water milk1 is the least opaque of the four, with a milk concentration of 0.03% (v/v). Then follow water milk2 with 0.13% (v/v) and water milk3 with 0.24% (v/v). Finally, the most opaque solution, water milk4, has 1.13% (v/v); the corresponding milk volumes are illustrated in the snippet after this list.
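For a sense of scale, the short snippet below converts the four concentrations into approximate milk volumes, under the assumption that each percentage is taken with respect to the roughly 95 L of water in the aquarium; this is an illustration only, not a protocol reported by the authors.

```python
# Illustrative arithmetic only: approximate milk volume for each % (v/v) level,
# assuming the percentage refers to the ~95 L of water in the aquarium.
water_litres = 95.0
for name, pct in [("milk1", 0.03), ("milk2", 0.13), ("milk3", 0.24), ("milk4", 1.13)]:
    millilitres = water_litres * 1000.0 * pct / 100.0
    print(f"water {name}: {pct:.2f}% (v/v) -> about {millilitres:.0f} mL of milk")
```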
4.2. Experimental Evaluation
4.2.1. Depth Estimation Failure Rate
- For each experiment (camera-test pair), two Regions of Interest (RoI) were segmented: the right band (rb) and the left band (lb). Two different regions were used so that depth estimation could be evaluated as a function of the material reflectivity. Thus, the left band is an RoI with a black background (the cardboard) and the right band is an RoI with a white background (the wall). These areas were carefully selected (avoiding artifacts and the invalid band of the D415) and defined in a frame (as illustrated in Figure 13a). A single frame out of the 100 was used to select and define the two RoIs, and the RoIs so defined were kept the same for all the frames of the same acquisition. Figure 13b shows the corresponding segmented regions in the generated point cloud.
- For each one of the 100 samples, the points in the RoIs whose depth estimation failed were counted and the percentage of invalid points was computed. The 100 percentages were then averaged for each experiment (as sketched in the code below).
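A minimal NumPy sketch of this computation is given below. It assumes the depth frames are 2D arrays in which invalid pixels are encoded as zero (as the SDK does) and that each band is given as a boolean mask; the function name and the example RoI coordinates are hypothetical.

```python
import numpy as np

def failure_rate_percent(depth_frames, roi_mask):
    """Average percentage of invalid (zero-depth) pixels inside one RoI,
    computed over all frames of a single acquisition.

    depth_frames : list of 2D arrays with raw depth values (0 = invalid).
    roi_mask     : boolean 2D array selecting the band (left or right).
    """
    percentages = []
    for frame in depth_frames:
        roi = frame[roi_mask]
        percentages.append(100.0 * np.count_nonzero(roi == 0) / roi.size)
    return float(np.mean(percentages))

# Hypothetical rectangular right band on a 480 x 640 frame:
# rb_mask = np.zeros((480, 640), dtype=bool)
# rb_mask[100:380, 400:600] = True
# print(failure_rate_percent(frames, rb_mask))
```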
4.2.2. Depth Precision
- Coherent points filtering: In this step, only the coherent points of each point cloud are kept, to be concatenated later (a numerical sketch of this whole processing chain is given after this list). We designate as coherent a point that, in all the 100 point clouds, satisfies the condition $0 < z < 0.7$ m, with z being the depth. That is, the coherent point depth has to be greater than zero, since the SDK sets the depth to zero in case of an invalid depth measurement. Additionally, the coherent point depth must be less than 0.7 m, given that the wall is 0.63 m away from the cameras, so points with depth beyond this value are physically impossible. The coherent point depth must be within this range for all 100 point clouds; otherwise, the point is excluded and is not saved in the concatenated point cloud. Figure 14a shows one of the 100 raw point clouds, named ptCloud_raw, and the same point cloud after coherent-point filtering is ptCloud_filt.
- Concatenation: We concatenate all the 100 filtered point clouds into a single one. In other words, each filtered point is $p = (x, y, z)$, so a filtered point cloud $i$ is an array of shape $N_i \times 3$, where $N_i$ is the number of points of filtered point cloud $i$. The concatenated point cloud, ptCloud_concat, has shape $\left(\sum_{i=1}^{100} N_i\right) \times 3$. A point cloud ptCloud_concat resulting from the concatenation is shown in Figure 14c.
- Removal of outliers: Despite the previous coherent points filtering, there are still artifacts in the point clouds. In Figure 14c, we can observe a cluster of 3D points that is distant (along the Z axis) from the main cluster that would fit a plane. Therefore, it is important to remove outliers from ptCloud_concat with respect to the depth values (the z coordinate of the 3D points). A point is considered an outlier if its z coordinate deviates from the median by more than three scaled median absolute deviations (MAD). The scaled MAD is defined as follows:
$$\mathrm{MAD}_{s} = c \cdot \operatorname{median}\left(\left|z_i - \operatorname{median}(z)\right|\right), \qquad c = -\frac{1}{\sqrt{2}\,\operatorname{erfcinv}(3/2)} \approx 1.4826$$
- Data centering: Each set of points was centered along the X and Y axes so that its center of mass is positioned at the center of the XY plane of the camera's coordinate system. Figure 14e presents the centered point cloud, ptCloud_cent.
- Fitting point clouds to planes: After the described processing steps, we have 21 point clouds (of type ptCloud_cent) corresponding to the three cameras and the seven different tests. Planes were then estimated for these point clouds. The Total Least Squares (TLS) method, also known as orthogonal regression, was used to estimate the plane parameters; TLS minimizes the sum of the squared orthogonal distances between the points and the estimated plane [58]. The fundamentals and the mathematical equations used to estimate the plane can be found in [59]. For the practical estimation with the TLS method, Principal Component Analysis (PCA) was used, following the MATLAB example [60]. In Figure 14f, we can see an example of an estimated plane, as well as the orthogonal distances between the plane and the data points. In Appendix B, plots with the orthogonal distances for all cameras and tests are shown.
- Analysis of the orthogonal distances: The orthogonal distances are analyzed in terms of descriptive statistics (statistical measures of central tendency and statistical dispersion). The formulas used for the descriptive statistics of a set of observations (which correspond to the estimated orthogonal distances) are the following:
- Arithmetic mean: $\bar{d} = \frac{1}{n}\sum_{i=1}^{n} d_i$
- Median: the middle value of the distances sorted in ascending order (the mean of the two middle values when $n$ is even)
- Standard Deviation: $s = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(d_i - \bar{d}\right)^2}$
- Mean Squared Error (MSE): $\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(d_i - \hat{d}_i\right)^2$

where $n$ is the number of observations (in this case, the number of orthogonal distances), $d_i$ is the $i$-th orthogonal distance, and $\hat{d}_i$ is the corresponding value predicted by the model.
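To make the processing chain above concrete, here is a minimal NumPy sketch covering coherent-point filtering, concatenation, MAD-based outlier removal, XY centering, the TLS/PCA plane fit, and the descriptive statistics of the orthogonal distances. It assumes each point cloud is an (N, 3) array with one row per depth pixel, so that the i-th row refers to the same pixel in all 100 clouds; the thresholds follow the text, the 1.4826 factor is the usual scaled-MAD constant, and all function names are ours.

```python
import numpy as np

def coherent_filter(clouds, z_max=0.7):
    """Keep only points whose depth is strictly positive and below z_max in
    all clouds (clouds: list of (N, 3) arrays with row-wise matched points)."""
    z = np.stack([c[:, 2] for c in clouds])            # shape (100, N)
    coherent = np.all((z > 0.0) & (z < z_max), axis=0)
    return [c[coherent] for c in clouds]

def concatenate(clouds):
    """Stack the filtered clouds into a single (sum N_i, 3) array."""
    return np.vstack(clouds)

def remove_z_outliers(points, n_mads=3.0):
    """Drop points whose depth deviates from the median depth by more than
    n_mads scaled median absolute deviations."""
    z = points[:, 2]
    scaled_mad = 1.4826 * np.median(np.abs(z - np.median(z)))
    keep = np.abs(z - np.median(z)) <= n_mads * scaled_mad
    return points[keep]

def center_xy(points):
    """Shift the cloud so its XY centre of mass lies on the camera's Z axis."""
    centered = points.copy()
    centered[:, :2] -= centered[:, :2].mean(axis=0)
    return centered

def fit_plane_tls(points):
    """Total least squares plane fit via PCA: the normal is the direction of
    smallest variance. Returns the unit normal and the signed orthogonal
    distance of every point to the fitted plane."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]                                    # smallest singular vector
    distances = (points - centroid) @ normal
    return normal, distances

def distance_stats(distances):
    """Descriptive statistics of the absolute orthogonal distances (the model
    predicts zero distance, so the MSE is the mean squared distance)."""
    d = np.abs(distances)
    return {"mean": d.mean(), "median": float(np.median(d)),
            "std": d.std(), "mse": float(np.mean(d ** 2))}
```

A single acquisition would then be processed as `filtered = coherent_filter(clouds)`, `cloud = center_xy(remove_z_outliers(concatenate(filtered)))`, `normal, dists = fit_plane_tls(cloud)` and `stats = distance_stats(dists)`.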
4.2.3. Precision in Terms of Plane Modeling Consistency
4.2.4. Qualitative Analysis of Depth Distribution
5. Results and Discussion
5.1. Results and Discussion Regarding Depth Estimation Failure Rate
5.2. Results and Discussion Regarding Depth Precision—Orthogonal Distances
5.3. Results and Discussion Regarding Plane Modeling Consistency
5.4. Results and Discussion Regarding Depth Distribution
6. Conclusions
- The black cardboard was a significant constraint for the SR305 and L515 cameras, more markedly so for the SR305;
- The transparency due to the aquarium glass walls does not affect the depth estimation of the three cameras; the wall is detected by all the cameras in a consistent way;
- When the aquarium is filled with water, we have a more complex degree of transparency that negatively impacts depth estimation. The SR305 and the L515 are the most affected, having a high failure rate and very few coherent points with which to estimate consistent planes;
- The water–milk mixture is a real issue even for the D415 camera; still, the D415 continues to outperform the other two. The translucency of the water–milk mixture scatters the transmitted light diffusely in all directions, which drastically interferes with the operating principle of the SR305 and L515 cameras, whose emitted infrared light ends up being interfered with.
Author Contributions
Funding
Conflicts of Interest
Appendix A. Depth Frames with the Segmented Bands
Test | D415 | L515 | SR305
---|---|---|---
Plane | | |
Empty | | |
Water_full | | |
Water_milk1 (0.03% milk) | | |
Water_milk2 (0.13% milk) | | |
Water_milk3 (0.24% milk) | | |
Water_milk4 (1.13% milk) | | |
Appendix B. Fitted Planes and Orthogonal Distances
Appendix B.1. Results for the D415 Camera
Appendix B.2. Results for the L515 Camera
Appendix B.3. Results for the SR305 Camera
Appendix C. Analyzing the Planes’ Normal with Directional Statistics
Appendix C.1. Circular Statistics for the Normal
Appendix C.2. Spherical Statistics for the Normal
References
- Sturm, J.; Engelhard, N.; Endres, F.; Burgard, W.; Cremers, D. A benchmark for the evaluation of RGB-D SLAM systems. In Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura-Algarve, Portugal, 7–12 October 2012; pp. 573–580. [Google Scholar] [CrossRef]
- Zhang, S.; Zheng, L.; Tao, W. Survey and evaluation of RGB-D SLAM. IEEE Access 2021, 9, 21367–21387. [Google Scholar] [CrossRef]
- Makris, S.; Tsarouchi, P.; Surdilovic, D.; Krüger, J. Intuitive dual arm robot programming for assembly operations. CIRP Ann. 2014, 63, 13–16. [Google Scholar] [CrossRef]
- Prabhu, V.A.; Muhandri, N.; Song, B.; Tiwari, A. Decision support system enabled by depth imaging sensor data for intelligent automation of moving assemblies. Proc. Inst. Mech. Eng. Part B J. Eng. Manuf. 2018, 232, 51–66. [Google Scholar] [CrossRef]
- Zingsheim, D.; Stotko, P.; Krumpen, S.; Weinmann, M.; Klein, R. Collaborative VR-based 3D labeling of live-captured scenes by remote users. IEEE Comput. Graph. Appl. 2021, 41, 90–98. [Google Scholar] [CrossRef]
- Ward, I.R.; Laga, H.; Bennamoun, M. RGB-D image-based object detection: From traditional methods to deep learning techniques. In RGB-D Image Analysis and Processing; Springer: Cham, Switzerland, 2019; pp. 169–201. [Google Scholar] [CrossRef]
- Malleson, C.; Guillemaut, J.Y.; Hilton, A. 3D reconstruction from RGB-D data. In RGB-D Image Analysis and Processing; Springer: Cham, Switzerland, 2019; pp. 87–115. [Google Scholar] [CrossRef]
- Carfagni, M.; Furferi, R.; Governi, L.; Servi, M.; Uccheddu, F.; Volpe, Y.; Mcgreevy, K. Fast and low cost acquisition and reconstruction system for human hand-wrist-arm anatomy. Procedia Manuf. 2017, 11, 1600–1608. [Google Scholar] [CrossRef]
- Zollhöfer, M. Commodity RGB-D sensors: Data acquisition. In RGB-D Image Analysis and Processing; Springer: Cham, Switzerland, 2019; pp. 3–13. [Google Scholar] [CrossRef]
- Ulrich, L.; Vezzetti, E.; Moos, S.; Marcolin, F. Analysis of RGB-D camera technologies for supporting different facial usage scenarios. Multimed. Tools Appl. 2020, 79, 29375–29398. [Google Scholar] [CrossRef]
- Yamazaki, M.; Iwata, S.; Xu, G. Dense 3D reconstruction of specular and transparent objects using stereo cameras and phase-shift method. In Computer Vision—ACCV 2007, Proceedings of the 8th Asian Conference on Computer Vision, Tokyo, Japan, 18–22 November 2007; Springer: Berlin/Heidelberg, Germany, 2007; pp. 570–579. [Google Scholar] [CrossRef]
- Ji, Y.; Xia, Q.; Zhang, Z. Fusing depth and silhouette for scanning transparent object with RGB-D sensor. Int. J. Opt. 2017, 9796127. [Google Scholar] [CrossRef]
- Chen, G.-H.; Wang, J.-Y.; Zhang, A.-J. Transparent object detection and location based on RGB-D camera. J. Phys. Conf. Ser. 2019, 1183, 012011. [Google Scholar] [CrossRef]
- Zhu, L.; Mousavian, A.; Xiang, Y.; Mazhar, H.; van Eenbergen, J.; Debnath, S.; Fox, D. RGB-D local implicit function for depth completion of transparent objects. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 4649–4658. [Google Scholar]
- Menna, F.; Remondino, F.; Battisti, R.; Nocerino, E. Geometric investigation of a gaming active device. Proc. SPIE Videometr. Range Imaging Appl. 2011, 8085, 173–187. [Google Scholar] [CrossRef]
- Hansard, M.; Lee, S.; Choi, O.; Horaud, R.P. Time-of-Flight Cameras: Principles, Methods and Applications; Springer Science & Business Media: London, UK, 2012. [Google Scholar]
- Gonzalez-Jorge, H.; Riveiro, B.; Vazquez-Fernandez, E.; Martínez-Sánchez, J.; Arias, P. Metrological evaluation of microsoft kinect and asus xtion sensors. Measurement 2013, 46, 1800–1806. [Google Scholar] [CrossRef]
- Breuer, T.; Bodensteiner, C.; Arens, M. Low-cost commodity depth sensor comparison and accuracy analysis. Proc. SPIE 2014, 9250, 77–86. [Google Scholar] [CrossRef]
- Zennaro, S.; Munaro, M.; Milani, S.; Zanuttigh, P.; Bernardi, A.; Ghidoni, S.; Menegatti, E. Performance evaluation of the 1st and 2nd generation Kinect for multimedia applications. In Proceedings of the 2015 IEEE International Conference on Multimedia and Expo (ICME), Turin, Italy, 29 June–3 July 2015; pp. 1–6. [Google Scholar] [CrossRef]
- Lachat, E.; Macher, H.; Landes, T.; Grussenmeyer, P. Assessment and calibration of a RGB-D camera (Kinect v2 Sensor) towards a potential use for close-range 3D modeling. Remote Sens. 2015, 7, 13070–13097. [Google Scholar] [CrossRef]
- Sarbolandi, H.; Lefloch, D.; Kolb, A. Kinect range sensing: Structured-light versus Time-of-Flight Kinect. Comput. Vis. Image Underst. 2015, 139, 1–20. [Google Scholar] [CrossRef]
- Guidi, G.; Gonizzi Barsanti, S.; Micoli, L.L. 3D capturing performances of low-cost range sensors for mass-market applications. In Proceedings of the 23rd International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences Congress, ISPRS, Prague, Czech Republic, 12–19 July 2016; pp. 33–40. [Google Scholar] [CrossRef]
- Wasenmüller, O.; Stricker, D. Comparison of kinect v1 and v2 depth images in terms of accuracy and precision. In Computer Vision—ACCV 2016 Workshops, Proceedings of the ACCV 2016 International Workshops, Taipei, Taiwan, 20–24 November 2016; Springer: Cham, Switzerland, 2016; pp. 34–45. [Google Scholar] [CrossRef]
- Kimmel, R. Three-Dimensional Video Scanner. US Patent 7,756,323, 13 July 2010. [Google Scholar]
- Rubinstein, O.; Honen, Y.; Bronstein, A.M.; Bronstein, M.M.; Kimmel, R. 3D-color video camera. In Proceedings of the 2009 IEEE 12th International Conference on Computer Vision Workshops, ICCV Workshops, Kyoto, Japan, 27 September–4 October 2009; pp. 1505–1509. [Google Scholar] [CrossRef]
- Bronstein, A.M.; Bronstein, M.M.; Gordon, E.; Kimmel, R. High-Resolution Structured Light Range Scanner with Automatic Calibration; Technical Report; Computer Science Department, Technion: Haifa, Israel, 2003. [Google Scholar]
- Intel. Intel® Realsense™ Camera SR300 Embedded Coded Light 3D Imaging System with Full High Definition Color Camera; Technical Report; Intel: Santa Clara, CA, USA, 2016. [Google Scholar]
- Zabatani, A.; Surazhsky, V.; Sperling, E.; Moshe, S.B.; Menashe, O.; Silver, D.H.; Karni, Z.; Bronstein, A.M.; Bronstein, M.M.; Kimmel, R. Intel® RealSense™ SR300 coded light depth camera. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 42, 2333–2345. [Google Scholar] [CrossRef]
- Carfagni, M.; Furferi, R.; Governi, L.; Servi, M.; Uccheddu, F.; Volpe, Y. On the performance of the Intel SR300 depth camera: Metrological and critical characterization. IEEE Sens. J. 2017, 17, 4508–4519. [Google Scholar] [CrossRef]
- Giancola, S.; Valenti, M.; Sala, R. A Survey on 3D Cameras: Metrological Comparison of Time-of-Flight, Structured-Light and Active Stereoscopy Technologies; Springer: Cham, Switzerland, 2018. [Google Scholar] [CrossRef]
- Carfagni, M.; Furferi, R.; Governi, L.; Santarelli, C.; Servi, M.; Uccheddu, F.; Volpe, Y. Metrological and critical characterization of the Intel D415 stereo depth camera. Sensors 2019, 19, 489. [Google Scholar] [CrossRef]
- Rosin, P.L.; Lai, Y.K.; Shao, L.; Liu, Y. RGB-D Image Analysis and Processing; Springer: Cham, Switzerland, 2019. [Google Scholar] [CrossRef]
- Rodríguez-Gonzálvez, P.; Guidi, G. Rgb-d sensors data quality assessment and improvement for advanced applications. In RGB-D Image Analysis and Processing; Springer: Cham, Switzerland, 2019; pp. 67–86. [Google Scholar] [CrossRef]
- Lourenço, F.; Araujo, H. Intel RealSense SR305, D415 and L515: Experimental Evaluation and Comparison of Depth Estimation. In Proceedings of the 16th International Conference on Computer Vision Theory and Applications—VISAPP, Vienna, Austria, 8–10 February 2021; Volume 4, pp. 362–369. [Google Scholar] [CrossRef]
- Breitbarth, A.; Hake, C.; Notni, G. Measurement accuracy and practical assessment of the lidar camera Intel RealSense L515. In Proceedings of the Optical Measurement Systems for Industrial Inspection XII, Online Only, 21–26 June 2021; Volume 11782, pp. 218–229. [Google Scholar] [CrossRef]
- Servi, M.; Mussi, E.; Profili, A.; Furferi, R.; Volpe, Y.; Governi, L.; Buonamici, F. Metrological characterization and comparison of D415, D455, L515 realsense devices in the close range. Sensors 2021, 21, 7770. [Google Scholar] [CrossRef]
- Zanuttigh, P.; Marin, G.; Dal Mutto, C.; Dominio, F.; Minto, L.; Cortelazzo, G.M. Time-of-Flight and Structured Light Depth Cameras: Technology and Applications; Springer: Cham, Switzerland, 2016; pp. 978–983. [Google Scholar] [CrossRef]
- Geng, J. Structured-light 3D surface imaging: A tutorial. Adv. Opt. Photonics 2011, 3, 128–160. [Google Scholar] [CrossRef]
- Laga, H.; Guo, Y.; Tabia, H.; Fisher, R.B.; Bennamoun, M. 3D Shape Analysis: Fundamentals, Theory, and Applications; John Wiley & Sons: Hoboken, NJ, USA, 2018. [Google Scholar]
- Xiong, Z.; Zhang, Y.; Wu, F.; Zeng, W. Computational depth sensing: Toward high-performance commodity depth cameras. IEEE Signal Process. Mag. 2017, 34, 55–68. [Google Scholar] [CrossRef]
- Laga, H. A survey on nonrigid 3d shape analysis. In Academic Press Library in Signal Processing; Academic Press: Cambridge, MA, USA, 2018; Volume 6, pp. 261–304. [Google Scholar] [CrossRef]
- Jonasson, R.; Kollberg, A. Structured Light Based Depth and Pose Estimation. Master’s Thesis, Chalmers University of Technology, Gothenburg, Sweden, 2019. [Google Scholar]
- Intel. Intel® RealSense™ Depth Camera SR300 Series Product Family; Intel: Santa Clara, CA, USA, 2019. [Google Scholar]
- Huang, X.; Bai, J.; Wang, K.; Liu, Q.; Luo, Y.; Yang, K.; Zhang, X. Target enhanced 3D reconstruction based on polarization-coded structured light. Opt. Express 2017, 25, 1173–1184. [Google Scholar] [CrossRef]
- Liu, Y.; Pears, N.; Rosin, P.L.; Huber, P. 3D Imaging, Analysis and Applications; Springer: Cham, Switzerland, 2020. [Google Scholar] [CrossRef]
- Intel® RealSense™. Product Family D400 Series Datasheet; Technical Report; Intel: Santa Clara, CA, USA, 2018. [Google Scholar]
- Depth Camera D415—Intel® RealSense™ Depth and Tracking Cameras; Intel: Santa Clara, CA, USA. Available online: https://www.intelrealsense.com/depth-camera-d415 (accessed on 4 April 2022).
- Whyte, R.; Streeter, L.; Cree, M.J.; Dorrington, A.A. Application of lidar techniques to time-of-flight range imaging. Appl. Opt. 2015, 54, 9654–9664. [Google Scholar] [CrossRef]
- Intel® RealSense™. LiDAR Camera L515 User Guide; Technical Report; Intel: Santa Clara, CA, USA, 2020. [Google Scholar]
- Optimizing the Intel RealSense LiDAR Camera L515 Range; Intel: Santa Clara, CA, USA. Available online: https://www.intelrealsense.com/optimizing-the-lidar-camera-l515-range/?_ga=2.101478088.858148249.1647362471-813857126.1646757776 (accessed on 29 March 2022).
- Massot-Campos, M.; Oliver-Codina, G. Optical sensors and methods for underwater 3D reconstruction. Sensors 2015, 15, 31525–31557. [Google Scholar] [CrossRef]
- Ihrke, I.; Kutulakos, K.N.; Lensch, H.P.; Magnor, M.; Heidrich, W. Transparent and specular object reconstruction. Comput. Graph. Forum 2010, 29, 2400–2426. [Google Scholar] [CrossRef]
- Anderson, B.L. Visual perception of materials and surfaces. Curr. Biol. 2011, 21, R978–R983. [Google Scholar] [CrossRef] [PubMed]
- Hearn, D.; Baker, M.P.; Baker, M.P. Computer Graphics with OpenGL; Pearson Prentice Hall: Upper Saddle River, NJ, USA, 2004; Volume 3. [Google Scholar]
- Gigilashvili, D.; Thomas, J.B.; Hardeberg, J.Y.; Pedersen, M. Translucency perception: A review. J. Vis. 2021, 21, 4. [Google Scholar] [CrossRef] [PubMed]
- Premože, S.; Ashikhmin, M.; Tessendorf, J.; Ramamoorthi, R.; Nayar, S. Practical rendering of multiple scattering effects in participating media. In Proceedings of EGSR04: 15th Eurographics Symposium on Rendering; The Eurographics Association: Geneve, Switzerland, 2004; Volume 2, pp. 363–374. [Google Scholar] [CrossRef]
- Bonakdari, H.; Zeynoddin, M. Stochastic Modeling; Elsevier: Amsterdam, The Netherlands, 2022. [Google Scholar] [CrossRef]
- Petráš, I.; Bednárová, D. Total least squares approach to modeling: A Matlab toolbox. Acta Montan. Slovaca 2010, 15, 158. [Google Scholar]
- Muñoz, L.R.; Villanueva, M.G.; Suárez, C.G. A tutorial on the total least squares method for fitting a straight line and a plane. Rev. Cienc. Ingen Inst. Technol. Super. Coatzacoalcos 2014, 1, 167–173. [Google Scholar]
- MATLAB. Fitting an Orthogonal Regression Using Principal Components Analysis; MathWorks: Natick, MA, USA. Available online: https://www.mathworks.com/help/stats/fitting-an-orthogonal-regression-using-principal-components-analysis.html (accessed on 2 March 2022).
- Mardia, K.V.; Jupp, P.E.; Mardia, K. Directional Statistics; Wiley Online Library: Hoboken, NJ, USA, 2000; Volume 2. [Google Scholar]
Specification | D415 | L515 | SR305 | Units
---|---|---|---|---
Depth measurement technology | Active Stereoscopy | Time-of-Flight | Structured Light | -
Image sensor technology | Rolling Shutter | MEMS mirror scan | Global Shutter | -
IR max. resolution | 1280 × 720 | 1024 × 768 | 640 × 480 | pix
RGB max. resolution | 1920 × 1080 | 1920 × 1080 | 1920 × 1080 | pix
Maximum frame rate | 90 | 30 | 60 | fps
Baseline | 55 | - | - | mm
Depth field of view (FOV) | H: 65 ± 2 / V: 40 ± 1 | H: 70 ± 2 / V: 55 ± 2 | H: 69 ± 3 / V: 54 ± 2 | °
Measurement range | 0.3–10 | 0.25–9 | 0.2–1.5 | m
Dimensions | 99 × 20 × 23 | 61 × 26 | 139 × 26.13 × 12 | mm
Weight | 72 | 95 | 70 | g
Point Cloud | nº Points | X Limits | Y Limits | Z Limits
---|---|---|---|---
ptCloud_raw | | [−11.2596, 0.0066] | [−2.5039, 0.6715] | [0, 27.7300]
ptCloud_filt | | [−0.3198, 0.0066] | [−0.2086, 0.0763] | [0.3080, 0.7000]
ptCloud_concat | | [−0.3068, 0.0067] | [−0.2090, 0.0770] | [0.3090, 0.7000]
ptCloud_in | | [−0.3068, 0.0067] | [−0.2090, 0.0770] | [0.5390, 0.5890]
ptCloud_cent | | [−0.1788, 0.1346] | [−0.1421, 0.1439] | [0.5390, 0.5890]
0.0041 | 0.0150 | 0.0177 | 0.0001 | 37.9954 | 0.6936 | 0.0000 | |
0.0000 | 0.0011 | 0.2806 | 81.4556 | 2.3833 | 1.0275 | 0.0147 | |
0.0000 | 0.0000 | 0.0321 | 1.4239 | 87.9116 | 2.5124 | 0.0000 | |
0.0014 | 8.0577 | 99.8891 | 99.9018 | 0.7502 | 0.0417 | 0.0000 | |
0.0000 | 0.0021 | 100.0000 | 100.0000 | 100.0000 | 100.0000 | 0.0378 | |
100.0000 | 100.0000 | 100.0000 | 100.0000 | 100.0000 | 100.0000 | 0.0198 |
Camera | Acquisition | Used Points | Dist. Plane | |||||||
---|---|---|---|---|---|---|---|---|---|---|
D415 | wall | 88.7129% | 0.6393 m | 0.0021 | 0.0015 | 0.0017 | 0.0013 | |||
empty | 80.2240% | 0.6266 m | 0.0024 | 0.0016 | 0.0019 | 0.0014 | ||||
water full | 59.3398% | 0.5630 m | 0.0053 | 0.0040 | 0.0044 | 0.0030 | ||||
L515 | wall | 97.7275% | 0.6441 m | 0.0038 | 0.0023 | 0.0029 | 0.0025 | |||
empty | 67.6309% | 0.6414 m | 0.0014 | 0.0059 | 0.0021 | 0.0036 | 0.0047 | |||
water full | 8.9310% | 0.6748 m | 0.0060 | 0.0038 | 0.0046 | 0.0037 | ||||
SR305 | wall | 48.9993% | 0.6407 m | 0.0055 | 0.0027 | 0.0038 | 0.0040 | |||
empty | 28.2650% | 0.6248 m | 0.0025 | 0.0016 | 0.0020 | 0.0015 | ||||
water full | 0.3529% | 0.3075 m | - | - | - | - | - | - | - |
Camera | Acquisition | Used Points | Dist. Plane | |||||||
---|---|---|---|---|---|---|---|---|---|---|
D415 | water milk1 | 38.7015% | 0.5681 m | 0.0025 | 0.0016 | 0.0020 | 0.0015 | |||
water milk2 | 8.5417% | 0.3411 m | 0.0018 | 0.0011 | 0.0013 | 0.0011 | ||||
water milk3 | 36.8538% | 0.3419 m | 0.0017 | 0.0011 | 0.0013 | 0.0010 | ||||
water milk4 | 78.2705% | 0.3257 m | 0.0011 | |||||||
L515 | water milk1 | 1.1019% | 0.4221 m | - | - | - | - | - | - | - |
water milk2 | 21.9079% | 0.3867 m | 0.0132 | 0.0083 | 0.0103 | 0.0103 | ||||
water milk3 | 54.7113% | 0.3921 m | 0.0101 | 0.0065 | 0.0080 | 0.0063 | ||||
water milk4 | 99.3779% | 0.3798 m | 0.0035 | 0.0025 | 0.0029 | 0.0021 | ||||
SR305 | water milk1 | 0.0918% | 0.0177 m | - | - | - | - | - | - | - |
water milk2 | 0.1139% | 0.0519 m | - | - | - | - | - | - | - | |
water milk3 | 0.0579% | 0.0222 m | - | - | - | - | - | - | - | |
water milk4 | 45.4095% | 0.3370 m | 0.0025 | 0.0013 | 0.0018 | 0.0017 |
Camera | Acquisition | ||||
---|---|---|---|---|---|
D415 | wall | ||||
empty | |||||
water full | |||||
water milk1 | |||||
water milk2 | |||||
water milk3 | |||||
water milk4 | |||||
L515 | wall | ||||
empty | |||||
water full | |||||
water milk1 | - | - | - | - | |
water milk2 | |||||
water milk3 | |||||
water milk4 | |||||
SR305 | wall | ||||
empty | |||||
water full | |||||
water milk1 | - | - | - | - | |
water milk2 | - | - | - | - | |
water milk3 | - | - | - | - | |
water milk4 |