Liquid Content Detection In Transparent Containers: A Benchmark
Abstract
1. Introduction
Contribution
- We propose a challenging task that combines transparent container detection with liquid content estimation. This task enables more advanced applications while offering a new perspective on transparent container detection.
- We present the LCDTC dataset, the first benchmark for identifying the liquid content in a transparent container.
2. Related Work
2.1. Traditional Methods for Detecting Transparent Containers
2.2. Deep Learning Methods for Detecting Transparent Containers
2.3. Liquid Content Estimation
3. Benchmark for Liquid Content Detection in Transparent Containers
3.1. LCDTC Collection
3.2. Annotation
- Category: transparent container.
- Bounding box: an axis-aligned bounding box enclosing each visible transparent container in the image.
- Liquid content state: one of ‘empty’, ‘little’, ‘half’, ‘much’, or ‘full’.
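A minimal sketch of what one annotation record might look like in code. The class name, field names, and the box-tuple layout are illustrative assumptions, not the dataset's actual serialization format:

```python
from dataclasses import dataclass

# The five liquid content states defined by the annotation scheme.
STATES = ("empty", "little", "half", "much", "full")

@dataclass
class LCDTCAnnotation:
    """Hypothetical record mirroring the three annotation fields above."""
    bbox: tuple[int, int, int, int]  # axis-aligned, (x_min, y_min, x_max, y_max) in pixels
    state: str                       # liquid content state, one of STATES
    category: str = "transparent container"

    def __post_init__(self):
        # Reject states outside the five-way scheme.
        if self.state not in STATES:
            raise ValueError(f"unknown liquid content state: {self.state!r}")
        x0, y0, x1, y1 = self.bbox
        if not (x0 < x1 and y0 < y1):
            raise ValueError("bbox must satisfy x_min < x_max and y_min < y_max")

ann = LCDTCAnnotation(bbox=(34, 50, 120, 260), state="half")
```

Because every box carries exactly one category and one state label, a detector can treat the task either as single-class detection plus a five-way state classifier, or as five-class detection over the state labels.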
3.3. Dataset Statistics
4. Baseline Detectors for Liquid Content in Transparent Containers
4.1. LCD-YOLOF
4.2. LCD-YOLOX
4.3. Convolutional Triplet Attention Module (CTAM)
5. Evaluation
5.1. Evaluation Metrics
5.2. Evaluation Results
5.3. Ablation Study
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
| Method | | | |
|---|---|---|---|
| LCD-YOLOF | (0.788, 0.553, 0.626) | (0.753, 0.534, 0.603) | (0.660, 0.469, 0.532) |
| LCD-YOLOX | (0.809, 0.607, 0.624) | (0.776, 0.588, 0.604) | (0.704, 0.533, 0.548) |
| Method | Empty | Little | Half | Much | Full |
|---|---|---|---|---|---|
| LCD-YOLOF | 0.474 | 0.517 | 0.447 | 0.474 | 0.432 |
| LCD-YOLOX | 0.541 | 0.589 | 0.486 | 0.510 | 0.537 |
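Assuming each cell in the per-state table above is an AP-style score, the overall per-state performance can be summarized by a simple macro-average (a sketch; the aggregation choice is an assumption, not the paper's stated metric definition):

```python
# Per-state scores for each baseline, taken from the table above
# (states ordered: empty, little, half, much, full).
per_state = {
    "LCD-YOLOF": [0.474, 0.517, 0.447, 0.474, 0.432],
    "LCD-YOLOX": [0.541, 0.589, 0.486, 0.510, 0.537],
}

# Macro-average: the unweighted mean over the five liquid content states.
macro = {m: round(sum(v) / len(v), 3) for m, v in per_state.items()}
print(macro)  # {'LCD-YOLOF': 0.469, 'LCD-YOLOX': 0.533}
```

On this summary, LCD-YOLOX leads LCD-YOLOF on every individual state as well as on the macro-average.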
| Backbone | | | |
|---|---|---|---|
| CSPDarknet | (0.798, 0.552, 0.583) | (0.763, 0.534, 0.562) | (0.688, 0.481, 0.508) |
| CrossFormer-T | (0.808, 0.602, 0.617) | (0.775, 0.582, 0.598) | (0.698, 0.529, 0.543) |
| CrossFormer-S | (0.809, 0.607, 0.624) | (0.776, 0.588, 0.604) | (0.704, 0.533, 0.548) |
| CrossFormer-B | (0.808, 0.604, 0.619) | (0.776, 0.583, 0.597) | (0.702, 0.530, 0.544) |
| CrossFormer-L | (0.818, 0.593, 0.614) | (0.784, 0.571, 0.591) | (0.696, 0.508, 0.526) |
| Method | CF | TA | | | |
|---|---|---|---|---|---|
| LCD-YOLOF | ✕ | ✕ | (0.788, 0.522, 0.602) | (0.751, 0.506, 0.585) | (0.655, 0.442, 0.512) |
| LCD-YOLOF | ✕ | ✓ | (0.788, 0.553, 0.626) | (0.753, 0.534, 0.603) | (0.660, 0.469, 0.532) |
| LCD-YOLOX | ✕ | ✕ | (0.798, 0.552, 0.583) | (0.763, 0.534, 0.562) | (0.688, 0.481, 0.508) |
| LCD-YOLOX | ✓ | ✕ | (0.807, 0.580, 0.616) | (0.762, 0.557, 0.592) | (0.693, 0.489, 0.518) |
| LCD-YOLOX | ✕ | ✓ | (0.798, 0.586, 0.613) | (0.763, 0.561, 0.590) | (0.691, 0.511, 0.534) |
| LCD-YOLOX | ✓ | ✓ | (0.809, 0.607, 0.624) | (0.776, 0.588, 0.604) | (0.704, 0.533, 0.548) |
| | | | |
|---|---|---|---|
| 0.2 | (0.808, 0.560, 0.610) | (0.765, 0.547, 0.585) | (0.686, 0.487, 0.524) |
| 0.4 | (0.808, 0.591, 0.629) | (0.765, 0.568, 0.603) | (0.686, 0.507, 0.540) |
| 0.6 | (0.807, 0.595, 0.625) | (0.763, 0.572, 0.601) | (0.680, 0.508, 0.534) |
| 0.8 | (0.809, 0.607, 0.624) | (0.776, 0.588, 0.604) | (0.704, 0.533, 0.548) |
| 1.0 | (0.808, 0.592, 0.611) | (0.765, 0.568, 0.585) | (0.675, 0.503, 0.520) |
| 1.2 | (0.817, 0.596, 0.622) | (0.773, 0.570, 0.593) | (0.679, 0.502, 0.523) |
| 1.4 | (0.818, 0.597, 0.611) | (0.764, 0.568, 0.583) | (0.676, 0.502, 0.515) |
| 1.6 | (0.807, 0.596, 0.620) | (0.773, 0.574, 0.597) | (0.670, 0.501, 0.522) |
| 1.8 | (0.808, 0.602, 0.617) | (0.764, 0.578, 0.593) | (0.670, 0.508, 0.522) |
| 2.0 | (0.798, 0.576, 0.601) | (0.753, 0.552, 0.570) | (0.654, 0.480, 0.499) |
Share and Cite
Wu, Y.; Ye, H.; Yang, Y.; Wang, Z.; Li, S. Liquid Content Detection In Transparent Containers: A Benchmark. Sensors 2023, 23, 6656. https://doi.org/10.3390/s23156656