A Heterogeneous Edge-Fog Environment Supporting Digital Twins for Remote Inspections
Abstract
1. Introduction
2. 3D Reconstruction, Heterogeneous Edge Environments, and Related Work
2.1. 3D Reconstruction
2.2. Heterogeneous Edge Environment
- Resource-rich servers deployed close to the end-devices: In this category, end users connect to resource-rich servers at the network edge, which makes it an option for running edge computing platforms. For instance, the authors in [21] cite cloudlets deployed at Wi-Fi access points, which are based on Virtual Machines (VMs). They presented the example of Google Glass, where the data acquired by the sensors are processed on the cloudlet to provide real-time assistance.
- Heterogeneous nodes at the edge: This category encompasses different types of resources. As an example, the authors of [21] indicated the use of fog platforms in which heterogeneous nodes process data for applications.
- Federation of resources at the edge and centralized data centers: This category employs edge devices to provide services both locally and in cloud environments. In addition, the services can be distributed throughout the Internet in a flexible infrastructure.
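The three categories above can be read as a placement hierarchy: serve a task as close to the end-device as capacity allows, falling back to the federated fog/cloud tier. A minimal sketch of that idea follows; the node names, tiers, and the CPU threshold are illustrative assumptions, not part of the original proposal.

```python
# Hypothetical sketch of the three deployment options described above:
# a task runs locally, on a nearby edge node, or is forwarded to the
# federated fog/cloud tier. Names and thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    tier: str          # "local", "edge", or "fog"
    free_cpu: float    # fraction of CPU currently available

def place_task(nodes, cpu_needed):
    """Prefer the closest tier that still has enough free CPU."""
    for tier in ("local", "edge", "fog"):
        for node in nodes:
            if node.tier == tier and node.free_cpu >= cpu_needed:
                return node.name
    return None  # no capacity anywhere

nodes = [Node("workstation", "local", 0.1),
         Node("cloudlet-1", "edge", 0.5),
         Node("datacenter", "fog", 0.9)]
print(place_task(nodes, 0.4))  # cloudlet-1: the local node is saturated
```

The ordered tier search mirrors the federation category: the same service can be satisfied locally, at the edge, or in a centralized data center, depending on available resources.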
3. The 3D Reconstruction Proposal
3.1. Acquisition System
3.2. Accumulation System
3.3. Heterogeneous Edge-Fog Environment
4. Experimental Scenarios and Results
- StereoLabs ZED stereo camera.
- Orbbec ASTRA RGB-D camera.
- Machine I was a computer with an Intel Core i7-7700 CPU at 3.60 GHz, 16 GB of RAM, and an NVIDIA GeForce GTX 1050 Ti GPU.
- Machine II was a computer with an Intel Core i7-6820 CPU at 2.70 GHz, 16 GB of RAM, and an NVIDIA GeForce GTX 1070 GPU.
- A KUKA youBot robot was used as a robotic arm to support the two cameras.
- An American football was used as the object to be reconstructed in a prepared scenario.
4.1. First Scenario—Path Format Evaluation
4.2. Odometry Error Evaluation
4.3. Point Cloud Construction Analysis
4.4. Edge-Fog Environment Analysis
4.5. Manual Movement Track Experimentation
4.6. Latency and Throughput Analysis
5. Conclusions and Future Work
Author Contributions
Funding
Conflicts of Interest
References
- Duan, P.; Pang, L.; Jin, Y.; Guo, Q.; Jia, Z.X. Equipment maintenance system based on virtual reality technology. In Proceedings of the 2012 International Conference on Quality, Reliability, Risk, Maintenance, and Safety Engineering, Chengdu, China, 15–18 June 2012; pp. 1396–1399.
- Qing, H. Research and application of virtual reality technology in mechanical maintenance. In Proceedings of the International Conference on Advanced Technology of Design and Manufacture (ATDM 2010), Beijing, China, 23–25 November 2010; pp. 1396–1399.
- Harjoko, A.; Hujja, R.M.; Awaludin, L. Low-cost 3D surface reconstruction using Stereo camera for small object. In Proceedings of the 2017 International Conference on Signals and Systems (ICSigSys), Sanur, Indonesia, 16–18 May 2017; pp. 285–289.
- Zhang, M.; Zhang, Z.; Li, W. 3D model reconstruction based on plantar image’s feature segmentation. In Proceedings of the 2017 International Conference on Progress in Informatics and Computing (PIC), Nanjing, China, 15–17 December 2017; pp. 164–168.
- Kontogianni, G.; Koutsaftis, C.; Skamantzari, M.; Georgopoulos, A.; Chrysanthopoulou, C. Developing and Exploiting 3D Textured Models for a Serious Game Application. In Proceedings of the 2016 8th International Conference on Games and Virtual Worlds for Serious Applications (VS-GAMES), Barcelona, Spain, 7–9 September 2016; pp. 1–4.
- Shcherbinin, D. Virtual Reconstruction and 3D Visualization of Vostok Spacecraft Equipment. In Proceedings of the 2017 International Workshop on Engineering Technologies and Computer Science (EnT), Moscow, Russia, 28 September 2017; pp. 56–58.
- von Stumberg, L.; Usenko, V.; Engel, J.; Stückler, J.; Cremers, D. From monocular SLAM to autonomous drone exploration. In Proceedings of the 2017 European Conference on Mobile Robots (ECMR), Paris, France, 6–8 September 2017; pp. 1–8.
- Heng, L.; Choi, B.; Cui, Z.; Geppert, M.; Hu, S.; Kuan, B.; Liu, P.; Nguyen, R.; Yeo, Y.C.; Geiger, A.; et al. Project autovision: Localization and 3D scene perception for an autonomous vehicle with a multi-camera system. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 4695–4702.
- Häne, C.; Heng, L.; Lee, G.H.; Fraundorfer, F.; Furgale, P.; Sattler, T.; Pollefeys, M. 3D visual perception for self-driving cars using a multi-camera system: Calibration, mapping, localization, and obstacle detection. Image Vis. Comput. 2017, 68, 14–27.
- Cui, Z.; Heng, L.; Yeo, Y.C.; Geiger, A.; Pollefeys, M.; Sattler, T. Real-time dense mapping for self-driving vehicles using fisheye cameras. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 6087–6093.
- Milella, A.; Reina, G. 3D reconstruction and classification of natural environments by an autonomous vehicle using multi-baseline stereo. Intell. Serv. Robot. 2014, 7, 79–92.
- Jamiruddin, R.; Sari, A.O.; Shabbir, J.; Anwer, T. RGB-depth SLAM review. arXiv 2018, arXiv:1805.07696.
- Schonberger, J.L.; Frahm, J.M. Structure-from-motion revisited. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 4104–4113.
- Newcombe, R.A.; Lovegrove, S.J.; Davison, A.J. DTAM: Dense tracking and mapping in real-time. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2320–2327.
- Newcombe, R.A.; Izadi, S.; Hilliges, O.; Molyneaux, D.; Kim, D.; Davison, A.J.; Kohi, P.; Shotton, J.; Hodges, S.; Fitzgibbon, A. KinectFusion: Real-time dense surface mapping and tracking. In Proceedings of the 2011 10th IEEE International Symposium on Mixed and Augmented Reality, Basel, Switzerland, 26–29 October 2011; pp. 127–136.
- Whelan, T.; Kaess, M.; Johannsson, H.; Fallon, M.; Leonard, J.J.; McDonald, J. Real-time large-scale dense RGB-D SLAM with volumetric fusion. Int. J. Robot. Res. 2015, 34, 598–626.
- Alves, J.M.; Honorio, L.M.; Capretz, M.A. ML4IoT: A Framework to Orchestrate Machine Learning Workflows on Internet of Things Data. IEEE Access 2019, 7, 152953–152967.
- Pinto, M.F.; Marcato, A.L.; Melo, A.G.; Honório, L.M.; Urdiales, C. A framework for analyzing fog-cloud computing cooperation applied to information processing of UAVs. Wirel. Commun. Mob. Comput. 2019, 2019.
- Santos, M.F.; Honório, L.M.; Costa, E.B.; Oliveira, E.J.; Visconti, J.P.P.G. Active fault-tolerant control applied to a hexacopter under propulsion system failures. In Proceedings of the 2015 19th International Conference on System Theory, Control and Computing (ICSTCC), Cheile Gradistei, Romania, 14–16 October 2015; pp. 447–453.
- Silva, M.; Ribeiro, A.; Santos, M.; Carmo, M.; Honório, L.; Oliveira, E.; Vidal, V. Design of angular PID controllers for quadcopters built with low cost equipment. In Proceedings of the 2016 20th International Conference on System Theory, Control and Computing (ICSTCC), Sinaia, Romania, 13–15 October 2016; pp. 216–221.
- Premsankar, G.; Di Francesco, M.; Taleb, T. Edge computing for the Internet of Things: A case study. IEEE Internet Things J. 2018, 5, 1275–1284.
- Schönberger, J. COLMAP–SfM and MVS. Software. 2018. Available online: https://demuc.de/colmap/ (accessed on 16 September 2018).
- Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
- Snavely, N.; Simon, I.; Goesele, M.; Szeliski, R.; Seitz, S.M. Scene reconstruction and visualization from community photo collections. Proc. IEEE 2010, 98, 1370–1390.
- Snavely, N. Scene reconstruction and visualization from internet photo collections: A survey. IPSJ Trans. Comput. Vis. Appl. 2011, 3, 44–66.
- Coro, G.; Palma, M.; Ellenbroek, A.; Panichi, G.; Nair, T.; Pagano, P. Reconstructing 3D virtual environments within a collaborative e-infrastructure. Concurr. Comput. Pract. Exp. 2019, 31, e5028.
- Durrant-Whyte, H.; Bailey, T. Simultaneous localization and mapping: Part I. IEEE Robot. Autom. Mag. 2006, 13, 99–110.
- Davison, A.J.; Reid, I.D.; Molton, N.D.; Stasse, O. MonoSLAM: Real-time single camera SLAM. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 1052–1067.
- Gokturk, S.B.; Yalcin, H.; Bamji, C. A time-of-flight depth sensor-system description, issues and solutions. In Proceedings of the 2004 Conference on Computer Vision and Pattern Recognition Workshop, Washington, DC, USA, 27 June–2 July 2004; p. 35.
- Product-Astra. Available online: https://orbbec3d.com/product-astra/ (accessed on 11 June 2020).
- Solem, J.E. Programming Computer Vision with Python: Tools and Algorithms for Analyzing Images; O’Reilly Media, Inc.: Sebastopol, CA, USA, 2012.
- Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision; Cambridge University Press: New York, NY, USA, 2003.
- Geiger, A.; Ziegler, J.; Stiller, C. Stereoscan: Dense 3D reconstruction in real-time. In Proceedings of the 2011 IEEE Intelligent Vehicles Symposium (IV), Baden-Baden, Germany, 5–9 June 2011; pp. 963–968.
- Whelan, T.; Salas-Moreno, R.F.; Glocker, B.; Davison, A.J.; Leutenegger, S. ElasticFusion: Real-time dense SLAM and light source estimation. Int. J. Robot. Res. 2016, 35, 1697–1716.
- Pfister, H.; Zwicker, M.; Van Baar, J.; Gross, M. Surfels: Surface elements as rendering primitives. In Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, New Orleans, LA, USA, 23–28 July 2000; pp. 335–342.
- Keller, M.; Lefloch, D.; Lambers, M.; Izadi, S.; Weyrich, T.; Kolb, A. Real-time 3D reconstruction in dynamic scenes using point-based fusion. In Proceedings of the 2013 International Conference on 3D Vision-3DV 2013, Seattle, WA, USA, 29 June–1 July 2013; pp. 1–8.
- Dai, A.; Nießner, M.; Zollhöfer, M.; Izadi, S.; Theobalt, C. Bundlefusion: Real-time globally consistent 3D reconstruction using on-the-fly surface reintegration. ACM Trans. Graph. (ToG) 2017, 36, 1.
- Ibragimov, I.Z.; Afanasyev, I.M. Comparison of ROS-based visual SLAM methods in homogeneous indoor environment. In Proceedings of the 2017 14th Workshop on Positioning, Navigation and Communications (WPNC), Bremen, Germany, 25–26 October 2017; pp. 1–6.
- Mur-Artal, R.; Montiel, J.M.M.; Tardos, J.D. ORB-SLAM: A versatile and accurate monocular SLAM system. IEEE Trans. Robot. 2015, 31, 1147–1163.
- Mur-Artal, R.; Tardós, J.D. ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras. IEEE Trans. Robot. 2017, 33, 1255–1262.
- Condotta, I.C.; Brown-Brandl, T.M.; Pitla, S.K.; Stinn, J.P.; Silva-Miranda, K.O. Evaluation of low-cost depth cameras for agricultural applications. Comput. Electron. Agric. 2020, 173, 105394.
- Schramm, S.; Rangel, J.; Kroll, A. Data fusion for 3D thermal imaging using depth and stereo camera for robust self-localization. In Proceedings of the 2018 IEEE Sensors Applications Symposium (SAS), Seoul, Korea, 12–14 March 2018; pp. 1–6.
- Williem, W.; Tai, Y.W.; Park, I.K. Accurate and real-time depth video acquisition using Kinect—Stereo camera fusion. Opt. Eng. 2014, 53, 043110.
- Honório, L.M.; Leite da Silva, A.M.; Barbosa, D.A.; Delboni, L.F.N. Solving optimal power flow problems using a probabilistic α-constrained evolutionary approach. IET Gener. Transm. Distrib. 2010, 4, 674–682.
- Palazzolo, E. Active 3D Reconstruction for Mobile Robots. Ph.D. Thesis, University of Bonn, Bonn, Germany, 2020.
- Nikoohemat, S.; Diakité, A.A.; Zlatanova, S.; Vosselman, G. Indoor 3D reconstruction from point clouds for optimal routing in complex buildings to support disaster management. Autom. Constr. 2020, 113, 103109.
- Segal, A.; Haehnel, D.; Thrun, S. Generalized-ICP. In Proceedings of the 5th Robotics: Science and Systems 2009, Seattle, WA, USA, 28 June–1 July 2009; p. 435.
- ZED-ROS. Available online: http://wiki.ros.org/zed-ros-wrapper (accessed on 22 June 2020).
- Astra-Camera. Available online: http://wiki.ros.org/astra_camera/ (accessed on 22 June 2020).
- Openni2. Available online: http://wiki.ros.org/openni2_launch (accessed on 22 June 2020).
- Laboratory of Autonomous and Intelligent Robotic Systems at UFJF. 3D Reconstruction through Stereo and RGB-D Sensor for an Arc-Shaped Path. 2020. Available online: https://www.youtube.com/watch?v=ITB38-T1MGI (accessed on 14 September 2020).
- Laboratory of Autonomous and Intelligent Robotic Systems at UFJF. 3D Reconstruction through Stereo and RGB-D Sensor for a Circular Path. 2020. Available online: https://www.youtube.com/watch?v=c1DaKlm_xBo (accessed on 14 September 2020).
Points | P1 | P2 | P3 | P4 | P5 | P6 | μ | σ |
---|---|---|---|---|---|---|---|---|
X | 0.0002 | 1.3045 | −1.4295 | −6.0600 | −7.7983 | −0.4640 | −2.4079 | 3.6527 |
Y | −0.1291 | 1.3241 | 1.8884 | −2.5900 | −5.9095 | −5.7577 | −1.8623 | 3.4441 |
Z | 0.1530 | 0.6897 | 2.5990 | 2.3022 | 4.0891 | 7.2546 | 2.8479 | 2.5784 |
Points | P1 | P2 | P3 | P4 | P5 | P6 | P7 | μ | σ |
---|---|---|---|---|---|---|---|---|---|
X | −0.2842 | 4.8284 | 5.8949 | 6.7364 | 7.5845 | 7.0788 | 5.4866 | 5.3322 | 2.6531 |
Y | 0.0854 | 0.8135 | 1.0923 | 0.2270 | −0.8132 | −2.0117 | −2.7256 | −0.4760 | 1.4413 |
Z | 0.2930 | 2.4786 | 6.2703 | 10.0553 | 12.5111 | 12.6021 | 13.4857 | 8.2423 | 5.2964 |
Time (s) | T1 | T2 | T3 | T4 | T5 | T6 | T7 | T8 | T9 | T10 | Av. | σ |
---|---|---|---|---|---|---|---|---|---|---|---|---|
Local | 30.0 | 34.0 | 40.0 | 33.0 | 34.0 | 33.0 | 34.0 | 32.0 | 37.0 | 38.0 | 34.5 | 3.0 |
Edge | 82.7 | 77.4 | 86.1 | 87.9 | 83.0 | 82.8 | 87.9 | 80.9 | 84.6 | 85.3 | 83.9 | 3.2 |
Fog | 27.3 | 32.6 | 22.9 | 22.1 | 28.0 | 21.2 | 26.1 | 21.1 | 22.4 | 21.7 | 24.5 | 3.8 |
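The μ and σ (or Av. and σ) columns in the tables above appear to be the sample mean and sample standard deviation. This can be checked against the published rows with Python's `statistics` module; the snippet below re-derives the summary columns for the Local row of the timing table and the X row of the seven-point odometry table.

```python
import statistics

# Local processing times T1-T10 from the timing table (seconds)
local = [30.0, 34.0, 40.0, 33.0, 34.0, 33.0, 34.0, 32.0, 37.0, 38.0]
print(round(statistics.mean(local), 1))   # 34.5, matches the Av. column
print(round(statistics.stdev(local), 1))  # 3.0, matches the σ column

# X-axis errors P1-P7 from the seven-point odometry table
x_err = [-0.2842, 4.8284, 5.8949, 6.7364, 7.5845, 7.0788, 5.4866]
print(round(statistics.mean(x_err), 4))   # 5.3322
print(round(statistics.stdev(x_err), 4))  # 2.6531
```

Note that `statistics.stdev` is the sample (n−1) standard deviation; the population version, `statistics.pstdev`, would give 2.99 → 2.8 for the timing row and therefore does not match the tables.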
Local Processing | Time | %CPU | Network Usage |
---|---|---|---|
Edge | 360 ms | 60% | 0% |
Local Processing | Time | %CPU | Network Usage |
---|---|---|---|
Edge | 100 ms | 10% | 88% |
Fog | 100 ms | 10% | 88% |
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Silva, L.A.Z.d.; Vidal, V.F.; Honório, L.M.; Dantas, M.A.R.; Pinto, M.F.; Capretz, M. A Heterogeneous Edge-Fog Environment Supporting Digital Twins for Remote Inspections. Sensors 2020, 20, 5296. https://doi.org/10.3390/s20185296