Role of Reference Frames for a Safe Human–Robot Interaction
Abstract
1. Introduction
2. Problem Definition
3. The Proposed Approach
3.1. The Proposed Topology Network of Reference Frames
3.2. The Proposed Procedure for Layout Arrangement
- I. The robot is positioned as far as possible from the human agent;
- II. The functional activities occur on a planar desk above the genital level and parallel to the transverse (i.e., horizontal) plane of the human agent;
- III. The robot is positioned at the side of the secondary hand of the human;
- IV. A virtual barrier is positioned between the robot and the head/breasts of the human, with a passage under these levels;
- V. The kinematic chain of the robot must always be farther from the human agent than its gripper;
- VI. The gripping and working actions are realized in a plane parallel to the transverse (i.e., horizontal) plane of the human agent;
- VII. Only the necessary objects are present in the area of interest.
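Several of the layout rules above are purely geometric and can be verified automatically from agent positions. The sketch below is a minimal illustration, not the authors' implementation: the function names, the coordinate convention (x along the human's left–right axis), and the way each rule is encoded are all assumptions. It casts rules I, III, and V as boolean predicates over 3D positions.

```python
import math

def dist(p, q):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def check_layout(robot_base, human, gripper, candidates, secondary_side):
    """Check a proposed layout against rules I, III, and V (hypothetical encoding).

    robot_base, human, gripper: (x, y, z) positions.
    candidates: alternative robot base placements under consideration;
                rule I requires the chosen base to be the farthest from the human.
    secondary_side: sign (+1 or -1) of the human's secondary-hand side along x.
    """
    return {
        # Rule I: the robot is as far as possible from the human agent.
        "rule_I": dist(robot_base, human) >= max(dist(c, human) for c in candidates),
        # Rule III: the robot stands on the side of the human's secondary hand.
        "rule_III": (robot_base[0] - human[0]) * secondary_side > 0,
        # Rule V: the kinematic chain (base) is farther from the human than the gripper.
        "rule_V": dist(robot_base, human) > dist(gripper, human),
    }
```

Rules II, IV, VI, and VII involve workpiece heights, virtual barriers, and object inventories, so they would need additional scene information beyond bare positions.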
3.3. Quantitative Measures of Safety
4. A Case of Study of Human–Robot Collaboration
4.1. The Proposed Experimental Set Up
4.2. Implementation of the Proposed Procedure
- vh is the speed of the operator in the direction of the robot (if it is not detected by the motion tracking system, ISO/TS 15066 suggests adopting a conservative reference value in the direction of separation distance reduction);
- vr is the speed of the robot in the direction of the person;
- Tr is the reaction time of the robot, which includes the time taken by the tracking system to detect the position of the operator up to the activation of the robot's stop signal;
- the maximum deceleration of the robot to stop the motion;
- C is the parameter that accounts for the detection uncertainty of the positions of the person and the robot;
- S(t) is the robot–operator distance at any time t.
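The quantities above enter the ISO/TS 15066 protective separation distance used in speed-and-separation monitoring. As a hedged illustration, the sketch below implements one common form of that expression built from these terms; the exact formula used in the paper may differ, and all function and variable names are assumptions.

```python
def protective_separation(v_h, v_r, t_r, a_max, c):
    """Protective separation distance S_p (speed-and-separation monitoring).

    A minimal sketch of one common ISO/TS 15066-style expression:
      v_h   : operator speed toward the robot [m/s]
      v_r   : robot speed toward the operator [m/s]
      t_r   : robot reaction time [s]
      a_max : maximum robot deceleration [m/s^2]
      c     : position-detection uncertainty [m]
    """
    t_s = v_r / a_max  # stopping time under constant deceleration
    # The operator keeps moving during both the reaction and the braking phases.
    human_travel = v_h * (t_r + t_s)
    # The robot travels during the reaction phase, then brakes to a stop.
    robot_travel = v_r * t_r + v_r ** 2 / (2.0 * a_max)
    return human_travel + robot_travel + c

def is_safe(s_t, v_h, v_r, t_r, a_max, c):
    """True if the measured human-robot distance S(t) satisfies S(t) >= S_p."""
    return s_t >= protective_separation(v_h, v_r, t_r, a_max, c)
```

For example, with v_h = 1.6 m/s, v_r = 0.5 m/s, t_r = 0.1 s, a_max = 2.0 m/s^2, and c = 0.1 m, this form yields S_p = 0.7725 m, so a measured distance of 1 m would be admissible while 0.5 m would trigger a stop.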
4.3. Results of the Experimental Tests
5. Discussion
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Nomenclature
a | reference frame on a generic object that must be avoided, i.e., anti-task (Section 2); |
ah | reference frame on a generic object that must be avoided, i.e., anti-task, by a generic human agent (Section 2); |
ar | reference frame on a generic object that must be avoided, i.e., anti-task, by a generic robotic agent (Section 2); |
aij | reference frame on the ith object that must be avoided, i.e., anti-task, by a generic jth agent, where i is a natural number and j is a tag that can assume the value h or r, respectively, for a human or robotic agent (Section 2); |
aij,k | reference frame on the ith object that must be avoided, i.e., anti-task, by a generic jth agent, where i is a natural number, j is a tag that can assume the value h or r, respectively, for a human or robotic agent, and k is a natural number associated with the jth type of agent (Section 2); |
ai,hj | reference frame on the ith object that must be avoided, i.e., anti-task, by the hth gripper of a generic jth agent, where i and h are natural numbers and j is a tag that can assume the value h or r, respectively, for a human or robotic agent (Section 2); |
ai,hj,k | reference frame on the ith object that must be avoided, i.e., anti-task, by the hth gripper of a generic jth agent, where i and h are natural numbers, j is a tag that can assume the value h or r, respectively, for a human or robotic agent, and k is a natural number associated with the jth type of agent (Section 2); |
air(s) | ith anti-target reference frame perceived by a robot in the topological space (Section 3.1); |
aih(s) | ith anti-target reference frame perceived by a human in the topological space (Section 3.1); |
ah | acceleration of the human hand entering the interaction area (Section 3.2); |
α | constant parameter incorporating inertia and stiffness of interacting objects (Section 3.3); |
b | base/absolute reference frame solid with a fixed point in the environment (Section 2); |
bh | base/absolute reference frame solid with a fixed point in the environment for a generic human agent (Section 2); |
br | base/absolute reference frame solid with a fixed point in the environment for a generic robotic agent (Section 2); |
C | constant parameter incorporating uncertainty of the human–robot relative position (Section 4.2); |
di,g | vector of the minimal distance between the anti-target i and the gripper gr (Section 3.1); |
di,g | module of the vector di,g (Section 3.1); |
Dx | dimension of the interaction area along the x axis (Section 3.2); |
dj,k | distance between the spheres circumscribed to the jth and kth objects (Section 3.3); |
e | environment; it is used to describe the whole set of objects fixed to the base in the workspace (i.e., the workbench) (Section 2); |
| measure of the error matrix ∑ (Section 3.1); |
xi | x component of the measurement error of the perceived coordinates x by the ith sensor (Section 3.2); |
g | reference frame on the hand/gripper of a generic agent (Section 2); |
gh | reference frame on the hand/gripper of a generic human agent (Section 2); |
gr | reference frame on the hand/gripper of a generic robotic agent (Section 2); |
gij | reference frame on the ith hand/gripper of a generic jth agent, where i is a natural number and j is a tag that can assume the value h or r, respectively, for a human or robotic agent (Section 2); |
gij,k | reference frame on the ith hand/gripper of a generic jth agent, where i is a natural number, j is a tag that can assume the value h or r, respectively, for a human or robotic agent, and k is a natural number associated with the jth type of agent (Section 2); |
gr(s) | gripper reference frame perceived by a robot in the topological space (Section 3.1); |
gh(s) | gripper reference frame perceived by a human in the topological space (Section 3.1); |
KH | human kinematical space in the topological transformation network (Section 3.1); |
KR | robot kinematical space in the topological transformation network (Section 3.1); |
kh | human head stiffness (Section 3.3); |
kr | robot stiffness (Section 3.3); |
k | combined stiffness of human and robot (Section 3.3); |
Mdt,g | desired (or nominal) orientation matrix between gripper and target object to be manipulated (Section 3.1); |
mh | human head inertia (Section 3.3); |
mr | robot inertia (Section 3.3); |
Mi,j | transformation matrix from the reference frame (j) to the reference frame (i) (Section 3.1); |
Pi | set of homogeneous coordinates describing the position P with respect to the reference frame (i) (Section 3.1); |
Pir | set of homogeneous coordinates describing the position P of the anti-target ith of the robot agent r with respect to the reference frame aij (Section 3.1); |
Pg | set of homogeneous coordinates describing the position P of the anti-target ith of the robot agent r with respect to the reference frame gr (Section 3.1, Equation (8)); set of homogeneous coordinates describing the position P of the target ith of the robot agent r with respect to the reference frame gr (Section 3.1, Equation (10)); |
Pt | set of homogeneous coordinates describing the position P of the target ith of the robot agent r with respect to the reference frame at (Section 3.1); |
p | payload (Section 3.3); |
Ri | radius of the sphere circumscribing the volume Vi (Section 3.1); |
Rh | radius of the sphere circumscribing the human hand (Section 3.2); |
s | reference frame on a generic sensor of a generic agent (Section 2); |
sh | reference frame on a generic sensor of a generic human agent (Section 2); |
sr | reference frame on a generic sensor of a generic robotic agent (Section 2); |
sij | reference frame on the ith sensor of a generic jth agent, where i is a natural number and j is a tag that can assume the value h or r, respectively, for a human or robotic agent (Section 2); |
sij,k | reference frame on the ith sensor of a generic jth agent, where i is a natural number, j is a tag that can assume the value h or r, respectively, for a human or robotic agent, and k is a natural number associated with the jth type of agent (Section 2); |
SH | human sensor, i.e., perceived, space in the topological transformation network (Section 3.1); |
SR | robot sensor, i.e., perceived, space in the topological transformation network (Section 3.1); |
sr | reference frame of the perception for a robot sensor in the topological space (Section 3.1); |
sh | reference frame of the perception for a human sensor in the topological space (Section 3.1); |
Sp | minimum separation distance between human and robot according to ISO/TS 15066 (Section 4.2); |
S(t0) | human–robot distance at time t0 (Section 4.2); |
∑ | error matrix describing the difference between the desired and real orientation matrices between the gripper and the target object to be manipulated (Section 3.1); |
t | reference frame on a generic target (Section 2); |
th | reference frame on a generic human target (Section 2); |
tr | reference frame on a generic robotic target (Section 2); |
tij | reference frame on the ith target associated with the ith gripper of a generic jth agent, where i is a natural number and j is a tag that can assume the value h or r, respectively, for a human or robotic agent (Section 2); |
tij,k | reference frame on the ith target associated with the ith gripper of a generic jth agent, where i is a natural number, j is a tag that can assume the value h or r, respectively, for a human or robotic agent, and k is a natural number associated with the jth type of agent (Section 2); |
Ti,g | translational term of the transformation matrix Mi,g (Section 3.1); |
Tj i,g | jth scalar component along the jth axis of the translational term Ti,g (Section 3.1); |
tr(s) | target reference frame perceived by a robot in the topological space (Section 3.1); |
th(s) | target reference frame perceived by a human in the topological space (Section 3.1); |
Tr | reaction time of the robot (Section 4.2); |
V | volume of points solid with the anti-task reference frame that should be avoided (Section 2); |
V− | volume of points solid with the target reference frame that must be considered to allow proper manipulation and to avoid undesired contacts with the target (Section 2); |
Vji(xji) | volume of points solid with the anti-task reference frame of the jth object x of the ith agent that should be avoided, where i and j are natural numbers and x can be any object in the workspace that should be avoided (an anti-task, a different gripper, a task of another gripper) (Section 2); |
V−,ji(tji) | volume of points solid with the jth target t reference frame of the ith agent that must be considered to allow proper manipulation and to avoid undesired contacts with the target (Section 2); |
vh | speed of the human hand entering the interaction area (Section 3.2); |
v | relative speed between robot and human (Section 3.3); |
vh | component of the human speed in the direction of the robot (Section 4.2); |
vr | component of the robot speed in the direction of the human (Section 4.2).
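Most of the reference frames above are related through 4×4 homogeneous transformation matrices, with Mi,j mapping homogeneous coordinates Pj expressed in frame (j) into Pi expressed in frame (i), and chained transformations obtained by matrix products. The following stdlib-only sketch illustrates this bookkeeping; it is illustrative only (not the authors' code), and the helper names are assumptions.

```python
import math

def mat_mul(A, B):
    """Product of two 4x4 homogeneous matrices (row-major nested lists)."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transform(M, p):
    """Apply M_{i,j} to a homogeneous point P_j = (x, y, z, 1), giving P_i."""
    return [sum(M[i][k] * p[k] for k in range(4)) for i in range(4)]

def rot_z(theta, tx=0.0, ty=0.0, tz=0.0):
    """Homogeneous matrix: rotation about z by theta plus translation (tx, ty, tz)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c,  -s,  0.0, tx],
            [s,   c,  0.0, ty],
            [0.0, 0.0, 1.0, tz],
            [0.0, 0.0, 0.0, 1.0]]

# Frames chain by composition, e.g. base <- sensor <- gripper:
#   M_bg = mat_mul(M_bs, M_sg)
# so a point known in the gripper frame can be expressed in the base frame.
```

For instance, transforming the point (1, 0, 0, 1) by a 90-degree rotation about z combined with a unit translation along x yields (1, 1, 0, 1), matching the usual homogeneous-coordinate convention.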
References
- Villani, V.; Pini, F.; Leali, F.; Secchi, C. Survey on human–robot collaboration in industrial settings: Safety, intuitive interfaces and applications. Mechatronics 2018, 55, 248–266.
- Aaltonen, I.; Salmi, T.; Marstio, I. Refining levels of collaboration to support the design and evaluation of human–robot interaction in the manufacturing industry. Proc. CIRP 2018, 72, 93–98.
- Liu, H.; Wang, L. Gesture Recognition for Human–Robot Collaboration: A Review. Int. J. Ind. Ergon. 2017, 68, 355–367.
- Siciliano, B.; Khatib, O. Springer Handbook of Robotics; Springer: Berlin/Heidelberg, Germany, 2016.
- Marvel, J.A.; Norcross, R. Implementing speed and separation monitoring in collaborative robot workcells. Robot. Comput. Integr. Manuf. 2017, 44, 144–155.
- Maurtua, I.; Ibarguren, A.; Kildal, J.; Susperregi, L.; Sierra, B. Human–robot collaboration in industrial applications: Safety, interaction and trust. Int. J. Adv. Robot. Syst. 2017, 14, 1729881417716010.
- Valori, M.; Scibilia, A.; Fassi, I.; Saenz, J.; Behrens, R.; Herbster, S.; Bidard, C.; Lucet, E.; Magisson, A.; Schaake, L.; et al. Validating safety in human–robot collaboration: Standards and new perspectives. Robotics 2021, 10, 65.
- Borboni, A.; Marinoni, P.; Nuzzi, C.; Faglia, R.; Pagani, R.; Panada, S. Towards safe collaborative interaction empowered by face recognition. In Proceedings of the 2021 24th International Conference on Mechatronics Technology (ICMT), Singapore, 18–22 December 2021.
- Pedrocchi, N.; Vicentini, F.; Malosio, M.; Tosatti, L.M. Safe human–robot cooperation in an industrial environment. Int. J. Adv. Robot. Syst. 2013, 10, 27.
- Lasota, P.A.; Fong, T.; Shah, J.A. A Survey of Methods for Safe Human–Robot Interaction. Found. Trends Robot. 2017, 5, 261–349.
- Dumonteil, G.; Manfredi, G.; Devy, M.; Confetti, A.; Sidobre, D. Reactive Planning on a Collaborative Robot for Industrial Applications. In Proceedings of the 2015 12th International Conference on Informatics in Control, Automation and Robotics (ICINCO), Colmar, France, 21–23 July 2015.
- Salmi, T.; Marstio, I.; Malm, T.; Montonen, J. Advanced safety solutions for human–robot cooperation. In Proceedings of the 47th International Symposium on Robotics, ISR 2016, Munich, Germany, 21–22 June 2016.
- Zanchettin, A.M.; Lacevic, B.; Rocco, P. A novel passivity-based control law for safe human–robot coexistence. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, New York, NY, USA, 7–12 October 2012.
- Haddadin, S. Physical safety in robotics. In Formal Modeling and Verification of Cyber-Physical Systems: 1st International Summer School on Methods and Tools for the Design of Digital Systems; Springer: Bremen, Germany, 2015.
- Zhang, J.; Wang, Y.; Xiong, R. Industrial robot programming by demonstration. In Proceedings of the ICARM 2016—2016 International Conference on Advanced Robotics and Mechatronics, Macau, China, 18–20 August 2016.
- Navarro, B.; Cherubini, A.; Fonte, A.; Passama, R.; Poisson, G.; Fraisse, P. An ISO10218-compliant adaptive damping controller for safe physical human–robot interaction. In Proceedings of the IEEE International Conference on Robotics and Automation, Stockholm, Sweden, 16–21 May 2016.
- Berger, E.; Vogt, D.; Grehl, S.; Jung, B.; Amor, H.B. Estimating perturbations from experience using neural networks and information transfer. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Daejeon, Republic of Korea, 9–14 October 2016.
- Magrini, E.; De Luca, A. Hybrid force/velocity control for physical human–robot collaboration tasks. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems, Daejeon, Republic of Korea, 9–14 October 2016.
- Schmidtler, J.; Knott, V.; Hölzel, C.; Bengler, K. Human centered assistance applications for the working environment of the future. Occup. Ergon. 2015, 12, 83–95.
- Prati, E.; Peruzzini, M.; Pellicciari, M.; Raffaeli, R. How to include User eXperience in the design of Human–Robot Interaction. Robot. Comput. Integr. Manuf. 2021, 68, 102072.
- Fogliaroni, P.; Clementini, E. Modeling visibility in 3D space: A qualitative frame of reference. In Lecture Notes in Geoinformation and Cartography; Springer: Berlin/Heidelberg, Germany, 2015; pp. 243–258.
- Mohamed, H.A.; Moussa, A.; Elhabiby, M.M.; El-Sheimy, N. Improved Reference Key Frame Algorithm. In ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences; ISPRS: Hannover, Germany, 2019.
- De Fonseca, V.P.; de Oliveira, T.E.A.; Petriu, E.M. Estimating the Orientation of Objects from Tactile Sensing Data Using Machine Learning Methods and Visual Frames of Reference. Sensors 2019, 19, 2285.
- Brenneis, D.J.A.; Dawson, M.R.; Murgatroyd, G.; Carey, J.P.; Pilarski, P.M. Initial Investigation of a Self-Adjusting Wrist Control System to Maintain Prosthesis Terminal Device Orientation Relative to the Ground Reference Frame. In Proceedings of the IEEE RAS and EMBS International Conference on Biomedical Robotics and Biomechatronics, Enschede, The Netherlands, 26–29 August 2018.
- Mäkinen, P.; Dmitrochenko, O.; Mattila, J. Floating frame of reference formulation for a flexible manipulator with hydraulic actuation—Modelling and experimental validation. In Proceedings of the BATH/ASME 2018 Symposium on Fluid Power and Motion Control, FPMC 2018, Sarasota, FL, USA, 16–18 October 2018.
- Kalla, P.; Koona, R.; Ravindranath, P.; Sudhakar, I. Coordinate reference frame technique for robotic planar path planning. Mater. Today Proc. 2018, 5, 19073–19079.
- Stoltmann, K.; Fuchs, S.; Krifka, M. The influence of animacy and spatial relation complexity on the choice of frame of reference in German. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2018; pp. 119–133.
- Brown, A.; Uneri, A.; De Silva, T.; Manbachi, A.; Siewerdsen, J.H. Technical note: Design and validation of an open-source library of dynamic reference frames for research and education in optical tracking. In Progress in Biomedical Optics and Imaging—Proceedings of SPIE, Houston, TX, USA, 10–15 February 2018.
- Craifaleanu, A.; Stroe, I. Study of vibrations of a robotic arm, using the Lagrange equations with respect to a non-inertial reference frame. In Acoustics and Vibration of Mechanical Structures—AVMS-2017: Proceedings of the 14th AVMS Conference, Timisoara, Romania, 25–26 May 2017; Springer: Berlin/Heidelberg, Germany, 2018.
- Relvas, P.; Costa, P.; Moreira, A.P. Object tracking in a moving reference frame. Adv. Intell. Syst. Comput. 2018, 693, 26–35.
- Dugar, V.; Choudhury, S.; Scherer, S. A κITE in the wind: Smooth trajectory optimization in a moving reference frame. In Proceedings of the IEEE International Conference on Robotics and Automation, Singapore, 29 May–3 June 2017.
- Mohamed, H.; Moussa, A.; Elhabiby, M.; El-Sheimy, N.; Sesay, A. A novel real-time reference key frame scan matching method. Sensors 2017, 17, 1060.
- Oess, T.; Krichmar, J.L.; Rohrbein, F. A computational model for spatial navigation based on reference frames in the hippocampus, retrosplenial cortex, and posterior parietal cortex. Front. Neurorobot. 2017, 11, 4.
- Lee, B.H.; Ahn, H.S. Distributed estimation for the unknown orientation of the local reference frames in N-dimensional space. In Proceedings of the 2016 14th International Conference on Control, Automation, Robotics and Vision, ICARCV 2016, Phuket, Thailand, 13–15 November 2016.
- Montijano, E.; Cristofalo, E.; Schwager, M.; Sagues, C. Distributed formation control of non-holonomic robots without a global reference frame. In Proceedings of the IEEE International Conference on Robotics and Automation, Stockholm, Sweden, 16–21 May 2016.
- Taris, F.; Andrei, A.; Roland, J.; Klotz, A.; Vachier, F.; Souchay, J. Long-term R and V-band monitoring of some suitable targets for the link between ICRF and the future Gaia celestial reference frame. Astron. Astrophys. 2016, 587, A221.
- Expert, F.; Ruffier, F. Flying over uneven moving terrain based on optic-flow cues without any need for reference frames or accelerometers. Bioinspir. Biomim. 2015, 10, 026003.
- Legnani, G.; Casalo, F.; Righettini, P.; Zappa, B. A homogeneous matrix approach to 3D kinematics and dynamics—I. Theory. Mech. Mach. Theory 1996, 31, 573–587.
- Legnani, G.; Casalo, F.; Righettini, P.; Zappa, B. A homogeneous matrix approach to 3D kinematics and dynamics—II. Applications to chains of rigid bodies and serial manipulators. Mech. Mach. Theory 1996, 31, 589–605.
- Di Gregorio, R. A novel point of view to define the distance between two rigid-body poses. In Advances in Robot Kinematics: Analysis and Design; Springer: Berlin/Heidelberg, Germany, 2008; pp. 361–369.
- Mazzotti, C.; Sancisi, N.; Parenti-Castelli, V. A Measure of the Distance Between Two Rigid-Body Poses Based on the Use of Platonic Solids. In ROMANSY 21—Robot Design, Dynamics and Control; CISM International Centre for Mechanical Sciences; Springer: Berlin/Heidelberg, Germany, 2016; pp. 81–89.
- Mastrogiovanni, F.; Cannata, G.; Natale, L.; Metta, G. Advances in tactile sensing and touch based human–robot interaction. In Proceedings of HRI'12—the 7th Annual ACM/IEEE International Conference on Human–Robot Interaction, Boston, MA, USA, 5–8 March 2012.
- Scalera, L.; Giusti, A.; Vidoni, R.; Di Cosmo, V.; Matt, D.T.; Riedl, M. Application of dynamically scaled safety zones based on the ISO/TS 15066:2016 for collaborative robotics. Int. J. Mech. Control 2020, 21, 41–49.
- ISO/TS 15066:2016; Robots and Robotic Devices—Collaborative Robots. ISO: Geneva, Switzerland, 2016.
- Rosenstrauch, M.J.; Kruger, J. Safe human–robot collaboration—Introduction and experiment using ISO/TS 15066. In Proceedings of the 2017 3rd International Conference on Control, Automation and Robotics, ICCAR 2017, Nagoya, Japan, 22–24 April 2017.
- Wahrmann, D.; Hildebrandt, A.C.; Wittmann, R.; Sygulla, F.; Rixen, D.; Buschmann, T. Fast object approximation for real-time 3D obstacle avoidance with biped robots. In Proceedings of the IEEE/ASME International Conference on Advanced Intelligent Mechatronics, AIM, Banff, AB, Canada, 12–15 July 2016.
- Yakovlev, S. The expanding space method in sphere packing problem. In Advances in Intelligent Systems and Computing; Springer: Berlin/Heidelberg, Germany, 2021; pp. 151–163.
- Echávarri, J.; Ceccarelli, M.; Carbone, G.; Alén, C.; Muñoz, J.L.; Díaz, A.; Munoz-Guijosa, J.M. Towards a safety index for assessing head injury potential in service robotics. Adv. Robot. 2013, 27, 831–844.
- Cordero, C.A.; Carbone, G.; Ceccarelli, M.; Echávarri, J.; Muñoz, J.L. Experimental tests in human–robot collision evaluation and characterization of a new safety index for robot operation. Mech. Mach. Theory 2014, 80, 184–199.
- Fryman, J.; Matthias, B. Safety of industrial robots: From conventional to collaborative applications. In Proceedings of ROBOTIK 2012, the 7th German Conference on Robotics, Munich, Germany, 21–22 May 2012; pp. 1–5.
- Perez-Sala, X.; Escalera, S.; Angulo, C.; Gonzalez, J. A survey on model based approaches for 2D and 3D visual human pose recovery. Sensors 2014, 14, 4189–4210.
- Bazarevsky, V.; Grishchenko, I.; Raveendran, K.; Zhu, T.; Zhang, F.; Grundmann, M. BlazePose: On-device real-time body pose tracking. arXiv 2020, arXiv:2006.10204.
- Wu, L.; Ren, H. Finding the kinematic base frame of a robot by hand-eye calibration using 3D position data. IEEE Trans. Autom. Sci. Eng. 2016, 14, 314–324.
- Daniilidis, K. Hand-eye calibration using dual quaternions. Int. J. Robot. Res. 1999, 18, 286–298.
- Scalera, L.; Giusti, A.; Vidoni, R.; Di Cosmo, V.; Matt, D.T.; Riedl, M. A Collaborative Robotics Safety Control Application Using Dynamic Safety Zones Based on the ISO/TS 15066:2016. In RAAD 2019: Advances in Service and Industrial Robotics; Springer: Berlin/Heidelberg, Germany, 2019; pp. 430–437.
- Dong, J.; Jiang, W.; Huang, Q.; Bao, H.; Zhou, X. Fast and robust multi-person 3D pose estimation from multiple views. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019.
- Tu, H.; Wang, C.; Zeng, W. VoxelPose: Towards multi-camera 3D human pose estimation in wild environment. In Computer Vision—ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020, Proceedings, Part I; Springer: Berlin/Heidelberg, Germany, 2020.
- Ferraguti, F.; Landi, C.T.; Costi, S.; Bonfè, M.; Farsoni, S.; Secchi, C.; Fantuzzi, C. Safety barrier functions and multi-camera tracking for human–robot shared environment. Robot. Auton. Syst. 2020, 124, 103388.
- Charalambous, G.; Fletcher, S.; Webb, P. The development of a scale to evaluate trust in industrial human–robot collaboration. Int. J. Soc. Robot. 2016, 8, 193–209.
- Sun, Y.; Sun, L.; Liu, J. Human comfort following behavior for service robots. In Proceedings of the 2016 IEEE International Conference on Robotics and Biomimetics (ROBIO), Qingdao, China, 3–7 December 2016.
- Dufour, K.; Ocampo-Jimenez, J.; Suleiman, W. Visual-spatial attention as a comfort measure in human–robot collaborative tasks. Robot. Auton. Syst. 2020, 133, 103626.
- Changizi, A.; Lanz, M. The comfort zone concept in a human–robot cooperative task. In International Precision Assembly Seminar; Springer: Berlin/Heidelberg, Germany, 2018; pp. 82–91.
- Wang, W.; Chen, Y.; Li, R.; Jia, Y. Learning and comfort in human–robot interaction: A review. Appl. Sci. 2019, 9, 5152.
- Xiong, P.; Tong, X.; Liu, P.X.; Song, A.; Li, Z. Robotic Object Perception Based on Multispectral Few-Shot Coupled Learning. IEEE Trans. Syst. Man Cybern. Syst. 2023, 1–13.
- Xiong, P.; Liao, J.; Zhou, M.; Song, A.; Liu, P.X. Deeply Supervised Subspace Learning for Cross-Modal Material Perception of Known and Unknown Objects. IEEE Trans. Ind. Inform. 2023, 19, 2259–2268.
- Liu, F.; Sun, F.; Fang, B.; Li, X.; Sun, S.; Liu, H. Hybrid Robotic Grasping with a Soft Multimodal Gripper and a Deep Multistage Learning Scheme. IEEE Trans. Robot. 2023, 39, 2379–2399.
- Pohtongkam, S.; Srinonchat, J. Object Recognition for Humanoid Robots Using Full Hand Tactile Sensor. IEEE Access 2023, 11, 20284–20297.
- Villafañe, J.H.; Valdes, K.; Vanti, C.; Pillastrini, P.; Borboni, A. Reliability of handgrip strength test in elderly subjects with unilateral thumb carpometacarpal osteoarthritis. Hand 2015, 10, 205–209.
- Aggogeri, F.; Borboni, A.; Merlo, A.; Pellegrini, N.; Ricatto, R. Real-time performance of mechatronic PZT module using active vibration feedback control. Sensors 2016, 16, 1577.
- Borboni, A.; Aggogeri, F.; Pellegrini, N.; Faglia, R. Innovative modular SMA actuator. Adv. Mater. Res. 2012, 590, 405–410.
- Borboni, A.; Aggogeri, F.; Pellegrini, N.; Faglia, R. Precision point design of a cam indexing mechanism. Adv. Mater. Res. 2012, 590, 399–404.
- Amici, C.; Borboni, A.; Faglia, R.; Fausti, D.; Magnani, P.L. A parallel compliant meso-manipulator for finger rehabilitation treatments: Kinematic and dynamic analysis. In Proceedings of the 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS, Nice, France, 22–26 September 2008; pp. 735–740.
- Borboni, A.; De Santis, D.; Faglia, R. Large deflection of a non-linear, elastic, asymmetric Ludwick cantilever beam. In Proceedings of the ASME 2010 10th Biennial Conference on Engineering Systems Design and Analysis, ESDA2010, Istanbul, Turkey, 12–14 July 2010; pp. 99–106.
RF 1 | Application | MP | AA 2 | D 3 | KC | Ref. |
---|---|---|---|---|---|---|
A | Autonomous flight | Decouple the trajectory optimization | S | G, T | Dynamically feasible, time-optimal trajectories in the presence of wind. | [22,23,24,25,26,27] |
A | Autonomous high-performance flight | Decouple the trajectory optimization | S | G, T | Decouples a path optimization in the ground frame and velocity optimization in the airframe | [28,29,30,31,32] |
E,A | Autonomous Mobile Robot | Human/robot distributed control | S | G,T | Definition of perceptive action reference is directly relevant to the measured sensory outputs | [33] |
E,A | Autonomous Mobile Robot | Human/robot distributed control | S | G,T | Comparison between perceptive frame and time-based reference frame | [34] |
E,A | Spatial mapping | Control of non-holonomic robots | S | G,T | The distance-based holonomic control is transformed to cope with non-holonomic constraints using a piecewise-smooth function | [35,36] |
A | Surgical Medical | Graphical User Interface | I | L,T | Development of the frame of reference transformation tool | [37] |
E | Human cognition | Human’s perception of spatial relations | S | G, S | Navigation strategies depend on the agent’s confidence (reference frame and sensor information) | [21] |
| In | Work | Out | Interactions
---|---|---|---|---
(a) | X | | | 1
(b) | | X | | 1
(c) | | | X | 1
(d) | X | X | | 2
(e) | X | | X | 2
(f) | | X | X | 2
(g) | X | X | X | 3
Scale Item | Major Components |
---|---|
The way the robot moved made me uncomfortable | Robot’s motion and pick-up speed (Figure 11) |
The speed at which the gripper picked up and released the components made me uneasy | |
I trusted that the robot was safe to cooperate with | Safe cooperation (Figure 12) |
I was comfortable that the robot would not hurt me | |
The size of the robot did not intimidate me | |
I felt safe interacting with the robot | |
I knew the gripper would not drop the components | Robot and gripper reliability (Figure 13) |
The robot gripper did not look reliable | |
The gripper seemed like it could be trusted | |
I felt I could rely on the robot to do what it was supposed to do |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Borboni, A.; Pagani, R.; Sandrini, S.; Carbone, G.; Pellegrini, N. Role of Reference Frames for a Safe Human–Robot Interaction. Sensors 2023, 23, 5762. https://doi.org/10.3390/s23125762