Article

Virtual Teleoperation System for Mobile Manipulator Robots Focused on Object Transport and Manipulation

by Fernando J. Pantusin 1,*, Christian P. Carvajal 2,3, Jessica S. Ortiz 1,* and Víctor H. Andaluz 1

1 Departamento de Eléctrica y Electrónica, Universidad de las Fuerzas Armadas-ESPE, Sangolquí 171103, Ecuador
2 Instituto de Automática (INAUT), Universidad Nacional de San Juan-CONICET, Av. San Martín (Oeste) 1109, San Juan 5402, Argentina
3 Centro de Investigación en Ciencias Humanas y de la Educación (CICHE), Facultad de Ingenierías, Ingeniería Industrial, Universidad Tecnológica Indoamérica, Ambato 180103, Ecuador
* Authors to whom correspondence should be addressed.
Technologies 2024, 12(9), 146; https://doi.org/10.3390/technologies12090146
Submission received: 25 July 2024 / Revised: 24 August 2024 / Accepted: 29 August 2024 / Published: 31 August 2024
(This article belongs to the Special Issue Advanced Autonomous Systems and Artificial Intelligence Stage)

Abstract

This work describes the development of a tool for the teleoperation of robots. The tool is built in a virtual environment using the Unity graphics engine. The application relies on a kinematic model and a dynamic model of a mobile manipulator consisting of an omnidirectional platform and an anthropomorphic robotic arm with four degrees of freedom (4DOF). These models are essential to emulate the robot's movements and to support immersion in the virtual environment. In addition, the control algorithms are developed in MATLAB 2020 software, which aids the acquisition of the knowledge needed to teleoperate robots and execute object manipulation and transport tasks. This methodology offers a cheaper and safer alternative to real physical systems, as it reduces both the costs and the risks associated with using a real robot for training.

1. Introduction

Service robotics focuses on the development of robotic systems capable of performing a wide range of beneficial tasks in non-industrial environments, including industrial automation applications, with a special focus on interacting with humans to improve their quality of life [1]. With the most recent technological advances, attention has turned not only to meeting human needs but also to improving comfort and human interaction in various areas of daily life, from the home to hospitals and public spaces [2,3]. According to the IFR, service robotics comprises two main groups, professional and personal use [4]. Within these groups there are several application areas: (i) transport and logistics: service robots for logistics cover the transport, handling, packaging, sorting and delivery of goods in offices, airports and post offices, among others [5]; (ii) cleaning: robotic systems are developed for cleaning large or crowded areas; most commercially available home service robots address requirements posed by a single specific domestic task, such as floor cleaning [2]; (iii) medicine: providing assistance in surgery, rehabilitation and patient monitoring, as well as patient transport and orientation tasks, which can represent up to 46% of hospital expenses; in addition, virtual AI-based psychotherapeutic devices are used in mental health and in the treatment of diseases that require prolonged observation of the patient, managed through online monitoring [6,7]; (iv) hospitality: focused on entertainment and improving the quality of the human experience; several hotels even use robots to communicate with guests or perform hotel operations [8]; and finally, (v) agriculture: robotic systems are being developed for outdoor cultivation, improving productivity and reducing farmers' working hours [9]. All of this results in the need to develop increasingly complex robotic systems.
Due to the complexity of the applications and environments in which service robots must be deployed, the need has arisen to create systems with diverse morphologies and combinations of multiple robots to address a variety of tasks [10,11]. However, acquiring such highly complex systems entails considerable investment, whether for implementation or study. This is further complicated by the need to train individuals in handling these robots, and by the requirement for trained supervisors to ensure their correct use and the users' learning. Recent technological advances have opened new alternatives in service robotics, facilitating the learning process through innovative approaches such as virtual environments, programming with LEGO kits, and pedagogical systems with a STEM educational approach focused on service and educational robotics [12]. Such educational systems offer an introduction to robot operation and programming, but their acquisition cost is considerable, and operating them around people or objects not accounted for can damage the system or cause harm. Another disadvantage is that these educational systems cannot handle heavy payloads or perform precision tasks, which imposes significant limitations [13,14].
To facilitate the learning process in situations that seek to replicate reality, various technological tools are available, with simulators being one of the most prominent in virtual robotics education [15,16]. Within this broad set of tools, virtual environments stand out, allowing the creation of immersive environments that simulate real situations [17,18]. These environments benefit from the use of devices such as helmets, gloves, sensors and controls, which are now widely available at the simulation level [19]. This technology has made it possible to develop virtual reality experiences within these environments, which, in addition to reducing costs compared to the use of real equipment and associated resources, provides a level of fail-safety in the learning process that cannot be achieved in a real physical environment [17].
In this context, the present work proposes the development of a teaching-learning system based on virtual environments in the Unity 3D graphics engine, using an omnidirectional robot with an anthropomorphic arm controlled through MATLAB 2020 software from MathWorks via a communication channel, in order to facilitate the understanding of the various applications of service robotics and the complexity of robot morphologies. This is achieved by considering the mathematical model of the robot in question and a haptic device that allows users to be trained to perform autonomous and teleoperated tasks in an accessible, immersive virtual environment that poses different challenges for user understanding [20].
Finally, this article is organized in sections to address these aspects in a structured way. Section 2 presents the methodology needed to carry out this process, with a focus on educational robotics. Section 3 focuses on the mathematical basis required for the modeling and control of the robot. Section 4 is devoted to the development of the virtual environment. Section 5 presents the results obtained, and Section 6 contains the conclusions derived from this study.

2. Methodology and Process Conceptualisation

The challenge in service robotics lies in operating in unstructured environments, which are highly unpredictable and difficult to manage. Creating a completely unstructured system is complicated by the very nature of these environments. It is therefore essential to have a thorough understanding of the real world so that the assumed foundations are sufficiently flexible and adaptable to the variability of the environment. Although the development of robotic systems has advanced, it is still far from achieving effective operation in completely unstructured environments under fully autonomous control. Consequently, semi-autonomous control is established through the teleoperation of robotic systems, and in this context the following work methodology is proposed.
As can be seen in the scheme in Figure 1, the methodology starts from service robotics in the real world and contextualizes it using knowledge gained through experience and theoretical background. This allows us to develop a mental model that describes the operation of an omnidirectional robot and provides an understanding of its behavior. Subsequently, we build a formal model based on physical and geometrical principles, resulting in the mathematical model of the mobile manipulator robot, comprising the mobile platform and the robotic arm; the model of each of these robots is therefore considered. Depending on the control objective, the independent model of each robot can be used, or a unified model can be established to control the end-effector of the whole robot. In response to the demands of the environment, a robotic process is created that combines the two previously defined systems for the purpose of evaluating the behavior, limitations and constraints of the robotic system.
Simultaneously, an environment is created that adapts to real-world demands and leverages contextualized human knowledge. This environment is designed to host multiple tasks or objectives that require system intervention. Subsequently, we carry out the virtualization of this environment using CAD software and graphics engines, with the purpose of familiarizing the user and facilitating clarity in the learning process. A schematic of the conceptualisation process is presented in Figure 2.
Therefore, the virtual system designed in the Unity graphics engine includes a mobile manipulator robot, previously modelled in CAD software, within an environment composed of two predefined scenes based on external resources of the real robot and on the simulation scenarios. The CAD software used is SolidWorks 2019. Through a formal mathematical model, which incorporates the kinematics and dynamics of the robotic system, the animations of the robot's movements are generated by means of scripts integrated in the graphics engine. In addition, control algorithms are developed in MATLAB, which execute the actions needed to accomplish the tasks desired by the user. These actions are commanded through the Novint Falcon haptic device, which is connected to the control software. Communication between the control software and the graphics engine is handled through a Dynamic-Link Library (DLL): MATLAB sends the control actions to Unity and receives the robot's state information in return. Control actions can also be transmitted to a real robot using a TCP/IP wireless communication protocol, which allows virtual tasks to be replicated in a physical environment, making it possible to evaluate the controller by operating the robot remotely in experiments, and also to design an autonomous controller for a defined task.
In this way, a virtual environment is created with predefined goals to be achieved by the robotic system driven by a human operator. It can also be used to evaluate autonomous control algorithms for the execution of defined tasks.

3. Modeling and Control

A mobile manipulator is a robot consisting of a robotic arm (also known as a manipulator) mounted on a mobile platform or vehicle. These robots enable tasks that require both navigation and manipulation capabilities, and such systems are characterized by high redundancy.

3.1. Kinematic Model

Kinematics determines the characteristics of a robot's motion in the plane or in space; the kinematic models of the robots are described below.

3.1.1. Omnidirectional Mobile Robot

The position of the mobile robot is defined in the fixed reference frame by $\eta(t) = [\eta_x(t),\ \eta_y(t),\ \psi(t)]^T$. The kinematic configuration of the omnidirectional robot can be observed in Figure 3.
Now, it is necessary to obtain the model of the mobile robot as a function of its manoeuvrability velocities. The differential kinematics of the omnidirectional robot is represented by:
$$\begin{aligned}\dot{\eta}_x &= u_l\cos\psi - u_m\sin\psi\\ \dot{\eta}_y &= u_l\sin\psi + u_m\cos\psi\\ \dot{\psi} &= \omega\end{aligned}\tag{1}$$
Written in matrix form, (1) becomes:
$$\dot{\eta}(t) = \mathbf{J}_P(\psi)\,\mathbf{u}(t)\tag{2}$$
where $\dot{\eta}(t) = [\dot{\eta}_x,\ \dot{\eta}_y,\ \dot{\psi}]^T$ represents the velocity vector of the point of interest with respect to the inertial reference system $\mathcal{R}$; $\mathbf{J}_P(\psi)\in\mathbb{R}^{3\times3}$ is the Jacobian matrix that represents the motion characteristics of the omnidirectional robot; and $\mathbf{u}(t)\in\mathbb{R}^3$ is the manoeuvrability vector of the omnidirectional robot with respect to $\mathcal{R}_P$ [21].
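To make the platform model concrete, the following sketch (illustrative Python/NumPy; the original implementation uses MATLAB, and the velocity values here are arbitrary examples) evaluates $\mathbf{J}_P(\psi)$ from Eq. (2) and maps platform velocities into the inertial frame:

```python
import numpy as np

def jacobian_platform(psi: float) -> np.ndarray:
    """Jacobian J_P(psi) of the omnidirectional platform, Eq. (2)."""
    c, s = np.cos(psi), np.sin(psi)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# u = [u_l, u_m, omega]: frontal, lateral and angular velocities (example values)
u = np.array([0.4, 0.1, 0.2])
eta_dot = jacobian_platform(np.pi / 6) @ u  # inertial-frame velocity, Eq. (1)
```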

3.1.2. Robotic Arm

The position of the robotic arm in the fixed workspace $\mathcal{R}_B$ is defined in terms of its independent joints, so that $h(t) = f(q_a)$, where the vector of independent coordinates is $q_a(t) = [q_{a1}\ q_{a2}\ \cdots\ q_{a n_a}]^T$. The configuration of the robotic arm can be observed in Figure 4.
Now, taking the fixed reference system $\mathcal{R}_B$ and four independent arm joints, i.e., a 4DOF anthropomorphic robotic arm (see Figure 4), the position of the end-effector of the robotic arm is determined by:
$$h(t)=\begin{cases}h_x(t) = l_2 S_{q_2} C_{q_1} + l_3 S_{q_2 q_3} C_{q_1} + l_4 S_{q_2 q_3 q_4} C_{q_1}\\ h_y(t) = l_2 S_{q_2} S_{q_1} + l_3 S_{q_2 q_3} S_{q_1} + l_4 S_{q_2 q_3 q_4} S_{q_1}\\ h_z(t) = l_1 + l_2 C_{q_2} + l_3 C_{q_2 q_3} + l_4 C_{q_2 q_3 q_4}\end{cases}\tag{3}$$
where the expressions in (3) use the shorthand $S_a = \sin(a)$; $C_a = \cos(a)$; $S_{ab\ldots n} = \sin(a + b + \cdots + n)$; and $C_{ab\ldots n} = \cos(a + b + \cdots + n)$.
Now, the model of the arm as a function of its velocities is obtained through the partial derivative of $f(q_a)$ with respect to $q_a$, which yields the differential kinematics of the arm in matrix form:
$$\dot{h}(t) = \mathbf{J}_a(q_a(t))\,\dot{q}_a(t)\tag{4}$$
where $\dot{q}_a$ is the joint velocity vector of the robotic arm; $\dot{h}(t) = [\dot{h}_x(t),\ \dot{h}_y(t),\ \dot{h}_z(t)]^T$ is the vector of end-effector velocities in the workspace; and $\mathbf{J}_a(q_a)\in\mathbb{R}^{3\times4}$ is the Jacobian of the robotic arm, which transforms the manoeuvring velocities of the arm into end-effector velocities.
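As an illustration of Eqs. (3) and (4), the sketch below (Python, with hypothetical link lengths $l_1\ldots l_4$; the paper does not report the actual values) evaluates the forward kinematics and approximates the $3\times4$ Jacobian by central differences; in practice the analytic Jacobian would be used:

```python
import numpy as np

L = np.array([0.15, 0.25, 0.22, 0.18])  # hypothetical link lengths l1..l4 [m]

def forward_kinematics(q: np.ndarray) -> np.ndarray:
    """End-effector position h = f(q_a) of the 4DOF arm, Eq. (3)."""
    q1, q2, q3, q4 = q
    r = L[1]*np.sin(q2) + L[2]*np.sin(q2 + q3) + L[3]*np.sin(q2 + q3 + q4)
    z = L[0] + L[1]*np.cos(q2) + L[2]*np.cos(q2 + q3) + L[3]*np.cos(q2 + q3 + q4)
    return np.array([r*np.cos(q1), r*np.sin(q1), z])

def arm_jacobian(q: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Numerical 3x4 Jacobian J_a(q_a) of Eq. (4), by central differences."""
    J = np.zeros((3, 4))
    for i in range(4):
        dq = np.zeros(4)
        dq[i] = eps
        J[:, i] = (forward_kinematics(q + dq) - forward_kinematics(q - dq)) / (2*eps)
    return J
```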

3.1.3. Omnidirectional Mobile Manipulator

The mobile manipulator robot considered in this work consists of a robotic arm mounted on a mobile robot with omnidirectional traction. The configuration of the mobile manipulator is known when the position and orientation of all its points with respect to an inertial reference system $\mathcal{R}\{X,Y,Z\}$ are known. The kinematic configuration of the robot can be seen in Figure 5.
The direct kinematics of the mobile manipulator robot defines the position of the end-effector $\xi(t)$ as a function of the configuration of the robotic arm and the omnidirectional platform, $\xi(t) = f(q_p, q_a)$:
$$\xi(t) = \eta(t) + \mathbf{R}_z(\psi)\,[a\ \ b\ \ 0]^T + h_a(q_a,\psi)\tag{5}$$
where $\eta(t) = [\eta_x(t),\ \eta_y(t),\ \eta_P(t)]^T$ represents the position of the omnidirectional robot relative to the inertial reference frame $\mathcal{R}\{X,Y,Z\}$; $\eta_x(t)$ and $\eta_y(t)$ are the positions on the $X$ and $Y$ axes, while $\eta_P$ represents the height of the mobile robot with respect to $Z$; $a$ and $b$ represent the offsets of the robotic arm's base from the moving reference system $\mathcal{R}_P$ located at the center of gravity of the omnidirectional robot; $\mathbf{R}_z(\psi)$ represents the rotation matrix about the $Z$-axis of the reference system $\mathcal{R}\{X,Y,Z\}$; and $h_a(q_a,\psi)$ considers the position of the robotic arm with respect to the position and orientation of the omnidirectional robot, i.e.,
$$h_a(q_a,\psi)=\begin{bmatrix}C_{q_1\psi}\left(l_2 S_{q_2} + l_3 S_{q_2 q_3} + l_4 S_{q_2 q_3 q_4}\right)\\ S_{q_1\psi}\left(l_2 S_{q_2} + l_3 S_{q_2 q_3} + l_4 S_{q_2 q_3 q_4}\right)\\ \eta_P + l_1 + l_2 C_{q_2} + l_3 C_{q_2 q_3} + l_4 C_{q_2 q_3 q_4}\end{bmatrix}\tag{6}$$
Considering Equation (6) and the position of the mobile robot in (5), the direct kinematics of the mobile manipulator robot can be represented as:
$$\xi(t)=\begin{cases}\xi_x(t) = \eta_x + a C_\psi - b S_\psi + C_{q_1\psi}\left(l_2 S_{q_2} + l_3 S_{q_2 q_3} + l_4 S_{q_2 q_3 q_4}\right)\\ \xi_y(t) = \eta_y + a S_\psi + b C_\psi + S_{q_1\psi}\left(l_2 S_{q_2} + l_3 S_{q_2 q_3} + l_4 S_{q_2 q_3 q_4}\right)\\ \xi_z(t) = \eta_P + l_1 + l_2 C_{q_2} + l_3 C_{q_2 q_3} + l_4 C_{q_2 q_3 q_4}\end{cases}\tag{7}$$
Finally, the differential kinematic model of the mobile manipulator robot establishes the derivative of the end-effector location as a function of the derivative of the configuration of the robotic arm and the omnidirectional robot, i.e., $\dot{\xi}(t) = \frac{d}{dt}\xi(t)$. As a result, it follows that:
$$\begin{aligned}
\dot{\xi}_x ={}& u_l C_\psi - u_m S_\psi - \left(a S_\psi + b C_\psi\right)\omega - S_{q_1\psi}\left(l_2 S_{q_2} + l_3 S_{q_2 q_3} + l_4 S_{q_2 q_3 q_4}\right)\left(\dot{q}_1 + \omega\right)\\
&+ C_{q_1\psi}\left(l_2 C_{q_2}\dot{q}_2 + l_3 C_{q_2 q_3}(\dot{q}_2 + \dot{q}_3) + l_4 C_{q_2 q_3 q_4}(\dot{q}_2 + \dot{q}_3 + \dot{q}_4)\right)\\
\dot{\xi}_y ={}& u_l S_\psi + u_m C_\psi + \left(a C_\psi - b S_\psi\right)\omega + C_{q_1\psi}\left(l_2 S_{q_2} + l_3 S_{q_2 q_3} + l_4 S_{q_2 q_3 q_4}\right)\left(\dot{q}_1 + \omega\right)\\
&+ S_{q_1\psi}\left(l_2 C_{q_2}\dot{q}_2 + l_3 C_{q_2 q_3}(\dot{q}_2 + \dot{q}_3) + l_4 C_{q_2 q_3 q_4}(\dot{q}_2 + \dot{q}_3 + \dot{q}_4)\right)\\
\dot{\xi}_z ={}& -l_2 S_{q_2}\dot{q}_2 - l_3 S_{q_2 q_3}(\dot{q}_2 + \dot{q}_3) - l_4 S_{q_2 q_3 q_4}(\dot{q}_2 + \dot{q}_3 + \dot{q}_4)
\end{aligned}\tag{8}$$
The differential kinematic model (8) of the robot is written in matrix form:
$$\dot{\xi}(t) = \mathbf{J}(q)\,v(t)\tag{9}$$
where $\mathbf{J}(q)\in\mathbb{R}^{3\times7}$ represents the Jacobian matrix of the mobile manipulator robot, which defines a linear mapping between the joint velocity vector $v(t) = [u_l,\ u_m,\ \omega,\ \dot{q}_1,\ \dot{q}_2,\ \dot{q}_3,\ \dot{q}_4]^T\in\mathbb{R}^7$ and the end-effector velocity vector $\dot{\xi}(t) = [\dot{\xi}_x,\ \dot{\xi}_y,\ \dot{\xi}_z]^T\in\mathbb{R}^3$. The Jacobian $\mathbf{J}(q)$ is a non-square matrix with fewer rows than columns; the mobile manipulator robot is therefore a redundant system.
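This redundancy is what the controllers in Section 3.2 exploit. A minimal sketch (illustrative Python; the Jacobian here is a random stand-in, not the model of Eq. (8)) shows the right pseudo-inverse and the null-space projector used later:

```python
import numpy as np

J = np.random.default_rng(0).normal(size=(3, 7))  # stand-in for J(q) of Eq. (9)

J_pinv = J.T @ np.linalg.inv(J @ J.T)   # right pseudo-inverse J# = J^T (J J^T)^-1
N = np.eye(7) - J_pinv @ J              # projector onto the null space of J

xi_dot_ref = np.array([0.1, 0.0, 0.05])  # desired end-effector velocity
v = J_pinv @ xi_dot_ref                  # minimum-norm joint velocities
# Any velocity added through N does not disturb the end-effector task:
assert np.allclose(J @ (v + N @ np.ones(7)), xi_dot_ref)
```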

3.2. Control

The teleoperation scheme allows a human operator to command a robot over long distances, i.e., to operate the robot so that it performs a specific task (Figure 6). The defined task usually consists of several operation processes, commonly comprising three stages: (i) approach; (ii) manipulation; and (iii) return. When the mission is initiated, the human operator must move the mobile manipulator robot close to the object so that the object lies within the robotic arm's workspace. Once the mobile robot reaches the proximity of the target object, the human operator switches from locomotion mode to manipulation mode (control of the robotic arm). When the manipulation mission is completed, the human operator switches back to locomotion mode and the mobile manipulator robot returns to a safe area. To accomplish this task, the teleoperation scheme proposed for this work is presented in Figure 7.

3.2.1. Local Site

The proposed scheme generates reference commands for each robot depending on the task selected by the human operator (h). The reference vectors generated by the human operator are $\dot{\eta}_{ref}^{h}(t)$, $\dot{h}_{ref}^{h}(t)$ and $\dot{\xi}_{ref}^{h}(t)$; each reference depends on the haptic device, i.e., $\{\dot{\eta}_{ref}^{h}(t),\ \dot{h}_{ref}^{h}(t),\ \dot{\xi}_{ref}^{h}(t)\} = f(\dot{p}_{ref}^{h}(t))$, where $\dot{p}_{ref}^{h}(t)$ is the direct reference that the haptic device generates at each instant $t$ when excited by the human operator (h), defined by:
$$\dot{p}_{ref}^{h}(t) = \left[\dot{p}_x^{h}(t)\ \ \dot{p}_y^{h}(t)\ \ \dot{p}_z^{h}(t)\right]^T\tag{10}$$

3.2.2. Locomotion

The human operator can select: (i) controlling only the omnidirectional mobile robot, or (ii) controlling the whole mobile manipulator robot. In both locomotion cases, the commands generated by the human operator through the haptic device correspond to velocity commands.
(i) Omnidirectional Robot (Reference Generation). This mode allows only the omnidirectional robot to be controlled. The human operator (h) commands the point of interest of the omnidirectional robot through the haptic device; therefore, the velocity reference for the robot is obtained through the differential kinematics of the mobile robot (2), such that:
$$\dot{\eta}_{ref}^{h} = \mathbf{J}_P(\psi)\,u_{ref}^{h}(t)\tag{11}$$
Remark 1. 
In this case the reference $\dot{p}_z^{h}$ is translated not into a linear velocity but into the rotational velocity of the robot, since the omnidirectional robot only moves in the $XY$-plane and rotates about the $Z$-axis of the fixed reference system.
(ii) Mobile Manipulator Robot (Reference Generation). In this mode the operator commands the entire mobile manipulator robot, i.e., the end-effector is controlled through the haptic device. Considering the human operator's reference vector $\dot{p}_{ref}^{h}(t)\in\mathbb{R}^3$, the reference generator maps the velocities generated by the human operator as follows:
$$\dot{\xi}_{ref}^{h}(t) = \mathbf{R}(\psi, q_1)\,\dot{p}_{ref}^{h}(t)\tag{12}$$
where $\dot{\xi}_{ref}^{h}(t)\in\mathbb{R}^3$ is the reference velocity of the end-effector of the mobile manipulator robot, and $\mathbf{R}(\psi, q_1): \dot{p}_{ref}^{h}(t)\mapsto\dot{\xi}_{ref}^{h}(t)$ is a matrix that maps the velocities generated by (h) to the reference velocities of the operational point of the robot.

3.2.3. Manipulation

(i) Robotic Arm (Reference Generator). In this mode the human operator (h) controls only the robotic arm, so the commands generated by (h) through the haptic device, $\dot{p}_{ref}^{h}(t)$, correspond to velocity commands for the end-effector of the robotic arm, $\dot{h}_{ref}^{h}(t)\in\mathbb{R}^3$. The same expression as in (12) is considered; however, the reference velocities generated by (h) are limited according to the manipulability of the robotic arm, so that:
$$\dot{h}_{ref}^{h}(t) = s_{at}\,\dot{p}_{ref}^{h}(t),\qquad s_{at} = \begin{cases}0_{3\times3}, & w \le \lambda\\ I_{3\times3}, & w > \lambda\end{cases}\tag{13}$$
where $w = \sqrt{\det\!\left(\mathbf{J}_a\mathbf{J}_a^T\right)}$ is the manipulability index [22], and $\lambda\in\mathbb{R}^{+}$ is a real value that defines the minimum admissible manipulability for the robotic arm, with $\lambda > 0$.
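A sketch of this gating rule, Eq. (13) (illustrative Python; the threshold value for λ is a made-up default, to be tuned for the actual arm):

```python
import numpy as np

def manipulation_reference(p_dot_ref: np.ndarray, J_a: np.ndarray,
                           lam: float = 1e-3) -> np.ndarray:
    """Gate the operator's velocity reference by the manipulability index, Eq. (13)."""
    # Yoshikawa manipulability index [22]; clip tiny negative determinants from rounding
    w = np.sqrt(max(np.linalg.det(J_a @ J_a.T), 0.0))
    return p_dot_ref if w > lam else np.zeros_like(p_dot_ref)
```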

3.2.4. Remote Site

The remote station implements the control algorithms that take the signals generated by the human operator (h) through the haptic device, $\dot{p}_{ref}^{h}(t)$, and convert them into signals for the mobile manipulator robot, according to the mode of operation selected by the operator (h). Each controller therefore generates the vector of manoeuvrability velocities of the robot, $v_{ref}(t)$, defined by:
$$v_{ref}(t) = \begin{bmatrix}u_{ref}(t)\\ \dot{q}_{a\,ref}(t)\end{bmatrix} = \left[u_{l\,ref}\ \ u_{m\,ref}\ \ \omega_{ref}\ \ \dot{q}_{1\,ref}\ \ \dot{q}_{2\,ref}\ \ \dot{q}_{3\,ref}\ \ \dot{q}_{4\,ref}\right]^T\tag{14}$$
(i) Kinematic Control of Omnidirectional Mobile Robot. The kinematic model of Equation (2) is considered, therefore, the proposed control law for the omnidirectional robot is:
$$u_{ref}(t) = \mathbf{J}_P^{-1}(\psi)\left(\dot{\eta}_{ref}^{h}(t) + K_p\tanh\!\left(\dot{\tilde{\eta}}(t)\right)\right)\tag{15}$$
where $\dot{\eta}_{ref}^{h}(t)$ represents the desired velocity generated by the human operator (h) through the haptic device; $\mathbf{J}_P^{-1}(\psi)$ is the inverse of the Jacobian matrix describing the motion of the omnidirectional robot; $K_p\in\mathbb{R}^{3\times3}$ is the gain matrix that weights the velocity errors of the local controller of the omnidirectional robot; and $u_{ref}(t)$ represents the manoeuvrability velocities of the omnidirectional robot. The motion control error of the omnidirectional robot is defined as the difference between the commands generated by the human operator and the measured velocity of the omnidirectional robot, i.e.:
$$\dot{\tilde{\eta}}(t) = \dot{\eta}_{ref}^{h}(t) - \dot{\eta}(t)\tag{16}$$
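A compact sketch of the platform controller of Eqs. (15)-(16) (illustrative, self-contained Python; the gain values are hypothetical):

```python
import numpy as np

def platform_control(eta_dot_ref, eta_dot, psi, Kp=np.diag([0.8, 0.8, 0.6])):
    """Kinematic control law of Eq. (15) for the omnidirectional platform."""
    c, s = np.cos(psi), np.sin(psi)
    J_P = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])  # Eq. (2)
    err = eta_dot_ref - eta_dot                  # velocity error, Eq. (16)
    # J_P^{-1}(psi) applied to the saturated correction term
    return np.linalg.solve(J_P, eta_dot_ref + Kp @ np.tanh(err))
```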
As the operator generates commands only for the mobile robot, the robotic arm should remain in a fixed position; therefore, a control law is defined to keep the robotic arm static during the locomotion task. Let $h_d(t) = [h_{xd},\ h_{yd},\ h_{zd}]^T$ be the desired position of the arm end-effector with respect to the reference frame of the mobile robot. The controller that generates the reference velocities of the robotic arm joints is given by:
$$\dot{q}_{a\,ref}(t) = \mathbf{J}_a^{\#}K_a\tanh\!\left(\tilde{h}_a(t)\right) + \left(I - \mathbf{J}_a^{\#}\mathbf{J}_a\right)H\tanh(\chi)\tag{17}$$
where $K_a\in\mathbb{R}^{3\times3}$ is a positive diagonal matrix weighting the control error $\tilde{h}_a(t) = h_d(t) - h_a(t)$; $H\in\mathbb{R}^{4\times4}$ is a positive diagonal matrix weighting the error of the desired arm configuration $\chi = [\tilde{q}_1,\ \tilde{q}_2,\ \tilde{q}_3,\ \tilde{q}_4]^T$; and $\mathbf{J}_a^{\#}$ is the pseudo-inverse of the arm Jacobian. Since the arm Jacobian has more columns than rows, the controller exploits the arm's extra degrees of freedom through secondary objectives.
(ii) Robotic Arm Kinematic Control. For position control of the end-effector of the robotic arm, the kinematic model defined in Equation (4) is considered; therefore, the proposed control law is:
$$\dot{q}_{a\,ref}(t) = \mathbf{J}_a^{\#}\left(\dot{h}_{ref}^{h} + K_a\tanh\!\left(\tilde{h}_a(t)\right)\right) + \left(I - \mathbf{J}_a^{\#}\mathbf{J}_a\right)H\tanh(\chi)\tag{18}$$
where $\mathbf{J}_a^{\#}(q_a) = \mathbf{J}_a^T\left(\mathbf{J}_a\mathbf{J}_a^T\right)^{-1}$ is the pseudo-inverse of the arm Jacobian; $h_{ref}^{h}(t) = \int_{t_0}^{t_f}\dot{h}_{ref}^{h}(\tau)\,d\tau$ represents the desired position of the end-effector of the robotic arm; $\dot{h}_{ref}^{h}(t)$ is the desired velocity generated by the operator (h); $\tilde{h}_a(t) = h_{ref}^{h}(t) - h_a(t)$ is the position error vector of the robotic arm; and $h_a(t)$ is the actual position of the end-effector. Finally, in this case, the omnidirectional mobile robot maintains its position, i.e., it does not move, $\dot{\eta}_{ref}^{h}(t) = 0_3$; therefore, the reference velocity is:
$$u_{ref}(t) = [0\ \ 0\ \ 0]^T\tag{19}$$
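The arm controller of Eq. (18) can be sketched as follows (illustrative Python; the gains Ka and H are hypothetical). Setting h_dot_ref to zero recovers the arm-holding law of Eq. (17):

```python
import numpy as np

def arm_control(h_ref, h, q, q_des, J_a, h_dot_ref,
                Ka=0.9*np.eye(3), H=0.3*np.eye(4)):
    """Redundant arm control law of Eq. (18) with a null-space secondary task."""
    J_pinv = J_a.T @ np.linalg.inv(J_a @ J_a.T)  # J_a# = J_a^T (J_a J_a^T)^-1
    h_err = h_ref - h                            # primary task: end-effector position
    chi = q_des - q                              # secondary task: preferred posture
    primary = J_pinv @ (h_dot_ref + Ka @ np.tanh(h_err))
    secondary = (np.eye(4) - J_pinv @ J_a) @ (H @ np.tanh(chi))
    return primary + secondary                   # joint velocity references
```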
(iii) Kinematic Control of the Mobile Manipulator Robot. Similarly, the kinematic model of Equation (9) is considered; therefore, the proposed control law is:
$$v_{ref}(t) = \mathbf{J}^{\#}\left(\dot{\xi}_{ref}^{h}(t) + K_1\tanh\!\left(\dot{\tilde{\xi}}(t)\right)\right) + \left(I_{7\times7} - \mathbf{J}^{\#}\mathbf{J}\right)K_2\tanh(\zeta)\tag{20}$$
where $v_{ref}(t)\in\mathbb{R}^7$ represents the manoeuvrability velocities of the mobile manipulator robot; $\dot{\xi}_{ref}^{h}(t)$ represents the desired velocity of the end-effector of the mobile manipulator robot generated by the human operator (h) through the haptic device; $\mathbf{J}^{\#}(\psi, q_a) = \mathbf{J}^T\left(\mathbf{J}\mathbf{J}^T\right)^{-1}$ is the pseudo-inverse of the mobile manipulator Jacobian; and $K_1\in\mathbb{R}^{3\times3}$ and $K_2\in\mathbb{R}^{7\times7}$ are positive diagonal matrices weighting the end-effector velocity control error and the secondary-objective control errors, respectively. The motion control error of the mobile manipulator robot is defined as the difference between the commands generated by the human operator and the motion velocity of the end-effector, such that:
$$\dot{\tilde{\xi}}(t) = \dot{\xi}_{ref}^{h}(t) - \dot{\xi}(t)\tag{21}$$
For this work, the configuration of the robotic arm is considered as the secondary objective; therefore, the secondary objective vector is defined as:
$$\zeta(t) = \left[0\ \ 0\ \ 0\ \ q_{1d}-q_1\ \ q_{2d}-q_2\ \ q_{3d}-q_3\ \ q_{4d}-q_4\right]^T\tag{22}$$
where $q_{id}$, with $i = 1, 2, 3, 4$, represents the desired position of each joint of the robotic arm, and $q_i$ is the current position of each joint.
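Putting Eqs. (20)-(22) together, a whole-body sketch (illustrative Python; the gain values are hypothetical) would be:

```python
import numpy as np

def whole_body_control(xi_dot_ref, xi_dot, J, q_arm, q_arm_des,
                       K1=0.8*np.eye(3), K2=0.3*np.eye(7)):
    """Whole-body control law of Eq. (20) with the arm posture as secondary objective."""
    J_pinv = J.T @ np.linalg.inv(J @ J.T)                    # J# of the 3x7 Jacobian
    err = xi_dot_ref - xi_dot                                # Eq. (21)
    zeta = np.concatenate([np.zeros(3), q_arm_des - q_arm])  # Eq. (22)
    return (J_pinv @ (xi_dot_ref + K1 @ np.tanh(err))
            + (np.eye(7) - J_pinv @ J) @ (K2 @ np.tanh(zeta)))
```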

3.3. Dynamics of the Robots

To emulate the behaviour of the robot with its dynamic characteristics, this section defines the dynamic models of the omnidirectional robot and the robotic arm. With these models, the robot in the 3D virtual environment replicates the behaviour of the real system as closely as possible, allowing the physical performance of the robot to be simulated in real time. This methodology provides a safe and ideal environment for validating control techniques during prototype development, and for educational purposes in the handling of robotic prototypes. Figure 8 shows the interpretation of the virtual robot.

3.3.1. Direct Dynamics of the Omnidirectional Robot

The dynamic model is a mathematical formulation that describes the dynamic behaviour of a system. It establishes the relationships between the robot's joint coordinates (or coordinates of interest), velocities, accelerations, the forces and torques applied at its joints, and the robot's parameters, such as masses and moments of inertia. The model used in this work is a simplified model [23], expressed in terms of reference velocities, because applying velocities is easier than applying torques as in traditional robot models. Therefore, to emulate the dynamics of the robot, its direct dynamic model is used. For this purpose, the inverse dynamic model $u_{ref}(t) = \mathbf{M}(\sigma)\dot{u}(t) + \mathbf{C}(\sigma,\omega)u(t)$ is considered. The direct dynamic model used in this work is then defined by:
$$\dot{u}(t) = \mathbf{A}_P(\sigma)\,u_{ref}(t) + \mathbf{B}_P(\sigma,\omega)\,u(t)\tag{23}$$
where $u_{ref}(t)\in\mathbb{R}^3$ is the reference velocity vector of the omnidirectional robot; $\dot{u}(t) = [\dot{u}_l\ \ \dot{u}_m\ \ \dot{\omega}]^T\in\mathbb{R}^3$ is the vector of accelerations of the omnidirectional robot obtained from the direct dynamics; $u(t)\in\mathbb{R}^3$ is the actual velocity vector of the robot; $\mathbf{A}_P(\sigma) = \mathbf{M}^{-1}(\sigma)$; and $\mathbf{B}_P(\sigma,\omega) = -\mathbf{M}^{-1}(\sigma)\mathbf{C}(\sigma,\omega)$. The matrix $\mathbf{M}(\sigma)\in\mathbb{R}^{3\times3}$ is the mass matrix of the robot, which is square and positive definite, and $\mathbf{C}(\sigma,\omega)\in\mathbb{R}^{3\times3}$ is a square matrix representing the centripetal and Coriolis forces of the robot. These matrices are given by:
$$\mathbf{M}(\sigma) = \mathrm{diag}(\sigma_1,\ \sigma_2,\ \sigma_3),\qquad \mathbf{C}(\sigma,\omega) = \begin{bmatrix}\sigma_4 & \omega\sigma_5 & 0\\ \omega\sigma_6 & \sigma_7 & 0\\ 0 & 0 & \sigma_8\end{bmatrix}\tag{24}$$
where $\sigma = [\sigma_1,\ \sigma_2,\ \ldots,\ \sigma_8]^T$ is the vector of dynamic parameters of the omnidirectional robot.
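For emulation, Eqs. (23)-(24) are integrated forward in time. A minimal Euler step (illustrative Python; the parameter vector σ would come from identification of the real robot, and the time step is an assumption):

```python
import numpy as np

def platform_dynamics_step(u, u_ref, sigma, dt=0.01):
    """One Euler step of the platform's direct dynamics, Eqs. (23)-(24)."""
    M = np.diag(sigma[:3])                      # M(sigma) = diag(sigma1..sigma3)
    w = u[2]                                    # current angular velocity omega
    C = np.array([[sigma[3], w*sigma[4], 0.0],
                  [w*sigma[5], sigma[6], 0.0],
                  [0.0,       0.0,      sigma[7]]])
    u_dot = np.linalg.solve(M, u_ref - C @ u)   # A_P u_ref + B_P u
    return u + dt * u_dot                       # integrate accelerations
```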

3.3.2. Direct Dynamics of the Robotic Arm

As with the omnidirectional robot, the dynamic model used in this work is expressed in terms of reference velocities rather than forces and torques. That is, the structure of the dynamic model is given by $\dot{q}_{a\,ref}(t) = \mathbf{M}_a(\delta)\ddot{q}_a(t) + \mathbf{C}_a(\delta,\dot{q}_a)\dot{q}_a(t) + \mathbf{g}_a(\delta)$, where $\ddot{q}_a(t) = [\ddot{q}_1,\ \ddot{q}_2,\ \ddot{q}_3,\ \ddot{q}_4]^T$ is the vector of accelerations of the robotic arm; $\mathbf{M}_a(\delta)\in\mathbb{R}^{4\times4}$ is the mass and inertia matrix of the robotic arm; $\mathbf{C}_a(\delta,\dot{q}_a)\in\mathbb{R}^{4\times4}$ is the matrix of centripetal and Coriolis forces; and $\mathbf{g}_a(\delta)\in\mathbb{R}^4$ is the gravity vector.
$$\mathbf{M}_a(\delta) = \begin{bmatrix}m_{11}(\delta) & m_{12}(\delta) & m_{13}(\delta) & m_{14}(\delta)\\ m_{21}(\delta) & m_{22}(\delta) & m_{23}(\delta) & m_{24}(\delta)\\ m_{31}(\delta) & m_{32}(\delta) & m_{33}(\delta) & m_{34}(\delta)\\ m_{41}(\delta) & m_{42}(\delta) & m_{43}(\delta) & m_{44}(\delta)\end{bmatrix},\quad \mathbf{C}_a(\delta,\dot{q}_a) = \begin{bmatrix}c_{11} & c_{12} & c_{13} & c_{14}\\ c_{21} & c_{22} & c_{23} & c_{24}\\ c_{31} & c_{32} & c_{33} & c_{34}\\ c_{41} & c_{42} & c_{43} & c_{44}\end{bmatrix},\quad \mathbf{g}_a(\delta) = \begin{bmatrix}g_1(\delta)\\ g_2(\delta)\\ g_3(\delta)\\ g_4(\delta)\end{bmatrix}\tag{25}$$

where each entry $c_{ij} = c_{ij}(\delta,\dot{q}_a)$.
Here, $\delta = [\delta_1,\ \delta_2,\ \ldots,\ \delta_n]^T$ is the vector of dynamic parameters of the robotic arm. Finally, to emulate the robotic arm, the direct dynamics is defined by:
$$\ddot{q}_a(t) = \mathbf{A}_a(\delta)\,\dot{q}_{a\,ref}(t) + \mathbf{B}_a(\delta,\dot{q}_a)\,\dot{q}_a(t) + \mathbf{C}_a(\delta)\tag{26}$$
where $\mathbf{A}_a(\delta) = \mathbf{M}_a^{-1}(\delta)$; $\mathbf{B}_a(\delta,\dot{q}_a) = -\mathbf{M}_a^{-1}(\delta)\mathbf{C}_a(\delta,\dot{q}_a)$; and $\mathbf{C}_a(\delta) = -\mathbf{M}_a^{-1}(\delta)\mathbf{g}_a(\delta)$.
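Since the closed-form entries of $\mathbf{M}_a$, $\mathbf{C}_a$ and $\mathbf{g}_a$ are not reproduced here, an emulation step can be sketched generically (illustrative Python; the matrices are assumed to be supplied by the identified model):

```python
import numpy as np

def arm_dynamics_step(q_dot, q_dot_ref, M_a, C_a, g_a, dt=0.01):
    """One Euler step of the arm's direct dynamics, Eq. (26)."""
    q_ddot = np.linalg.solve(M_a, q_dot_ref - C_a @ q_dot - g_a)
    return q_dot + dt * q_ddot
```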

4. Virtual Environment

This section presents the development of an immersive 3D virtual environment, built in the Unity 3D graphics engine to simulate teleoperation tasks of a mobile manipulator robot in two completely different scenes. The virtual environment is focused on teaching and learning in the area of robotics, so it allows interaction with different users. To this end, the 4DOF robotic arm and the omnidirectional robot are first virtualised and then placed in two environments, where the tasks demanded by the user are carried out through a communication channel between the operator and the virtual environment.

4.1. Virtualization of the Mobile Manipulator Robot

In order to carry out the digitalization of the mobile manipulator robot, it is necessary to take external resources as a reference. As shown in Figure 9, a 4DOF robotic arm is used for manipulating the object in question, while a KUKA omnidirectional robot is used as a reference for the robot's locomotion [24]. Then, using CAD software, a 3D solid model is built for each robot, with the physics needed for their respective movements and constraints in order to develop the required animation. Once the CAD design of the two robots is complete, it is exported to Autodesk 3DS Max 2024, which allows the design to be rendered, dimensions and rotations to be adjusted, and the model to be exported in the format required by the graphics engine. In addition, during the animation phase, the rotation axes of each joint of the robotic arm are configured and the centre of the mobile robot's axis is established. At this stage, the movement of each joint and of the platform is checked to ensure that it matches the axes of movement of the real robots. Each component of the robot can then be exported separately to the graphics engine, since each part has its own animation and movement, determined by the kinematic and dynamic models through script programming.

4.2. Scenario Virtualization

For the development of each scenario, it is essential to identify the places that will be used for the creation of the environment. In this case, the project focuses on two scenes in which dangerous objects are handled. To do this, an exhaustive analysis of the objects that make up the scenarios in a real environment is carried out (see Figure 10).
For the development of the virtual environment, prefabricated scenarios are used that provide a high degree of realism for the objects present in the real environment. This approach facilitates the creation of the virtual scenes and allows for a more efficient development process. Since the main objective of the project is the teaching and learning of service robotics and robot morphology, the use of these prefabricated designs ensures an accurate and functional representation of the different scenarios.
Once the prefabricated scenarios and objects have been selected, we proceed to render the scenes, adjusting to the scale and dimensions of the robot as necessary. For this purpose, 3DS Max software is used, which acts as a link between the prefabricated design and the Unity graphics engine.
Once the export and rendering of the prefabricated scenarios in the Unity graphics engine is completed, the attributes, materials and physical properties are added to each of the elements present. In this project, two main scenarios have been developed. The first is an environment dedicated to the handling of explosives, where a grenade has been placed near a machine; in this case, the object of interest is the bomb. The second scenario represents a laboratory for handling chemicals, with a flask filled with an explosive liquid as the main object, as shown in Figure 11.
The objective in both scenarios is to move the object of interest to a safe place, away from the risk area, using exclusively the movements of the mobile omnidirectional manipulator robot.
To enable navigation through the environment by moving the mobile manipulator robot, a visual interface has been developed (Figure 12). This interface presents the virtual environment from the perspective of the manipulator robot as the main screen, providing an overview of the operating area. Additionally, three auxiliary cameras provide different perspectives crucial to the task: one gives a detailed view of the end-effector, essential for precise manipulation, while the other two focus on the objects of interest in each scene, allowing clear visualization of the target elements. This multi-camera setup ensures that the user has a complete understanding of the environment in order to monitor and control the robot, improving efficiency and accuracy in task execution. Depending on the type of task to be performed, whether locomotion or object manipulation, the control law is selected by means of the Novint Falcon device, as described in Section 3.2. For the movement of the end-effector, different types of sensors (force, optical or piezo-resistive) could assist this motion; in this application, however, the haptic device generates an on/off switching of the gripper, which grips the object as long as it is close to the target.
Additionally, an interface developed in MATLAB software is used, which requests the necessary information to select the predetermined scene and set the duration of the simulation. This interface presents the following data:
  • Execution Time
  • User
  • Scene.
This menu also displays the X, Y, Z coordinates obtained from the Falcon haptic device, as well as the hits and misses related to the manipulation of the objects of interest in the environment. It also includes Play, Stop, and Save buttons for starting and ending the simulation.
As mentioned above, the goal of the teaching-learning system is to transport an object of interest to a safe location using the mobile manipulator robot. This safe place is located outside the two main scenes and is where the dangerous object should be deposited. If the task is performed successfully, the menu increments the number of successes (see Figure 13). Otherwise, if the object of interest falls outside the safe area or contacts any other part of the environment, such as walls, machines or other objects present in the scenes, the number of failures is incremented.

4.3. Communication Channel

Finally, once the virtualization of the scenarios and objects in the graphics engine has been completed and the control algorithms have been established in MATLAB, a bilateral communication channel known as Shared Memory is used to exchange information between the virtual environment and the MATLAB mathematical software. A Dynamic-Link Library (DLL) creates a shared memory segment in RAM (SM) for the exchange of data between the two programs [25]. Through the SM, the control actions calculated by the controller are fed into the mathematical model of the robotic system, which computes the position and velocity outputs; these are sent back to the mathematical software, closing the control loop with feedback of the robot's output states. The desired task can be commanded with the haptic device or, when autonomous control is used, the user can define the task to be performed, e.g., positioning, trajectory tracking or path following, with the control algorithms computed in MATLAB [21]. The references generated by the controllers are then applied to the robot. Figure 14 shows the scheme used for bilateral communication.
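The paper's channel is a custom DLL exposing a block of shared RAM to both MATLAB and Unity; the mechanism can be illustrated with Python's standard shared-memory API (an analogy only, with a hypothetical segment name and layout, not the actual DLL interface):

```python
from multiprocessing import shared_memory
import numpy as np

# Controller side (plays the role of MATLAB): publish 7 reference velocities.
shm = shared_memory.SharedMemory(name="robot_sm", create=True, size=7 * 8)
v_ref = np.ndarray((7,), dtype=np.float64, buffer=shm.buf)
v_ref[:] = [0.4, 0.0, 0.1, 0.0, 0.2, -0.1, 0.0]  # u_l, u_m, omega, qdot_1..qdot_4

# Simulator side (plays the role of the Unity script): attach and read.
shm_sim = shared_memory.SharedMemory(name="robot_sm")
v = np.ndarray((7,), dtype=np.float64, buffer=shm_sim.buf).copy()

shm_sim.close()
shm.close()
shm.unlink()
```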

5. Experimental Results

This section presents the results of using the virtual environment as a teaching-learning tool in the field of educational robotics, together with a brief usability analysis conducted with a group of students. The hardware used for the simulation is as follows: Intel i7-7700HQ processor, CPU 2.80 GHz × 8 (Intel, Santa Clara, CA, USA) and GeForce GTX 1050 graphics (Nvidia Corporation, Santa Clara, CA, USA). The characteristics of this hardware are not strictly tied to the performance of the tool; nevertheless, for smoother animation, specialized graphics hardware is strongly recommended. The implemented teleoperation system is shown in Figure 15, where the human operator can be seen teleoperating the robot in the developed virtual environment through the haptic device. A video showing the operation of the developed application, including the scenarios and tasks performed by the user, is available in [26].
When the objective of the task is accomplished, with each of the limitations presented by the movements of the robot and the environment, the menu shows a success in bringing the object of interest, in this case an explosive, to the safe place. Figure 16 shows how the object is transported to the safe place by the robot commanded by the human operator.
Otherwise, if a wrong movement by the user while controlling the mobile manipulator robot causes the object to fall somewhere other than the safe zone, the environment plays an explosion animation and the menu registers an additional failure due to poor execution, resetting the object to its initial position, as shown in Figure 17.

Usability Evaluation

To identify the usability and relevance of the tool, a comprehensive evaluation of the didactic teaching-learning application was performed. For this purpose, 20 students of the Electronics and Automation Engineering course of the University of the Armed Forces ESPE-L were chosen to use the virtual environment. These students have basic knowledge of kinematic and dynamic modeling of robots, control systems and virtual reality, which allows them to evaluate the tool properly. The survey consists of 10 questions focused on evaluating the functionality and intuitiveness of the developed tool. Student responses were collected using a scale from 1 to 5, where 1 represents “Very difficult”, 2 “Difficult”, 3 “Intermediate”, 4 “Easy” and 5 “Very easy”. The list of questions posed to the users is presented in Table 1.
The results of the usability evaluation of the system are presented in Figure 18. For the first question, a mean of 3.25 with a standard deviation of 0.79 was obtained. The second question showed a mean of 3.35 with a deviation of 0.88. The third question registered a mean of 3.40 with a deviation of 0.68, while the fourth question reached a mean of 3.80 with a deviation of 0.52. The fifth question had a mean of 3.40 and a standard deviation of 0.88. For the sixth question, the mean was 3.30 with a deviation of 0.69, and for the seventh question, a mean of 3.30 with a deviation of 0.80 was obtained. The eighth question recorded a mean of 3.65 with a standard deviation of 0.67. The ninth question obtained a mean of 3.35 with a deviation of 0.75, while the tenth question yielded a mean of 4.00 with a deviation of 0.85.
The usability evaluation data, illustrated in Figure 18, provide detailed insight into how users perceive the teaching-learning tool. The error bars in the figure show the variability of responses among students, indicating the spread of opinions within the group.
Overall, the results suggest a positive evaluation of the virtual environment tool: the means lie above the midpoint of the scale, and the relatively small standard deviations indicate a consistently favorable perception of the system's usability and ease of use.
In addition, a traditional theoretical teaching method on kinematic and dynamic modeling of robots was used with a different group of 20 students with characteristics similar to those who used the tool. Subsequently, a comparative evaluation of the knowledge acquired by both groups was carried out. The results indicated that the students who used the teaching-learning system demonstrated greater knowledge of motions, rotations, and the kinematic and dynamic modeling of robots than the group that received conventional theoretical instruction.

6. Conclusions

The kinematic and dynamic modeling of the robot within virtual environments significantly increases the realism of the simulation tool. By replicating the movements and behaviors of the robot with greater precision, the simulation becomes remarkably close to the behavior of the real robot. This fidelity allows users to interact with the virtual environment in a way that almost identically mimics operating the robot in real situations, resulting in a more effective and realistic learning experience and a more accurate understanding of how the robot behaves in a real situation.
The implementation of virtual environments facilitates the development of new learning tools, offering several significant advantages. First of all, these environments allow a much more accessible and economical system acquisition, by considerably reducing costs compared to the purchase of real systems. In addition, they eliminate the risks associated with the use of physical equipment, such as possible damage and the need to operate with caution. This results in a safe and hazard-free learning experience. Similarly, virtual environments do not require the presence of expert operators for training, which simplifies the process and results in greater accessibility for users. In summary, virtual environments reduce costs, risks and also optimize the teaching-learning process, offering a practical and efficient solution compared to real systems.
The use of the Falcon haptic device in the teaching-learning system proves to be an excellent tool for enhancing interaction between the environment and the users. Its ability to provide feedback gives students increased realism in the manipulation and control of robots, which significantly enriches the learning process and overcomes the limitations inherent in traditional theoretical teaching methods. In short, the haptic device combined with virtual environments offers an innovative and highly effective approach to robotics education, benefiting both students and instructors.
The development of a teaching-learning system using virtual environments, presented in this work, facilitates the acquisition of new knowledge on robot modeling and control for users with basic or limited knowledge on the subject. This approach is more effective compared to theoretical, classical or conventional teaching methods, which present limitations for both the instructor and the students in the learning process.

Author Contributions

Conceptualization, F.J.P., C.P.C., J.S.O. and V.H.A.; methodology, F.J.P., C.P.C., J.S.O. and V.H.A.; software, F.J.P., C.P.C., J.S.O. and V.H.A.; validation, F.J.P., C.P.C., J.S.O. and V.H.A.; formal analysis, F.J.P., C.P.C., J.S.O. and V.H.A.; investigation, F.J.P., C.P.C., J.S.O. and V.H.A.; resources, F.J.P., C.P.C., J.S.O. and V.H.A.; data curation, F.J.P., C.P.C., J.S.O. and V.H.A.; writing—original draft preparation, F.J.P., C.P.C., J.S.O. and V.H.A.; writing—review and editing, F.J.P., C.P.C., J.S.O. and V.H.A.; visualization, F.J.P., C.P.C., J.S.O. and V.H.A.; supervision, F.J.P., C.P.C., J.S.O. and V.H.A.; project administration, F.J.P., C.P.C., J.S.O. and V.H.A.; funding acquisition V.H.A. and J.S.O. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Readers interested in the original data of this paper may contact the corresponding author, Fernando J. Pantusin.

Acknowledgments

The authors would like to thank the Universidad de las Fuerzas Armadas ESPE for the support given to research, development, and innovation, through the "Autonomous Control of Aerial Manipulators Robots" research project in Ecuador; the ARSI research group; CICHE Research Center and SISAu Research Group. The results of this work also are part of the project “Tecnologías de la Industria 4.0 en Educación, Salud, Empresa e Industria” developed by Universidad Tecnológica Indoamérica; the German Academic Exchange Service, also known by its German acronym (DAAD) for Scholarship Award in third Country Programme Latin America; and the Instituto de Automática of the Universidad Nacional de San Juan and CONICET in Argentina.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gonzalez-Aguirre, J.A.; Osorio-Oliveros, R.; Rodríguez-Hernández, K.L.; Lizárraga-Iturralde, J.; Morales Menendez, R.; Ramírez-Mendoza, R.A.; Ramírez-Moreno, M.A.; Lozoya-Santos, J.d.J. Service Robots: Trends and Technology. Appl. Sci. 2021, 11, 10702. [Google Scholar] [CrossRef]
  2. Zachiotis, G.A.; Andrikopoulos, G.; Gornez, R.; Nakamura, K.; Nikolakopoulos, G. A Survey on the Application Trends of Home Service Robotics. In Proceedings of the 2018 IEEE International Conference on Robotics and Biomimetics (ROBIO), Kuala Lumpur, Malaysia, 12–15 December 2018; pp. 1999–2006. [Google Scholar] [CrossRef]
  3. Jeon, J.; Jung, H.r.; Pico, N.; Luong, T.; Moon, H. Task-Motion Planning System for Socially Viable Service Robots Based on Object Manipulation. Biomimetics 2024, 9, 436. [Google Scholar] [CrossRef] [PubMed]
  4. Zielinska, T.T. History of Service Robots and New Trends. In Novel Design and Applications of Robotics Technologies; IGI Global: Hershey, PA, USA, 2019; pp. 158–187. [Google Scholar] [CrossRef]
  5. Karabegović, I.; Karabegović, E.; Mahmić, M.; Husak, E. The application of service robots for logistics in manufacturing processes. Adv. Prod. Eng. Manag. 2015, 10, 185–194. [Google Scholar] [CrossRef]
  6. Dobrev, Y.; Vossiek, M.; Christmann, M.; Bilous, I.; Gulden, P. Steady Delivery: Wireless Local Positioning Systems for Tracking and Autonomous Navigation of Transport Vehicles and Mobile Robots. IEEE Microw. Mag. 2017, 18, 26–37. [Google Scholar] [CrossRef]
  7. Abdel-Basset, M.; Chang, V.; Nabeeh, N.A. An intelligent framework using disruptive technologies for COVID-19 analysis. Technol. Forecast. Soc. Change 2021, 163, 120431. [Google Scholar] [CrossRef] [PubMed]
  8. Leung, R. Smart hospitality: Taiwan hotel stakeholder perspectives. Tour. Rev. 2019, 74, 50–62. [Google Scholar] [CrossRef]
  9. Ju, C.; Son, H.I. Modeling and Control of Heterogeneous Agricultural Field Robots Based on Ramadge–Wonham Theory. IEEE Robot. Autom. Lett. 2020, 5, 48–55. [Google Scholar] [CrossRef]
  10. Pecka, M.; Zimmermann, K.; Reinstein, M.; Svoboda, T. Controlling Robot Morphology From Incomplete Measurements. IEEE Trans. Ind. Electron. 2017, 64, 1773–1782. [Google Scholar] [CrossRef]
  11. Saltaren, R.; Aracil, R.; Alvarez, C.; Yime, E.; Sabater, J.M. Field and service applications-Exploring deep sea by teleoperated robot-An Underwater Parallel Robot with High Navigation Capabilities. IEEE Robot. Autom. Mag. 2007, 14, 65–75. [Google Scholar] [CrossRef]
  12. Anwar, S.; Bascou, N.; Menekse, M.; Kardgar, A. A Systematic Review of Studies on Educational Robotics. J. Pre-College. Eng. Educ. Res. 2019, 9, 2. [Google Scholar] [CrossRef]
  13. Guerrero-Osuna, H.A.; Nava-Pintor, J.A.; Olvera-Olvera, C.A.; Ibarra-Pérez, T.; Carrasco-Navarro, R.; Luque-Vega, L.F. Educational Mechatronics Training System Based on Computer Vision for Mobile Robots. Sustainability 2023, 15, 1386. [Google Scholar] [CrossRef]
  14. Arshad, N.I.; Hashim, A.S.; Mohd Ariffin, M.; Mohd Aszemi, N.; Low, H.M.; Norman, A.A. Robots as Assistive Technology Tools to Enhance Cognitive Abilities and Foster Valuable Learning Experiences among Young Children With Autism Spectrum Disorder. IEEE Access 2020, 8, 116279–116291. [Google Scholar] [CrossRef]
  15. Charão dos Santos, M.C.; Sangalli, V.A.; Pinho, M.S. Evaluating the Use of Virtual Reality on Professional Robotics Education. In Proceedings of the 2017 IEEE 41st Annual Computer Software and Applications Conference (COMPSAC), Torino, Italy, 4–8 July 2017; Volume 1, pp. 448–455. [Google Scholar] [CrossRef]
  16. Safaric, R.; Sinjur, S.; Zalik, B.; Parkin, R. Control of robot arm with virtual environment via the Internet. Proc. IEEE 2003, 91, 422–429. [Google Scholar] [CrossRef]
  17. Carvajal, C.P.; Méndez, M.G.; Torres, D.C.; Terán, C.; Arteaga, O.B.; Andaluz, V.H. Autonomous and Tele-Operated Navigation of Aerial Manipulator Robots in Digitalized Virtual Environments. In Proceedings of the 5th International Conference on Augmented Reality, Virtual Reality, and Computer Graphics, Otranto, Italy, 24–27 June 2018; De Paolis, L.T., Bourdot, P., Eds.; Springer: Cham, Switzerland, 2018; pp. 496–515. [Google Scholar] [CrossRef]
  18. Pantusin, F.J.; Cordonez, J.W.; Quimbita, M.A.; Andaluz, V.H.; Vargas, A.D. Training System for the Tomato Paste Production Process Through Virtual Environments. In Proceedings of the Intelligent Systems and Applications, Barcelona, Spain, 13–17 March 2023; Arai, K., Ed.; Springer: Cham, Switzerland, 2024; pp. 46–55. [Google Scholar] [CrossRef]
  19. Vera-Mora, G.; Sanz, C.V.; Coma-Roselló, T.; Baldassarri, S. Model for Designing Gamified Experiences Mediated by a Virtual Teaching and Learning Environment. Educ. Sci. 2024, 14, 907. [Google Scholar] [CrossRef]
  20. Saunier, L.; Hoffmann, N.; Preda, M.; Fetita, C. Virtual Reality Interface Evaluation for Earthwork Teleoperation. Electronics 2023, 12, 4151. [Google Scholar] [CrossRef]
  21. Andaluz, V.H.; Carvajal, C.P.; Arteaga, O.; Pérez, J.A.; Valencia, F.S.; Solís, L.A. Unified Dynamic Control of Omnidirectional Robots. In Proceedings of the Towards Autonomous Robotic Systems, Guildford, UK, 19–21 July 2017; Gao, Y., Fallah, S., Jin, Y., Lekakou, C., Eds.; Springer: Cham, Switzerland, 2017; pp. 673–685. [Google Scholar] [CrossRef]
  22. Yoshikawa, T. Manipulability of Robotic Mechanisms. Int. J. Robot. Res. 1985, 4, 3–9. [Google Scholar] [CrossRef]
  23. Falkenhahn, V.; Mahl, T.; Hildebrandt, A.; Neumann, R.; Sawodny, O. Dynamic Modeling of Bellows-Actuated Continuum Robots Using the Euler–Lagrange Formalism. IEEE Trans. Robot. 2015, 31, 1483–1496. [Google Scholar] [CrossRef]
  24. Bischoff, R.; Huggenberger, U.; Prassler, E. KUKA youBot—A mobile manipulator for research and education. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; pp. 1–4. [Google Scholar] [CrossRef]
  25. Gallardo, C.; Pogrebnoy, A.; Varela-Aldás, J. Development and Use of Dynamic Link Libraries Generated Under Various Calling Conventions. In Information Technology and Systems: ICITS 2021; Springer International Publishing: Cham, Switzerland, 2021; Volume 1, pp. 220–232. [Google Scholar] [CrossRef]
  26. Carvajal, C.P.; Pantusin, F.J.; Ortiz, J.S.; Andaluz, V.H. Virtual Teleoperation System for Mobile Manipulator Robots Focused on Object Transport. 2024. Available online: https://youtu.be/76nzhxMp3SM (accessed on 24 July 2024).
Figure 1. Methodology of the work proposal.
Figure 2. Diagram of the conceptualisation process.
Figure 3. Omnidirectional Robot Kinematic.
Figure 4. Kinematic of Robotic Arm.
Figure 5. Mobile Robot Kinematic.
Figure 6. Human operator with haptic device.
Figure 7. Teleoperation schematic for the mobile manipulator robot.
Figure 8. Virtual emulator of the mobile manipulator robot scheme.
Figure 9. 4 DOF Robotic Arm and Omnidirectional Robot Virtualization.
Figure 10. Virtual Environment Digitization.
Figure 11. Virtual Environment Scenes.
Figure 12. Teaching-learning tool environment.
Figure 13. Safe Zone.
Figure 14. Communication channel scheme.
Figure 15. Experimental tests in the virtual environment for the teaching-learning of robotics through the human operator.
Figure 16. Object moved to safe place correctly.
Figure 17. Error in the transport of the object.
Figure 18. Results of the list of questions to the students.
Table 1. List of questions.
1. How easy was it for you to learn how to use our virtual environment tool?
2. How useful have you found our tool in improving your understanding of service robotics?
3. Do you find the user interface of our tool intuitive and easy to use?
4. Do virtual environments offer an adequate variety of resources (videos, images, etc.) to facilitate your learning?
5. Are the instructions provided in the tool clear and easy to follow?
6. Do you consider that the tool is interactive and allows you to actively participate in the learning process about the kinematic model of the robot?
7. Are you satisfied with the functionality and accuracy of the robot’s movements and rotations in the simulation?
8. Has the tool helped you better understand the concepts of movements and rotations of a mobile omnidirectional manipulator robot?
9. Do you consider that the virtual environment has provided you with a valuable educational experience compared to other learning methods?
10. Overall, would you recommend our simulation tool in Unity to other students or teachers interested in learning about robotics?

