1. Introduction
Service robotics focuses on the development of robotic systems capable of performing a wide range of beneficial tasks in non-industrial environments, excluding industrial automation applications, with a special focus on interacting with humans to improve their quality of life [1]. With the most recent technological advances, attention has turned not only to meeting human needs, but also to improving comfort and human interaction in various areas of daily life, from the home to hospitals and public spaces [2,3]. According to the IFR, service robotics comprises two main groups: professional use and personal use [4]. Within these groups there are a number of application areas, such as: (i) Transport and logistics: service robots for logistics cover the transport, handling, packaging, sorting and delivery of goods in offices, airports and post offices, among other places [5]; (ii) Cleaning: robotic systems are developed for cleaning large or crowded areas; most commercially available home service robots have been developed to meet the requirements of a single specific domestic task, such as floor cleaning [2]; (iii) Medicine: providing assistance in surgery, rehabilitation and patient monitoring, as well as patient transport and orientation tasks, which represent up to 46% of hospital expenses; in addition, virtual AI-based psychotherapeutic devices are used in the field of mental health and for the treatment of diseases that require prolonged observation of the patient, which can be managed through online monitoring [6,7]; (iv) Hospitality: focused on entertainment and improving the quality of the human experience; several hotels are even using robots to communicate with guests or perform hotel operations [8]; and finally, (v) Agriculture: robotic systems are being developed for outdoor cultivation, improving productivity and reducing farmers' working hours [9]. All of this results in the need to develop more complex robotic systems.
Due to the complexity of the applications and environments in which service robots must be deployed, the need has arisen to create systems with diverse morphologies and combinations of multiple robots to address a variety of tasks [10,11]. However, the acquisition of highly complex systems entails considerable investment, whether for implementation or for study. This is further complicated by the need to train individuals in the handling of these robots, and by the required presence of trained supervisors to ensure correct use and effective learning by the users. The most recent technological advances have opened new alternatives in the field of service robotics, facilitating the learning process through innovative approaches such as virtual environments, programming with LEGO kits, and pedagogical systems with a STEM educational approach focused on service and educational robotics [12]. Such educational systems provide an introduction to the operation of robots and to programming, but their acquisition cost is considerable and, when operated in the presence of people or objects not accounted for, they can become damaged or cause damage. Another disadvantage is that these educational systems cannot handle heavy loads or perform precision tasks, which imposes significant limitations [13,14].
To facilitate the learning process in situations that seek to replicate reality, various technological tools are available, with simulators being among the most prominent in virtual robotics education [15,16]. Within this broad set of tools, virtual environments stand out, allowing the creation of immersive settings that simulate real situations [17,18]. These environments benefit from devices such as headsets, gloves, sensors and controllers, which are now widely available at the simulation level [19]. This technology has made it possible to develop virtual reality experiences within these environments which, in addition to reducing costs compared to the use of real equipment and associated resources, provide a level of fail-safety in the learning process that cannot be achieved in a real physical environment [17].
In this context, the present work proposes the development of a teaching-learning system based on virtual environments in the Unity 3D graphics engine, using an omnidirectional robot with an anthropomorphic arm controlled by means of MATLAB 2020 (MathWorks) software through a communication channel, in order to facilitate the understanding of the various applications of service robotics and of the complexity of robot morphologies. This is achieved by considering the mathematical model of the robot in question and a haptic device that allows users to be trained to perform autonomous and teleoperated tasks in an accessible way, in an immersive virtual environment that presents different challenges for user understanding [20].
Finally, this article is organized in sections to address these aspects in a structured way.
Section 2 presents the methodology needed to carry out this process, with a focus on educational robotics.
Section 3 focuses on the mathematical basis required for the modeling and control of the robot.
Section 4 is devoted to the development of the virtual environment.
Section 5 presents the results obtained, and
Section 6 contains the conclusions derived from this study.
2. Methodology and Process Conceptualisation
The challenge in service robotics lies in operating in unstructured environments, which are highly unpredictable and difficult to manage. Creating a completely unstructured system is complicated by the nature of these environments. Therefore, it is essential to have a thorough understanding of the real world so that the assumed foundations are sufficiently flexible and adaptable to the variability of the environment. Although the development of robotic systems has advanced, it is still far from achieving effective operation in completely unstructured environments under fully autonomous control. Therefore, semi-autonomous control is established through the teleoperation of robotic systems. In this context, the following work methodology is proposed to address this issue.
As can be seen in the scheme in Figure 1, the process starts from service robotics as present in the real world and contextualizes it using knowledge gained through experience and theoretical background. This allows us to develop a mental model that describes the operation of an omnidirectional robot and provides an understanding of its behavior. Subsequently, we build a formal model based on physical and geometrical principles, resulting in the mathematical model of the mobile manipulator robot, comprising the mobile platform and the robotic arm; the model of each of these robots is therefore considered. In addition, depending on the control objective, the independent model of each robot can be used, or a unified model can be established to control the end-effector of the whole robot. In response to the demands of the environment, a robotic process is created that combines the two previously defined systems for the purpose of evaluating the behavior, limitations and constraints of the robotic system.
Simultaneously, an environment is created that adapts to real-world demands and leverages contextualized human knowledge. This environment is designed to host multiple tasks or objectives that require system intervention. Subsequently, we carry out the virtualization of this defined environment through the use of CAD software and graphics engines, with the purpose of familiarizing the user and facilitating clarity in the learning process. A schematic of the conceptualisation process involved is presented in Figure 2.
Therefore, the virtual system designed in the Unity graphics engine includes a mobile manipulator robot, previously modelled in CAD software, within an environment composed of two predefined scenes, based on external resources for the real robot and for the simulation scenarios. The CAD software used is SolidWorks 2019. Through a formal mathematical model, which incorporates the kinematics and dynamics of the robotic system, the animations of the robot's movements are generated by means of scripts integrated in the graphics engine. In addition, control algorithms are developed in MATLAB, which allow the execution of the actions necessary to accomplish the tasks desired by the user. These actions are commanded through the Novint Falcon haptic device, which is connected to the control software. The communication between the control software and the graphics engine is done through a Dynamic-Link Library (DLL), whereby MATLAB sends the control actions to Unity and receives the information from the robot. Control actions can also be transmitted to a real robot using a TCP/IP wireless communication protocol, which allows virtual tasks to be replicated in a physical environment; this makes it possible to evaluate the controller by remotely controlling the robot experimentally, and also to design an autonomous controller for the robot to perform a defined task.
In this way, a virtual environment is created with predefined goals to be achieved by the robotic system driven by a human operator. It can also be used to evaluate autonomous control algorithms for the execution of defined tasks.
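To make the TCP/IP link to the real robot described above concrete, the following MATLAB sketch illustrates how velocity references could be streamed to the physical platform using MATLAB's tcpclient interface. It is a minimal sketch: the IP address, port, and seven-value single-precision packet layout are illustrative assumptions, not the actual protocol of the robot used in this work.

```matlab
% Minimal sketch of the MATLAB-side TCP/IP link to a real robot.
% Address, port and packet layout (7 singles: 3 platform velocities +
% 4 arm joint velocities) are assumptions for illustration only.
robot = tcpclient("192.168.0.10", 5000);   % hypothetical robot address

vRef = [0.1; 0.0; 0.05; 0; 0; 0; 0];       % [ul; um; w; qd1..qd4]
write(robot, single(vRef));                % send the control actions

state = read(robot, 7, "single");          % assumed: robot echoes its state
clear robot                                % closes the connection
```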
3. Modeling and Control
A mobile manipulator robot consists of a robotic arm (also known as a manipulator) mounted on a mobile platform or vehicle. These robots enable tasks that require both navigation and manipulation capabilities, and such systems are characterized by high redundancy.
3.1. Kinematic Model
Kinematics determines the characteristics of a robot's motion in the plane or in space; the kinematics of each robot used in this work is described below.
3.1.1. Omnidirectional Mobile Robot
The position of the mobile robot is defined in the fixed reference frame $\{R\}$ by the vector $\mathbf{h}_p = [x_p \;\; y_p \;\; \psi]^T$, where $x_p$ and $y_p$ locate the point of interest of the platform and $\psi$ is its orientation. The kinematic configuration of the omnidirectional robot can be observed in Figure 3.
Now, it is necessary to obtain the model of the mobile robot as a function of its manoeuvrability velocities. The differential kinematics of the omnidirectional robot is represented by:

$\dot{x}_p = u_l\cos\psi - u_m\sin\psi, \quad \dot{y}_p = u_l\sin\psi + u_m\cos\psi, \quad \dot{\psi} = \omega$  (1)

which, written in matrix form, results in:

$\dot{\mathbf{h}}_p = \mathbf{J}_p(\psi)\,\mathbf{v}_p$  (2)

with $\mathbf{J}_p(\psi) = \begin{bmatrix} \cos\psi & -\sin\psi & 0 \\ \sin\psi & \cos\psi & 0 \\ 0 & 0 & 1 \end{bmatrix}$

where $\dot{\mathbf{h}}_p$ represents the velocity vector of the point of interest with respect to the inertial reference frame $\{R\}$; $\mathbf{J}_p(\psi)$ is the Jacobian matrix that represents the motion characteristics of the omnidirectional robot; and $\mathbf{v}_p = [u_l \;\; u_m \;\; \omega]^T$ is the manoeuvrability vector of the omnidirectional robot, containing the frontal, lateral and angular velocities with respect to $\{R\}$ [21].
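To make the mapping in (2) concrete, the following MATLAB sketch builds $\mathbf{J}_p(\psi)$ and integrates the kinematic model with an Euler step; the sample velocities and step size are illustrative assumptions.

```matlab
% Omnidirectional platform kinematics, Eq. (2): h_dot = Jp(psi) * vp
Jp = @(psi) [cos(psi) -sin(psi) 0;
             sin(psi)  cos(psi) 0;
             0         0        1];

h  = [0; 0; 0];                  % pose [xp; yp; psi]
vp = [0.2; 0.1; 0.3];            % manoeuvrability vector [ul; um; w]
dt = 0.01;                       % integration step (illustrative)

for k = 1:500                    % 5 s of simulated motion
    h = h + Jp(h(3)) * vp * dt;  % Euler integration of Eq. (1)
end
```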
3.1.2. Robotic Arm
The position of the robotic arm in the fixed workspace $\{R\}$ is defined in terms of its independent articulations, so that $\mathbf{h}_a = f_a(\mathbf{q}_a)$, where the vector of independent coordinates is $\mathbf{q}_a = [q_1 \;\; q_2 \;\; q_3 \;\; q_4]^T$. The configuration of the robotic arm can be observed in Figure 4.
Now, taking the fixed reference frame $\{R\}$ and considering that the number of independent arm joints is four, i.e., a 4DOF anthropomorphic robotic arm (see Figure 4), the position and orientation of the robotic arm is determined by:

$\mathbf{h}_a = f_a(\mathbf{q}_a) = [h_{ax} \;\; h_{ay} \;\; h_{az}]^T$  (3)

where the expressions in (3) represent the Cartesian coordinates of the arm end-effector as trigonometric functions of the joint angles $q_1, q_2, q_3, q_4$ and of the link lengths of the arm.
Now, the model of the arm as a function of its velocities is obtained through the partial derivative of $f_a(\mathbf{q}_a)$ with respect to $\mathbf{q}_a$, from which the differential kinematics of the robot is obtained, described in matrix form as:

$\dot{\mathbf{h}}_a = \mathbf{J}_a(\mathbf{q}_a)\,\dot{\mathbf{q}}_a$  (4)

where $\dot{\mathbf{q}}_a$ is the velocity vector of the robotic arm joints; $\dot{\mathbf{h}}_a$ is the vector of arm end-effector velocities in the workspace; and $\mathbf{J}_a(\mathbf{q}_a) = \partial f_a/\partial \mathbf{q}_a$ is the Jacobian of the robotic arm, which transforms the manoeuvring velocities of the arm into the velocities of the end-effector.
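As a numerical illustration of (3) and (4), the sketch below evaluates the forward kinematics of a generic 4DOF anthropomorphic arm and approximates its Jacobian by central finite differences. The link lengths are placeholders, not the parameters of the actual arm, and an analytic Jacobian would normally be derived from (3).

```matlab
% Forward kinematics of a generic 4DOF anthropomorphic arm, Eq. (3).
% Link lengths l are placeholders for the real arm parameters.
l  = [0.15 0.20 0.20 0.10];
fa = @(q) [cos(q(1))*(l(2)*cos(q(2)) + l(3)*cos(q(2)+q(3)) + l(4)*cos(q(2)+q(3)+q(4)));
           sin(q(1))*(l(2)*cos(q(2)) + l(3)*cos(q(2)+q(3)) + l(4)*cos(q(2)+q(3)+q(4)));
           l(1) + l(2)*sin(q(2)) + l(3)*sin(q(2)+q(3)) + l(4)*sin(q(2)+q(3)+q(4))];

q  = [0.1; 0.5; -0.3; 0.2];   % sample joint configuration
Ja = armJacobian(fa, q);      % 3x4 Jacobian of Eq. (4)

function J = armJacobian(f, q)
    % Central finite-difference approximation of the arm Jacobian
    n = numel(q); J = zeros(3, n); d = 1e-6;
    for i = 1:n
        e = zeros(n, 1); e(i) = d;
        J(:, i) = (f(q + e) - f(q - e)) / (2 * d);
    end
end
```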
3.1.3. Omnidirectional Mobile Manipulator
The mobile manipulator robot considered in this work consists of a robotic arm located on a mobile robot with omnidirectional traction. The configuration of the mobile manipulator robot is known when the position and orientation of all its points with respect to an inertial reference frame $\{R\}$ are known. The kinematic configuration of the robot can be seen in Figure 5.
The direct kinematics of the mobile manipulator robot defines the position of the end-effector $\mathbf{h} = [h_x \;\; h_y \;\; h_z]^T$ as a function of the configuration of the robotic arm and of the mobile robot with omnidirectional traction:

$\mathbf{h} = f(\mathbf{q}_p, \mathbf{q}_a)$  (5)

where $\mathbf{q}_p = [x_p \;\; y_p \;\; z_p \;\; \psi]^T$ represents the position of the omnidirectional robot relative to the inertial reference frame $\{R\}$; $x_p$ and $y_p$ are the positions on the $X$ and $Y$ axes, while $z_p$ represents the height of the mobile robot with respect to $Z$; $a$ and $b$ represent the distance of the robotic arm location from the moving reference frame $\{R_p\}$ located at the center of gravity of the omnidirectional robot; $\mathbf{R}(\psi)$ represents the rotation matrix about the $Z$-axis of the reference frame $\{R\}$; and the term $\mathbf{h}_a(\mathbf{q}_a)$ considers the position of the robotic arm with respect to the position and orientation of the omnidirectional robot, i.e.,

$\mathbf{h} = [x_p \;\; y_p \;\; z_p]^T + \mathbf{R}(\psi)\left([a \;\; b \;\; 0]^T + \mathbf{h}_a(\mathbf{q}_a)\right)$  (6)

Considering Equation (6) and the position of the mobile robot in (5), the direct kinematics of the mobile manipulator robot can be represented as:

$\mathbf{h} = f(x_p, y_p, z_p, \psi, q_1, q_2, q_3, q_4)$  (7)

Finally, the differential kinematic model of the mobile manipulator robot establishes the derivative of the end-effector location as a function of the derivatives of the configurations of the robotic arm and the omnidirectional robot, i.e., $\dot{\mathbf{h}} = (\partial f/\partial \mathbf{q})\,\dot{\mathbf{q}}$. As a result, it follows that:

$\dot{\mathbf{h}} = \frac{\partial f}{\partial \mathbf{q}_p}\,\dot{\mathbf{q}}_p + \frac{\partial f}{\partial \mathbf{q}_a}\,\dot{\mathbf{q}}_a$  (8)

The differential kinematic model (8) of the robot is written in matrix form:

$\dot{\mathbf{h}} = \mathbf{J}(\mathbf{q})\,\mathbf{v}$  (9)

where $\mathbf{J}(\mathbf{q})$ represents the Jacobian matrix of the mobile manipulator robot, which defines a linear mapping between the velocity vector $\mathbf{v} = [u_l \;\; u_m \;\; \omega \;\; \dot{q}_1 \;\; \dot{q}_2 \;\; \dot{q}_3 \;\; \dot{q}_4]^T$ and the end-effector velocity vector $\dot{\mathbf{h}}$. The Jacobian matrix $\mathbf{J}(\mathbf{q})$ is a non-square matrix in which the number of rows is less than the number of columns; therefore, the mobile manipulator robot is a redundant system.
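The structure of (9) can be checked numerically. The sketch below, reusing fa and armJacobian from the previous sketch, stacks the platform and arm contributions into a 3 x 7 Jacobian consistent with (6); the offsets $a$, $b$ are placeholders, and the rank test confirms the redundancy noted above.

```matlab
% Composed Jacobian of the mobile manipulator, Eq. (9) (simplified sketch).
% Columns 1:3 -> platform velocities [ul; um; w]; columns 4:7 -> arm joints.
a = 0.2; b = 0.0;                         % placeholder arm-mounting offsets
psi = 0.4; q = [0.1; 0.5; -0.3; 0.2];
Rz = [cos(psi) -sin(psi) 0; sin(psi) cos(psi) 0; 0 0 1];
S  = [0 -1 0; 1 0 0; 0 0 0];              % d(Rz)/d(psi) = S * Rz

r = Rz * ([a; b; 0] + fa(q));             % arm terms of Eq. (6), inertial frame
J = [Rz(:,1), Rz(:,2), S*r, Rz*armJacobian(fa, q)];   % 3x7 Jacobian

rank(J)                                   % returns 3 < 7: redundant system
vq = pinv(J) * [0.1; 0; 0];               % minimum-norm velocities for a task
```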
3.2. Control
The teleoperation scheme allows a human operator to command a robot over long distances, i.e., it allows the human operator to operate the robot to perform a specific task (Figure 6). The defined task usually consists of several operation processes, commonly comprising three stages: (i) approach; (ii) manipulation; and (iii) return. When the mission is initiated, the human operator must move the mobile manipulator robot close to the object, so that the object lies within the working space of the robotic arm. Once the mobile robot reaches the proximity of the target object, the human operator switches from locomotion mode to manipulation mode (control of the robotic arm). When the manipulation mission is completed, the human operator switches from manipulation mode back to locomotion mode and the mobile manipulator robot returns to a safe area. To accomplish this task, the teleoperation scheme proposed for this work is presented in Figure 7.
3.2.1. Local Site
The proposed scheme allows the generation of reference commands for each robot depending on the task selected by the human operator ($h$). The reference vectors generated by the human operator are $\dot{\mathbf{h}}_{ref_p}$, $\dot{\mathbf{h}}_{ref_a}$ and $\dot{\mathbf{h}}_{ref}$, and each reference depends on the haptic device, i.e., on $\mathbf{p}_h(t)$, where $\mathbf{p}_h(t)$ is the direct reference that the haptic device generates at each instant $t$ when excited by the human operator ($h$), and is defined by:

$\mathbf{p}_h(t) = [p_{hx}(t) \;\; p_{hy}(t) \;\; p_{hz}(t)]^T$  (10)
3.2.2. Locomotion
The human operator can select: (i) to control only the omnidirectional mobile robot, or (ii) to control the entire mobile manipulator robot. In both locomotion cases the commands generated by the human operator through the haptic device correspond to velocity commands.
(i) Omnidirectional Robot (Reference Generation). This mode allows only the omnidirectional robot to be controlled. For this, the human operator ($h$) commands the point of interest of the omnidirectional robot through the haptic device. Therefore, the velocity reference for the robot is obtained through the differential kinematics of the mobile robot (2), such that:

$\mathbf{v}_{ref_p} = \mathbf{J}_p^{-1}(\psi)\,\dot{\mathbf{h}}_{ref}$  (11)
Remark 1. In this case the reference is not translated to linear velocity but to rotational velocity of the robot, since the omnidirectional robot only moves in the $XY$-plane and rotates around the $Z$-axis of the fixed reference frame.
(ii) Mobile Manipulator Robot (Reference Generation). In this mode the operator commands the entire mobile manipulator robot, i.e., controls the end-effector by means of the haptic device. Considering the human operator's reference vector $\mathbf{p}_h$, the reference generator maps the velocities generated by the human operator as follows:

$\dot{\mathbf{h}}_{ref} = \mathbf{K}_h\,\mathbf{p}_h$  (12)

where $\dot{\mathbf{h}}_{ref}$ is the reference velocity of the end-effector of the mobile manipulator robot; and $\mathbf{K}_h$ is a matrix that maps the velocities generated by ($h$) to the reference velocities of the operational point of the robot.
3.2.3. Manipulation
(i) Robotic Arm (Reference Generator). In this mode the human operator ($h$) controls only the robotic arm, for which the commands generated by ($h$) through the haptic device $\mathbf{p}_h$ correspond to position commands of the end-effector of the robotic arm. The same expression as in (12) is considered; however, the reference velocities generated by ($h$) are limited according to the manipulability of the robotic arm, so that:

$w = \sqrt{\det\left(\mathbf{J}_a\,\mathbf{J}_a^T\right)} > w_{min}$  (13)

where $w$ is the manipulability index [22]; and $w_{min}$ is a real value that defines the minimum manipulability allowed for the robotic arm, such that the arm references are applied only while $w > w_{min}$.
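A minimal MATLAB sketch of the manipulability test in (13), computing Yoshikawa's index from the arm Jacobian of the earlier sketch; the threshold value and the sample reference are illustrative assumptions.

```matlab
% Manipulability-limited reference generation for the arm, Eq. (13).
w     = sqrt(det(Ja * Ja'));        % Yoshikawa manipulability index [22]
w_min = 0.01;                       % assumed minimum manipulability

h_dot_ref = [0.05; 0; 0.02];        % operator's end-effector reference
if w > w_min
    qd_ref = pinv(Ja) * h_dot_ref;  % pass the reference through
else
    qd_ref = zeros(4, 1);           % hold near singular configurations
end
```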
3.2.4. Remote Site
The remote station considers the implementation of control algorithms that take the signals generated by the human operator ($h$) through the haptic device, $\mathbf{p}_h$, and convert them into signals for the mobile manipulator robot, according to the mode of operation selected by ($h$). Therefore, each controller generates the vector of manoeuvrability velocities of the robot $\mathbf{v}_{ref}$, defined by:

$\mathbf{v}_{ref} = [u_{l_{ref}} \;\; u_{m_{ref}} \;\; \omega_{ref} \;\; \dot{q}_{1_{ref}} \;\; \dot{q}_{2_{ref}} \;\; \dot{q}_{3_{ref}} \;\; \dot{q}_{4_{ref}}]^T$  (14)
(i) Kinematic Control of Omnidirectional Mobile Robot. The kinematic model of Equation (2) is considered; therefore, the proposed control law for the omnidirectional robot is:

$\mathbf{v}_{ref_p} = \mathbf{J}_p^{-1}(\psi)\left(\dot{\mathbf{h}}_d + \mathbf{K}_p\,\tilde{\dot{\mathbf{h}}}_p\right)$  (15)

where $\dot{\mathbf{h}}_d$ represents the desired velocity generated by the human operator ($h$) through the haptic device; $\mathbf{J}_p(\psi)$ represents the Jacobian matrix describing the behaviour of the omnidirectional robot; $\mathbf{K}_p$ is the gain matrix that weights the velocity errors of the local controller of the omnidirectional robot; and $\mathbf{v}_{ref_p}$ represents the manoeuvrability velocities of the omnidirectional robot. The motion control error of the omnidirectional robot can be defined as the difference between the commands generated by the human operator and the motion velocity of the omnidirectional robot, i.e.:

$\tilde{\dot{\mathbf{h}}}_p = \dot{\mathbf{h}}_d - \dot{\mathbf{h}}_p$  (16)
As the operator generates commands only for the mobile robot, the robotic arm should remain in a fixed position; therefore, a control law is defined to keep the robotic arm static during the locomotion task. Let $\mathbf{h}_{a_d}$ be the desired position of the arm end-effector with respect to the reference frame of the mobile robot $\{R_p\}$. The controller that generates the reference velocities of the robotic arm joints is given by:

$\dot{\mathbf{q}}_{ref_a} = \mathbf{J}_a^{\#}\,\mathbf{K}_a\,\tilde{\mathbf{h}}_a + \left(\mathbf{I} - \mathbf{J}_a^{\#}\mathbf{J}_a\right)\mathbf{K}_q\,\tilde{\mathbf{q}}_a$  (17)

where $\mathbf{K}_a$ is a positive diagonal matrix weighting the control error $\tilde{\mathbf{h}}_a = \mathbf{h}_{a_d} - \mathbf{h}_a$; and $\mathbf{K}_q$ is a positive diagonal matrix weighting the error of the desired configuration of the robotic arm $\tilde{\mathbf{q}}_a = \mathbf{q}_{a_d} - \mathbf{q}_a$. As the Jacobian of the arm has more columns than rows, the controller is based on secondary objectives, since the arm has redundant degrees of freedom; and $\mathbf{J}_a^{\#}$ is the pseudo-inverse of the Jacobian of the robotic arm.
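A sketch of this configuration-keeping controller, under the assumption of illustrative gains and setpoints: the primary task acts through the pseudo-inverse, and the secondary task is projected into the null space of the arm Jacobian so that it does not disturb the end-effector.

```matlab
% Secondary-objective controller keeping the arm static, Eq. (17) sketch.
Ka = 0.8 * eye(3);  Kq = 0.5 * eye(4);   % illustrative positive diagonal gains
JaP = pinv(Ja);                          % pseudo-inverse of the arm Jacobian

ha_d = [0.25; 0.00; 0.30];               % desired end-effector position in {Rp}
qa_d = [0; 0.6; -0.4; 0.2];              % desired arm configuration

h_err = ha_d - fa(q);                    % end-effector position error
q_err = qa_d - q;                        % configuration error

% primary task + null-space projection of the secondary task
qd_ref = JaP * (Ka * h_err) + (eye(4) - JaP * Ja) * (Kq * q_err);
```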
(ii) Robotic Arm Kinematic Control. For the position control of the end-effector of the robotic arm, the kinematic model defined in Equation (4) is considered; therefore, the proposed control law is:

$\dot{\mathbf{q}}_{ref_a} = \mathbf{J}_a^{\#}\left(\dot{\mathbf{h}}_{a_d} + \mathbf{K}_a\,\tilde{\mathbf{h}}_a\right)$  (18)

where $\mathbf{J}_a^{\#}$ is the pseudo-inverse of the Jacobian of the robotic arm; $\mathbf{h}_{a_d}$ represents the desired position of the end-effector of the robotic arm; $\dot{\mathbf{h}}_{a_d}$ is the desired velocity generated by the operator ($h$); $\tilde{\mathbf{h}}_a = \mathbf{h}_{a_d} - \mathbf{h}_a$ represents the error vector of the position of the robotic arm; and $\mathbf{h}_a$ represents the actual position of the end-effector of the robotic arm. Finally, in this case, the omnidirectional mobile robot maintains its position, i.e., it does not make any movement; therefore, the reference velocity in this case is:

$\mathbf{v}_{ref_p} = [0 \;\; 0 \;\; 0]^T$  (19)
(iii) Kinematic Control of the Mobile Manipulator Robot. Similarly, the kinematic model of Equation (9) is considered; therefore, the proposed control law is:

$\mathbf{v}_{ref} = \mathbf{J}^{\#}\left(\dot{\mathbf{h}}_d + \mathbf{K}\,\tilde{\mathbf{h}}\right) + \left(\mathbf{I} - \mathbf{J}^{\#}\mathbf{J}\right)\mathbf{K}_s\,\boldsymbol{\eta}$  (20)

where $\mathbf{v}_{ref}$ represents the manoeuvrability velocities of the mobile manipulator robot; $\dot{\mathbf{h}}_d$ represents the desired velocity of the end-effector of the mobile manipulator robot generated by the human operator ($h$) through the haptic device; $\mathbf{J}^{\#}$ is the pseudo-inverse of the Jacobian of the mobile manipulator robot; and $\mathbf{K}$ and $\mathbf{K}_s$ are positive diagonal matrices weighting the robot end-effector position control error and the secondary-objective control vector errors, respectively. The motion control error of the mobile manipulator robot is defined as the difference between the commands generated by the human operator and the motion velocity of the end-effector of the mobile manipulator robot, such that:

$\tilde{\dot{\mathbf{h}}} = \dot{\mathbf{h}}_d - \dot{\mathbf{h}}$  (21)
For this work, the configuration of the robotic arm is considered as the secondary objective; therefore, the secondary-objective vector is defined as:

$\boldsymbol{\eta} = \left[\mathbf{0}_{3\times 1}^T \;\; \tilde{\mathbf{q}}_a^T\right]^T, \quad \tilde{\mathbf{q}}_a = \mathbf{q}_{a_d} - \mathbf{q}_a$  (22)

where $\mathbf{q}_{a_d}$ represents the desired position of each joint of the robotic arm; and $\mathbf{q}_a$ is the current position of each joint of the robotic arm.
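The full-robot controller follows the same pattern at the level of the 3 x 7 Jacobian from the earlier sketch; all numerical values below are illustrative assumptions.

```matlab
% Redundant kinematic controller of the mobile manipulator, Eqs. (20)-(22).
K  = 0.6 * eye(3);  Ks = 0.4 * eye(7);   % illustrative gain matrices
JP = pinv(J);                            % pseudo-inverse of the Jacobian, Eq. (9)

hd_dot = [0.10; 0.05; 0];                % operator's desired e-e velocity
h_til  = [0.02; -0.01; 0.03];            % measured end-effector error (assumed)
qa_d   = [0; 0.6; -0.4; 0.2];            % desired arm configuration
eta    = [zeros(3,1); qa_d - q];         % secondary-objective vector, Eq. (22)

v_ref  = JP * (hd_dot + K * h_til) ...   % primary task: follow the operator
       + (eye(7) - JP * J) * (Ks * eta); % secondary task in the null space
```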
3.3. Dynamics of the Robots
To emulate the behaviour of the robot with its dynamic characteristics, this section defines the models of the omnidirectional robot and the robotic arm. The robot in the 3D virtual environment can thus replicate the behaviour of the real system as closely as possible, allowing the physical performance of the robot to be simulated in real time. This methodology provides a safe and suitable environment for validating control techniques in the development of prototypes, and serves educational purposes in the handling of robotic prototypes.
Figure 8 shows the interpretation of the virtual robot.
3.3.1. Direct Dynamics of the Omnidirectional Robot
The dynamic model is a mathematical formulation that describes the dynamic behaviour of a system. It establishes the relationships between the robot's joint coordinates or coordinates of interest, the velocities, the accelerations, the forces and torques applied at its joints, and the robot's parameters, such as masses and moments of inertia. The model used in this work is a simplified model [23], represented in terms of reference velocities, because it is easier to apply velocities than torques, as is done in traditional robot models. Therefore, to emulate the dynamics of the robot, the direct dynamic model of the robot is used. For this purpose, the inverse dynamic model

$\mathbf{M}(\boldsymbol{\zeta})\,\dot{\mathbf{v}} + \mathbf{C}(\boldsymbol{\zeta}, \mathbf{v})\,\mathbf{v} = \mathbf{v}_{ref}$  (23)

is considered. Now, the direct dynamic model used in this work is defined by:

$\dot{\mathbf{v}} = \mathbf{M}^{-1}(\boldsymbol{\zeta})\left(\mathbf{v}_{ref} - \mathbf{C}(\boldsymbol{\zeta}, \mathbf{v})\,\mathbf{v}\right)$  (24)

where $\mathbf{v}_{ref}$ is the reference velocity vector of the omnidirectional robot; $\dot{\mathbf{v}}$ is the vector of accelerations of the omnidirectional robot obtained from the direct dynamics; and $\mathbf{v}$ is the actual velocity vector of the robot. The matrix $\mathbf{M}$ is the mass matrix of the robot, which is square and positive definite, and $\mathbf{C}$ is a square matrix representing the centripetal and Coriolis forces of the robot; both are functions of $\boldsymbol{\zeta}$, the vector of the dynamic parameters of the omnidirectional robot.
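The direct dynamic model is what the animation scripts integrate at each step; the sketch below shows the idea with an Euler step, using placeholder $\mathbf{M}$ and $\mathbf{C}$ matrices, since the real entries depend on the identified parameter vector.

```matlab
% Direct dynamics of the omnidirectional platform: v_dot = M \ (v_ref - C*v).
% M and C are placeholders; the real entries depend on the parameters zeta.
M = diag([12.0 12.0 1.5]);                      % assumed SPD mass/inertia matrix
C = @(v) 0.3 * [0 -v(3) 0; v(3) 0 0; 0 0 0];    % assumed Coriolis-like term

v = zeros(3, 1);  v_ref = [0.2; 0.1; 0.3];  dt = 0.01;
for k = 1:500
    v_dot = M \ (v_ref - C(v) * v);   % forward (direct) dynamics, Eq. (24)
    v = v + v_dot * dt;               % Euler integration (illustrative)
end
```

The same integration scheme applies to the arm model in the next subsection, with the gravity vector additionally subtracted.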
3.3.2. Direct Dynamics of the Robotic Arm
Similar to the omnidirectional robot, the dynamic model used in this work is represented in terms of reference velocities rather than forces and torques. That is, the structure of the dynamic model is given by:

$\mathbf{M}_a(\mathbf{q}_a)\,\dot{\mathbf{v}}_a + \mathbf{C}_a(\mathbf{q}_a, \mathbf{v}_a)\,\mathbf{v}_a + \mathbf{g}_a(\mathbf{q}_a) = \mathbf{v}_{ref_a}$  (25)

where $\dot{\mathbf{v}}_a$ is the vector of accelerations of the robotic arm; $\mathbf{M}_a$ is the mass and inertia matrix of the robotic arm; $\mathbf{C}_a$ is the matrix of centripetal and Coriolis forces; $\mathbf{g}_a$ is the gravity vector; and these terms are functions of $\boldsymbol{\zeta}_a$, the vector of dynamic parameters of the robotic arm. Finally, to emulate the robotic arm, the direct dynamics is defined by:

$\dot{\mathbf{v}}_a = \mathbf{M}_a^{-1}(\mathbf{q}_a)\left(\mathbf{v}_{ref_a} - \mathbf{C}_a(\mathbf{q}_a, \mathbf{v}_a)\,\mathbf{v}_a - \mathbf{g}_a(\mathbf{q}_a)\right)$  (26)

where $\mathbf{v}_{ref_a}$ is the reference velocity vector of the arm joints; $\mathbf{v}_a$ is the actual joint velocity vector; and $\dot{\mathbf{v}}_a$ is the resulting acceleration vector obtained from the direct dynamics.
4. Virtual Environment
This section presents the development of an immersive 3D virtual environment in the Unity 3D graphics engine to simulate teleoperation tasks of a mobile manipulator robot in two completely different scenes. The virtual environment is focused on teaching and learning in the area of robotics, so it allows interaction with different users. To do this, the 4 DOF robotic arm and the omnidirectional robot are first virtualised and then placed in two environments, where the tasks demanded by the user are carried out through a communication channel between the operator and the virtual environment.
4.1. Virtualization of the Mobile Manipulator Robot
In order to carry out the digitalization process of the mobile manipulator robot, it is necessary to take external resources as a reference. As shown in the diagram in Figure 9, a 4 DOF robotic arm is used for the manipulation of the object in question, while a KUKA omnidirectional robot is used as a reference for the movement of the platform [24]. Then, using CAD software, a 3D solid object is modelled for each of the robots, with the physics necessary for their respective movements and constraints, in order to develop the required animation. Once the CAD design of the two robots has been completed, it is exported to Autodesk 3DS Max 2024, which allows the design to be rendered, dimensions and rotations to be adjusted, and the model to be exported in the format required by the graphics engine. In addition, during the animation phase the rotation axes of each joint of the robotic arm are configured and the centre of the axis of the mobile robot is established. At this stage, the movement of each joint and of the platform is checked to ensure that it matches the axes of movement of the real robots. Each component of the robot can then be exported separately to the graphics engine, as each part has its own animation and movement, determined by the kinematic and dynamic models through script programming.
4.2. Scenario Virtualization
For the development of each scenario, it is essential to identify the places that will be used for the creation of the environment. In this case, the project focuses on two scenes in which dangerous objects are handled. To do this, an exhaustive analysis of the objects that make up the scenarios in a real environment is carried out (see Figure 10).
For the development of the virtual environment, prefabricated scenarios are used that provide a high degree of realism for the objects present in the real environment. This approach facilitates the creation of the virtual scenes and allows a more efficient development process. Since the main objective of the project is the teaching and learning of service robotics and robot morphology, the use of these prefabricated designs ensures an accurate and functional representation of the different scenarios.
Once the prefabricated scenarios and objects have been selected, we proceed to render the scenes, adjusting to the scale and dimensions of the robot as necessary. For this purpose, 3DS Max software is used, which acts as a link between the prefabricated design and the Unity graphics engine.
Once the export and rendering of the prefabricated scenarios in the Unity graphics engine is completed, we proceed to add the attributes, materials and physical properties to each of the elements present. In this project, two main scenarios have been developed. The first is an environment dedicated to the handling of explosives, where a grenade has been placed near a machine; in this case, the object of interest is the bomb. The second scenario represents a laboratory for handling chemicals, with a flask filled with an explosive liquid as the main object, as shown in Figure 11.
The objective in both scenarios is to move the object of interest to a safe place, away from the risk area, using exclusively the movements of the mobile omnidirectional manipulator robot.
To enable navigation through the environment by moving the mobile manipulator robot, a visual interface has been developed, shown in Figure 12. This interface presents the virtual environment from the perspective of the manipulator robot as the main screen, providing an overview of the operating area. Additionally, three auxiliary cameras have been incorporated to provide different perspectives crucial to the task. One of these auxiliary cameras provides a detailed view of the end-effector, which is essential for precise manipulation. The other two cameras focus on the objects of interest in each scene, allowing clear visualization of the target elements to be manipulated. This multi-camera setup ensures that the user has a complete understanding of the environment in order to monitor and control the robot, thus improving efficiency and accuracy in task execution. Depending on the type of task to be performed, whether locomotion or object manipulation by the mobile manipulator robot, the control law is selected by means of the Novint Falcon device, as described in
Section 3.2. Additionally, different types of sensors, such as force, optical or piezo-resistive sensors, could assist the movement of the end-effector; in this application, however, the haptic device generates an on/off switching of the gripper, which grips the object as long as it is close to the target.
Additionally, an interface developed in MATLAB is used, which requests the information necessary to select the predetermined scene and to set the duration of the simulation. This menu also displays the X, Y, Z coordinates obtained from the Falcon haptic device, as well as the hits and misses related to the manipulation of the objects of interest in the environment, and it includes Play, Stop, and Save buttons to start or end the simulation.
As mentioned above, the goal of the teaching-learning system is to transport an object of interest to a safe location using the mobile manipulator robot. This safe place is located outside the two main scenes, and it is where the dangerous object should be deposited. If the task is performed successfully, the menu increases the number of successes (see Figure 13). Otherwise, if the object of interest falls outside the safe area or contacts any other part of the environment, such as walls, machines or other objects present in the scenes, the number of failures is increased.
4.3. Communication Channel
Finally, once the virtualization of the scenarios and objects in the graphics engine has been completed and the control algorithms have been established in MATLAB, a bilateral communication channel known as shared memory is used to facilitate the exchange of information between the virtual environment and MATLAB, using a Dynamic-Link Library (DLL) that creates a shared memory (SM) in RAM for the exchange of data between different programs [25]. Through the SM, the control actions calculated by the target controller are integrated into the mathematical model of the robotic system. This model calculates the position and velocity outputs, which are then sent back to the mathematical software, thus closing the control loop by providing feedback on the robot's output states. The desired task can be selected with the haptic device or, in the case of autonomous control, the user can define the desired task to be performed, e.g., positioning, trajectory tracking or path following, where the computation of the control algorithms is performed by MATLAB [21]. Subsequently, the references generated by the controllers are applied to the robot. Figure 14 shows the communication scheme used for bilateral communication.
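The sketch below illustrates the MATLAB side of such an exchange. Only loadlibrary, calllib and libpointer are standard MATLAB; the library name smBridge and its exported functions sm_write/sm_read are hypothetical placeholders for the actual DLL described in [25].

```matlab
% Hypothetical MATLAB side of the shared-memory (SM) exchange with Unity.
% The DLL name and its exported functions are illustrative assumptions.
if ~libisloaded('smBridge')
    loadlibrary('smBridge.dll', 'smBridge.h');       % load the SM bridge
end

vRef = single([0.1 0 0.05 0 0 0 0]);                 % control actions
calllib('smBridge', 'sm_write', vRef, numel(vRef));  % MATLAB -> Unity

buf = libpointer('singlePtr', zeros(1, 7, 'single'));
calllib('smBridge', 'sm_read', buf, 7);              % Unity -> MATLAB
state = buf.Value;                                   % robot state feedback
```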
5. Experimental Results
This section presents the results of using the virtual environment as a teaching-learning development tool in the field of educational robotics. In addition, a brief usability analysis of the tool with a given number of students is presented. The hardware used for the simulation of the tool is as follows: Intel i7-7700HQ processor, CPU 2.80 GHz × 8 (Intel, Santa Clara, CA, USA) and GeForce GTX 1050 graphics (Nvidia Corporation, Santa Clara, CA, USA). The characteristics of this hardware are not strictly related to the performance of the tool; nevertheless, for better animation, it is strongly recommended to include specialized graphics hardware. The implemented system for teleoperation is shown in Figure 15, where the human operator can be seen teleoperating the robot with the haptic device in the developed virtual environment. A video showing the operation of the developed application, including the scenarios and the tasks performed by the user, is available in [26].
When the objective of the task is accomplished, subject to the limitations imposed by the movements of the robot and the environment, the menu registers a success in bringing the object of interest, in this case an explosive, to the safe place.
Figure 16 shows how the object is transported to the safe place by the robot commanded by the human operator.
Otherwise, if, due to a wrong movement by the user when controlling the mobile manipulator robot, the object falls somewhere other than the safe zone, the environment presents an explosion animation and the menu indicates an increase in the number of failures due to poor execution of the task, and the object is reset to its initial position, as shown in Figure 17.
Usability Evaluation
To identify the usability and relevance of the tool, a comprehensive evaluation of the didactic teaching-learning application is performed. For this purpose, 20 students of the Electronics and Automation Engineering course at the University of the Armed Forces ESPE-L were chosen to use the virtual environment. These students have basic knowledge of kinematic and dynamic modeling of robots, control systems and virtual reality, which allows them to evaluate the tool properly. The survey consists of 10 questions focused on evaluating the functionality and intuitiveness of the developed tool. Student responses were collected using a scale from 1 to 5, where 1 represents "Very difficult", 2 "Difficult", 3 "Intermediate", 4 "Easy" and 5 "Very easy". The list of questions proposed to the users is presented in
Table 1.
The results of the usability evaluation of the system are presented in Figure 18. For the first question, a mean of 3.25 with a standard deviation of 0.79 was obtained. The second question showed a mean of 3.35 with a deviation of 0.88. The third question registered a mean of 3.40 with a deviation of 0.68, while the fourth question reached a mean of 3.80 with a deviation of 0.52. The fifth question had a mean of 3.40 and a standard deviation of 0.88. For the sixth question, the mean was 3.30 with a deviation of 0.69, and for the seventh question, a mean of 3.30 with a deviation of 0.80 was obtained. The eighth question recorded a mean of 3.65 with a standard deviation of 0.67. The ninth question obtained a mean of 3.35 with a deviation of 0.75, while the tenth question yielded a mean of 4.00 with a deviation of 0.85.
The usability evaluation data illustrated in Figure 18 provide detailed insight into how users perceive the teaching-learning tool. The error bars in the figure show the variability of responses among students, indicating the spread of the ratings around each mean.
Overall, the results suggest a positive evaluation of the virtual environment tool: the means lie above the midpoint of the scale, and the relatively small standard deviations reveal a consistent, favorable perception of the usability of the system, indicating that the tool is easy to use.
In addition, a traditional theoretical teaching method on kinematic and dynamic modeling of robots was used with a different group of 20 additional students with characteristics similar to those who used the tool. Subsequently, a comparative evaluation of the knowledge acquired by both groups was carried out. The results indicated that the students who used the teaching-learning system demonstrated greater knowledge of motions, rotations, and the kinematic and dynamic modeling of robots, compared to the group that received conventional theoretical instruction.
6. Conclusions
The kinematic and dynamic modeling of the robot using virtual environments significantly increases the realism of the simulation tool. By replicating the movements and behaviors of the robot in question with greater precision, the simulation becomes remarkably close to the behavior of the real robot. This fidelity allows users to interact with the virtual environment in a way that almost identically mimics the operation of the robot in real situations, resulting in a much more effective and realistic learning experience and providing users with a more accurate understanding of how the robot behaves in a real situation.
The implementation of virtual environments facilitates the development of new learning tools, offering several significant advantages. First, these environments make system acquisition much more accessible and economical, considerably reducing costs compared to the purchase of real systems. In addition, they eliminate the risks associated with the use of physical equipment, such as possible damage and the need to operate with caution, resulting in a safe and hazard-free learning experience. Similarly, virtual environments do not require the presence of expert operators for training, which simplifies the process and results in greater accessibility for users. In summary, virtual environments reduce costs and risks and also optimize the teaching-learning process, offering a practical and efficient solution compared to real systems.
The use of the Falcon haptic device in the development of the teaching-learning system proves to be an excellent tool for enhancing interaction between the environment and the users. The ability to provide feedback allows students to experience increased realism in the manipulation and control of the robots, which significantly enriches the learning process and overcomes the limitations inherent in traditional theoretical teaching methods. In short, the use of the haptic device in combination with virtual environments offers an innovative and highly effective approach to robotics education, benefiting both students and instructors.
The development of a teaching-learning system using virtual environments, presented in this work, facilitates the acquisition of new knowledge on robot modeling and control for users with basic or limited knowledge on the subject. This approach is more effective compared to theoretical, classical or conventional teaching methods, which present limitations for both the instructor and the students in the learning process.
Author Contributions
Conceptualization, F.J.P., C.P.C., J.S.O. and V.H.A.; methodology, F.J.P., C.P.C., J.S.O. and V.H.A.; software, F.J.P., C.P.C., J.S.O. and V.H.A.; validation, F.J.P., C.P.C., J.S.O. and V.H.A.; formal analysis, F.J.P., C.P.C., J.S.O. and V.H.A.; investigation, F.J.P., C.P.C., J.S.O. and V.H.A.; resources, F.J.P., C.P.C., J.S.O. and V.H.A.; data curation, F.J.P., C.P.C., J.S.O. and V.H.A.; writing—original draft preparation, F.J.P., C.P.C., J.S.O. and V.H.A.; writing—review and editing, F.J.P., C.P.C., J.S.O. and V.H.A.; visualization, F.J.P., C.P.C., J.S.O. and V.H.A.; supervision, F.J.P., C.P.C., J.S.O. and V.H.A.; project administration, F.J.P., C.P.C., J.S.O. and V.H.A.; funding acquisition V.H.A. and J.S.O. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
The original data of this study are available on request from Fernando J. Pantusin.
Acknowledgments
The authors would like to thank the Universidad de las Fuerzas Armadas ESPE for the support given to research, development, and innovation, through the "Autonomous Control of Aerial Manipulators Robots" research project in Ecuador; the ARSI research group; CICHE Research Center and SISAu Research Group. The results of this work also are part of the project “Tecnologías de la Industria 4.0 en Educación, Salud, Empresa e Industria” developed by Universidad Tecnológica Indoamérica; the German Academic Exchange Service, also known by its German acronym (DAAD) for Scholarship Award in third Country Programme Latin America; and the Instituto de Automática of the Universidad Nacional de San Juan and CONICET in Argentina.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Gonzalez-Aguirre, J.A.; Osorio-Oliveros, R.; Rodríguez-Hernández, K.L.; Lizárraga-Iturralde, J.; Morales Menendez, R.; Ramírez-Mendoza, R.A.; Ramírez-Moreno, M.A.; Lozoya-Santos, J.d.J. Service Robots: Trends and Technology. Appl. Sci. 2021, 11, 10702. [Google Scholar] [CrossRef]
- Zachiotis, G.A.; Andrikopoulos, G.; Gornez, R.; Nakamura, K.; Nikolakopoulos, G. A Survey on the Application Trends of Home Service Robotics. In Proceedings of the 2018 IEEE International Conference on Robotics and Biomimetics (ROBIO), Kuala Lumpur, Malaysia, 12–15 December 2018; pp. 1999–2006. [Google Scholar] [CrossRef]
- Jeon, J.; Jung, H.R.; Pico, N.; Luong, T.; Moon, H. Task-Motion Planning System for Socially Viable Service Robots Based on Object Manipulation. Biomimetics 2024, 9, 436. [Google Scholar] [CrossRef] [PubMed]
- Zielinska, T.T. History of Service Robots and New Trends. In Novel Design and Applications of Robotics Technologies; IGI Global: Hershey, PA, USA, 2019; pp. 158–187. [Google Scholar] [CrossRef]
- Karabegović, I.; Karabegović, E.; Mahmić, M.; Husak, E. The application of service robots for logistics in manufacturing processes. Adv. Prod. Eng. Manag. 2015, 10, 185–194. [Google Scholar] [CrossRef]
- Dobrev, Y.; Vossiek, M.; Christmann, M.; Bilous, I.; Gulden, P. Steady Delivery: Wireless Local Positioning Systems for Tracking and Autonomous Navigation of Transport Vehicles and Mobile Robots. IEEE Microw. Mag. 2017, 18, 26–37. [Google Scholar] [CrossRef]
- Abdel-Basset, M.; Chang, V.; Nabeeh, N.A. An intelligent framework using disruptive technologies for COVID-19 analysis. Technol. Forecast. Soc. Change 2021, 163, 120431. [Google Scholar] [CrossRef] [PubMed]
- Leung, R. Smart hospitality: Taiwan hotel stakeholder perspectives. Tour. Rev. 2019, 74, 50–62. [Google Scholar] [CrossRef]
- Ju, C.; Son, H.I. Modeling and Control of Heterogeneous Agricultural Field Robots Based on Ramadge–Wonham Theory. IEEE Robot. Autom. Lett. 2020, 5, 48–55. [Google Scholar] [CrossRef]
- Pecka, M.; Zimmermann, K.; Reinstein, M.; Svoboda, T. Controlling Robot Morphology From Incomplete Measurements. IEEE Trans. Ind. Electron. 2017, 64, 1773–1782. [Google Scholar] [CrossRef]
- Saltaren, R.; Aracil, R.; Alvarez, C.; Yime, E.; Sabater, J.M. Field and service applications-Exploring deep sea by teleoperated robot-An Underwater Parallel Robot with High Navigation Capabilities. IEEE Robot. Autom. Mag. 2007, 14, 65–75. [Google Scholar] [CrossRef]
- Anwar, S.; Bascou, N.; Menekse, M.; Kardgar, A. A Systematic Review of Studies on Educational Robotics. J. Pre-College. Eng. Educ. Res. 2019, 9, 2. [Google Scholar] [CrossRef]
- Guerrero-Osuna, H.A.; Nava-Pintor, J.A.; Olvera-Olvera, C.A.; Ibarra-Pérez, T.; Carrasco-Navarro, R.; Luque-Vega, L.F. Educational Mechatronics Training System Based on Computer Vision for Mobile Robots. Sustainability 2023, 15, 1386. [Google Scholar] [CrossRef]
- Arshad, N.I.; Hashim, A.S.; Mohd Ariffin, M.; Mohd Aszemi, N.; Low, H.M.; Norman, A.A. Robots as Assistive Technology Tools to Enhance Cognitive Abilities and Foster Valuable Learning Experiences among Young Children With Autism Spectrum Disorder. IEEE Access 2020, 8, 116279–116291. [Google Scholar] [CrossRef]
- Charão dos Santos, M.C.; Sangalli, V.A.; Pinho, M.S. Evaluating the Use of Virtual Reality on Professional Robotics Education. In Proceedings of the 2017 IEEE 41st Annual Computer Software and Applications Conference (COMPSAC), Torino, Italy, 4–8 July 2017; Volume 1, pp. 448–455. [Google Scholar] [CrossRef]
- Safaric, R.; Sinjur, S.; Zalik, B.; Parkin, R. Control of robot arm with virtual environment via the Internet. Proc. IEEE 2003, 91, 422–429. [Google Scholar] [CrossRef]
- Carvajal, C.P.; Méndez, M.G.; Torres, D.C.; Terán, C.; Arteaga, O.B.; Andaluz, V.H. Autonomous and Tele-Operated Navigation of Aerial Manipulator Robots in Digitalized Virtual Environments. In Proceedings of the 5th International Conference on Augmented Reality, Virtual Reality, and Computer Graphics, Otranto, Italy, 24–27 June 2018; De Paolis, L.T., Bourdot, P., Eds.; Springer: Cham, Switzerland, 2018; pp. 496–515. [Google Scholar] [CrossRef]
- Pantusin, F.J.; Cordonez, J.W.; Quimbita, M.A.; Andaluz, V.H.; Vargas, A.D. Training System for the Tomato Paste Production Process Through Virtual Environments. In Proceedings of the Intelligent Systems and Applications, Barcelona, Spain, 13–17 March 2023; Arai, K., Ed.; Springer: Cham, Switzerland, 2024; pp. 46–55. [Google Scholar] [CrossRef]
- Vera-Mora, G.; Sanz, C.V.; Coma-Roselló, T.; Baldassarri, S. Model for Designing Gamified Experiences Mediated by a Virtual Teaching and Learning Environment. Educ. Sci. 2024, 14, 907. [Google Scholar] [CrossRef]
- Saunier, L.; Hoffmann, N.; Preda, M.; Fetita, C. Virtual Reality Interface Evaluation for Earthwork Teleoperation. Electronics 2023, 12, 4151. [Google Scholar] [CrossRef]
- Andaluz, V.H.; Carvajal, C.P.; Arteaga, O.; Pérez, J.A.; Valencia, F.S.; Solís, L.A. Unified Dynamic Control of Omnidirectional Robots. In Proceedings of the Towards Autonomous Robotic Systems, Guildford, UK, 19–21 July 2017; Gao, Y., Fallah, S., Jin, Y., Lekakou, C., Eds.; Springer: Cham, Switzerland, 2017; pp. 673–685. [Google Scholar] [CrossRef]
- Yoshikawa, T. Manipulability of Robotic Mechanisms. Int. J. Robot. Res. 1985, 4, 3–9. [Google Scholar] [CrossRef]
- Falkenhahn, V.; Mahl, T.; Hildebrandt, A.; Neumann, R.; Sawodny, O. Dynamic Modeling of Bellows-Actuated Continuum Robots Using the Euler–Lagrange Formalism. IEEE Trans. Robot. 2015, 31, 1483–1496. [Google Scholar] [CrossRef]
- Bischoff, R.; Huggenberger, U.; Prassler, E. KUKA youBot—A mobile manipulator for research and education. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; pp. 1–4. [Google Scholar] [CrossRef]
- Gallardo, C.; Pogrebnoy, A.; Varela-Aldás, J. Development and Use of Dynamic Link Libraries Generated Under Various Calling Conventions. In Information Technology and Systems: ICITS 2021; Springer International Publishing: Cham, Switzerland, 2021; Volume 1, pp. 220–232. [Google Scholar] [CrossRef]
- Carvajal, C.P.; Pantusin, F.J.; Ortiz, J.S.; Andaluz, V.H. Virtual Teleoperation System for Mobile Manipulator Robots Focused on Object Transport. 2024. Available online: https://youtu.be/76nzhxMp3SM (accessed on 24 July 2024).
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).