Article

Telelocomotion—Remotely Operated Legged Robots

Department of Engineering, Trinity College, 300 Summit Street, Hartford, CT 06106, USA
* Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(1), 194; https://doi.org/10.3390/app11010194
Submission received: 29 November 2020 / Revised: 17 December 2020 / Accepted: 22 December 2020 / Published: 28 December 2020
(This article belongs to the Special Issue Haptics for Tele-Communication and Tele-Training)

Featured Application

Teleoperated control of legged locomotion for robotic proxies.

Abstract

Teleoperated systems enable human control of robotic proxies and are particularly amenable to inaccessible environments unsuitable for autonomy. Examples include emergency response, underwater manipulation, and robot-assisted minimally invasive surgery. However, teleoperation architectures have been employed predominantly in manipulation tasks, and are thus only useful when the robot is within reach of the task. This work introduces the idea of extending teleoperation to enable online human remote control of legged robots, or telelocomotion, to traverse challenging terrain. Traversing unpredictable terrain remains a challenge for autonomous legged locomotion, as demonstrated by robots commonly falling in high-profile robotics contests. Telelocomotion can reduce the risk of mission failure by leveraging the high-level understanding of human operators to command the gaits of legged robots in real time. In this work, a haptic telelocomotion interface was developed. Two within-user studies validate the proof-of-concept interface: (i) the first compared basic interfaces with the haptic interface for control of a simulated hexapedal robot at various levels of traversal complexity; (ii) the second presents a physical implementation and investigates the efficacy of the proposed haptic virtual fixtures. The results are promising for the use of haptic feedback in telelocomotion for complex traversal tasks.

1. Introduction

Telerobots, or remotely controlled robotic proxies, combine the robustness, scalability, and precision of machines with human-in-the-loop control. This teleoperation architecture extends human intervention to spaces that are too hazardous or otherwise unreachable by humans alone. In the particular case of emergency response, autonomy is still insufficient, and human first-responders risk their lives to navigate and operate in extreme conditions, oftentimes under stressful task constraints. This work is motivated to alleviate this human risk by working towards human-controlled, semi-autonomous, legged robot proxies that can both navigate and dexterously interact in difficult and potentially sensitive environments. For this, two user studies were conducted:
(I)
a simulated servo-driven hexapedal telerobotic platform, as shown in Figure 1a, used to compare the proposed haptic interface with commonly used alternatives for controlling locomotion at varying levels of traversal task difficulty;
(II)
a physical implementation of the hexapedal telerobotic platform, as shown in Figure 1b, used to evaluate the haptic virtual fixtures developed.

1.1. Teleoperation

Autonomous robots have proven to be successful in surveillance, assembly, and basic navigation tasks, yet most real-world tasks are beyond the capabilities of state-of-the-art autonomy. Where autonomy is insufficient, human-in-the-loop systems have shown success [1,2,3]. Teleoperated systems can employ semi-autonomous robots, reducing the operator’s cognitive load. In a reliable system, the robot is tasked with autonomous subtasks, while operator intervention is left for high-level decision making, such as obstacle avoidance or managing a team of robots [4]. In addition, properly incorporated haptic feedback can improve teleoperator performance [5] in the fields of robot-assisted minimally invasive surgery [6], control in cluttered environments [7,8], obstacle avoidance in aerial navigation [9], and programming welding robots [10] to name a few.
The benefits of teleoperation are widely associated with manipulation tasks. Examples include robotic maintenance of underwater structures [11,12], robot-assisted minimally invasive surgery [13,14], and assembly/welding tasks [15,16]. Most of these operations rely on robot proxies localized to the task environment and require little if any traversal of terrain.

1.2. Haptic Feedback in Teleoperation

A key component of a successful teleoperated system is a seamless user interface, with the ultimate goal of achieving a high degree of telepresence [3]. Teleoperator feedback modes can influence overall effectiveness. Traditional feedback modes include visual (2D and 3D) and auditory feedback [17]. The haptic sensorimotor pathway has been leveraged to provide the operator with additional sensory information in the forms of both tactile and kinesthetic feedback [18]. Costes et al. presented a method to provide four types of haptic feedback—compliance, friction, fine roughness, and shape—with a single force-feedback device [19]. Haptic feedback can present cues to the operator as well as intelligent, task-based force feedback called virtual fixtures. Virtual fixtures can enhance task execution and shift task burdens from the operator to semi-autonomous feedback. Such systems have shown great promise in the domains of vision-based assistance in surgical robotic procedures [14,20], joint-limit cue delivery [21], underwater robotic operations [12], integration of auxiliary sensors [22,23], and assembly and disassembly training [24].

1.3. Dynamic Locomotion

Successful implementations of robot navigation typically involve wheeled systems [25,26,27], yet these methods are constrained to predictable and relatively smooth terrains. Disaster scenarios, however, present navigation obstacles that cannot be solved by wheeled or tracked navigation [28]. Legged robots have demonstrated potential applicability in tasks such as search and rescue operations, and the handling of hazardous waste and recovery in disaster scenarios [29,30]. Legged robots also exhibit sufficiently high torque and speed for demanding terrains [31]. Xi et al. attempted to increase energetic economy by using multiple gaits in a single walking robot [32], and Carpentier et al. proposed a centroidal dynamics model for multicontact locomotion of legged robots [33]. Walking robots are a potential solution, but autonomous approaches are not yet reliable enough [34,35,36,37].

1.3.1. DARPA Robotics Challenge

The DARPA Robotics Challenge (DRC) aimed to develop teleoperated ground robots to complete complex tasks in dangerous, unpredictable task environments [38]. With the progression from wheeled and tracked robots to a new generation of legged walking robots, mobility and balance were main areas of focus for the DRC [39]. Control of legged walking ranged from manual step planning [40] and plan-and-execute trajectories with a balance controller to full-body humanoid control approaches [41]. Throughout the DRC trials, many robots fell en route to the navigation goal, resulting in unrecoverable failures. Relying on predictable contact mechanics was ineffective, and more focus should have been placed on making, rather than merely planning, footholds [42]. The reader is directed to [43,44] for detailed software and hardware implementations used at the DRC trials.

1.3.2. Bio-inspired Locomotion

Legged natural organisms are adept at leveraging contact forces to negotiate complex obstacles [45]. Challenges remain for walking robotic platforms; however, examining the dynamic forms of locomotion found in nature can lead to sophisticated solutions for robotic locomotion. For example, central pattern generators (CPGs) are biological networks that produce coordinated rhythmic output signals under the control of rudimentary input signals [46]. Lachat et al. presented a control architecture constructed around a CPG, implemented as a system of coupled non-linear oscillators, for the BoxyBot, a novel fish robot capable of both swimming and crawling [47].
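To make the CPG concept concrete, the following minimal Python sketch couples two phase oscillators in antiphase, one per tripod group. It is purely illustrative and is not the controller used in this work; the frequency, coupling gain, and integration step are arbitrary assumptions.

import numpy as np

# Illustrative CPG: two coupled phase oscillators that lock in antiphase,
# one per tripod group. Not the controller used in this work; omega, k,
# and dt are arbitrary assumptions.
def cpg_step(phases, dt=0.01, omega=2 * np.pi, k=4.0):
    # The coupling term drives the phase difference toward pi (antiphase).
    coupling = k * np.sin(phases[::-1] - phases - np.pi)
    return phases + (omega + coupling) * dt

phases = np.array([0.0, 0.1])      # slightly perturbed initial phases
for _ in range(1000):
    phases = cpg_step(phases)
print(np.sin(phases))              # rhythmic outputs, roughly equal and opposite

Under these dynamics the phase difference converges to pi, so the sinusoidal outputs for the two groups alternate, mirroring the alternating tripod gait used later in this work.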
Bipedal, quadrupedal, hexapedal and rotating tri-legged platforms are popular modes of robot locomotion [45]. Examples include BigDog, RHex, Whegs, and iSprawl [48,49,50,51]. Zucker et al. used a fast, anytime footstep planner guided by a cost map for Boston Dynamics’ LittleDog [52]. Arthropod locomotion, specifically cockroach movement dynamics, has been a source of much inspiration for hexapedal robots. Cockroaches make use of passive dynamics in order to achieve a high degree of stability, speed and maneuverability. Hoover et al. presented the DynaRoACH, an under-actuated cockroach-inspired hexapedal robot capable of running at 14 body lengths per second and executing dynamic turning actions [53].

1.4. Contributions

To the best of the authors’ knowledge, the work presented here is the first to prototype a seminal haptic operator interface for commanding cyclical hexapedal legged locomotion, i.e., online human-operated control of robot gait execution. The authors then evaluate the interface through two experiments: (I) comparing the performance of said haptic interface for simulated teleoperated legged locomotion against more standard interfaces used to control robotic proxies, at varying levels of traversal task complexity; and (II) evaluating the preliminary performance of the haptic virtual fixtures used for the haptic interface in a physical implementation of the telelocomotion task.

2. Methods

2.1. Teleoperated Platform

In this work, the 18DOF Trossen PhantomX hexapedal robot, as shown in Figure 1, was used as the telerobotic platform in both simulation and physical implementation. The hexapod’s many points of contact with the ground alleviate concerns of balance. Furthermore, the device is lightweight and servo-driven, allowing dynamic analysis of the walking gait and locomotion to be neglected. The remaining mechanics are left to robot kinematics and, for experiments conducted in the simulated environment, the physics simulation engine.

2.1.1. Hexapod Legged Mechanism

The PhantomX hexapod comprises 18 joints: six kinematically identical limbs with three joints each. The robot exhibits sagittal symmetry, with each symmetrical half consisting of three legs. These legs are denoted the prothoracic, mesothoracic, and metathoracic. In this work, they are enumerated as Legs 0–5, as depicted in Figure 1a.
Each of the six legs consists of three links, the coxa, femur and tibia, as depicted in Figure 2a. The servo-driven joints that actuate these links afford 3DOF rotary motion for each leg. These joints are the thorax-coxa, coxa-trochanter, and femur-tibia joints, as illustrated in Figure 2b. Joint 0, the thorax-coxa, has a vertical axis of rotation, while joints 1 and 2, the coxa-trochanter and femur-tibia respectively, present parallel horizontal axes of rotation.
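For illustration, a minimal forward-kinematics sketch for one such 3DOF leg follows. The link lengths are placeholders rather than measured PhantomX dimensions, and zero joint angles correspond to the femur and tibia lying parallel to the ground, as defined in Section 2.2.2.

import numpy as np

# Forward kinematics for one 3DOF hexapod leg (sketch). Link lengths are
# placeholders, NOT measured PhantomX dimensions.
L_COXA, L_FEMUR, L_TIBIA = 0.05, 0.08, 0.12   # meters (assumed)

def foot_position(theta_T, theta_C, theta_F):
    # theta_T: thorax-coxa yaw (vertical axis)
    # theta_C: coxa-trochanter pitch; theta_F: femur-tibia pitch
    r = (L_COXA + L_FEMUR * np.cos(theta_C)
         + L_TIBIA * np.cos(theta_C + theta_F))        # radial reach
    z = L_FEMUR * np.sin(theta_C) + L_TIBIA * np.sin(theta_C + theta_F)
    return np.array([r * np.cos(theta_T), r * np.sin(theta_T), z])

print(foot_position(0.0, 0.2, -0.9))   # one candidate foothold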
The PhantomX ROS package comes prepackaged with a general position-plus-yaw controller. The gait mechanism allows forward and backward translation as well as pivoting about the vertical z-axis through the robot’s center. These predefined gait trajectories are well-studied alternating tripod gait sequences, similar to those of the common cockroach. More explicitly, for forward and backward translation (Figure 1a), the odd-enumerated legs move in concert while the contact points of the even-enumerated legs remain fixed for balance, and vice versa.

2.1.2. Peripherals and Augmentations

The PhantomX hexapod’s Arbotix Robocontroller lacks flexibility in software development and does not seamlessly integrate the physical peripherals necessary for telelocomotion control. The desired augmentations include a high-definition streaming RGB camera and long-range wireless connectivity. Thus, the Arbotix Robocontroller was replaced with an NVIDIA Jetson Nano. Additional peripherals were then incorporated: an eight-megapixel Raspberry Pi Camera V2 RGB camera, power distribution (a buck converter), a Dynamixel U2D2 USB adapter, and a Netis AC1200 IEEE 802.11a/b/g/n/ac wireless adapter. These additional hardware components were mounted to the physical platform via 3D-printed plastic mounts, as depicted in Figure 3a. 3D printing was performed on a Cetus MKIII with eSun PLA+ at 0.2 mm layer height, 0.35 mm line thickness, and 15% infill. The modified PhantomX is illustrated in Figure 3b.

2.2. Operator Interface

2.2.1. Visual Feedback

The operator interface included visual feedback, captured either from a simulated monocular camera mounted to the PhantomX hexapod base or from the Raspberry Pi Camera V2. This was presented to the human user via a standard LCD display. This constituted the only visual feedback provided to the operator, and no 3D visualization was performed. Figure 4 shows typical examples of the two different visual feedback sources.

2.2.2. Haptic Interface

The physical operator station consisted of two Sensable PHANToM Omni® 3DOF haptic devices and a desktop computer. In the presented method, one Omni haptic device was utilized for each set of three legs per phase of the walking tripod gait. Kinesthetic force feedback from the haptic devices encouraged the users to command efficient hexapod movement. In terms of system hardware, the operator interface machine ran a dual-core 64-bit Intel® Core™ i7-640M at 2.8 GHz under the Microsoft Windows 10 operating system. The machine was equipped with 4 GB of system memory and an NVIDIA NVS 3100M graphics engine with 512 MB of 64-bit GDDR3 memory. With regard to baseline software, haptic device setup files for calibrating and interfacing with the devices were necessary. The Computer Haptics and Active Interface (CHAI3D) SDK was used to both gather the haptic device configuration and generate appropriate force feedback. This is in contrast to whole-body methods [54,55,56]. Communication between the operator console and the simulation platform was achieved via a rosbridge node using TCP/IP.
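As a minimal sketch of this communication path, the snippet below publishes a joint command through rosbridge using the roslibpy client. The host address, topic name, message type, and joint names are illustrative assumptions, not the exact configuration used in this work.

import roslibpy

# Sketch: publish a joint command over rosbridge (TCP/IP). The host,
# topic name, message type, and joint names are illustrative assumptions.
ros = roslibpy.Ros(host='192.168.0.42', port=9090)
ros.run()

pub = roslibpy.Topic(ros, '/phantomx/joint_command', 'sensor_msgs/JointState')
pub.publish(roslibpy.Message({
    'name': ['j_c1_lf', 'j_thigh_lf', 'j_tibia_lf'],   # one leg, assumed names
    'position': [0.0, 0.35, -0.20],                    # radians
}))
ros.terminate()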
For forward and backward motion, each device is paired with one set of the alternating tripod gait groups. Specifically, the left haptic device controls the odd-enumerated legs, while the right device controls the even-enumerated legs. This is shown in Figure 5.
A momentary button press on the haptic stylus activates a mode switch and enables the user to command pivoting. The positioning of the alternating tripod gait sequences is governed by the haptic device configurations. First, consider the raw joint limits for each leg:
\[
\hat{J}_l = \begin{bmatrix} \theta_F \\ \theta_C \\ \theta_T \end{bmatrix} \in \begin{bmatrix} (25.8^\circ,\ 68.75^\circ) \\ (0^\circ,\ 43^\circ) \\ (25.8^\circ,\ 54.5^\circ) \end{bmatrix}
\]
where J_l is the joint state for leg l, θ_F is the angle of the femur-tibia joint, θ_C the angle of the coxa-trochanter joint, and θ_T the angle of the thorax-coxa joint; see Figure 2b. The zero state for θ_T is as depicted in Figure 1, while the zero states for θ_F and θ_C are such that the femur and tibia are both parallel to the ground. Each of the two Sensable PHANToM Omni devices returns a 6DOF input, i.e., 3DOF position and 3DOF orientation. In this work, only the 3DOF position readings are used. Given the haptic device stylus position, depicted in Figure 5 and tracked as joint 3 of the Omni device,
\[
p_h = \begin{bmatrix} x_h & y_h & z_h \end{bmatrix}^{\mathsf{T}},
\]
the goal is then to map p_h to a resultant J_l that conveys intuitive robot locomotion from user commands. Two such mappings are developed for two different modes: one for forward/backward translation, and one for pivoting motion. Users can initiate a mode switch with a simple momentary push button on the haptic device stylus.
For translational forward and backward locomotion, the amplitude and stride length of the tripod gait cycle are determined from the x_h and z_h position coordinates of haptic stylus h. Specifically, x_h and z_h are mapped directly to the commanded values for θ_T and θ_C, respectively. In this proof-of-concept implementation, any user-provided command position is mapped to appropriate joint angles as:
\[
J_l(z_h, x_h) = \begin{bmatrix} \theta_F \\ \theta_C \\ \theta_T \end{bmatrix} = \begin{bmatrix} \bar{\theta}_F \\ \alpha_z z_h \\ \operatorname{sgn}(\phi_{hl})\, \alpha_x x_h \end{bmatrix}
\]
where θ̄_F = 0 is the fixed angle for the femur-tibia joint, α_z and α_x are heuristically tuned real scalars, and sgn(φ_hl) are direction flags for the phase of the alternating tripod gait associated with each haptic device h and leg l. In this way, the gait cycles were forced to occur one at a time in software, thus enforcing at least three points of contact. Walking could thus be commanded with periodic, alternating cyclic motion in the XZ plane. This is summarized as pseudocode in Algorithm 1.
Algorithm 1 Translational motion.

define k_z, k_x as positive scalar thresholds — these determine the minimum movement to register a motion command
for haptic device h controlling leg l do
    if z_h > k_z then
        check J_l limits and whether leg l is in contact
        set θ̂_C ← θ_C if at a joint limit or in contact; else θ̂_C ← α_z z_h
        if θ̂_C ≠ θ_C and |x_h| > k_x then
            set θ̂_T ← θ_T if at a joint limit; else θ̂_T ← sgn(φ_hl) α_x x_h
        end if
    end if
end for

Note: θ denotes the previous joint angle, θ̂ denotes the updated joint angle, and φ_hl is assigned to each leg consistent with the phase of the tripod gait.
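As a reading aid, a minimal Python sketch of this translational mapping follows. The thresholds, the α_z gain, and the stubbed joint-limit and contact checks are assumptions for exposition; α_x = 10 follows the heuristic tuning reported in Section 6.

import numpy as np

# Sketch of the translational mapping in Algorithm 1. Thresholds, the
# alpha_z gain, and the stubbed limit/contact checks are assumptions;
# alpha_x = 10 follows the heuristic tuning reported in Section 6.
K_Z, K_X = 0.01, 0.01              # minimum stylus displacement (assumed)
ALPHA_Z, ALPHA_X = 10.0, 10.0

def translational_command(x_h, z_h, phase_sign, theta_C, theta_T,
                          within_limits=lambda angle: True,
                          in_contact=lambda: False):
    """Map stylus coordinates (x_h, z_h) to updated coxa/thorax angles."""
    new_C, new_T = theta_C, theta_T
    if z_h > K_Z:
        cand_C = ALPHA_Z * z_h                     # gait amplitude
        if within_limits(cand_C) and not in_contact():
            new_C = cand_C
        if new_C != theta_C and abs(x_h) > K_X:
            cand_T = phase_sign * ALPHA_X * x_h    # stride length
            if within_limits(cand_T):
                new_T = cand_T
    return new_C, new_T

print(translational_command(x_h=0.02, z_h=0.03, phase_sign=+1,
                            theta_C=0.0, theta_T=0.0))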
Turning motion is achieved similarly to forward and backward locomotion, except that the amplitude and stride length controlled by haptic device h are governed by y_h and x_h, respectively. That is, the joint angles θ_T and θ_C are determined directly from x_h and y_h, respectively. The femur-tibia joint is again fixed at θ̄_F = 0 (in both translation and pivoting, this fixed-angle constraint is imposed to simplify the mapping from low-dimensional inputs). This is shown as
\[
J_l(y_h, x_h) = \begin{bmatrix} \theta_F \\ \theta_C \\ \theta_T \end{bmatrix} = \begin{bmatrix} \bar{\theta}_F \\ \alpha_y y_h \\ \operatorname{sgn}(\psi_l)\, \alpha_x x_h \end{bmatrix}
\]
where α_y and α_x are heuristically tuned real scalars, and sgn(ψ_l) is a direction flag for the phase of the pivoting gait associated with each leg l. This pivoting action is summarized as pseudocode in Algorithm 2.
Algorithm 2 Pivoting motion.

define β_y, β_x as positive scalar thresholds — these determine the minimum movement to register a pivoting command
define T_y as a positive scalar phase switch — this allows y_h to switch the gait phase
check γ, the phase of the pivot
while the momentary button on device h is pressed, for leg l do
    check J_l limits
    if y_h > β_y then
        set θ̂_C ← θ_C if at a joint limit; else θ̂_C ← sgn(γ_l) α_y y_h
        if x_h > β_x then
            set θ̂_T ← θ_T if at a joint limit; else θ̂_T ← sgn(γ_l) α_x x_h
        end if
    end if
    for each transition between y_h > T_y and y_h < T_y do
        toggle the phase state of the pivot, γ
    end for
end while
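The phase-toggling logic in the inner loop can be sketched in a few lines of Python; the T_Y value below is an arbitrary assumption, and the toggle fires when the stylus leaves the region below T_y, matching the behavior described in Figure 6.

# Sketch of the pivot phase toggle in Algorithm 2. T_Y is an arbitrary
# assumption; gamma flips when the stylus leaves the region y_h < T_Y.
T_Y = 0.02

class PivotPhase:
    def __init__(self):
        self.gamma = +1        # which tripod group is currently raised
        self.inside = True     # is y_h currently below T_Y?

    def update(self, y_h):
        now_inside = y_h < T_Y
        if self.inside and not now_inside:
            self.gamma = -self.gamma     # left the shaded region: toggle
        self.inside = now_inside
        return self.gamma

phase = PivotPhase()
for y in [0.00, 0.01, 0.05, 0.01, 0.06]:   # stylus sweeping across T_Y
    print(phase.update(y))                  # gamma toggles on each exit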
Figure 6 depicts the switching of the phase state γ, which determines which set of three legs is raised during each phase of the pivot motion. Haptic feedback is then generated separately for each of the two modes (translational and pivoting) via simple haptic virtual fixtures. In particular, for forward and backward motion, the operator is encouraged to stay in the vertical XZ plane of the haptic device, with force rendered as a spring and the haptic proxy (or god object) defined as the projection of the user position onto the XZ plane. Once the mode switch for pivoting is activated, haptic feedback is rendered similarly, now with the horizontal XY plane serving as the virtual fixture. It was hypothesized that forward and pivoting gaits are best controlled when the reaches of the input circular motion commands are mapped to the XZ or XY planes, respectively. When motions deviate from said planes, the desired user input commands are not only suppressed but also distorted depending on the angular deviation of the commands. See Figure 7 for a visualization of the planar virtual fixtures used.
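A minimal sketch of such a planar guidance fixture is given below, assuming a plane through the device origin and an illustrative stiffness; the actual fixture was rendered through CHAI3D as described above.

import numpy as np

# Sketch of a planar guidance fixture: the proxy is the user position
# projected onto the active plane (assumed to pass through the device
# origin), and the rendered force is a spring toward that proxy.
K_SPRING = 200.0   # N/m (illustrative stiffness)

def fixture_force(p_user, plane_normal):
    n = plane_normal / np.linalg.norm(plane_normal)
    proxy = p_user - np.dot(p_user, n) * n    # projection onto the plane
    return K_SPRING * (proxy - p_user)        # F = k (proxy - user)

# Translation mode constrains motion to the XZ plane (normal along y);
# pivot mode uses the XY plane (normal along z).
p = np.array([0.010, 0.004, 0.020])
print(fixture_force(p, plane_normal=np.array([0.0, 1.0, 0.0])))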
The haptic device user input processing and the resultant joint-level commands for the PhantomX hexapod were executed identically in both the simulation environment and the physical implementation. More specifically, the alternating tripod gait amplitude and stride length were executed as described in Algorithms 1 and 2. Each leg within a tripod for translational movement undergoes the same joint-level commands, and tripod leg selection is maintained as either the odd- or even-enumerated legs, as shown in Figure 1a. For experiments with the haptic virtual fixtures, the kinesthetic force feedback was calculated and rendered identically in either simulation or physical implementation, namely as the planar guidance fixtures depicted in Figure 7.

3. Experimental Protocol

In this work, the proposed haptic telelocomotion was evaluated with two different user study experiments:
(I)
a simulation-based comparison of competing interface types with varying task complexity;
(II)
a physical implementation of the haptic interface, evaluating the efficacy of the virtual fixtures.

3.1. Experiment I—Telelocomotion Interface Types with Varying Traversal Complexity

3.1.1. Simulation Environment

ROS Gazebo was used to simulate the kinematics and physical collisions of the simulated PhantomX with the environment. The 3D simulated environments were generated using Autodesk Fusion 360 and were imported into ROS Gazebo. The simulation environment was built on an Ubuntu 16.04 system, and simulated contact mechanics were handled by Gazebo’s multiphysics engine. In terms of hardware, the simulation system ran a quad-core 64-bit Intel® Core™ i7-7700 at 3.6 GHz. The machine was equipped with 16 GB of 3000 MHz DDR4 system memory and an NVIDIA GeForce GTX 1070 graphics engine with 8 GB of 256-bit GDDR5 dedicated memory.

3.1.2. Experimental Task

In this experiment, subjects were tasked with navigating the hexapod through two different courses. The first course consisted of a flat surface and two ninety-degree turns, while the second consisted of a series of ascending and descending staircases and one ninety-degree turn. The task courses are shown in Figure 8. Starting lines and locomotion end-goal areas were clearly demarcated in both task courses.

3.1.3. Evaluated Operator Interface Types

In terms of operator input, three different hardware platforms were evaluated:
(i)
standard computer keyboard, K;
(ii)
standard gaming controller, J;
(iii)
Sensable PHANToM Omni 3 DOF haptic device, H.
Operator interface mode H is described in detail in Section 2.2.2, while keyboard K and controller J are commonly used alternatives in teleoperation architectures. In all cases, the hexapod was maneuvered with the alternating tripod gait.

Computer Keyboard, K

This operator interface is the most basic of the three tested, and involves just the use of a common computer keyboard. The user input mechanism relies solely on six keystrokes, as depicted in Figure 9a. With keys A and S, the operator is able to modulate step size for the predefined, alternating tripod gait sequence, and thus effectively change walking speed. Furthermore, the operator is able to translate the hexapod forward and backward with I and K respectively, and pivot left and right with J and L.
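The six-key mapping can be summarized by the following dispatch-table sketch; the handler class is a stand-in for exposition and is not the teleoperation node used in the study.

# Sketch of the six-key dispatch for interface K. GaitStub is a stand-in
# for exposition, not the teleoperation node used in the study.
class GaitStub:
    step = 1.0
    def resize(self, d): self.step = max(0.1, self.step + 0.1 * d)
    def translate(self, d): print(f"translate {d:+d} (step {self.step:.1f})")
    def pivot(self, d): print(f"pivot {d:+d}")

gait = GaitStub()
keymap = {
    'a': lambda: gait.resize(-1),     # smaller step size (slower)
    's': lambda: gait.resize(+1),     # larger step size (faster)
    'i': lambda: gait.translate(+1),  # walk forward
    'k': lambda: gait.translate(-1),  # walk backward
    'j': lambda: gait.pivot(+1),      # pivot left
    'l': lambda: gait.pivot(-1),      # pivot right
}
for key in 'siijl':                   # example keystroke sequence
    keymap[key]()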

Gaming Controller, J

This interface incorporates the use of a handheld gaming controller. The components utilized are the left and right joysticks, as depicted in Figure 9b. In this setup, the step size, or speed, is modulated by how far the user pushes the left joystick forward or backward: the further the joystick is displaced, the faster the robot walks. The pivoting motion is controlled by the right joystick. Similar controllers are used to control Endeavor’s PackBot [57] and the Auris Monarch surgical robot [58].

3.1.4. Subject Recruitment

In this study, recruitment was performed on the university campus, and subjects consisted of undergraduate students. Participants were recruited through word of mouth only, and no compensation or rewards were provided or advertised. A total of 21 subjects were tested in this within-user study, with ages ranging from 18 to 29 years (average of 20). A total of 15 male subjects and six female subjects were tested, and all subjects used computers regularly (more than 10 hours per week). Nineteen of the subjects were right-handed, and the remaining two were left-handed. No personally identifiable information was gathered, and the study thus did not require approval by the Trinity College Institutional Review Board. In Experiment I, a total of three test conditions were utilized, namely the three telelocomotion interfaces K, J, and H, and all participants were tasked with using each of the three experimental interface conditions.

3.1.5. Metrics

In this experiment, two quantitative metrics of efficiency were of interest. These were:
(i)
time taken to complete the navigation task (s)
(ii)
number of steps taken to complete the navigation task
While the measures are hypothesized to be correlated, the latter may be more indicative of energy consumed in the navigation task. Both metrics were measured for each trial starting from the first step past the starting line and ending with the first step into the goal area.

3.1.6. Procedure

Prior to the test trials, each subject was allowed up to fifteen minutes to practice each interface on an open, flat surface, ending earlier once satisfied; in all cases, the subject was satisfied prior to the fifteen-minute mark. Once the experiment began, subjects were equipped with noise-isolating ear protection to eliminate auditory distractions and cues. Each subject was asked to complete each of the two courses using all three interfaces, and each interface was used in four trials per course, i.e., 24 total trial runs per user. The trial sequence was randomized for each subject. Additionally, users were allowed to take short breaks between trials when requested and were reminded frequently that they could end the experiment at any time. A mean score across the four trials for each of the six conditions (combinations of course and interface) resulted in six data points per subject.

3.2. Experiment II—Physical Implementation

In this experiment, the modified PhantomX hexapod shown in Figure 3b was teleoperated. Users were presented with visual feedback from the Raspberry Pi Camera V2 and allowed to provide input motion commands via haptic devices, while the physical robot was placed out of view in another room.

3.2.1. Experimental Task

In this experiment, subjects were tasked with navigating the physical PhantomX hexapod through a physical staircase and obstacle traversal course. This course features a set of ascending and descending stairs, uneven surfaces, and ninety-degree turns. The staired navigation task is shown in Figure 10. The entire course length is approximately 4.5 m. The starting point, end goal, and desired path were clearly indicated. The obstacle course was constructed from 1/2-inch construction-grade plywood.

3.2.2. Experimental Conditions

Two experimental conditions for this study were evaluated:
(i)
haptic virtual fixtures disabled, D;
(ii)
haptic virtual fixtures enabled, E.
In both modes D and E, users’ motion commands with the Sensable PHANToM Omni® 3DOF haptic devices are translated to physical robot motion as described in Section 2.2.2. However, the haptic virtual fixtures described in Figure 7 provide kinesthetic force feedback only in mode E, while no force feedback is presented in mode D.

3.2.3. Subject Recruitment

In this study, recruitment was performed on the university campus, and subjects consisted of undergraduate students. Participants were recruited through word of mouth only, and no compensation or rewards were provided or advertised. A total of 10 subjects were tested in this within-user study, with ages ranging from 18 to 22 years (average of 20). A total of eight male subjects and two female subjects were tested, and all subjects used computers regularly (more than 10 hours per week). Nine of the subjects were right-handed, and one subject was left-handed. No personally identifiable information was gathered, and the study thus did not require approval by the Trinity College Institutional Review Board.

3.2.4. Metrics

The same metrics that were recorded in Experiment I were used for Experiment II, namely time to completion and number of steps to completion.

3.2.5. Procedure

Prior to the test trials, each subject was allowed up to fifteen minutes to practice both mode D and mode E on a flat surface, ending earlier once satisfied; in all cases, the subject was satisfied prior to the fifteen-minute mark. Subjects could also view the obstacle course ahead of time. Once the experiment began, subjects were equipped with noise-isolating ear protection to eliminate auditory distractions and cues. Each subject was asked to complete the stair navigation task twice with each mode. The order of trials was pre-randomized for each subject to mitigate the effects of learning. Additionally, users were allowed to take short breaks between trials when requested and were reminded frequently that they could end the experiment at any time. A mean score for each metric in each of the two experimental conditions, D and E, was reported for each participant, resulting in four data points per subject. Figure 11 depicts a high-level flowchart of the telerobotic architecture utilized in Experiment II.

4. Results

4.1. Experiment I—Telelocomotion Interface Types with Varying Traversal Complexity

Figure 12 shows distributions of scores (time and number of steps to completion) for each of the three operator interfaces and in each of the two courses.
The mean values of each metric across tasks and operator modes, in addition to p values from the omnibus non-parametric Kruskal-Wallis one-way analysis of variance, are shown in Table 1. p values < 0.05 indicate that pairwise comparisons are warranted for the corresponding row; here, each row warrants further analysis.
Raincloud plots better visualize performance drops with increasing task complexity for each operator interface, task and metric, as shown in Figure 13. Since the Kruskal-Wallis omnibus results in Table 1 indicate statistical significance in both metrics for both courses, post-hoc pairwise comparisons using Tukey Honest Significant Difference (HSD) were conducted. The results are illustrated in Table 2.
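For reference, this analysis pipeline (omnibus Kruskal-Wallis test followed by Tukey HSD pairwise comparisons) can be sketched as below; the arrays are synthetic placeholders, not the study data.

import numpy as np
from scipy import stats

# Sketch of the analysis pipeline: Kruskal-Wallis omnibus test, then
# Tukey HSD pairwise comparisons. The arrays are synthetic placeholders,
# NOT the study data.
rng = np.random.default_rng(0)
times_K = rng.normal(120, 15, size=21)   # per-subject mean times, interface K
times_J = rng.normal(110, 15, size=21)   # interface J
times_H = rng.normal(90, 10, size=21)    # interface H

h_stat, p_omnibus = stats.kruskal(times_K, times_J, times_H)
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_omnibus:.4f}")

if p_omnibus < 0.05:   # omnibus significant: pairwise comparisons warranted
    print(stats.tukey_hsd(times_K, times_J, times_H))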

4.2. Experiment II—Physical Implementation

Figure 14 shows the distribution of scores for each of the two experimental conditions, D and E, on the physical stair and obstacle course.
The mean values of each metric across both interface modes, virtual fixture disabled D and virtual fixture enabled E, in addition to the mean improvement by addition of the proposed haptic virtual fixtures, are shown below in Table 3.

5. Discussion

Teleoperation has been shown to be effective in task spaces too dangerous or otherwise non-ideal for human intervention. Furthermore, when combined with haptic feedback and real-time sensing, performance and safety can be improved. These teleoperated architectures have success in controlling robot manipulators, but are only useful if the robot is able to reach the task at hand. This becomes an issue for difficult or challenging terrain unsuitable for wheeled or tracked navigation systems. Such obstacles are frequent in disaster response or space exploration, for example.
Articulated legs, as opposed to wheels and tracks, offer practical flexibility and maneuverability to potentially traverse such terrains. The problem is real-time automation of such an unstructured task, as evidenced by the trials at the DARPA Robotics Challenge [59,60]. Instead, human-in-the-loop control may offer better performance and adaptability in uncertain task environments. This project provides a proof-of-concept implementation of such an architecture. Overcoming challenging obstacles in space exploration might be achieved with human-controlled legged walking. In another case, in lieu of risking the lives of human responders in dangerous tasks, human-controlled legged robot proxies may be able to traverse a variety of real-world terrains and execute real tasks. This potential to save lives is of particular interest to the authors, and the promising preliminary results show that haptic feedback has the potential to be useful in future interfaces for control of robot gait execution. In particular, Experiment I showed that for remote control of legged locomotion, haptic joysticks with the proposed virtual fixtures resulted in significantly better performance for traversing complex terrain compared to widely used interface types. Furthermore, Experiment II produced preliminary results that suggest performance enhancements using the proposed virtual fixtures in a physical implementation of telelocomotion to traverse a complex obstacle course. Performance was observed to be more consistently distributed across trials and users with the proposed method.
Beyond life-critical tasks, robotic devices may need to navigate very unpredictable everyday traversal tasks. For example, navigating stairs, unfinished pathways, construction sites or otherwise loose terrain may be encountered. This work thus has the potential to extend telepresence to domains where wheeled or tracked robots cannot reach.

6. Conclusions

This paper introduces and implements telelocomotion, or real-time remote operation of walking robots. While autonomous locomotion may suffice in well-known terrains, it relies on assumptions such as predictable contact mechanics. Humans, on the other hand, are adept at responding to unanticipated scenarios. Combining the maneuverability of legged robots with the high-level decision making of humans can allow robots to negotiate challenging terrain. Offline approaches, as shown in the DRC, can be used to plan footholds but do not achieve desirable robustness in overall gait execution. This exploratory work proposed a haptic-enabled telelocomotion interface. The proof-of-concept method mapped the user input configuration to the amplitude and reach of the hexapod gait. Haptic feedback in the form of a spatial virtual fixture encouraged users to remain within a 2D spatial plane, and periodic circular motion resulted in effective forward and backward motion.
The method was validated with two different user studies. The first examined three different online telelocomotion interfaces—keyboard (K), gaming controller (J), and the haptic-enabled telelocomotion interface (H)—at different simulated traversal difficulties. Despite the simple nature of interface H, results show comparable performance to modes K and J on flat surfaces, and significantly better performance on stairs. Figure 13 and Table 2 depict these results. Modes K and J show high degrees of degradation moving from the flat maze to the staired environment, while performance deviated far less using H. In the second study, the method was adapted to a physical implementation of the hexapedal robot. Users were tasked with navigating a real, physical, staired obstacle course using the interface with haptic virtual fixtures either disabled (D) or enabled (E). Results are promising and demonstrate that the planar virtual fixtures indeed reduced the time to completion and the steps required to complete the real-world telelocomotion task on average. While results are still preliminary, bean and box-whisker plots show that typical performance is more tightly grouped and consistent about the average in both metrics when the haptic virtual fixtures were enabled. Removing outliers strengthens this observation and enhances the observed improvements when using the haptic-enabled mode, E, as compared to D. Figure 14 and Table 3 depict these preliminary results. The maximum traversal speed of the PhantomX hexapod under ideal conditions is about 80 cm/s [61]. With stairs and ramps, the speed to traverse the 4.5 m course here is about seven times slower than in experiments without obstacles [62].
In this work, 3DOF haptic devices were used to telelocomote an 18DOF hexapod by constraining hexapod motion via fixed joint angles and the holistic motion of an alternating tripod gait. With a simple mapping, low-dimensional user inputs translated to high-level telelocomotion of a high-dimensional legged robot. Future work will look more closely at designing efficient mappings from low-dimensional inputs for operating kinematically dissimilar devices, and precise foot placements should be analyzed via Denavit-Hartenberg parameters. This is particularly crucial for navigating more complex terrains; in this work, no modifications to the gait or footholds were made to account for stairs. The ideal interface may depend on the degree of kinematic dissimilarity and the types of sensory data. More extensive studies can also be conducted to further evaluate performance gains along a more granular scale of terrain traversal complexity, as well as to investigate varying the mapping gains α_x, α_y (in this study, these were heuristically tuned to 10 and 9, respectively) to affect different parameters, such as speed, stride length, etc. Disaster situations can present challenging terrains unsuited for predetermined traversal strategies, and different modes of control or levels of autonomy may be best suited for each task or environment. Flat, predictable areas with high levels of autonomous confidence may be traversed with supervised autonomy, while more delicate and unpredictable scenarios may call for some combination of online human intervention or manual planning. This work provides a seminal step towards one part of this holistic solution: efficient online human control of legged locomotion. Future work will examine the incorporation of balance controllers for bipedal telelocomotion, implementation of varying task complexity with physical hardware platforms, and direct comparisons with autonomous alternatives. Improvements to the physical implementation of the hexapedal robot include the addition of contact sensing at the terminal link of each limb. Refinements to the haptic virtual fixtures may be derived from such contact information.

Author Contributions

Conceptualization, K.H. and D.C.; methodology, D.C., K.B., E.A., R.M., I.Y. and D.S.; software, D.C. and D.S.; validation, D.C., D.S., R.M. and I.Y.; formal analysis, K.H., D.C., D.S. and R.M.; investigation, I.Y. and R.M.; resources, K.H.; data curation, K.H.; writing–original draft preparation, K.H., R.M. and D.C.; writing–review and editing, K.H., D.C., K.B., R.M., I.Y., E.A. and D.S.; visualization, D.C., D.S., R.M., I.Y. and E.A.; supervision, K.H.; project administration, K.H.; funding acquisition, K.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the NASA CT Space Grant, Grant No. NNX15AI12H under the PTE Federal Award; the Trinity College Department of Engineering; and the Trinity College Faculty Research Committee.

Institutional Review Board Statement

Ethical review and approval were waived for this study, since no identifiable private information about living humans was obtained or used.

Informed Consent Statement

Verbal informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy restrictions.

Acknowledgments

The authors thank Sam Burden and Blake Hannaford from the University of Washington, Hassan Rashid and David Mauro from Trinity College, Haoyu Wang from Central Connecticut State University and Biao Zhang from IEEE CT for their technical support and guidance.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Franchi, A.; Secchi, C.; Son, H.I.; Bulthoff, H.H.; Giordano, P.R. Bilateral teleoperation of groups of mobile robots with time-varying topology. IEEE Trans. Robot. 2012, 28, 1019–1033. [Google Scholar] [CrossRef]
  2. De Barros, P.G.; Linderman, R.W. A Survey of User Interfaces for Robot Teleoperation. WPI Digital Commons 2009. [Google Scholar]
  3. Niemeyer, G.; Preusche, C.; Stramigioli, S.; Lee, D. Telerobotics. In Springer Handbook of Robotics; Springer: Cham, Switzerland, 2016; pp. 1085–1108. [Google Scholar]
  4. Mortimer, M.; Horan, B.; Seyedmahmoudian, M. Building a Relationship between Robot Characteristics and Teleoperation User Interfaces. Sensors 2017, 17, 587. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Huang, K.; Chitrakar, D.; Rydén, F.; Chizeck, H.J. Evaluation of haptic guidance virtual fixtures and 3D visualization methods in telemanipulation—A user study. Intell. Serv. Robot. 2019, 12, 289–301. [Google Scholar] [CrossRef] [Green Version]
  6. Okamura, A.M. Haptic feedback in robot-assisted minimally invasive surgery. Curr. Opin. Urol. 2009, 19, 102. [Google Scholar] [CrossRef]
  7. Bimbo, J.; Pacchierotti, C.; Aggravi, M.; Tsagarakis, N.; Prattichizzo, D. Teleoperation in cluttered environments using wearable haptic feedback. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 3401–3408. [Google Scholar]
  8. Leeper, A.; Hsiao, K.; Ciocarlie, M.; Sucan, I.; Salisbury, K. Methods for collision-free arm teleoperation in clutter using constraints from 3d sensor data. In Proceedings of the 2013 13th IEEE-RAS International Conference on Humanoid Robots (Humanoids), Atlanta, GA, USA, 15–17 October 2013; pp. 520–527. [Google Scholar]
  9. Roberts, R.; Barajas, M.; Rodriguez-Leal, E.; Gordillo, J.L. Haptic feedback and visual servoing of teleoperated unmanned aerial vehicle for obstacle awareness and avoidance. Int. J. Adv. Robot. Syst. 2017, 14, 1729881417716365. [Google Scholar] [CrossRef] [Green Version]
  10. Ni, D.; Yew, A.; Ong, S.; Nee, A. Haptic and visual augmented reality interface for programming welding robots. Adv. Manuf. 2017, 5, 191–198. [Google Scholar] [CrossRef]
  11. Gancet, J.; Urbina, D.; Letier, P.; Ilzokvitz, M.; Weiss, P.; Gauch, F.; Antonelli, G.; Indiveri, G.; Casalino, G.; Birk, A.; et al. Dexrov: Dexterous undersea inspection and maintenance in presence of communication latencies. IFAC-PapersOnLine 2015, 48, 218–223. [Google Scholar] [CrossRef]
  12. Rydén, F.; Stewart, A.; Chizeck, H.J. Advanced telerobotic underwater manipulation using virtual fixtures and haptic rendering. In Proceedings of the 2013 OCEANS-San Diego, San Diego, CA, USA, 23–27 September 2013; pp. 1–8. [Google Scholar]
  13. Okamura, A.M.; Verner, L.N.; Reiley, C.; Mahvash, M. Haptics for robot-assisted minimally invasive surgery. In Robotics Research; Springer: Berlin/Heidelberg, Germany, 2010; pp. 361–372. [Google Scholar]
  14. Su, Y.H.; Huang, I.; Huang, K.; Hannaford, B. Comparison of 3d surgical tool segmentation procedures with robot kinematics prior. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 4411–4418. [Google Scholar]
  15. Zhang, B.; Staab, H.; Wang, J.; Zhang, G.Q.; Boca, R.; Choi, S.; Fuhlbrigge, T.A.; Kock, S.; Chen, H. Teleoperated Industrial Robots. U.S. Patent 9,132,551, 15 September 2015. [Google Scholar]
  16. Peer, A.; Stanczyk, B.; Unterhinninghofen, U.; Buss, M. Tele-assembly in Wide Remote Environments. In Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Beijing, China, 9–15 October 2006. [Google Scholar]
  17. Enayati, N.; De Momi, E.; Ferrigno, G. Haptics in robot-assisted surgery: Challenges and benefits. IEEE Rev. Biomed. Eng. 2016, 9, 49–65. [Google Scholar] [CrossRef] [Green Version]
  18. Ju, Z.; Yang, C.; Li, Z.; Cheng, L.; Ma, H. Teleoperation of humanoid baxter robot using haptic feedback. In Proceedings of the 2014 International Conference on Multisensor Fusion and Information Integration for Intelligent Systems (MFI), Beijing, China, 28–29 September 2014; pp. 1–6. [Google Scholar]
  19. Costes, A.; Danieau, F.; Argelaguet-Sanz, F.; Lécuyer, A.; Guillotel, P. KinesTouch: 3D Force-Feedback Rendering for Tactile Surfaces. In International Conference on Virtual Reality and Augmented Reality; Springer: Cham, Switzerland, 2018; pp. 97–116. [Google Scholar]
  20. Su, Y.H.; Huang, K.; Hannaford, B. Real-time vision-based surgical tool segmentation with robot kinematics prior. In Proceedings of the 2018 International Symposium on Medical Robotics (ISMR), Atlanta, GA, USA, 1–3 March 2018; pp. 1–6. [Google Scholar]
  21. Huang, K.; Su, Y.H.; Khalil, M.; Melesse, D.; Mitra, R. Sampling of 3DOF Robot Manipulator Joint-Limits for Haptic Feedback. In Proceedings of the 2019 IEEE 4th International Conference on Advanced Robotics and Mechatronics (ICARM), Toyonaka, Japan, 3–5 July 2019; pp. 690–696. [Google Scholar]
  22. Huang, K.; Lancaster, P.; Smith, J.R.; Chizeck, H.J. Visionless Tele-Exploration of 3D Moving Objects. In Proceedings of the 2018 IEEE International Conference on Robotics and Biomimetics (ROBIO), Kuala Lumpur, Malaysia, 12–15 December 2018; pp. 2238–2244. [Google Scholar]
  23. Huang, K.; Jiang, L.T.; Smith, J.R.; Chizeck, H.J. Sensor-aided teleoperated grasping of transparent objects. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 4953–4959. [Google Scholar]
  24. Jiang, W.; Zheng, J.J.; Zhou, H.J.; Zhang, B.K. A new constraint-based virtual environment for haptic assembly training. Adv. Eng. Softw. 2016, 98, 58–68. [Google Scholar] [CrossRef]
  25. Diolaiti, N.; Melchiorri, C. Teleoperation of a mobile robot through haptic feedback. In Proceedings of the IEEE International Workshop HAVE Haptic Virtual Environments and Their, Ottawa, ON, Canada, 17–18 November 2002; pp. 67–72. [Google Scholar]
  26. Cislo, N. Telepresence and Intervention Robotics; Technical report; Laboratoire de Robotique de Paris: Velizy-Villacoublay, France, 2000. [Google Scholar]
  27. Miyasato, T. Tele-nursing system with realistic sensations using virtual locomotion interface. In Proceedings of the 6th ERCIM Workshop “User Interfaces for All”, Florence, Italy, 25–26 October 2000. [Google Scholar]
  28. Klamt, T.; Rodriguez, D.; Schwarz, M.; Lenz, C.; Pavlichenko, D.; Droeschel, D.; Behnke, S. Supervised autonomous locomotion and manipulation for disaster response with a centaur-like robot. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 1–8. [Google Scholar]
  29. Bellicoso, C.D.; Bjelonic, M.; Wellhausen, L.; Holtmann, K.; Günther, F.; Tranzatto, M.; Fankhauser, P.; Hutter, M. Advances in real-world applications for legged robots. J. Field Robot. 2018, 35, 1311–1326. [Google Scholar] [CrossRef]
  30. Luk, B.; Galt, S.; Cooke, D.; Hewer, N. Intelligent walking motions and control for a legged robot. In Proceedings of the 1999 European Control Conference (ECC), Karlsruhe, Germany, 31 August–3 September 1999; pp. 4756–4761. [Google Scholar]
  31. Focchi, M.; Orsolino, R.; Camurri, M.; Barasuol, V.; Mastalli, C.; Caldwell, D.G.; Semini, C. Heuristic planning for rough terrain locomotion in presence of external disturbances and variable perception quality. In Advances in Robotics Research: From Lab to Market; Springer: Cham, Switzerland, 2020; pp. 165–209. [Google Scholar]
  32. Xi, W.; Yesilevskiy, Y.; Remy, C.D. Selecting gaits for economical locomotion of legged robots. Int. J. Robot. Res. 2016, 35, 1140–1154. [Google Scholar] [CrossRef]
  33. Carpentier, J.; Mansard, N. Multicontact locomotion of legged robots. IEEE Trans. Robot. 2018, 34, 1441–1460. [Google Scholar] [CrossRef]
  34. Guizzo, E.; Ackerman, E. The hard lessons of DARPA’s robotics challenge [News]. IEEE Spectr. 2015, 52, 11–13. [Google Scholar] [CrossRef]
  35. Westervelt, E.R.; Grizzle, J.W.; Chevallereau, C.; Choi, J.H.; Morris, B. Feedback Control of Dynamic Bipedal Robot Locomotion; CRC Press: Boca Raton, FL, USA, 2007. [Google Scholar]
  36. Hong, A.; Lee, D.G.; Bülthoff, H.H.; Son, H.I. Multimodal feedback for teleoperation of multiple mobile robots in an outdoor environment. J. Multimodal User Interfaces 2017, 11, 67–80. [Google Scholar] [CrossRef]
  37. Dubey, S. Robot Locomotion—A Review. Int. J. Appl. Eng. Res. 2015, 10, 7357–7369. [Google Scholar]
  38. Johnson, M.; Shrewsbury, B.; Bertrand, S.; Wu, T.; Duran, D.; Floyd, M.; Abeles, P.; Stephen, D.; Mertins, N.; Lesman, A.; et al. Team IHMC’s lessons learned from the DARPA robotics challenge trials. J. Field Robot. 2015, 32, 192–208. [Google Scholar] [CrossRef]
  39. Yanco, H.A.; Norton, A.; Ober, W.; Shane, D.; Skinner, A.; Vice, J. Analysis of human-robot interaction at the DARPA robotics challenge trials. J. Field Robot. 2015, 32, 420–444. [Google Scholar] [CrossRef]
  40. DeDonato, M.; Dimitrov, V.; Du, R.; Giovacchini, R.; Knoedler, K.; Long, X.; Polido, F.; Gennert, M.A.; Padır, T.; Feng, S.; et al. Human-in-the-loop control of a humanoid robot for disaster response: A report from the DARPA Robotics Challenge Trials. J. Field Robot. 2015, 32, 275–292. [Google Scholar] [CrossRef]
  41. Feng, S.; Whitman, E.; Xinjilefu, X.; Atkeson, C.G. Optimization-based full body control for the DARPA robotics challenge. J. Field Robot. 2015, 32, 293–312. [Google Scholar] [CrossRef] [Green Version]
  42. Atkeson, C.G.; Babu, B.; Banerjee, N.; Berenson, D.; Bove, C.; Cui, X.; DeDonato, M.; Du, R.; Feng, S.; Franklin, P.; et al. What happened at the DARPA robotics challenge, and why. DRC Final. Spec. Issue J. Field Robot. 2016, 1. submitted. [Google Scholar]
  43. Kohlbrecher, S.; Romay, A.; Stumpf, A.; Gupta, A.; Von Stryk, O.; Bacim, F.; Bowman, D.A.; Goins, A.; Balasubramanian, R.; Conner, D.C. Human-robot teaming for rescue missions: Team ViGIR’s approach to the 2013 DARPA Robotics Challenge Trials. J. Field Robot. 2015, 32, 352–377. [Google Scholar] [CrossRef]
  44. Hopkins, M.A.; Griffin, R.J.; Leonessa, A.; Lattimer, B.Y.; Furukawa, T. Design of a compliant bipedal walking controller for the DARPA Robotics Challenge. In Proceedings of the 2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids), Seoul, Korea, 3–5 November 2015; pp. 831–837. [Google Scholar]
  45. Sitti, M.; Menciassi, A.; Ijspeert, A.J.; Low, K.H.; Kim, S. Survey and introduction to the focused section on bio-inspired mechatronics. IEEE/ASME Trans. Mechatronics 2013, 18, 409–418. [Google Scholar] [CrossRef] [Green Version]
  46. Righetti, L.; Ijspeert, A.J. Programmable central pattern generators: An application to biped locomotion control. In Proceedings of the 2006 IEEE International Conference on Robotics and Automation, ICRA 2006, Orlando, FL, USA, 15–19 May 2006; pp. 1585–1590. [Google Scholar]
  47. Lachat, D.; Crespi, A.; Ijspeert, A.J. Boxybot: A swimming and crawling fish robot controlled by a central pattern generator. In Proceedings of the The First IEEE/RAS-EMBS International Conference on Biomedical Robotics and Biomechatronics, BioRob 2006, Pisa, Italy, 20–22 February 2006; pp. 643–648. [Google Scholar]
  48. Raibert, M.; Blankespoor, K.; Nelson, G.; Playter, R. Bigdog, the rough-terrain quadruped robot. IFAC Proc. Vol. 2008, 41, 10822–10825. [Google Scholar] [CrossRef] [Green Version]
  49. Saranli, U.; Buehler, M.; Koditschek, D.E. RHex: A simple and highly mobile hexapod robot. Int. J. Robot. Res. 2001, 20, 616–631. [Google Scholar] [CrossRef] [Green Version]
  50. Boxerbaum, A.S.; Oro, J.; Quinn, R.D. Introducing DAGSI Whegs™: The latest generation of Whegs™ robots, featuring a passive-compliant body joint. In Proceedings of the 2008 IEEE International Conference on Robotics and Automation, Pasadena, CA, USA, 19–23 May 2008; pp. 1783–1784. [Google Scholar]
  51. Kim, S.; Clark, J.E.; Cutkosky, M.R. iSprawl: Design and tuning for high-speed autonomous open-loop running. Int. J. Robot. Res. 2006, 25, 903–912. [Google Scholar] [CrossRef]
  52. Zucker, M.; Ratliff, N.; Stolle, M.; Chestnutt, J.; Bagnell, J.A.; Atkeson, C.G.; Kuffner, J. Optimization and learning for rough terrain legged locomotion. Int. J. Robot. Res. 2011, 30, 175–191. [Google Scholar] [CrossRef]
  53. Hoover, A.M.; Burden, S.; Fu, X.Y.; Sastry, S.S.; Fearing, R.S. Bio-inspired design and dynamic maneuverability of a minimally actuated six-legged robot. In Proceedings of the 2010 3rd IEEE RAS & EMBS International Conference on Biomedical Robotics and Biomechatronics, Tokyo, Japan, 26–29 September 2010; pp. 869–876. [Google Scholar]
  54. Ramos, J.; Kim, S. Dynamic locomotion synchronization of bipedal robot and human operator via bilateral feedback teleoperation. Sci. Robot. 2019, 4. [Google Scholar] [CrossRef]
  55. Ramos, J.; Wang, A.; Ubellacker, W.; Mayo, J.; Kim, S. A balance feedback interface for whole-body teleoperation of a humanoid robot and implementation in the HERMES system. In Proceedings of the 2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids), Seoul, Korea, 3–5 November 2015; pp. 844–850. [Google Scholar]
  56. Ramos, J.; Kim, S. Humanoid dynamic synchronization through whole-body bilateral feedback teleoperation. IEEE Trans. Robot. 2018, 34, 953–965. [Google Scholar] [CrossRef]
  57. Wilde, G.A.; Murphy, R.R. User Interface for Unmanned Surface Vehicles Used to Rescue Drowning Victims. In Proceedings of the 2018 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), Philadelphia, PA, USA, 6–8 August 2018; pp. 1–8. [Google Scholar]
  58. Hindman, A. Robotic Bronchoscopy. Oncol. Issues 2019, 34, 16–20. [Google Scholar] [CrossRef]
  59. Atkeson, C.G.; Babu, B.P.W.; Banerjee, N.; Berenson, D.; Bove, C.P.; Cui, X.; DeDonato, M.; Du, R.; Feng, S.; Franklin, P.; et al. No falls, no resets: Reliable humanoid behavior in the DARPA robotics challenge. In Proceedings of the 2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids), Seoul, Korea, 3–5 November 2015; pp. 623–630. [Google Scholar]
  60. Guizzo, E. Rescue-robot show-down. IEEE Spectr. 2014, 51, 52–55. [Google Scholar] [CrossRef]
  61. Dupeyroux, J.; Passault, G.; Ruffier, F.; Viollet, S.; Serres, J. Hexabot: A Small 3D-Printed Six-Legged Walking Robot Designed for Desert Ant-Like Navigation Tasks. In Proceedings of the 20th World Congress of the International Federation of Automatic Control (IFAC), Toulouse, France, 9–14 July 2017; pp. 16628–16631. [Google Scholar]
  62. Chitrakar, D.; Mitra, R.; Huang, K. Haptic Interface for Hexapod Gait Execution. In Proceedings of the 2020 Fourth IEEE International Conference on Robotic Computing (IRC), Taichung, Taiwan, 9–11 November 2020; pp. 414–415. [Google Scholar]
Figure 1. Trossen Robotics PhantomX hexapod (a) leg enumeration, rendered and simulated in ROS Gazebo (b) physical implementation of hexapod robot. For alternating tripod gaits odd enumerated legs move separately from even legs.
Figure 2. Hexapod leg links and coordinate frames (a) links and joint axes labeled on simulation (b) joint coordinate frames for each leg.
Figure 3. Hexapod physical augmentations and peripherals (a) CAD rendering of physical mounts for peripherals (b) modified PhantomX platform with Jetson Nano controller, Raspberry Pi Camera V2, buck converter, U2D2, and Netis AC wireless adapter.
Figure 4. Typical visual feedback received by the operator, rendered either from (a) a simulated streaming RGB camera mounted to the robot base or (b) the Raspberry Pi Camera V2 mounted to the physical robot. In both cases, feedback is rendered on a standard LCD display.
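Since the robot is simulated in ROS Gazebo and the physical platform streams a Raspberry Pi camera, the operator-side feedback path is presumably a ROS image subscription. A minimal sketch follows; the topic and node names are assumptions, not taken from the authors' code.

```python
# Minimal operator-side sketch for receiving the camera stream in ROS 1.
# The topic and node names here are assumptions; the paper's setup may differ.
import rospy
from sensor_msgs.msg import Image

def on_image(msg):
    # Hand the frame to the operator's display pipeline (rendering omitted here).
    pass

rospy.init_node("operator_view")
rospy.Subscriber("/camera/image_raw", Image, on_image, queue_size=1)
rospy.spin()
```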
Figure 5. Haptic device to alternating tripod group mapping for forward or backward motion using the haptic operator interface. The red dot indicates the stylus position.
Figure 6. Pivoting phase is determined solely by the cursor's transitions across the boundary of the shaded region, which is defined by T_y. As the commanded position shifts from p_1 to p_2, the cursor enters the shaded region and its y position falls below T_y. When the cursor next leaves the shaded region, for example to p_3, the phase of the rotation is toggled, which alternates which set of three legs is raised.
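The phase-toggle behavior in Figure 6 amounts to a two-state machine keyed on boundary crossings. The sketch below is illustrative only; the threshold value and all names are hypothetical, not from the authors' implementation.

```python
# Illustrative sketch of the Figure 6 phase-toggle logic; the threshold
# value and all names here are hypothetical.
T_Y = 0.02  # boundary of the shaded region along y (assumed value/units)

class PivotPhaseToggler:
    def __init__(self):
        self.inside = False  # is the cursor currently inside the shaded region?
        self.phase = 0       # 0/1 selects which set of three legs is raised

    def update(self, y):
        inside_now = y < T_Y
        if self.inside and not inside_now:
            # Cursor just left the shaded region (e.g., p_2 -> p_3): toggle phase.
            self.phase ^= 1
        self.inside = inside_now
        return self.phase
```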
Figure 7. Haptic feedback is rendered as a simple spring force via the proxy method to encourage users to stay within a 2D plane: (a) for forward and backward motion; (b) for turning. The proxy location is the user position projected onto the rendered plane.
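The proxy-based spring force in Figure 7 has a standard closed form: with the proxy taken as the stylus position projected onto the fixture plane, the rendered force is F = k(proxy − position). A minimal sketch, assuming a stiffness value and plane parameterization not given in the paper:

```python
import numpy as np

K_SPRING = 200.0  # N/m; hypothetical stiffness, not from the paper

def fixture_force(user_pos, plane_point, plane_normal):
    """Spring force pulling the stylus back onto the 2D fixture plane.

    The proxy is user_pos projected onto the plane, so the force is zero
    for in-plane motion and restoring for out-of-plane motion.
    """
    n = plane_normal / np.linalg.norm(plane_normal)
    offset = float(np.dot(user_pos - plane_point, n))  # signed distance to plane
    proxy = user_pos - offset * n                      # projection onto the plane
    return K_SPRING * (proxy - user_pos)
```

For the forward/backward fixture in (a), plane_normal would presumably be the device's lateral axis; the turning fixture in (b) would render a differently oriented plane.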
Figure 8. Teleoperated locomotion tasks. (a) demonstrates a baseline case in which all useful interfaces should perform similarly, while (b) introduces challenging, non-ideal terrain. In (b), orange indicates ascending, purple indicates descending, red indicates flat, and white indicates the goal.
Figure 9. Evaluated operator interfaces to be compared against the proposed haptic interface H: (a) keyboard layout for operator interface K; (b) game controller mapping for operator interface J.
Figure 10. Obstacle course for Experiment II: (a) CAD rendering with dimensions; (b) physical realization. The goal is to start from the bottom left corner and navigate clockwise to the bottom right.
Figure 11. High-level flowchart of the telerobotic architecture for Experiment II.
Figure 12. Performance comparisons across both courses, both metrics, and all three operator interface modes: (a) time for flat; (b) time for stairs; (c) steps for flat; (d) steps for stairs. Pink indicates K, blue indicates J, gray indicates H.
Figure 13. Performance robustness to increased task complexity. The left plot shows the performance shift in time to completion; the right shows the shift in number of steps. Pink indicates K, blue indicates J, gray indicates H.
Figure 14. Performance comparisons between modes D and E for Experiment II, showing (a) the distribution of time required and (b) the number of steps needed to complete the navigation task.
Table 1. Experiment I Mean Time to Completion and Mean Steps.

| Course | Metric | K | J | H | p |
|---|---|---|---|---|---|
| Course 1 (Flat) | Time [s] | 70.655 | 47.342 | 67.265 | 5.09 × 10⁻⁹ |
| Course 1 (Flat) | Steps | 131.30 | 92.449 | 83.125 | 4.13 × 10⁻¹⁰ |
| Course 2 (Stairs) | Time [s] | 150.30 | 171.38 | 78.048 | 5.64 × 10⁻⁹ |
| Course 2 (Stairs) | Steps | 266.87 | 331.31 | 79.571 | 4.69 × 10⁻¹⁰ |
Table 2. Tukey Honest Significant Difference (HSD)-Corrected Pairwise Comparisons, Experiment I.

| Course | Metric | K-J | K-H | J-H |
|---|---|---|---|---|
| Course 1 (Flat) | Time [s] | **7.547 × 10⁻⁹** | 0.226 | **4.319 × 10⁻⁵** |
| Course 1 (Flat) | Steps | **6.009 × 10⁻⁶** | **1.7843 × 10⁻⁹** | 0.264 |
| Course 2 (Stairs) | Time [s] | 0.321 | **2.328 × 10⁻⁵** | **1.119 × 10⁻⁸** |
| Course 2 (Stairs) | Steps | 0.412 | **2.422 × 10⁻⁶** | **2.589 × 10⁻⁹** |

Bold denotes significance (p < 0.05). In the original table, green indicates that J performed better, blue indicates that H performed better, and red indicates that K performed better.
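For readers reproducing the Table 2 analysis, Tukey HSD-corrected pairwise comparisons are available in standard statistics packages. The sketch below uses placeholder data, since the study's per-trial measurements are not reproduced in this table.

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder per-trial completion times; the study's raw data are not
# published here, so these values are illustrative only.
times = {
    "K": [70.1, 72.3, 69.8, 70.4],
    "J": [47.0, 48.1, 46.9, 47.4],
    "H": [67.0, 68.2, 66.5, 67.3],
}
values = np.concatenate([np.asarray(v, dtype=float) for v in times.values()])
groups = np.concatenate([[k] * len(v) for k, v in times.items()])

# Tukey HSD controls the family-wise error rate across the K-J, K-H,
# and J-H comparisons, matching the structure of Table 2.
print(pairwise_tukeyhsd(values, groups, alpha=0.05).summary())
```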
Table 3. Experiment II Mean Time to Completion and Mean Steps.

| Metric | D (Virtual Fixtures Disabled) | E (Virtual Fixtures Enabled) | D − E (Improvement with Fixtures) |
|---|---|---|---|
| Time [s] | 141.5 | 130 | 11.5 |
| Steps | 127.7 | 124.95 | 2.75 |
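As a quick derived check (computed from the table's means, not separately reported): enabling the virtual fixtures reduced mean completion time by 11.5/141.5 ≈ 8.1% and mean steps by 2.75/127.7 ≈ 2.2%.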