Article

Latency Reduction and Packet Synchronization in Low-Resource Devices Connected by DDS Networks in Autonomous UAVs

1 Department of Aerospace Engineering, Physics and Space Sciences, Florida Institute of Technology, Melbourne, FL 32901, USA
2 Core Developer, PX4 Autopilot, 8092 Zurich, Switzerland
3 Control Systems Design and Analysis Branch, NASA Marshall Space Flight Center, Huntsville, AL 35812, USA
4 Department of Mechanical and Civil Engineering, Florida Institute of Technology, Melbourne, FL 32901, USA
* Author to whom correspondence should be addressed.
Sensors 2023, 23(22), 9269; https://doi.org/10.3390/s23229269
Submission received: 13 October 2023 / Revised: 9 November 2023 / Accepted: 16 November 2023 / Published: 18 November 2023

Abstract:
Real-time flight controllers are becoming dependent on general-purpose operating systems as the modularity and complexity of guidance, navigation, and control systems and algorithms increase. The non-deterministic nature of operating systems creates a critical weakness in the development of motion control systems for robotic platforms due to the random delays introduced by operating systems and communication networks. The high-speed operation and sensitive dynamics of UAVs demand fast and near-deterministic communication between the sensors, companion computer, and flight control unit (FCU) in order to achieve the required performance. In this paper, we present a method to assess communications latency between a companion computer and an RTOS open-source flight controller, based on an XRCE-DDS bridge between clients hosted in the low-resource environment and the DDS network used by ROS 2. A comparison based on the measured statistics of latency illustrates the advantages of XRCE-DDS compared to the standard communication method based on MAVROS-MAVLink. More importantly, an algorithm to estimate latency offset and clock skew based on an exponential moving average filter is presented, providing a tool for latency estimation and correction that can be used by developers to improve synchronization of processes that rely on timely communication between the FCU and companion computer, such as synchronization of lower-level sensor data at the higher-level layer. This addresses the challenges introduced in GNC applications by the non-deterministic nature of general-purpose operating systems and the inherent limitations of standard flight controller hardware.

1. Introduction

The growing interest in Autonomous Aerial and Underwater Vehicles (AAUVs) is rapidly reshaping technologies that support new applications and improve performance. The extensive range of applications and increasing complexity of missions are prompting the development of advanced real-time flight controllers capable of integrating guidance, navigation, and control (GNC) algorithms to reliably execute increasingly complex tasks.
State-of-the-art systems use a divided architecture to address the computational limitations of standard flight control units (FCUs), separating tasks between low-level and higher-level layers [1]. This allows computationally intensive GNC functions to be handled by the higher-level layers, which rely on companion computers that are typically more powerful than the low-level FCU. While this division supports mission goals, it introduces a critical drawback: high-level layers often use general-purpose operating systems known for their non-deterministic time management and unpredictable delays [2]. This creates challenges in developing high-performance GNC solutions, as random latencies introduced by the operating system and interface networks can affect the overall performance of motion control systems [1,2,3,4].
The development of autonomous robotic platforms emphasizes software modularity and integration with widely used frameworks. This study presents an innovative method for reducing latency in packet synchronization between a companion computer and a Real-Time Operating System (RTOS) flight controller (a device notably constrained in terms of computational resources) based on widely available open-source tools: a DDS network in the Robot Operating System 2 (ROS 2) environment, the PX4 FCU, and the eXtremely Resource Constrained Environment Data Distribution Service (MicroXRCE-DDS) as an intermediary agent between system-level layers. Latency measurements for the proposed method are compared against results from the same flight controller software, PX4, with the well-known open-source Micro Air Vehicle Link (MAVlink) protocol, the Robot Operating System (ROS), and MAVROS, a communication bridge between MAVlink and ROS [1,5,6,7,8].
In summary, this paper presents a method to assess latency in communications between the flight controller and companion computer. A comparison based on the measured latency statistics illustrates the advantages of XRCE-DDS compared to standard communication methods based on MAVROS-MAVLink. More importantly, an algorithm to estimate latency offset and clock skew based on an exponential moving average filter is presented, providing a tool for latency estimation and correction that can be used by developers to improve synchronization of processes that rely on timely communication between the FCU and companion computer, such as synchronization of lower-level sensor data at the higher-level layer. This addresses the challenges introduced in GNC applications by the non-deterministic nature of general-purpose operating systems and the inherent limitations of standard flight controller hardware.

2. Materials and Methods

2.1. Flight Controller Unit

The flight control unit used in this study was the Holybro Pixhawk 4, running a slightly modified PX4 firmware version 1.14.0 [7]. The FCU uses two ARM processors: an STM32F765 as the main CPU and an STM32F100 as the IO processor. The internal sensors include an inertial measurement unit (IMU), magnetometer, and barometer, which can be fused with other non-deterministic and deterministic sensors for state estimation during operation. Communication between the FCU and the high-level layer was implemented using an FCU UART port at 921,600 bps.

2.2. PX4 Firmware

PX4 works through a topic publish/subscribe framework called uORB that uses internal message definitions. This format allows the user to declare custom uORB definitions to be used internally in the FCU, and enables all uORB definitions to be exported for use in a ROS/ROS 2 package. While PX4 can accept MAVlink messages, these have to be converted to uORB in order to properly interface with critical internal functionalities; custom modifications to MAVlink require substantial user effort [7].

2.3. ROS/ROS 2

In robotic systems with divided architectures, the two most popular frameworks are the Robot Operating System (ROS) and Robot Operating System 2 (ROS 2). Although similar in name, their internal publish/subscribe models use fundamentally different protocols. ROS uses a custom message queuing protocol similar to the Advanced Message Queuing Protocol (AMQP), in which two types of entities are used: ROS nodes and a ROS Master. A ROS node is a computational entity that can communicate with other nodes, publishing and subscribing to topics to transmit and receive data. The ROS Master, launched by the roscore process, acts as the central hub of the ROS ecosystem, keeping a registry of active nodes, services, and topics to ease communication between nodes, i.e., discovery registration. Multiple ROS nodes can be executed at the same time and registered to a single ROS Master; however, cooperation between multiple ROS Masters is not natively supported by ROS [9]. In contrast, ROS 2 implements a decentralized architecture using the Data Distribution Service (DDS), which eliminates the need for a ROS Master and automates discovery registration. ROS 2 nodes can transmit and receive data without central coordination, with the direct result that ROS 2 incurs less latency than ROS. ROS 2 supports several node types (regular, real-time, intra-process, composable, and lifecycle nodes), enabling cooperation between different layers of a system as long as the ROS 2 nodes of interest are under the same DDS domain [10,11].
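To make the contrast concrete, the following minimal rclpy sketch (node and topic names are illustrative, not taken from the paper's implementation) publishes a topic with no ROS Master or other central coordination; DDS discovery automatically matches it with any subscriber in the same DDS domain:

import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class Talker(Node):
    def __init__(self):
        super().__init__('talker')
        # No ROS Master is involved: the DDS middleware discovers
        # matching endpoints peer-to-peer within the DDS domain.
        self.pub = self.create_publisher(String, 'chatter', 10)
        self.timer = self.create_timer(0.5, self.tick)  # publish at 2 Hz

    def tick(self):
        msg = String()
        msg.data = 'hello'
        self.pub.publish(msg)

def main():
    rclpy.init()
    node = Talker()
    rclpy.spin(node)  # runs until interrupted
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()

Any number of such nodes can be started in any order; discovery registration happens automatically within the DDS domain.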

2.4. MAVlink

MAVlink is an open-source messaging protocol widely used by the UAV industry. It uses a header-only message marshaling library optimized for low-resource environments. The implementation of custom messages in MAVlink is a demanding task, in particular when integrating ROS/ROS 2 routines. Custom implementations require changes to the FCU’s source code as well as modifications to the communications bridge used to transmit and translate messages between the ROS/ROS 2 nodes and the FCU [8,10,11,12].
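Exchanging standard (as opposed to custom) MAVlink messages is comparatively simple. As a hedged illustration, the following pymavlink sketch sends the standard TIMESYNC message and computes an observed offset from the reply, in the spirit of the synchronization mechanism described in Section 2.9; the connection string is an assumption for illustration:

import time
from pymavlink import mavutil

# Assumed endpoint; adjust to the actual serial port or UDP link in use.
master = mavutil.mavlink_connection('udp:127.0.0.1:14550')
master.wait_heartbeat()  # block until the FCU is heard

# TIMESYNC request: tc1 = 0 asks the remote side to fill in its own clock.
ts1 = int(time.monotonic() * 1e9)  # local timestamp in nanoseconds
master.mav.timesync_send(tc1=0, ts1=ts1)

reply = master.recv_match(type='TIMESYNC', blocking=True, timeout=1.0)
if reply is not None and reply.tc1 != 0:
    now = int(time.monotonic() * 1e9)
    rtt_ns = now - reply.ts1                            # round-trip time
    offset_ns = (reply.ts1 + now - 2 * reply.tc1) // 2  # cf. Equation (1)
    print(f'RTT: {rtt_ns} ns, observed offset: {offset_ns} ns')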

2.5. MAVROS

MAVROS is the most commonly used open-source bridge between ROS and MAVlink, converting ROS messages to their MAVlink equivalents. The MAVROS source code has to be modified to implement custom MAVlink messages; even a simple message addition requires writing a MAVROS extras plugin, a complex set of tasks. Although the MAVROS team has announced their intention to support ROS 2 in the future, the majority of current UAV applications using MAVlink coupled with ROS/ROS 2 use the ROS-compatible MAVROS version. MAVROS could be used with ROS 2 through a ROS–ROS 2 bridge, although this would increase computational effort and latency, as ROS and ROS 2 use different communication architectures [13].

2.6. MicroXRCE-DDS

MicroXRCE-DDS (Figure 1) is a protocol that allows low-resource devices to be integrated with a DDS network while maintaining real-time and reliable data distribution capabilities through the Real-Time Publish–Subscribe (RTPS) protocol. In this investigation, it is used as a ROS 2 middleware to enable nodes running on the FCU to communicate with nodes running on the companion computer [2,14,15].

2.7. Companion Computer

The companion computer is an auxiliary computer running ROS/ROS 2 nodes to support the required mission functionality. It communicates with the FCU using an FTDI (USB-to-UART) cable. For this investigation, the companion computer was a Raspberry Pi 4 with 8 GB of RAM.

2.8. Host Computer

The host was implemented on an x86 computer with an Intel Xeon(R) Gold 6148 CPU running Ubuntu 22.04 with Gazebo 11 and QGroundControl. The host was used as the terminal for the FCU and as a virtual environment to emulate the PX4 internal sensor data [16,17].

2.9. Latency Assessment

Latency assessment was implemented via hardware-in-the-loop simulation, and consisted of measuring the time offset between transmitted and received messages on both the FCU side and a high-frequency ROS or ROS 2 node. The forward path consisted of messages sent from the companion computer, simulating data from an external vision sensor to be fused at the PX4's Extended Kalman Filter (EKF2); latencies were collected when the message was parsed into the estimator module in the FCU. The reverse path consisted of messages sent from the FCU to the ROS or ROS 2 node containing raw measurements from the FCU's IMU. The choice of messages relates to typical GNC applications such as SLAM or Visual Inertial Odometry, in which rapid IMU feedback and fast state estimate transmission are crucial for mission performance. The messages were custom-modified to carry the same payload size of 115 bytes.
Figure 2 outlines the PX4’s EKF2 algorithm [18]. The top block shows the main estimator, which uses a ring buffer to account for different sensor sampling frequencies and predict the states in a delayed horizon. The second block is the output predictor, which uses corrected high-frequency IMU measurements for quick state prediction and UAV rate control. Pseudocode and diagrams outlining communications between higher-level and lower-level processors are presented for both scenarios in Algorithms 1 and 2, and in Figure 3 and Figure 4.
Algorithm 1 Latency Test Node in rospy (ROS)
  • Class LatencyTest:
  •    Initialize variables
  •    Initialize publisher and subscriber
  •    Initialize timer
  •
  • function Initialize variables
  •     time_packet_creation ← empty deque of max length 2
  •     offset_estimated, N, high_dev_counter, high_rtt_counter ← 0
  •     alpha, beta ← 0.05
  •     skew ← 0
  •     convergence_window ← 500
  •
  • function Initialize publisher and subscriber
  •     Subscribe to 'mavros/imu/data_raw'
  •     Publish to 'mavros/vision_pose/pose'
  •
  • function Initialize timer
  •     Set timer frequency to 200 Hz
  •
  • function Sensor callback(msg)
  •     Calculate imu_timestamp, current_time
  •     Obtain imu_time_offset_observed, rtt
  •     Update offset and counters based on rtt and imu_time_offset_observed
  •     Write data to log.txt
  •
  • function Update offset and counters(rtt, imu_time_offset_observed)
  •     if rtt < 10,000 then
  •         Update alpha, beta, and offset using imu_time_offset_observed
  •     else
  •         high_rtt_counter ← high_rtt_counter + 1
  •
  • function Cmdloop callback(event)
  •     Create and publish vio_msg
  •     Append vio_msg.header.stamp to time_packet_creation
  •
  • function Reset filter
  •     N ← 0
  •     Offset_Estimated ← 0
  •     Skew_Estimated ← 0
  •     α̂ ← α_max
  •     β̂ ← β_max
  •     high_deviation_count ← 0
  •     high_rtt_count ← 0
Algorithm 2 Latency Test Node in RCLPY (ROS 2)
  • Class LatencyTest extends Node:
  •    Initialize variables
  •    Initialize publisher and subscriber
  •    Initialize timer
  •
  • function Initialize variables
  •     time_packet_creation ← empty deque of max length 2
  •     offset_estimated, N, high_dev_counter, high_rtt_counter ← 0
  •     alpha, beta ← 0.05
  •     skew ← 0
  •     convergence_window ← 500
  •
  • function Initialize publisher and subscriber
  •     Create a subscription to '/fmu/out/vehicle_imu'
  •     Create a publisher to '/fmu/in/vehicle_visual_odometry'
  •
  • function Initialize timer
  •     Set timer frequency to 200 Hz
  •
  • function Sensor callback(msg)
  •     Calculate imu_timestamp, current_time
  •     Calculate imu_time_offset_observed, rtt
  •     Update offset and counters based on rtt and imu_time_offset_observed
  •     Write data to "time_offsets.txt"
  •
  • function Update offset and counters(rtt, imu_time_offset_observed)
  •     if rtt < 10,000 then
  •         Update alpha, beta, and offset using imu_time_offset_observed
  •     else
  •         high_rtt_counter ← high_rtt_counter + 1
  •
  • function Cmdloop callback
  •     Create and publish vio_msg
  •     Append vio_msg.timestamp_sample to time_packet_creation
  •
  • function Reset filter
  •     N ← 0
  •     Offset_Estimated ← 0
  •     Skew_Estimated ← 0
  •     α̂ ← α_max
  •     β̂ ← β_max
  •     high_deviation_count ← 0
  •     high_rtt_count ← 0
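For reference, the following is a condensed, runnable sketch of Algorithm 2 in rclpy. It assumes the px4_msgs package provides the VehicleImu and VehicleOdometry definitions used here (the paper's IMU topic uses a custom uORB export), and it keeps only the observed offset/RTT bookkeeping; the filtering itself is treated separately in Section 2.9:

from collections import deque
import rclpy
from rclpy.node import Node
from rclpy.qos import qos_profile_sensor_data
from px4_msgs.msg import VehicleImu, VehicleOdometry

class LatencyTest(Node):
    def __init__(self):
        super().__init__('latency_test')
        self.time_packet_creation = deque(maxlen=2)
        # PX4 publishes with best-effort reliability, so use a sensor QoS.
        self.sub = self.create_subscription(
            VehicleImu, '/fmu/out/vehicle_imu',
            self.sensor_callback, qos_profile_sensor_data)
        self.pub = self.create_publisher(
            VehicleOdometry, '/fmu/in/vehicle_visual_odometry', 10)
        self.timer = self.create_timer(1.0 / 200.0, self.cmdloop_callback)

    def sensor_callback(self, msg):
        now_us = self.get_clock().now().nanoseconds // 1000
        if self.time_packet_creation:
            # Equation (1): observed offset from the stored creation time,
            # the FCU sample stamp, and the current companion time.
            t_create = self.time_packet_creation[0]
            offset_obs = (t_create + now_us - 2 * msg.timestamp_sample) / 2
            rtt = now_us - t_create
            self.get_logger().info(f'offset {offset_obs:.0f} us, rtt {rtt} us')

    def cmdloop_callback(self):
        vio = VehicleOdometry()
        vio.timestamp_sample = self.get_clock().now().nanoseconds // 1000
        self.time_packet_creation.append(vio.timestamp_sample)
        self.pub.publish(vio)

def main():
    rclpy.init()
    rclpy.spin(LatencyTest())
    rclpy.shutdown()

if __name__ == '__main__':
    main()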
The ROS messages in Table 1 directly translate into the uORB messages shown in Table 2. The red and blue lines in Figure 3 and Figure 4 show the message flow (IMU and VIO) and modules involved. Interpreting the time offset results requires understanding how the FCU firmware and the ROS/ROS 2 nodes estimate the time offset in each message, including how both lower-level and higher-level system time bases can be aligned and how the communication delay is quantified. The time offset is defined as the difference between two clock readings:
$$\mathrm{Offset}_{\mathrm{Observed}}(i) = \frac{T_{\mathrm{Packet\,Creation}}(i) + T_{\mathrm{Current}}(i) - 2\,T_{\mathrm{Remote\,Stamp}}(i)}{2} \qquad (1)$$
where $T_{\mathrm{Packet\,Creation}}$ is the time when the packet containing the uORB or MAVlink message was created, that is, the time when information originating from the ROS/ROS 2 node or FCU was serialized and sent; $T_{\mathrm{Remote\,Stamp}}$ is the time when the message was received and sent back from the remote level (either the higher-level or lower-level system layer); and $T_{\mathrm{Current}}$ is the current system time. Clock skew is defined as the difference in the register update rate (loop rate) between the FCU and the companion computer. The offset and clock skew are estimated using an exponential moving average filter, as described in [19,20]:
$$\mathrm{Offset}_{\mathrm{Estimated}}(i) = \alpha\,\mathrm{Offset}_{\mathrm{Observed}}(i) + (1-\alpha)\left(\mathrm{Offset}_{\mathrm{Estimated}}(i-1) + \mathrm{Skew}_{\mathrm{Estimated}}(i-1)\right) \qquad (2)$$
$$\mathrm{Skew}_{\mathrm{Estimated}}(i) = \beta\left(\mathrm{Offset}_{\mathrm{Estimated}}(i) - \mathrm{Offset}_{\mathrm{Estimated}}(i-1)\right) + (1-\beta)\,\mathrm{Skew}_{\mathrm{Estimated}}(i-1) \qquad (3)$$
where $\alpha$ and $\beta$ are the filter gains for the offset and skew, respectively. To check convergence of the estimated offset, the message round-trip time is obtained and compared against a maximum threshold:
$$T_{\mathrm{RTT}}(i) = T_{\mathrm{Current}}(i) - T_{\mathrm{Packet\,Creation}}(i) < 10~\mathrm{ms} \qquad (4)$$
If Equation (4) is true, the deviation between the estimated offset and the latest observed offset is compared against a maximum threshold:
$$\left|\mathrm{Offset}_{\mathrm{Estimated}}(i) - \mathrm{Offset}_{\mathrm{Observed}}(i+1)\right| < 100~\mathrm{ms} \qquad (5)$$
If Equation (5) holds, the statistical quality of $\alpha$ and $\beta$ is assessed for each estimated offset by counting the number of times the expressions used to determine $\alpha$ and $\beta$ have been applied in the code:
$$N \geq \mathrm{Convergence\ Window} = 500 \Rightarrow \mathrm{Converged} \qquad (6)$$
If Equations (4) and (5) hold while Equation (6) fails (i.e., the number of calls is smaller than the convergence window), then $\alpha$ and $\beta$ are corrected by interpolation:
$$p = 1.0 - \exp\left(0.5\left(1.0 - \frac{1.0}{1.0 - N/500}\right)\right) \qquad (7)$$
$$\hat{\alpha} = p\,\alpha_{\min} + (1.0 - p)\,\alpha_{\max} \qquad (8)$$
$$\hat{\beta} = p\,\beta_{\min} + (1.0 - p)\,\beta_{\max} \qquad (9)$$
with $\alpha_{\max}$ and $\beta_{\max}$ set to 0.05 and $\alpha_{\min}$ and $\beta_{\min}$ set to 0.003 for convergence of the moving average filter under the tested conditions. The one-way time-synchronized latencies are then:
$$\mathrm{Latency}_1 = T_{\mathrm{Higher\text{-}level}\rightarrow\mathrm{Lower\text{-}level}} - \mathrm{Offset}_{\mathrm{Higher\text{-}level}\rightarrow\mathrm{Lower\text{-}level}} \qquad (10)$$
$$\mathrm{Latency}_2 = T_{\mathrm{Lower\text{-}level}\rightarrow\mathrm{Higher\text{-}level}} - \mathrm{Offset}_{\mathrm{Lower\text{-}level}\rightarrow\mathrm{Higher\text{-}level}} \qquad (11)$$
The respective bounds of 10 ms and 100 ms in Equations (4) and (5) are specific to the implementation in the PX4 platform, and are MAVlink defaults [12,18]. While they can be fine-tuned, this could affect the number of filter resets, in turn impacting estimation of the offset.
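To make the estimator concrete, the following Python sketch implements Equations (1)–(9) as a single filter class, mirroring the PX4 logic reproduced later as Algorithm 3. Timestamps are assumed to be in microseconds; names and structure are illustrative rather than taken verbatim from the PX4 code base.

import math

RTT_MAX_US = 10_000          # Equation (4) bound: 10 ms
DEVIATION_MAX_US = 100_000   # Equation (5) bound: 100 ms
WINDOW = 500                 # Equation (6) convergence window
ALPHA_MAX = BETA_MAX = 0.05
ALPHA_MIN = BETA_MIN = 0.003

class TimesyncFilter:
    def __init__(self):
        self.reset_filter()

    def reset_filter(self):
        self.n = 0
        self.offset_est = 0.0
        self.skew_est = 0.0
        self.alpha = ALPHA_MAX
        self.beta = BETA_MAX
        self.high_deviation_count = 0
        self.high_rtt_count = 0

    def converged(self) -> bool:
        return self.n >= WINDOW  # Equation (6)

    def update(self, t_current, t_remote_stamp, t_packet_creation):
        if t_remote_stamp <= 0:
            return
        # Equation (1): observed offset from one round trip.
        offset_obs = (t_packet_creation + t_current - 2 * t_remote_stamp) / 2
        rtt = t_current - t_packet_creation            # Equation (4)
        deviation = abs(self.offset_est - offset_obs)  # Equation (5)
        if rtt >= RTT_MAX_US:
            self.high_rtt_count += 1
            return
        if self.converged() and deviation > DEVIATION_MAX_US:
            self.high_deviation_count += 1
            if self.high_deviation_count > 5:
                self.reset_filter()
            return
        if not self.converged():
            # Equations (7)-(9): interpolate the gains during convergence.
            progress = self.n / WINDOW
            p = 1.0 - math.exp(0.5 * (1.0 - 1.0 / (1.0 - progress)))
            self.alpha = p * ALPHA_MIN + (1.0 - p) * ALPHA_MAX
            self.beta = p * BETA_MIN + (1.0 - p) * BETA_MAX
        else:
            self.alpha, self.beta = ALPHA_MIN, BETA_MIN
        self.add_sample(offset_obs)
        self.n += 1
        self.high_deviation_count = 0
        self.high_rtt_count = 0

    def add_sample(self, offset_obs):
        prev = self.offset_est
        if self.n == 0:
            self.offset_est = offset_obs
        else:
            # Equations (2) and (3): exponential moving average with skew.
            self.offset_est = (self.alpha * offset_obs
                               + (1 - self.alpha) * (prev + self.skew_est))
            self.skew_est = (self.beta * (self.offset_est - prev)
                             + (1 - self.beta) * self.skew_est)

Feeding update() once per received message reproduces the gain schedule of Equations (7)–(9) over the first 500 samples and the steady-state gains thereafter.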

2.10. Experimental Setup

A hardware-in-the-loop (HIL) setup was used for the latency assessment (Figure 5), consisting of a Pixhawk 4 FMU-V5 flight controller (lower-level system), a companion computer (higher-level system), and a HIL simulation host computer.

3. Results

3.1. Latency Comparison in the Flight Control Unit

The experiment sequence is described in detail in Section 2.9. After collecting time stamps from the Extended Kalman Filter (EKF2) module, the advantages of DDS network communication in UAVs become quite clear. A reduction in average latency is found; more importantly, latency peaks are reduced when using the MicroXRCE-DDS bridge, which enables more accurate delay prediction for external sensor data fusion. Figure 6 and Figure 7 show results from a stress test at maximum companion computer CPU usage, with the ROS 2 or ROS process prioritized, in order to check the latency difference when transmitting an external odometry message. In this scenario, the latency is defined as the time elapsed between message creation at the ROS/ROS 2 node and message arrival at the EKF2 module, including the time synchronization process at the FCU, i.e., Algorithm 3 and Equation (10). Figure 8 shows the consequence of transmitting and receiving high-frequency topics on a low-resource device coupled to a DDS network degraded by high companion computer CPU usage, namely latency peaks, which can usually be mitigated using ROS 2 Quality of Service settings [21]. For all assessments using the MicroXRCE-DDS–ROS 2 bridge, the publishers and subscribers used the following configuration settings (a minimal rclpy sketch of this profile follows the list):
  • Reliability: BEST_EFFORT; the publisher attempts to deliver the maximum number of samples possible.
  • History and Queue Size: KEEP_LAST; only one message is stored in the processing queue.
  • Durability: TRANSIENT_LOCAL; the publishers are responsible for sending the last available message to newly discovered subscribers.
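Under the assumption that the companion-side nodes are written with rclpy, these settings translate into the following QoS profile, which is then passed to create_publisher and create_subscription:

from rclpy.qos import (QoSProfile, QoSReliabilityPolicy,
                       QoSHistoryPolicy, QoSDurabilityPolicy)

px4_qos = QoSProfile(
    reliability=QoSReliabilityPolicy.BEST_EFFORT,    # no retransmissions
    history=QoSHistoryPolicy.KEEP_LAST,
    depth=1,                                         # queue of one message
    durability=QoSDurabilityPolicy.TRANSIENT_LOCAL)  # replay last sample to late joiners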
Algorithm 3 Time Synchronization Algorithm (PX4 uORB/MAVlink [12,18])
1:  N ← 0
2:  Offset_Estimated ← 0
3:  Skew_Estimated ← 0
4:  α̂ ← α_max                    ▹ Initialize α̂ as α_max
5:  β̂ ← β_max                    ▹ Initialize β̂ as β_max
6:  high_deviation_count ← 0
7:  high_rtt_count ← 0
8:
9:  procedure update(T_Current, T_Remote_Stamp, T_Packet_Creation)
10:     if T_Remote_Stamp > 0 then
11:         Offset_Observed ← (T_Packet_Creation + T_Current − 2 × T_Remote_Stamp) / 2
12:         T_RTT ← T_Current − T_Packet_Creation
13:         deviation ← |Offset_Estimated − Offset_Observed|
14:         if T_RTT < 10 ms then
15:             if est_sync_converged() ∧ (deviation > 100 ms) then
16:                 high_deviation_count ← high_deviation_count + 1
17:                 if high_deviation_count > 5 then
18:                     reset_filter
19:             else
20:                 if not est_sync_converged() then
21:                     progress ← N / 500
22:                     p ← 1 − exp(0.5 × (1 − 1 / (1 − progress)))
23:                     α̂ ← p × α_min + (1 − p) × α_max
24:                     β̂ ← p × β_min + (1 − p) × β_max
25:                 else
26:                     α̂ ← α_min
27:                     β̂ ← β_min
28:                 add_sample(Offset_Observed)
29:                 N ← N + 1
30:                 high_deviation_count ← 0
31:                 high_rtt_count ← 0
32:         else
33:             high_rtt_count ← high_rtt_count + 1
34:
35: procedure add_sample(Offset_Observed)
36:     Offset_Estimated_prev ← Offset_Estimated
37:     if N == 0 then
38:         Offset_Estimated ← Offset_Observed
39:     else
40:         Offset_Estimated ← α̂ × Offset_Observed + (1 − α̂) × (Offset_Estimated_prev + Skew_Estimated)
41:         Skew_Estimated ← β̂ × (Offset_Estimated − Offset_Estimated_prev) + (1 − β̂) × Skew_Estimated
42:
43: procedure reset_filter
44:     N ← 0
45:     Offset_Estimated ← 0
46:     Skew_Estimated ← 0
47:     α̂ ← α_max
48:     β̂ ← β_max
49:     high_deviation_count ← 0
50:     high_rtt_count ← 0

3.2. Latency Comparison at the Companion Computer

The latency measurements at the companion computer follow the logic shown in Algorithms 1–3. Figure 9 shows the latency, measured as the time elapsed between an IMU sample and message arrival at the ROS/ROS 2 node, including the time synchronization correction at the companion computer, i.e., the estimated offset from Equation (2) after convergence is reached. Figure 10 shows the corresponding probability distribution. The messages are time-synchronized at the same node where the latency is assessed, as can be seen by comparing Figure 9 to Figure 6. There is a 349.01 µs difference in average latency between the MAVROS–ROS and XRCE-DDS architectures.

3.3. Flight Controller CPU and RAM Utilization

The effect of the MicroXRCE-DDS–ROS 2 bridge implementation on FCU CPU and RAM usage is significant, and can provide insights into the minimum hardware requirements of future UAV missions. In this study, both the MAVROS–ROS and MicroXRCE-DDS bridges were deployed using only the topics listed in Table 1 and Table 2. The CPU demand decreases when using the MicroXRCE-DDS–ROS 2 bridge. However, this is related to the capabilities of the chosen FCU hardware (in this case, the PX4 FCU V5, a popular flight control unit) and to the number and loop rate of the topics transmitted between the FCU and companion computer. The jump in CPU/RAM usage shown below (Figure 11) corresponds to the beginning of operations, i.e., the start of the Gazebo simulator; a second jump can occur if the MicroXRCE-DDS bridge is not active at the beginning of operations or is not yet running at full capacity, i.e., when the ROS/ROS 2 routines have not yet started.

4. Discussion

4.1. Latency Reduction and Time Synchronization for Enhanced GNC in UAVs

This study highlights a crucial aspect of software development for complex UAV missions: the inherent latency in multi-layer system architectures. The results presented (Figure 6 and Figure 9) illustrate that the XRCE-DDS–ROS 2 bridge is not only an effective method to reduce communication latency; it is a cornerstone for the development of enhanced GNC algorithms, whose performance relies on timely and reliable data exchange [22]. The proposed approach paves the way for synchronization corrections of internal and external sensor data within UAV systems, enabling higher performance through more accurate timing. The latency correction approach boils down to the estimation of the offset and clock skew, as described in Equations (2) and (3), respectively. Note that time synchronization is not symmetrical; the latency in each direction needs to be corrected separately.

4.2. Time Synchronization Effects in UAV High-Level Layer (Companion Computer)

Synchronization is a fundamental aspect of real-time systems. A lack of accurate timekeeping introduces jitter in control signals and degrades the performance of sensor fusion estimation. The proposed synchronization approach allows ROS or ROS 2 nodes to combine time-synchronized sensor data from a low-resource device with sensor data at the higher-level layer (e.g., a camera connected to the companion computer), which enables improved sensor fusion at the companion level for operations such as Visual Inertial Odometry (VIO) and SLAM, thereby enhancing loosely coupled distributed UAV systems [3,4,23]. Implementing Algorithm 1 or Algorithm 2 in the high-level layer accounts for asymmetrical network paths and processing delays up to data arrival at the GNC ROS/ROS 2 node by using reliable and updated offset and clock skew estimates, as opposed to assuming a symmetrical network path with time synchronization at the communication bridge. Furthermore, the proposed implementation synchronizes the sample time instead of the message header time, i.e., the timestamp when the measurements (e.g., IMU or VIO) were collected, as opposed to the timestamp when the message was sent from the FCU to the companion computer (or vice versa). A particular case of interest is where $T_{\mathrm{Remote\,Stamp}} = T_{\mathrm{Sensor\,Sample}}$.
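As a brief, hypothetical sketch of this correction (all names and numbers are illustrative), a converged offset estimate maps an FCU-stamped IMU sample into the companion computer's time base so it can be aligned with a camera frame stamped by the companion clock:

def to_companion_time(t_sample_fcu_us: int, offset_est_us: float) -> float:
    """Map a lower-level sample timestamp into the higher-level time base."""
    # With the sign convention of Equation (1) as measured at the companion,
    # the offset is roughly (companion clock) - (FCU clock), so adding it
    # shifts an FCU timestamp into companion time (cf. Equation (10)).
    return t_sample_fcu_us + offset_est_us

# Example: align an IMU sample with a camera frame before VIO fusion.
imu_companion_us = to_companion_time(1_250_000, offset_est_us=8_400.0)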

4.3. Event-Driven Communication between Flight Controller and Companion Computer

The modifications implemented in the PX4 firmware to support this study include a complete event-driven MicroXRCE-DDS–ROS 2 bridge, which allows incoming and outgoing messages to be consumed by requesting processes as soon as they are available. A direct consequence of this is a reduction in the latency’s standard deviation, increasing offset predictability and lowering end-to-end delays.
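A conceptual sketch of the difference (not the PX4 implementation): with an event-driven hand-off the consumer wakes as soon as a message arrives, whereas a polled hand-off adds up to one polling interval to the end-to-end delay and raises its variance:

import queue
import time

inbox: "queue.Queue[bytes]" = queue.Queue()

def handle(msg: bytes) -> None:
    pass  # placeholder for parsing/dispatch

def consume_event_driven() -> None:
    while True:
        msg = inbox.get()  # blocks; wakes immediately on arrival
        handle(msg)

def consume_polled(poll_interval_s: float = 0.005) -> None:
    while True:
        try:
            handle(inbox.get_nowait())
        except queue.Empty:
            time.sleep(poll_interval_s)  # adds up to 5 ms of latency jitter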

4.4. Trade-Offs of Using DDS Networks in UAV Systems

The decentralized nature of a DDS network improves fault tolerance and the system's resilience against data outliers. Furthermore, robust security features can be employed in DDS (DDS-Security). Security enhancements in ROS 2 include authentication of nodes joining the DDS domain and encryption of data transmitted through ROS 2 topics. DDS enables the deployment of companion computers that use real-time operating systems, with the potential to further reduce latency and improve synchronization [9,11,24,25].
On the other hand, even when coupled with MicroXRCE-DDS, DDS networks can cause spikes in the FCU's CPU and RAM utilization if the number of topics or the size of the messages transmitted and received is not properly monitored. This increase is only significant when the FCU publishes and subscribes to a large number of topics (10+), which is not common in UAV missions; nevertheless, it should be carefully monitored, as routine complexity and scalability are related to this issue.

4.5. Current Limitations of MAVlink and MAVROS

The MAVlink protocol, although widely used, is expected to become unsuitable in the future as an internal communication protocol between FCUs and companion computers. Message types are constantly evolving as new algorithms, sensors, and data acquisition technologies are developed. Adding a new MAVlink message and streaming it to and from the FCU using MAVROS is a demanding process that requires extensive knowledge of the MAVlink and MAVROS libraries. ROS depends on a central node that acts as a look-up table for the network's nodes, which further increases latency compared to ROS 2, where nodes are discovered automatically. The PX4 FCU firmware uses one type of message for internal communications, uORB, which can be used directly in ROS 2 nodes. In contrast, MAVROS-ROS uses MAVlink messages that need to be converted to ROS messages on the companion computer side and to uORB on the FCU side.
The results presented here (Table 3) clearly show the increased latency created by the interface layer that converts MAVlink messages to the internal communication protocol in the PX4 FCU. Even with a MAVROS version capable of working in the ROS 2 framework, the delay created by the interface layer remains an issue for future UAV applications. MAVlink messages will continue to be used in several FCU applications, in particular those with well-established functionalities such as radio transmission of telemetry packets. That said, the MAVlink community should continue to develop alternative communication solutions that can be used with uORB [12].

5. Conclusions

Our analysis and testing results show that MicroXRCE-DDS–ROS 2 is a better option as a communication bridge between a high-level companion computer and a low-resource FCU than the MAVROS–ROS bridge, offering lower communication latency and operation closer to real time in GNC applications. The decentralized nature of DDS networks enables enhanced security features and risk reduction in AAUV missions. The proposed approach to time synchronization and latency correction improves performance in multi-layered AAUV systems, as it allows proper time alignment of sensor data from lower-level layers; algorithms in the higher-level layer have access to data with more accurate timestamps. This can be particularly beneficial in sensor fusion for depth and visual-inertial odometry applications, where IMU measurements from the FCU need to be time-synced with camera frames at the companion computer for improved performance. An algorithm to estimate latency offset and clock skew based on an exponential moving average filter has been presented, providing a tool for latency estimation and correction that can be used by developers to improve synchronization of processes that rely on timely communication between an FCU and companion computer, such as the synchronization of lower-level sensor data at the higher-level layer. This addresses the challenges introduced in GNC applications by the non-deterministic nature of general-purpose operating systems and the inherent limitations of standard FCU hardware.

Future Work

  • Assessment of the effect of high-level latency correction on the performance of GNC algorithms, in particular motion control.
  • Assessment of scalability effects on latency and latency correction when using denser ROS 2 routines, i.e., when more DDS topics are shared between the FCU and companion computer.
  • A latency comparison between the MicroXRCE-DDS–ROS 2 bridge and other emerging technologies, such as Zenoh-Pico–ROS 2 [26].

Author Contributions

Conceptualization, J.L.S.C., D.A., H.G., I.R.B. and J.P.I.; methodology, J.L.S.C., D.A. and H.G.; software, J.L.S.C. and D.A.; validation, J.L.S.C. and D.A.; formal analysis, J.L.S.C., D.A., H.G., I.R.B. and J.P.I.; supervision, H.G.; funding acquisition and project management, H.G., I.R.B. and J.P.I. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by NASA’s Marshall Space Flight Center, Cooperative Agreement 80NSSC21M0262, Dual-Use Technology Development, CAN 2021.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data and codes are available upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AAUVs   Autonomous Aerial and Underwater Vehicles
DDS     Data Distribution Service
EKF     Extended Kalman Filter
FCU     Flight Controller Unit
IMU     Inertial Measurement Unit
RTPS    Real-Time Publish–Subscribe
ROS     Robot Operating System
SLAM    Simultaneous Localization and Mapping
UAV     Unmanned Aerial Vehicle
HIL     Hardware-in-the-Loop

References

  1. Bigazzi, L.; Basso, M.; Boni, E.; Innocenti, G.; Pieraccini, M. A Multilevel Architecture for Autonomous UAVs. Drones 2021, 5, 55. [Google Scholar] [CrossRef]
  2. Dehnavi, S.; Goswami, D.; Koedam, M.; Nelson, A.; Goossens, K. Modeling, implementation, and analysis of XRCE-DDS applications in distributed multi-processor real-time embedded systems. In Proceedings of the 2021 Design, Automation & Test in Europe Conference & Exhibition (DATE), Grenoble, France, 1–5 February 2021; pp. 1148–1151. [Google Scholar] [CrossRef]
  3. Skog, I.; Handel, P. Time Synchronization Errors in Loosely Coupled GPS-Aided Inertial Navigation Systems. IEEE Trans. Intell. Transp. Syst. 2011, 12, 1014–1023. [Google Scholar] [CrossRef]
  4. Di Pietra, V.; Dabove, P.; Piras, M. Loosely Coupled GNSS and UWB with INS Integration for Indoor/Outdoor Pedestrian Navigation. Sensors 2020, 20, 6292. [Google Scholar] [CrossRef] [PubMed]
  5. Bigazzi, L.; Basso, M.; Gherardini, S.; Innocenti, G. Mitigating latency problems in vision-based autonomous UAVs. In Proceedings of the 2021 29th Mediterranean Conference on Control and Automation (MED), Puglia, Italy, 22–25 June 2021; pp. 1203–1208. [Google Scholar] [CrossRef]
  6. Virginio, R.; Adiprawita, W. Utilization of MAVROS for electro optical seeker subsystem in the design of ROS based architecture for autonomous guided platform. AIP Conf. Proc. 2021, 2366, 060002. [Google Scholar]
  7. Meier, L.; Honegger, D.; Pollefeys, M. PX4: A node-based multithreaded open source robotics framework for deeply embedded platforms. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 6235–6240. [Google Scholar] [CrossRef]
  8. Koubâa, A.; Allouch, A.; Alajlan, M.; Javed, Y.; Belghith, A.; Khalgui, M. Micro Air Vehicle Link (MAVlink) in a Nutshell: A Survey. IEEE Access 2019, 7, 87658–87680. [Google Scholar] [CrossRef]
  9. Quigley, M.; Conley, K.; Gerkey, B.; Faust, J.; Foote, T.; Leibs, J.; Berger, E.; Wheeler, R.; Ng, A. ROS: An open-source Robot Operating System. In Proceedings of the ICRA 2009, Kobe, Japan, 12–17 May 2009. [Google Scholar]
  10. Macenski, S.; Foote, T.; Gerkey, B.; Lalancette, C.; Woodall, W. Robot Operating System 2: Design, architecture, and uses in the wild. Sci. Robot. 2022, 7, eabm6074. [Google Scholar] [CrossRef] [PubMed]
  11. Macenski, S.; Soragna, A.; Carroll, M.; Ge, Z. Impact of ROS 2 Node Composition in Robotic Systems. IEEE Robot. Auton. Lett. (RA-L) 2023, 8, 3996–4003. [Google Scholar] [CrossRef]
  12. MAVLink: Micro Air Vehicle Communication Protocol. 2023. Available online: https://mavlink.io/en/ (accessed on 23 September 2023).
  13. MAVROS: MAVLink Extendable Communication Node for ROS. 2023. Available online: https://github.com/mavlink/mavros (accessed on 23 September 2023).
  14. Solpan, S.; Kucuk, K. DDS-XRCE Standard Performance Evaluation of Different Communication Scenarios in IoT Technologies. EAI Endorsed Trans. Internet Things 2022, 8, e1. [Google Scholar] [CrossRef]
  15. Micro XRCE-DDS: EProsima Micro XRCE-DDS Documentation. 2023. Available online: https://micro-xrce-dds.docs.eprosima.com (accessed on 23 September 2023).
  16. Koenig, N.; Howard, A. Design and use paradigms for Gazebo, an open-source multi-robot simulator. In Proceedings of the 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE Cat. No.04CH37566), Sendai, Japan, 28 September–2 October 2004; Volume 3, pp. 2149–2154. [Google Scholar] [CrossRef]
  17. Ramirez-Atencia, C.; Camacho, D. Extending QGroundControl for Automated Mission Planning of UAVs. Sensors 2018, 18, 2339. [Google Scholar] [CrossRef] [PubMed]
  18. PX4: Open Source Autopilot. 2023. Available online: https://www.px4.io/ (accessed on 23 September 2023).
  19. Chen, H.; Chen, S. A moving average based filtering system with its application to real-time QRS detection. In Proceedings of the Computers in Cardiology, Thessaloniki, Greece, 21–24 September 2003; pp. 585–588. [Google Scholar] [CrossRef]
  20. Silva, J.G.; Aquino Limaverde Filho, J.O.D.; Feitosa Fortaleza, E.L. Adaptive Extended Kalman Filter using Exponencial Moving Average. IFAC PapersOnLine 2018, 51, 208–211. [Google Scholar] [CrossRef]
  21. Kronauer, T.; Pohlmann, J.; Matthé, M.; Smejkal, T.; Fettweis, G.P. Latency Overhead of ROS2 for Modular Time-Critical Systems. arXiv 2021, arXiv:2101.02074. [Google Scholar] [CrossRef]
  22. Silva Cotta, J.L.; Rakoczy, J.; Gutierrez, H. Precision landing comparison between smartphone video guidance sensor and IRLock by hardware-in-the-loop emulation. CEAS Space J. 2023. [Google Scholar] [CrossRef]
  23. Li, P.; Cao, J.; Liang, D. UAV-BS Formation Control Method Based on Loose Coupling Structure. IEEE Access 2022, 10, 88330–88339. [Google Scholar] [CrossRef]
  24. Du, J.; Gao, C.; Feng, T. Formal Safety Assessment and Improvement of DDS Protocol for Industrial Data Distribution Service. Future Internet 2023, 15, 24. [Google Scholar] [CrossRef]
  25. Vilches, V.M.; White, R.; Caiazza, G.; Arguedas, M. SROS2: Usable Cyber Security Tools for ROS 2. arXiv 2022, arXiv:2208.02615. [Google Scholar]
  26. Corsaro, A.; Cominardi, L.; Hecart, O.; Baldoni, G.; Enoch, J.; Avital, P.; Loudet, J.; Guimarães, C.; Ilyin, M.; Bannov, D. Zenoh: Unifying Communication, Storage and Computation from the Cloud to the Microcontroller. DSD. September 2023. Available online: https://www.researchgate.net/publication/373757741_Zenoh_Unifying_Communication_Storage_and_Computation_from_the_Cloud_to_the_Microcontroller (accessed on 12 October 2023).
Figure 1. MicroXRCE-DDS architecture [15].
Figure 2. PX4 EKF2 architecture [18].
Figure 3. ROS and FCU communications for latency assessment using the MAVROS/MAVlink bridge.
Figure 4. ROS 2 and FCU communications for latency assessment using the MicroXRCE-DDS bridge.
Figure 5. Hardware-in-the-loop setup for end-to-end latency measurements.
Figure 6. Visual odometry message: latency comparison, FCU.
Figure 7. Visual odometry message: comparison of latency probability distribution, FCU.
Figure 8. Visual odometry message: latency when using MicroXRCE-DDS–ROS 2.
Figure 9. IMU message: latency comparison, companion computer.
Figure 10. IMU message: comparison of latency probability distribution, companion computer.
Figure 11. Pixhawk 4 FMU-V5: CPU and RAM load comparison during latency testing.
Table 1. ROS: FCU communication using MAVROS bridge.

MAVROS Topic | ROS Message | Rate (Hz)
mavros/vision_pose/pose 1 | PoseStamped | 200
mavros/imu/data_raw 2 | Imu | 200

1 Published from companion computer, assessed at FCU. 2 Published from FCU, assessed at companion computer. The message and topic rate were custom-modified for comparison; the original rate was 50 Hz.
Table 2. ROS 2: FCU communication using MicroXRCE-DDS bridge.

Topic | uORB Message | Rate (Hz)
vehicle_visual_odometry 1 | VehicleOdometry | 200
vehicle_imu 2 | VehicleImu | 200

1 Published from companion computer, assessed at FCU. 2 Published from FCU, assessed at companion computer. Custom definition.
Table 3. Results summary.

Method | Path | FCU Resource Utilization (%) 1 | Average Latency ± Standard Deviation (µs)
MicroXRCE-DDS–ROS 2 | Companion Computer → Flight Controller | CPU: 28.5 | 1329 ± 162
MicroXRCE-DDS–ROS 2 | Flight Controller → Companion Computer | RAM: 57.6 | 1752 ± 315
MAVlink–MAVROS (ROS) | Companion Computer → Flight Controller | CPU: 59.1 | 1678 ± 560
MAVlink–MAVROS (ROS) | Flight Controller → Companion Computer | RAM: 64.9 | 2133 ± 1020

1 Average values, Holybro Pixhawk 4 FMU-v5.