1. Introduction
This article demonstrates the development and application of a methodology that integrates digital twins with a model-based engineering approach during system development to improve design decision-making. The study focuses on a descriptive model that can function as a digital twin for an electric unmanned autonomous vehicle, as shown in Figure 1. The digital twin is a model enriched with real-time sensor data, allowing it to dynamically reflect the current state of the physical asset and essentially creating a “live” virtual replica that can be used for analysis, simulation, and predictive maintenance. Building such a twin typically involves adding data connections, establishing data streams, and setting up algorithms that interpret and apply the incoming sensor data to the model.
This digital twin can then be used to emulate the actual product, system, or process and to analyze design changes before they are incorporated into the actual system [1,2]. As a realistic digital representation of a physical system, it is also useful for predicting how the real-world object will behave, monitoring its operation, diagnosing issues, and evaluating changes before implementing them on the physical object. Digital twins have already been created for and applied to manufacturing operations, cars, and large buildings [3,4]. In this scenario, the digital model of the electric unmanned autonomous vehicle uses real-time data from the physical object’s sensors, allowing it to update its simulations and generate insights that inform critical performance design decisions, such as motor sizing, steering servo torque, and suspension design, as well as safety-related design for the braking and structural systems during the system development process.
Despite the recognized potential of digital twins in the development and testing of autonomous systems, there remains a significant gap in the literature concerning the effective integration of these technologies with real-time communication and control systems. This gap is particularly evident when using descriptive models, such as those created with Systems Modeling Language (SysML), in the context of electric unmanned autonomous vehicles. The challenge lies in ensuring that virtual models and physical assets are synchronized, especially when real-time data acquisition and communication are critical to system performance. As a result, a central research question emerges, which this article seeks to address: how can abstract SysML models be effectively transformed into actionable controls for electric unmanned autonomous vehicles?
The Systems Modeling Language (SysML) is a graphical modeling language designed to support the specification, analysis, design, verification, and validation of complex systems. It is an extension of the Unified Modeling Language (UML) tailored specifically for systems engineering applications. The four pillars of SysML, namely its diagram types for requirements, structure, behavior, and parametrics, enable engineers to capture the multifaceted nature of systems in a coherent manner [5]. One of the primary advantages of SysML is its expressiveness and flexibility compared to UML. While UML is primarily focused on software design, SysML addresses the needs of systems engineers by providing constructs that can represent both hardware and software components, as well as their interactions [6].
Real-time kinematic (RTK) global positioning system (GPS) technology is used to reach centimeter-level 3D global position accuracy in many applications, from unmanned autonomous vehicles to surveying and location-based services. Figure 2 shows an overview of how RTK is achieved for the electric unmanned autonomous vehicle. This setup uses an Ardusimple 4G NTRIP Master Modem paired with an Ardusimple SimpleRTK2B-F9P V3 GPS board equipped with a U-Blox ZED-F9P module. Several options were investigated and tested; the authors decided to use the precise-point positioning (PPP-RTK) state-space representation (SSR) SimpleSSR service offered by Ardusimple as well as a local RTK base station such as CTMA3 on the University of Connecticut CT DOT ACORN public RTK network. This article also explains the setup and configuration used to enable RTK on the physical asset.
This study introduces a methodology that bridges theoretical models with real-world applications, leveraging Systems Modeling Language (SysML) models and real-time data control to enhance communication protocols. It addresses key challenges in system design, particularly for autonomous technologies, and lays the foundation for integrating diverse control systems across various domains. A SysML model is designed to communicate with and control an ArduPilot electric unmanned autonomous vehicle. The model employs activity diagrams featuring opaque actions that execute Python scripts, facilitating the transmission and reception of data and messages with the physical asset. Communication between the model’s host computer, referred to as the ground control base station (GCBS) computer, and the physical asset is established using the MAVLink protocol over wireless links such as radio and Wi-Fi connections. MAVLink serves as the standardized protocol across all ArduPilot vehicles and GCBS programs, enabling the issuance of commands such as arm, disarm, and mission upload, among others [7]. Additionally, the model can receive MAVLink messages containing sensor data. The overarching objective of this initiative is the development of a digital twin for the electric unmanned autonomous vehicle that simulates and dynamically controls its behavior in real time.
The initial phase focuses on the development of activity diagrams for sending elementary commands to the physical asset. Within the descriptive model file, two distinct models were generated. The first model, Move Northeast, encompasses an activity diagram capable of arming the physical asset, transitioning it to guided mode, directing it to a specified distance northeast, and capturing sensor data within the corresponding block diagram for this model. The second model, Run Mission, incorporates an activity diagram that can upload a mission to the physical asset, command it to execute the mission, and document sensor data within the associated block diagram for this model. Both models underwent successful testing, enabling the execution of missions and the capture of sensor data within the systems model. During this study and the subsequent development of the descriptive model, the authors opted for a Model-Based Systems Engineering (MBSE) tool and created Python scripts compatible with Python version 3.9 or higher.
Section 2 explains the motivation behind developing the Real-Time Communication and Data Acquisition (RT-CDA) methodology. Section 3 provides an overview of the findings obtained from the literature review. Section 4 explains the methodology development process. Section 5 covers the model development phase: (i) it defines the architectural model and specifies the required input parameters and the type of data stored; (ii) it details the behavioral models, including how the model executes commands, controls the electric unmanned autonomous vehicle, and collects outputs; and (iii) it describes the scripts created to implement this approach. Section 6 explains the physical asset development phase, and Section 7 shows the configuration steps of the physical asset components. Section 8 discusses the test and integration strategies. Finally, Section 9 and Section 10 provide the abstraction of the proposed RT-CDA methodology and present the conclusion, respectively.
2. Motivation
This article aims to bridge the gap between the virtual and physical environments. The motivation stems from the need to seamlessly translate abstract Systems Modeling Language (SysML) models into actionable controls, enhancing the connection between theoretical models and their real-world applications. Simultaneously, the focus is on improving the system development process. By embedding mechanisms for real-time data exchange and control, the goal is to advance the efficiency of the system development process. The reduction in errors and the progression through the development lifecycle, particularly in complex systems such as electric unmanned autonomous vehicles, underscore the significance of this study. The importance of this study lies in understanding existing communication and software protocols that allow for a direct bidirectional connection between the two environments. Even though this article targets a specific application, the authors aim to propose an approach that has broad applicability for systems that use IoT technology.
3. Literature Review
The integration of mechanical systems with the Internet of Things (IoT)—a network of physical objects embedded with sensors, software, and other technologies that facilitate data collection and exchange—represents a significant advancement in modern engineering [8,9]. This integration offers a wide range of potential applications, from predictive maintenance and energy efficiency to automated robotic systems. Additionally, it empowers businesses to monitor assets and production lines in real-time, facilitating rapid issue identification and corrective action [10]. The development of the IoT, alongside advanced sensors, has facilitated accurate and dependable data gathering from various devices, enhancing user convenience, control, and automation capabilities. As more electronic devices incorporate standard internet connectivity, the IoT landscape is poised to grow exponentially [11].
In parallel with these advancements, Model-Based Systems Engineering (MBSE) has become increasingly important for managing the complexity of modern engineering projects. Engineers face significant challenges related to workflow complexity, often requiring navigation between multiple tools. Closed-environment toolsets restrict interaction with other tools, leading to convoluted processes that hinder standardization and documentation. Additionally, some tools enforce rigid workflows, limiting adaptability to evolving MBSE needs, while overly generic tools require extensive time for developing and maintaining customized applications. Addressing these issues involves developing platforms or integrating tools to streamline user actions, processes, and interfaces [12]. To address the need for more efficient MBSE methods and tools, various industries, including the automotive sector, have adopted the MBSE V-model, which aims to tackle the intricate challenges of designing complex systems [13]. Despite its widespread use, the V-model has limitations that have driven the exploration of alternative methods. One promising alternative involves using Systems Modeling Language (SysML) diagrams to ensure consistency between specifications and tests, with automated updates based on test results. This method enhances agility in modeling and enables automated test generation, thereby improving system validation [14]. Efficiency in decision-making and reduced computing time are crucial when modeling complex systems. Research has demonstrated the benefits of implementing tool chains between SysML, Modelica, and surrogate modeling to optimize multidisciplinary design tasks [15].
Unmanned Aerial Vehicle (UAV) communication and control systems must adhere to strict regulations, ensuring a reliable and continuous communication flow and implementing fail-safe mechanisms to address potential communication issues. The study by [16] highlights potential wireless communication technologies relevant to UAV applications, focusing on communication protocols, their applications, characteristics, and limitations. Their goal was to characterize communication for a specific ROS network use case, including data exchange, command and control, and clustering use cases between the UAV and the control station. Various wireless communication standards are used in the UAV industry, enabling the exchange and reporting of selected variables through communication nodes. They also mentioned the use of cellular infrastructure on UAVs for mission offloading. However, knowledge of UAV network communication protocols remains limited, and the scalability and real-time capability of ROS network communication still need improvement. Current systems often struggle to ensure reliable operation in diverse and demanding scenarios.
Autonomous driving functionalities like electronic stability programs, adaptive cruise control, and highway assist rely heavily on accurate system identification, which demands precise knowledge of vehicle variables for effective operation. However, simulations often struggle to replicate critical real-world interactions, posing significant safety challenges for unmanned systems. In [17], a dimensionless model approach was introduced to correlate various physical parameters in conventional model-based systems, particularly for nonlinear dynamics. This data-driven approach involved data collection tailored to specific prototypes and a training process to map accessible sensor data to latent variables. However, the need for repeated data collection and training across different vehicle platforms increases costs. While promising, this approach still does not account for added uncertainties from real-world environments and other unmodeled vehicle dynamics without introducing further complexity into the methodology.
In [18], the authors developed a UAV controller model using SysML, demonstrating its structural realizations. They explored model-based approaches like those used in industrial control systems and proposed a hybrid control model that integrates MBSE methodologies with SysML. This approach aims to streamline the deployment of the Q-UAV controller in practical applications. The study involved creating a physical model of the controller, developing systems models, and conducting experimental runs with the Q-UAV, enhancing the robustness of the model design and verification process. However, the lack of real hardware data and of adaptability in real-time environments remains a significant limitation.
These insights from the literature underscore the crucial aspects of integrating tools, ensuring consistency between specifications and tests, and optimizing decision-making processes within the Model-Based Systems Engineering (MBSE) workflow. In the context of this article, the authors aim to develop descriptive models functioning as a digital twin for an electric unmanned autonomous vehicle; these findings emphasize the importance of effectively defining a communication approach between hardware and software. Connectivity between hardware and software systems is critical in developing efficient communication and control systems. The Real-Time Communication and Data Acquisition (RT-CDA) methodology leverages real-time communication with hardware to streamline the design of the communication and control system alongside associated software. This methodology enables early-stage validation of system designs, enhancing the efficiency and robustness of the final product, and leading to more reliable and effective solutions.
4. Methodology Development Process
The methodology developed in this study adopts a systematic approach aimed at creating descriptive models, specifically applied to an electric unmanned autonomous vehicle, functioning as a digital twin. The development process unfolds through well-defined steps, as illustrated in Figure 3. This approach leverages the V-Model for systems development, which ensures that each phase of development is followed by corresponding validation and verification activities, leading to the transformation of the system into a digital twin that mirrors the physical asset.
The first step involves identifying key challenges and gaps in integrating real-time communication and control systems with Systems Modeling Language (SysML) models. This step provides the foundation for the methodology, ensuring that it addresses both the theoretical and practical aspects of the system’s functionality. The next step focuses on selecting the appropriate control software and determining the networks and communication protocols that will be used for bi-directional communication and interaction between the physical asset and the digital twin. In this article, special emphasis is placed on modeling the communication and control system, a subsystem within the electric unmanned autonomous vehicle descriptive model. This model is meticulously developed to accurately represent the behavior and functionality of the physical asset (which is also part of the physical asset development phase). A crucial aspect is ensuring consistency between system specifications and test data, which is achieved through automated updating mechanisms that allow the system model to reflect real-time data and system performance.
The communication and control systems model plays a dual role. First, it serves as a model to interface with the physical asset for system validation. Second, it functions as a controller, capable of receiving user inputs (both preset and live), thus facilitating real-time data transfer to the system model. This dual functionality enhances the accuracy and reliability of the digital twin and fosters seamless integration between simulation and physical testing. This integration helps bridge the gap between virtual models and physical systems, advancing the development and validation processes for electric unmanned autonomous vehicles.
The V-Model approach, which focuses on parallel development and validation stages, is applied throughout this process. As the system progresses through the design, implementation, and integration phases, corresponding validation steps are executed to verify that the models, including the digital twin, meet the required performance criteria. This ensures that both the simulation and the physical testing components work in harmony, validating the digital twin’s performance, accuracy, and real-world applicability.
The integration and testing phase is a critical stage where the developed models are integrated with the physical asset to validate the digital twin system’s performance and accuracy. Testing ensures that the communication and control systems model functions correctly, enabling real-time data transfer and supporting decision-making processes. Any discrepancies or issues identified during testing are addressed and refined iteratively, based on the findings from both simulation and real-world tests.
Evaluation and validation efforts focus on assessing the performance, accuracy, and reliability of the digital twin system. This is achieved by comparing simulated results with real-world data collected from physical tests, as well as evaluating the effectiveness of the integrated communication and control systems in enhancing the Model-Based Systems Engineering workflow. These steps help ensure that the digital twin not only reflects the physical asset’s behavior accurately but also enhances system performance, decision-making, and real-time operations. The dotted lines linking the work products of the model development phase and the physical asset development phase shown in Figure 3 represent the connection of the model with the electric unmanned autonomous vehicle. An overview of the full setup is provided in the Supplementary Materials TR-CDA GitHub repository.
5. Model Development Phase: The Communication and Control System
5.1. Model Development Phase: Architecture Model Using Block Definition Diagrams
A block definition diagram (BDD) was used to define the data structure for Operational Scenario 1, Move Northeast, since it allows blocks to store variables and own behavioral diagrams that represent the block behavior. In this study, a BDD is used to define a block that captures the set of inputs required to execute the model and stores the output parameter values after execution. Figure 4 shows the block definition diagram for the Move Northeast scenario and illustrates the input and output parameters of the model. The block definition diagram contains a block named MOVE_NE, which also owns several other blocks. The blocks owned by MOVE_NE, such as GPS, contain sensor output data from the physical asset after the model is executed. The MOVE_NE block itself contains all the input model parameters that need to be set before using the model. The MOVE_NE block also owns an activity diagram that uses its input parameters to execute the Python scripts and generate the values of the output parameters. For MOVE_NE, the input parameters shown in Table 1 must be specified.
Each of the blocks owned by the MOVE_NE block, such as GPS, IMU, ATTITUDE, BATTERY, LOCAL_POSITION_NED, and GLOBAL_POSITION_INT, corresponds to a specific type of MAVLink message that is received from the running physical asset. Within each block is a time array containing the time, in seconds, at which each message is received. The other variables are arrays containing the actual message data at each time. Table 2 provides more information on the MAVLink messages, and the linked documentation details their fields (https://mavlink.io/en/messages/common.html, accessed on 28 June 2022).
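As an illustration of how such message blocks can be populated, the sketch below uses Pymavlink to receive GLOBAL_POSITION_INT messages and append their fields to a time array and data arrays, mirroring the structure of the BDD blocks. The connection string and the ten-message loop are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch (not the authors' exact script): receive MAVLink messages with
# Pymavlink and append them to time/data arrays, as the BDD blocks do.
from pymavlink import mavutil

# Hypothetical connection string; the real value comes from the MOVE_NE input parameters.
master = mavutil.mavlink_connection("udpin:0.0.0.0:14551")
master.wait_heartbeat()  # block until the physical asset (or SITL) is heard

global_position = {"time_s": [], "lat_deg": [], "lon_deg": [], "alt_m": []}

for _ in range(10):  # collect a handful of messages for illustration
    msg = master.recv_match(type="GLOBAL_POSITION_INT", blocking=True, timeout=5)
    if msg is None:
        continue  # no message received within the timeout
    global_position["time_s"].append(msg.time_boot_ms / 1000.0)  # ms -> s
    global_position["lat_deg"].append(msg.lat / 1e7)   # degE7 -> degrees
    global_position["lon_deg"].append(msg.lon / 1e7)   # degE7 -> degrees
    global_position["alt_m"].append(msg.alt / 1000.0)  # mm -> m

print(global_position)
```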
The second operational scenario, Run Mission, shown in Figure 5, is similar to the Move Northeast scenario in Figure 4. The only difference between the scenarios is the set of input parameters of the RUN_MISSION block. The value properties that shall be included in the RUN_MISSION block are detailed in Table 3.
5.2. Model Development Phase: Behavioral Model Using Activity Diagrams
SysML activity diagrams (ACT) contain a sequence of actions that can be executed. The activity diagrams specified in the two models are meant to control the electric unmanned autonomous vehicle using actions that directly communicate with the physical asset. The descriptive model files contain a main classifier activity diagram for the MOVE_NE and RUN_MISSION blocks in their respective model elements. The Move Northeast activity diagram arms the electric unmanned autonomous vehicle, sets it to guided mode, commands the vehicle to move a certain distance north and east, and writes data to a CSV file, as shown in Appendix A. The Run Mission activity diagram uploads a mission file, sets the vehicle to auto mode, arms the vehicle, and collects data in a CSV file, as shown in Appendix B.
Figure 6 shows the context for the Move Northeast scenario. This context includes the parameters stored in the block that are required to execute the model; the required parameter values are sent to the INIT_MAVLINK action, which sets certain global variables, enabling the other actions to communicate with the physical asset. The TEST_MAVLINK_CONNECTION action loops until it reports that the connection is successful. The next actions arm the physical asset, set it to guided mode, and command it to move the desired distance north and east. For this example, the starting time is recorded right after the physical asset is commanded to move. A conditional action then checks whether the vehicle was armed and set to guided mode successfully. If these conditions are met, the diagram loops until the desired time passes. Inside this loop, MAVLink messages are received and stored in their corresponding block value properties. After the loop is complete, data from the executed model are saved to a CSV file.
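For readers unfamiliar with the MAVLink traffic behind these actions, the following sketch reproduces the Move Northeast sequence (connect, arm, set guided mode, command a north/east offset, then collect messages for a fixed time) with Pymavlink. It is a simplified illustration under assumed values (connection string, 5 m offsets, 20 s collection window), not the model's actual scripts.

```python
# Hedged sketch of the Move Northeast command sequence using Pymavlink.
import time
from pymavlink import mavutil

master = mavutil.mavlink_connection("udpin:0.0.0.0:14551")
master.wait_heartbeat()                      # analogous to TEST_MAVLINK_CONNECTION

# Arm the vehicle (param1 = 1 means arm).
master.mav.command_long_send(
    master.target_system, master.target_component,
    mavutil.mavlink.MAV_CMD_COMPONENT_ARM_DISARM, 0,
    1, 0, 0, 0, 0, 0, 0)

# Switch to GUIDED mode using ArduPilot's mode mapping.
mode_id = master.mode_mapping()["GUIDED"]
master.mav.set_mode_send(
    master.target_system,
    mavutil.mavlink.MAV_MODE_FLAG_CUSTOM_MODE_ENABLED,
    mode_id)

# Command an offset of 5 m north and 5 m east (position-only type mask).
master.mav.set_position_target_local_ned_send(
    0, master.target_system, master.target_component,
    mavutil.mavlink.MAV_FRAME_LOCAL_OFFSET_NED,
    0b110111111000,          # use x, y, z only
    5.0, 5.0, 0.0,           # north, east, down offsets in meters
    0, 0, 0, 0, 0, 0, 0, 0)

# Collect messages for a fixed duration, as in the activity diagram's loop.
t_start = time.time()
while time.time() - t_start < 20.0:
    msg = master.recv_match(blocking=True, timeout=1)
    # ... store msg fields in the corresponding block value properties ...
```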
Like the Move Northeast scenario shown in Figure 6, an activity diagram for the Run Mission scenario was defined as part of the behavior model in Appendix B. The context contains the parameters stored in the block. The required parameters are sent to the INIT_MAVLINK action, which sets certain global variables, enabling the other actions to communicate with the electric unmanned autonomous vehicle. The TEST_MAVLINK_CONNECTION action loops until it reports that the connection is successful. The user can then set actions to disarm the vehicle, upload the waypoints file, set the mode to auto, and arm the vehicle, which starts the mission. A conditional action checks whether the electric unmanned autonomous vehicle was armed, set to auto mode, and the mission uploaded successfully. If these conditions are met, MAVLink messages are received until the mission is complete; these messages are then stored in the block definition diagram, and a CSV file is generated. Finally, the physical asset is disarmed.
Opaque Behavior Documentation
In the previous section, an overview of the Python scripts used by the behavioral system model was provided. The discussion here will focus on the opaque actions within the system model that utilize these scripts. These opaque actions can execute Jython code. In Jython, subprocesses can be initiated with the subprocess module, allowing for the reading of exit codes and standard output streams. As a result, it is possible to call external Python scripts and obtain the necessary information.
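A minimal sketch of how such an opaque action can invoke an external script from Jython is shown below. The script path, port, and baud value are placeholders rather than the repository's actual layout; the pattern of reading the exit code and standard output is what matters.

```python
# Jython (Python 2.7) sketch of an opaque action calling an external script as a
# subprocess and reading its exit code and standard output. The script path and
# arguments are placeholders, not the repository's actual file layout.
import subprocess

proc = subprocess.Popen(
    ["python3", "C:/scripts/arm.py", "-s", "com3", "-b", "57600"],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = proc.communicate()   # wait for the script to finish

if proc.returncode == 0:
    print("Command succeeded: " + stdout)
else:
    print("Command failed: " + stderr)
```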
Table 4 outlines the opaque behaviors used for the communication and control system model. In addition to these behaviors, there are also behaviors in the model for storing received MAVLink messages in the block diagram (STORE_MSG_DATA) and then writing the data from the block diagram to a CSV file (WRITE_CSV_FILE). These behaviors do not communicate with the physical asset or require any external scripts.
5.3. Script Documentation
In the behavioral model, opaque behaviors execute Python code. However, due to the limitations of the software, some of the required code cannot be run directly in the systems model. Magic Systems of Systems Architect employs Jython, a Java implementation of Python 2.7, rather than the standard C implementation of Python. Consequently, Jython cannot import the Pymavlink package, which is essential for this study. To address this issue, external Python scripts were developed for communicating with the physical asset. These scripts are invoked as subprocesses from Jython within behaviors in the systems model. Utilizing these scripts requires Python 3.9 or higher to be installed on the GCBS computer. The scripts used directly by the systems model, a brief description of each, and their command line arguments and printed outputs can be found in the Supplementary Materials TR-CDA GitHub repository. Note that the most recent documentation of any of these scripts can be viewed in the terminal by running the corresponding script with the -h flag, >> python3 SCRIPT_PATH -h. All of these scripts accept -s or --source and -b or --baud parameters. The source parameter is a connection string with the format used by all MAVLink programs. The baud rate is the rate of serial communication and must be set correctly for the connection to work. While these parameters are listed as optional, for readability they should always be explicitly provided when invoking these scripts as a subprocess. When not provided, the default value for the source in these scripts is “udpin:0.0.0.0:14551” and the default value for the baud rate is 115,200. To connect to the physical asset over the radio from a Windows computer, use the com port the telemetry radio is plugged into as the connection string (e.g., “com3”). The baud rate for the radio on the electric unmanned autonomous vehicle is 57,600.
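The shared command line interface can be sketched with Python's argparse module as shown below; the description text and help strings are illustrative, while the flags and defaults follow the description above.

```python
# Sketch of the command line interface shared by the external scripts: -s/--source
# and -b/--baud with the defaults described above.
import argparse

parser = argparse.ArgumentParser(
    description="Communicate with the electric unmanned autonomous vehicle over MAVLink.")
parser.add_argument("-s", "--source", default="udpin:0.0.0.0:14551",
                    help="MAVLink connection string (e.g., com3 or udpin:localhost:14551)")
parser.add_argument("-b", "--baud", type=int, default=115200,
                    help="Serial baud rate (not used for UDP connections)")
args = parser.parse_args()

print("Connecting to %s at %d baud" % (args.source, args.baud))
```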
6. Physical Asset Development Phase: The Communication and Control System
The communication and control system is a pivotal element of the study, comprising interconnected modules designed to facilitate communication protocols and command execution for semi-autonomous systems. As shown in
Figure 7, the autopilot control board effectively regulates and controls the behavior and interactions of the physical asset, while also representing the intricate control logic at play. On the other hand, the communication system enables the seamless exchange of information between various systems, fostering efficient and reliable communication pathways. As part of this, the Raspberry Pi and/or SIK telemetry radio can be used.
Global Positioning System (GPS) and Real-Time Kinematics (RTK) Configuration and Setup
RTK, or real-time kinematic positioning, is a form of differential GPS (DGNSS or DGPS) that uses a ground station to correct the GPS position by measuring carrier-phase differences and other correction factors. While 3D GPS fixes typically allow for 3–30 m accuracy depending on the surroundings [19], RTK GPS allows for centimeter- or even millimeter-level precision [20]. RTK corrections are transmitted via RTCM messages from a base station to a receiver using Networked Transport of RTCM via Internet Protocol (NTRIP). The receiver could be on a survey pole, an unmanned aerial vehicle, a self-driving car, or any other device that needs to receive RTK corrections. GPS modules with RTK capabilities may be configured as NTRIP servers or clients [21].
RTK has some limitations. It has stricter receiver requirements than regular GPS, such as a clear view of the sky about 30° above the horizon, a strong signal (around 38 dB) from no fewer than 7 satellites, and no interference from other electronic components [19]. Base stations need to be within 20–30 miles [19] and can be expensive to set up and maintain, and many base station services require a paid subscription. However, several state governments run freely available RTK networks that only require registration. A worldwide network of free-to-use base stations that do not require registration can be found on the rtk2go website [22]. Unfortunately, none of the available RTK base stations on the rtk2go website are within range of the University of Connecticut. However, the state of Connecticut provides a public RTK station service, known as ACORN, through a collaboration with the University of Connecticut. Another form of RTK that allows for centimeter-level accuracy without a nearby base station uses precise-point positioning (PPP), a method of broadcasting error corrections over a wider area. Standalone PPP allows for high precision but requires a much longer convergence time and is not suitable for real-time applications [23]. The joint PPP-RTK positioning approach, also known as State Space Representation (SSR), allows for real-time centimeter-level accuracy.
For this physical asset, the U-Blox PointPerfect correction service was used, as shown in Figure 8. If a base station is found or set up within range, connecting to it may be simpler and provide slightly more precise positioning. Either of these two setups allows the vehicle to travel autonomously.
7. The Physical Asset Configuration
The MAVLink (Micro-Air Vehicle Link) protocol is utilized to communicate with the electric unmanned autonomous vehicle, facilitating command exchanges between the physical asset and the ground control base station (GCBS). MAVLink supports a wide range of commands, including arming, mode switching, movement commands, mission uploads, and sensor data transmission. The MAVLink protocol is not dependent on the underlying transport technology; therefore, it can be used over radio, Wi-Fi, or other connections, as shown in Figure 9 [24]. Connecting to the physical asset can be achieved by using a telemetry radio. For radio connectivity, a USB telemetry radio must be connected to the GCBS, and a second telemetry radio must be on board the physical asset. Alternatively, Wi-Fi can be used, since the telemetry radio restricts the communication range between the physical asset and the GCBS. This typically requires an onboard micro-computer such as the Raspberry Pi, which was tested as part of the hardware setup; in this case, the physical asset forwards MAVLink messages through MAVProxy installed on the on-board computer, as shown in Figure 9.
Two approaches were tested in this study. The first uses the physical asset, while the second conducts tests using the Software-in-the-Loop (SITL) simulation tool. From the perspective of the systems model, the only difference between these is the MAVLink connection string and baud rate parameters used. These must be set correctly for the model to connect to the electric unmanned autonomous vehicle. For the telemetry radio on a GCBS computer, the connection string is simply the communication port the radio is connected to. For example, if the radio is connected to com3, the connection string should be com3. For the telemetry radio, the baud rate should always be 57,600. For the Raspberry Pi, the connection string should be udpin:localhost:14551. In this string, udpin specifies that the UDP protocol is used. The word localhost is the hostname of the current computer and can also be replaced with XXX.XXX.XXX.XXX, the IP address of the current computer. Lastly, 14551 is the port number to which the Raspberry Pi forwards the MAVLink messages. It is standard to use port 14550 for MAVLink in GCBS programs. For the Raspberry Pi UDP connection, a baud rate of 115,200 can be used [24]. An example of this was shown in Section 5, where the connection string is inputted into the behavioral model owned by the model element block.
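The two connection options above can be illustrated with Pymavlink as follows; the device names (com3, localhost) are examples and must match the actual hardware setup.

```python
# Illustrative connection setups corresponding to the two options described above.
from pymavlink import mavutil

# Option 1: telemetry radio plugged into the GCBS computer (Windows COM port).
radio_link = mavutil.mavlink_connection("com3", baud=57600)

# Option 2: Raspberry Pi forwarding MAVLink over UDP to the GCBS computer.
udp_link = mavutil.mavlink_connection("udpin:localhost:14551", baud=115200)

# Either connection is confirmed by waiting for a heartbeat from the autopilot.
radio_link.wait_heartbeat()
```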
8. Test and Integration: Key Results and Discussion
The electric unmanned autonomous vehicle digital twin runs on the ground control base station (GCBS) computer and was developed using Systems Modeling Language (SysML) and a Model-Based Systems Engineering (MBSE) tool. The radar system IoT simulation example by [25] served as a baseline for the approach used in this study. This virtual replica can communicate live with the physical asset using either MQTT or MAVLink, as discussed in Section 7. Using the paho-mqtt library, users can send a text-based message on an MQTT topic to the Raspberry Pi [26]. The Supplementary Materials provides detailed steps to establish the network and system connection. Several topics were defined to tell the Raspberry Pi to take snapshots or videos. Users can also publish MQTT messages containing commands that are forwarded to the MAVProxy program running on the Raspberry Pi. For example, the user can send the command arm throttle, which is passed to the input of the MAVProxy program, which in turn sends the appropriate MAVLink messages to arm the physical asset; this example is shown in Figure 10.
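A hedged sketch of publishing such MQTT messages with paho-mqtt is shown below. The broker address and the command topic name are placeholders (the snapshot topic follows the eUAV/camera/snapshot example from the text), and on paho-mqtt 2.0 or later the Client constructor additionally expects a callback API version argument.

```python
# Sketch of publishing MQTT commands to the Raspberry Pi with paho-mqtt.
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("192.168.1.10", 1883, 60)   # placeholder broker IP (e.g., the Raspberry Pi)
client.loop_start()

# Ask the Raspberry Pi to take a snapshot of the current camera view.
client.publish("eUAV/camera/snapshot", "snap")

# Forward a MAVProxy console command (e.g., arm the vehicle) on a command topic.
client.publish("eUAV/mavproxy/command", "arm throttle")  # topic name is an assumption

client.loop_stop()
client.disconnect()
```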
However, the preferred approach between the digital twin and the physical asset is direct usage of the MAVLink protocol through the Pymavlink library. Using functions provided by this library, a MAVLink ARM_DISARM command can be sent directly to the physical asset, and then a MAVLink COMMAND_ACK (command acknowledgment) message can be received. For most tasks where commands need to be sent to the Pixhawk, this is preferred since it is easier to receive an acknowledgment that the command was executed successfully.
After a comprehensive analysis, the most direct and efficient configuration was identified between the descriptive model of the digital twin and the physical asset. Using the Pymavlink library makes it easier to directly read live MAVLink messages from the physical asset. Based on this configuration, a set of advanced algorithms was developed and thoroughly tested to facilitate a continuous and direct data stream between the digital twin and the physical asset. These algorithms leveraged state-of-the-art communication protocols, enabling seamless, real-time synchronization of data between the two entities. This ensured that the digital twin could accurately mirror the state and behavior of the physical asset, providing enhanced monitoring and control capabilities.
To validate the functionality of the communication system, the authors first tested its capabilities by implementing an opaque action. This action was designed to command the physical asset to capture a snapshot of its current camera view, ensuring the system could interact with the asset in a meaningful way. The opaque action block executed Python code that utilized the Python subprocess module—an essential built-in feature of Python for initiating and managing external programs. Specifically, the subprocess module was used to execute another Python script, which employed the paho-mqtt library. This library facilitated the creation of an MQTT client, which then published a message to the predefined topic, eUAV/camera/snapshot, signaling the physical asset to take the requested snapshot.
Further testing involved the development of an additional opaque action to arm the physical asset, verifying the asset’s response and readiness. In this case, the Python script, arm.py, employed the MAVLink protocol to send the arm command to the physical asset. The script was designed to wait for the MAVLink acknowledgment of the arm command, ensuring that the asset had received and successfully processed the command. The exit status of the script—coded as 0 for success and 1 for failure—provided immediate feedback on the success or failure of the arming process. By leveraging the subprocess module again, the opaque action in the systems model executed the arm.py script and monitored the exit code, enabling the system to respond to the asset’s state in real-time and ensure proper communication and command execution between the digital twin and the physical asset.
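The core logic described for arm.py can be sketched as follows; argument parsing is omitted and the connection values are placeholders, but the flow (send the arm command, wait for COMMAND_ACK, exit 0 or 1) matches the description above.

```python
# Simplified sketch of the arm.py logic: send the MAVLink arm command, wait for
# COMMAND_ACK, and exit with 0 on success or 1 on failure.
import sys
from pymavlink import mavutil

master = mavutil.mavlink_connection("com3", baud=57600)  # placeholder connection
master.wait_heartbeat()

master.mav.command_long_send(
    master.target_system, master.target_component,
    mavutil.mavlink.MAV_CMD_COMPONENT_ARM_DISARM, 0,
    1, 0, 0, 0, 0, 0, 0)                                 # param1 = 1 -> arm

ack = master.recv_match(type="COMMAND_ACK", blocking=True, timeout=10)
if (ack is not None
        and ack.command == mavutil.mavlink.MAV_CMD_COMPONENT_ARM_DISARM
        and ack.result == mavutil.mavlink.MAV_RESULT_ACCEPTED):
    sys.exit(0)  # success
sys.exit(1)      # failure (no acknowledgment or command rejected)
```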
In addition to arming the physical asset, Pymavlink was used to create a script upload_waypoints.py for uploading waypoints to the physical asset. The script takes a waypoints file as input and uploads the waypoints contained in the file to the physical asset. It then waits to receive the MAVLink mission acknowledgment command from the physical asset. The script exits with code 0 to indicate success, and 1 to indicate failure. This script can be used in an opaque action in the same way the arm script was used.
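A mission upload of this kind can be sketched with Pymavlink's waypoint helpers as shown below; the file name and connection string are placeholders, and the exact message handling in upload_waypoints.py may differ.

```python
# Sketch of a waypoint upload flow like the one described for upload_waypoints.py.
import sys
from pymavlink import mavutil, mavwp

master = mavutil.mavlink_connection("com3", baud=57600)
master.wait_heartbeat()

wp = mavwp.MAVWPLoader()
wp.load("mission.waypoints")          # waypoints file in QGC WPL format (placeholder name)

master.waypoint_clear_all_send()      # remove any existing mission
master.waypoint_count_send(wp.count())

# The autopilot requests each mission item in turn; send them as they are requested.
for _ in range(wp.count()):
    req = master.recv_match(type=["MISSION_REQUEST", "MISSION_REQUEST_INT"],
                            blocking=True, timeout=10)
    if req is None:
        sys.exit(1)                   # upload stalled
    master.mav.send(wp.wp(req.seq))

ack = master.recv_match(type="MISSION_ACK", blocking=True, timeout=10)
ok = ack is not None and ack.type == mavutil.mavlink.MAV_MISSION_ACCEPTED
sys.exit(0 if ok else 1)
```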
These are just a few examples of how the scripts were executed from the descriptive model before finalizing the digital twin. This approach allowed for the emulation of the physical asset, contributing to the enhancement of the design process by evaluating key parameters and exploring all possible design configurations of the system components. While the primary focus of this article is on the integration and communication between the digital twin and the physical asset, it is worth noting that this methodology also lays the foundation for a more comprehensive failure analysis. By using the digital twin to simulate a wide range of operational scenarios, including edge cases and potential failure points, designers can gain deeper insights into how the system components might behave under various conditions. This approach enables the identification and mitigation of potential risks earlier in the design cycle, ensuring a more robust and resilient final system. Additionally, by thoroughly testing all configurations in the digital realm before physical deployment, engineers can refine system parameters, leading to optimized performance and more reliable failure prediction in the event of operational anomalies.
Some areas of the digital twin of the electric unmanned autonomous vehicle could benefit from refinement. One issue that could be resolved is related to receiving MAVLink messages. The simultaneous execution of multiple actions and the capability to store received MAVLink messages could be implemented to improve the data transfer rate from the physical asset. However, this may be difficult to implement, especially considering that it may require multiple processes listening for MAVLink messages on the same UDP or com port. Based on the research, one possible solution is to have multiple processes listen to the same UDP port; this is not possible for a com port, but it becomes feasible if a program such as MAVProxy is used to forward the com port to a UDP port. The model could also be expanded to include more parameters, as more than 5000 parameters were identified in the control software.
9. Methodology Application Steps
The Real-Time Communication and Data Acquisition (RT-CDA) methodology consists of the following steps: (1) establishing communication between the communication and control SysML model and the test system (in this case, the physical asset); (2) confirming that stable communication health is achieved; then allowing the user to either (3) upload missions or (4) input manual control commands to the physical asset; and finally (5) monitoring, through the communication and control system digital twin, the relevant data parameters of the physical asset established in the structural model represented in Figure 4 and Figure 5. Figure 11 shows an overview of these steps after the system model and physical asset are developed.
10. Conclusions
The development of the Real-Time Communication and Data Acquisition (RT-CDA) methodology provided significant insights into network and communication protocols applicable to unmanned autonomous vehicles. A comprehensive approach combining both top-down and bottom-up strategies was utilized to develop the descriptive systems model for the communication and control subsystem. This included exploring both black-box and white-box views, which facilitated the modeling of components’ internal interfaces, while simultaneously testing the appropriate hardware on the physical asset. The primary requirements for the physical asset included autonomy (specifically accurate real-time positioning), long-range communication capabilities, and seamless integration with the systems model developed using Systems Modeling Language (SysML).
The main obstacles encountered in working with the physical asset involved selecting the correct hardware configuration to enable RTK functionality, as well as ensuring that the mission performance of the physical asset aligned with the control software’s expected outcomes. Significant efforts were made to minimize any deviation between these two elements through extensive test runs. Additionally, several parameters (for example, for the throttle and steering motor) were configured and adjusted to identify optimal values that achieve the desired mission performance. To enable long-range communication, the physical asset needed to communicate with the ground control base station (GCBS) computer, where the model resided. This required testing various hardware configurations, including telemetry radio, Bluetooth, and local Wi-Fi networks. Each communication method required different software configurations, as detailed in Section 7, to interface with the system model’s software.
In parallel, structural models were developed to capture the critical parameters essential for mission success, while behavioral models were used to establish communication scripts, send commands, and receive data from the physical asset. One of the significant challenges involved the development of the final working scripts. A workaround was required to integrate the scripts with the control software to ensure proper communication with the physical asset. This workaround is discussed in Section 5.3 and was essential to facilitate a successful and reliable interaction between the software and the physical asset. As part of future work, an API could be developed to facilitate swapping between communication hardware options and to automate the process.
Through the RT-CDA methodology, significant advancements were made in model-based engineering, particularly in terms of enabling direct and bi-directional communication between SysML models and physical assets. The methodology allowed for real-time communication with the physical asset while addressing the integration of both software and hardware components. SysML opaque actions were effectively employed to execute external software pertinent to the system of interest (SOI), and the parameters of interest were continuously updated within the SysML model. This continuous feedback loop enabled dynamic system improvement during the design and implementation phases.
Existing research in this domain typically relies on intermediary tools to bridge the communication gap between system models and physical asset control components. In contrast, this work eliminates the need for such tools by developing custom software that directly facilitates communication between the system model and the physical assets. This innovation supports faster prototyping, enhances the robustness of system design methodologies, and extends the capabilities of Model-Based Systems Engineering by connecting SysML models directly to physical systems, eliminating the dependency on additional interfaces or tools.
The concept demonstrated in this study could be adapted for controlling vehicle dynamics based on real-time sensor feedback, with potential integration into other system models to simulate unmanned autonomous vehicle behavior. This article showcases the practical application of translating abstract models developed using SysML into actionable control for electric unmanned autonomous vehicles. Establishing an interactive communication link between the SysML model and the physical asset marks a significant advancement in the system development process. The integration of real-time data exchange and control improves efficiency and reduces the potential for errors.
The successful control of an electric unmanned autonomous vehicle serves as a tangible demonstration of the real-world potential for deploying SysML models. Furthermore, the exploration of various mission scenarios illustrates the versatility of the digital twin, making it applicable to a wide range of tasks, from simple to complex missions. Expanding the RT-CDA methodology could benefit industries involving autonomous and semi-autonomous systems that require communication and control components. This comprehensive demonstration underscores the value of digital twins using Model-Based Systems Engineering and SysML in advancing system development processes across diverse applications.