2.1. Method
The methodology used in this study was based on the principles of agile methodologies, which provide a set of guidelines that a software development team can follow to carry out the product development process [16]. Specifically, the SCRUM methodology was implemented, as its foundations were considered helpful for optimizing the process, despite SCRUM originally being designed for larger teams. These foundations include the Product Backlog, a list of previously completed work and the remaining tasks; the Sprint Backlog, the part of the product to be delivered within a typical timeframe of 1 to 4 weeks; and finally, the Weekly Review, conducted at the end of each week [18].
As a result, a work scheme consisting of approximately six sprints was proposed, with weekly progress deliveries and monitoring.
Figure 1 illustrates the proposed work plan and project completion schedule. The "Yes" and "No" labels mark the decision points at which the workflow either continues or concludes.
The present approach aims to provide a detailed description of the different stages in the software development life cycle: analysis, design, implementation, testing, and product delivery. To achieve this, a comprehensive analysis of the functional and non-functional requirements of the platform was carried out, enabling the selection of the most suitable architecture to meet these requirements.
Subsequently, the platform implementation was divided into three phases: signal acquisition, signal processing, and data implementation and visualization. Each of these phases is further divided into smaller tasks that are continuously executed to address any errors and improve the implemented processes.
In the signal acquisition phase, a system was implemented to capture various types of biomedical signals, such as temperature, heart rate, blood oxygen saturation, EKG, GSR, and airflow. Signal processing involved the use of digital signal processing techniques, including filtering, feature extraction, and signal segmentation. Lastly, in the data implementation and visualization phase, a graphical user interface was developed to allow users to visualize the acquired data in real time.
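The filtering step mentioned above can be illustrated with a minimal sketch; since the specific filters applied in the processing phase are not detailed here, a simple moving-average filter is used as a generic, assumed example:

```javascript
// Simple moving-average filter: each output sample is the mean of the
// current input sample and the (window - 1) samples before it.
// This is an illustrative stand-in, not the platform's actual filter.
function movingAverage(samples, window) {
  return samples.map((_, i) => {
    const start = Math.max(0, i - window + 1);
    const slice = samples.slice(start, i + 1);
    return slice.reduce((sum, v) => sum + v, 0) / slice.length;
  });
}
```

A wider window smooths more aggressively at the cost of responsiveness, a trade-off that matters for fast signals such as EKG.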
After implementation, rigorous testing was conducted to ensure that the platform meets the established functional and non-functional requirements. The project evaluation involved reviewing the proposed objectives to determine if they were successfully achieved.
2.3. Architecture Proposal
At first, the idea of using the MySignals® device for signal acquisition and transmission was considered. However, a detailed evaluation revealed that this device had been discontinued in 2020, and because the account managing it could not be accessed, the ability to transmit signals remotely was lost. As a result, the search began for another device or system that could acquire and transmit the signals required for the project.
During this search process, the e-Health v2.0 development board, also developed by Cooking Hacks, was found. This board enables the reading of biomedical sensors through the Arduino Uno board, functioning as a module that uses the features of the Arduino board to measure various sensors. To use it, the libraries provided by Cooking Hacks are required. However, obtaining these libraries proved challenging: like MySignals, the e-Health v2.0 board had been discontinued, and its manuals were no longer available. After an exhaustive search, the necessary libraries were found in a GitHub repository, where their usage is explained with examples. Once the signal acquisition board was secured, the next step was to decide how to transmit the signals. For this purpose, the Raspberry Pi Model B+ was adopted, as it functions as a small computer, allowing transmission capabilities and data acquisition to be programmed through a serial connection. Thus, the Raspberry Pi would be connected to the Arduino, and together they would send the signals to the server. The remaining step was to define the best alternative for signal transmission based on the current architecture. The following table presents the proposed options.
Table 2 displays a comparison between the two proposed options, HTTP and WebSocket.
Later on, the idea of migrating the server to the infrastructure of the TIGUM research group was proposed. The TIGUM research group operates on RedHat 8.8, a Linux distribution, and they have their own server. This would enable the platform to have constant availability and a non-variable IP address. Additionally, TIGUM’s server is dedicated to hosting research applications, providing a secure and stable environment for the platform’s operation.
In the same vein, it was decided to implement a database to store the signal data acquired by the platform, as well as user information. Furthermore, a frontend was developed using the React framework; through its components, it serves the views through which users access the platform visually, resulting in a user-friendly and intuitive interface for the end user. Finally, to improve task delegation on the server, a service-oriented architecture was implemented, containerizing the backend, database, and frontend components on the server. The applied architecture can be observed in Figure 2.
2.4. Development of Platform
As mentioned earlier, signal acquisition is carried out using the Arduino together with the e-Health v2.0 development board. As Figure 3 shows, the system operates through the libraries provided by this board, and each library allows the acquisition of signals from the pins identified on the board itself.
Within the framework of the project, the available sensors from the TIGUM research group were utilized, encompassing measurements of temperature, pulse oximetry, galvanic skin response (GSR), EKG, and airflow. In total, six signals are monitored, with sampling conducted every millisecond for each signal. This approach was adopted because implementing an asynchronous signal measurement system on Arduino exceeded the project’s scope. Moreover, measuring airflow and EKG signals was essential for generating diagrams that facilitate precise visualization. The functioning of the other signals will be elaborated on in the corresponding frontend section.
To initiate the sampling process, a signal is required, which is transmitted through the serial port controlled by the Raspberry Pi. Specifically, the system writes "T" to start the process and "D" to stop it. These directives are emitted whenever the "btn init" event is triggered via the WebSocket, depending on the assigned state. Figure 4 presents this exchange in detail. Following this, the Arduino constructs a frame that is transmitted through the serial port to the Raspberry Pi. This frame comprises only the sampled signal values, each separated by a space. Upon reaching the Raspberry Pi, the text is converted into an object, which is then sent to the server through sockets.
The RPi will listen for events from the serial connection and, as mentioned earlier, will process them accordingly. After processing, the event will be transmitted through the socket, allowing the server to receive it and begin broadcasting it to the client.
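The Raspberry Pi side of this exchange can be sketched as two small functions. The signal names and their order in the frame are assumptions for illustration, since the text only specifies a space-separated list of sampled values:

```javascript
// Choose the serial command from the "btn init" state: "T" starts
// sampling on the Arduino, "D" stops it (as described in the text).
function commandForState(state) {
  return state === 1 ? "T" : "D";
}

// Convert one space-separated Arduino frame into an object for the socket.
// The names and their order here are hypothetical; the actual frame layout
// is defined by the Arduino sketch.
const SIGNALS = ["temperature", "pulse", "spo2", "ekg", "gsr", "airflow"];
function parseFrame(line) {
  const values = line.trim().split(/\s+/).map(Number);
  const sample = {};
  SIGNALS.forEach((name, i) => { sample[name] = values[i]; });
  return sample;
}
```

In the real bridge, `commandForState` would be written to the serial port on each "btn init" event, and `parseFrame` applied to each line read from the Arduino before emitting it through the socket.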
The backend is implemented in three parts:
Signal Acquisition and Transmission System: Implemented on a server using Express, this system uses a Socket.io socket server to listen for and send events and requests. It starts by listening for the event from the frontend that contains the state of the button to start or stop data transmission. Once the event is received from the board, it begins continuously sending data to the frontend server. It is worth mentioning that at this point, the data are not sent asynchronously, as it was observed that the system slowed down due to the constant influx of different events that overloaded the frontend.
Signal and User Storage System: It expects the information event, which allows for obtaining the acquired signal data from the previous system. Here, the sampling time of each signal is abstracted and assigned to a timer that takes samples each time it runs, acquiring the signal data through the data event. These data are then saved in the user’s table obtained from the information event. It is worth noting that the system has low latency because the database and backend are on the same server.
Information Acquisition API: It acts as an intermediary layer between the frontend and the database. This API provides information about registered users on the platform, as well as the latest acquired signals. The main goal of the API is to offer an efficient and secure way to access database information without direct user interaction. Additionally, the API has various functionalities that allow for modifying specific user values and obtaining users with specific characteristics. This streamlines the access and manipulation of database information to meet the requirements of the frontend application.
Figure 5 presents the architecture of the data transmission system, involving three main components: a backend server, a Raspberry Pi, and an Arduino. A step-by-step description of the illustrated processes follows.
In the frontend, the decision was made to utilize the React framework due to its efficient handling of user elements. As mentioned in the theoretical framework, React is component-based, which serves as the fundamental building blocks of the project. In this case, several components were employed, starting with routers that define the routes for the three pages that can be accessed: the main page, the testing page, and the database page. Each of these pages has its own components, enabling the visualization of signals.
Figure 6 outlines the architecture of an interactive web application, highlighting the communication between the frontend and backend. It shows how user activity prompts API requests that trigger component rendering or user-related messages on the frontend, while the backend manages data flow and user information.
In the main page, an initial evaluation is performed to determine if data are already being measured for a user. If not, the first component is displayed, which corresponds to the slides. These slides are responsible for acquiring user data, including the user-defined sampling time. Once this information is defined, it is sent to the backend through the event with the same name, and the state is changed to “wait,” where the user awaits the next option.
These options can be the total shutdown option, defined by the red button, or the power-on option, defined by the blue button, which sends the "state == 1" status to the backend. After this, the following steps are executed through the graph components:
Prior to initialization, once each graph is rendered, an object is created for it, defining its characteristics and providing subsequent access. Most of the actions connected to the other controllers are performed through these objects.
Upon receiving the data event, the object executes its executor() method, which carries out the subsequent actions.
In the ValuesAdm controller, the frontend graph information is managed. The next data point from the backend is assigned, and it is evaluated whether the maximum amount of data that can be added to the graph has been reached. This is determined by a conditional statement comparing the number of data points displayed on the current graph with the length of the initial graph. If the limit is reached, the behavior changes: old data are cleared as new data are added. This prevents data accumulation in the graph and maintains optimal performance. The same process is performed for the graph labels, but in this case they are generated automatically using a counter: each time a new data point is added to the graph, the counter is incremented, and a new label is added for that value. This entire process runs at the sampling interval specified by the user at the beginning.
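The buffering behavior described for ValuesAdm can be sketched as follows; the names are hypothetical, since the actual controller updates a chart dataset object:

```javascript
// Sketch of the ValuesAdm behavior: once the graph holds maxPoints
// samples, the oldest point (and its label) is dropped as each new one
// is added, and labels come from a running counter.
function createGraphBuffer(maxPoints) {
  const data = [];
  const labels = [];
  let counter = 0;
  return {
    push(value) {
      if (data.length >= maxPoints) { data.shift(); labels.shift(); }
      data.push(value);
      labels.push(++counter);  // label generated by the counter
    },
    data, labels
  };
}
```

Bounding the arrays this way keeps the rendering cost of each update constant, which is what preserves performance during long acquisitions.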
Meanwhile, the evaluator controller is also executed, responsible for setting the current flag on the graph. The flag evaluation is conditioned on the specific graph. For signals with long sampling times, such as temperature, pulse oximetry, and heart rate, the average of the current and previous data points is obtained. For signals with short sampling times, such as airflow and EKG, the number of peaks obtained in a minute is evaluated to determine the frequency of these signals. This process is particularly useful for detecting patterns in the signals. After obtaining this value, a request is made to the "info" element stored in the "public" folder, where these values can be modified by accessing the JSON file. The limits for each flag can be seen in Table 1. Finally, the flag to be displayed at that moment is defined, changing the color of the graph accordingly. It is important to mention that this process is performed continuously while the signal is being measured, allowing the real-time visualization of the data and its evaluation on the graph.
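The two evaluation modes described above can be sketched as follows; the threshold values and function names are illustrative assumptions, as the real limits are loaded from the JSON file and listed in Table 1:

```javascript
// Fast signals (airflow, EKG): count peaks to estimate frequency.
// A peak is a sample above the threshold that exceeds its neighbors.
function countPeaks(samples, threshold) {
  let peaks = 0;
  for (let i = 1; i < samples.length - 1; i++) {
    if (samples[i] > threshold &&
        samples[i] > samples[i - 1] && samples[i] >= samples[i + 1]) {
      peaks++;
    }
  }
  return peaks;
}

// Slow signals (temperature, pulse oximetry, heart rate): average the
// current and previous data points, then classify against [low, high].
function flagForSlowSignal(prev, curr, limits) {
  const avg = (prev + curr) / 2;
  if (avg < limits.low) return "low";
  if (avg > limits.high) return "high";
  return "normal";
}
```

The returned flag would then drive the color change of the corresponding graph.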
Finally, once the data collection has started, the system waits for the user to decide whether to continue or stop the data acquisition. If the acquisition is stopped, the system is not completely reset, allowing the user to continue working with the same profile and previously established sampling times. To stop the acquisition, the "state == 0" status is sent to the backend, which halts the data transmission and clears the graphs, resetting them to their initial values. It is important to note that at this point, the active user is not removed from the database. If the decision is made to restart the entire process, the previously described steps are followed, but a false status is sent for the active user, bringing the page back to its initial state, where a new user can be selected along with their corresponding values. Figure 7 shows the frontend components.
Additionally, for the testing protocol, the following architecture was developed within the TIGUM research group (Figure 8). This technological infrastructure, designed for the TIGUM system, is supported by a redundant, high-availability platform. It is based on an Oracle architecture with x86 servers and storage systems that use both solid-state and rotational disks, allowing high-speed performance in both data reading and writing.
WebSocket Performance Metrics: This section presents the performance evaluation of the open-source telemedicine platform based on WebSockets. The primary performance metrics evaluated were the average latency, average jitter, data transmission rate, system stability, and precision. A comprehensive analysis of these key performance metrics was conducted to validate the practical effectiveness of the platform.
Average Latency: In our test environment, latency refers to the time delay between the acquisition of a sensor measurement and the reception of this signal on the client side. Although the WebSocket architecture operates over wired connections, we calculate the average latency using the formula

L_avg = L_total / N = (L1 + L2 + … + Ln) / N

where the total latency L_total, in milliseconds, is the sum of the latencies of the individual measurements (L1 + L2 + … + Ln), and N is the number of measurements. The proposed system achieved an average latency of 3 ms, which is 2 ms lower than the latency reported for the EKG measurement system of [8].
Average Jitter: Jitter in a communication system measures the variability of latency over a period of time. Jitter is vital in telemedicine and health measurements, since the required response times depend on the procedure. Jitter is calculated with the following formula:

J_avg = ( |L2 − L1| + |L3 − L2| + … + |Ln − Ln−1| ) / (N − 1)

where each term |Li+1 − Li| is the difference between consecutive latencies, for measurements 1 to n − 1, and the sum is divided by the number of measurements minus 1. Computing all the results, the average jitter was 2 ms.
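Under the definitions above, both averages can be computed with a minimal numeric sketch:

```javascript
// Average latency: sum of per-measurement latencies divided by their count.
function averageLatency(latencies) {
  return latencies.reduce((sum, l) => sum + l, 0) / latencies.length;
}

// Average jitter: mean absolute difference between consecutive latencies,
// i.e. the sum of |L[i] - L[i-1]| divided by (N - 1).
function averageJitter(latencies) {
  let total = 0;
  for (let i = 1; i < latencies.length; i++) {
    total += Math.abs(latencies[i] - latencies[i - 1]);
  }
  return total / (latencies.length - 1);
}
```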
Precision: Precision in this study is evaluated through the standard deviation (SD) and the confidence intervals (CIs) for key biosignals, including HR (heart rate) and SpO2 (oxygen saturation). The low SD values and the narrow CIs, as seen in the 95% CI for HR with a lower limit of 84.74 and an upper limit of 85.62, reflect a high consistency in the measurements. When comparing the data transmitted through the platform with the reference devices, it was verified that the information is precise, clinically reliable, and timely.
Data Transmission Rate: The frequency of measurements (n = 7734) indicates the capacity of the platform to handle a substantial volume of data within a given period. This metric relates to the transmission speed, demonstrating the platform's capacity to maintain data flow without loss or significant delay, even under fluctuating traffic levels.
System Stability: We assessed the system stability by calculating availability and error rates under varying conditions. The skewness and kurtosis values of the dataset provide insight into the distribution of the data under different usage scenarios: a skewness of −1.36 and a moderate kurtosis of 3.66 for HR indicate that the platform produces stable readings with minimal sudden variations, ensuring consistent performance even under high-traffic conditions.
Statistical Validation: To establish the reliability of the results, statistical validation was performed using methods such as hypothesis testing, confidence intervals, and regression analysis where appropriate, in comparison with article [17]. Hypothesis testing was used to compare the average latency of our platform with that of other solutions, confirming that the observed differences are statistically significant and not due to random variations.
Comparison with Existing Solutions: To assess the relative performance of our platform, a comparative analysis was performed against existing telemedicine solutions on metrics such as latency, accuracy, transmission speed, and stability, in comparison with article [19].
Table 3 presents key performance metrics related to a telemedicine system’s data transmission and measurement reliability.
Latency, defined as the average response time, was recorded at 3 ms with a standard deviation of 0.5 ms, demonstrating consistent performance across the 7734 data points collected under diverse network conditions, including Wi-Fi and cellular data. The confidence interval of [2.8, 3.2] ms indicates a high level of measurement accuracy, with latency values ranging from a minimum of 2 ms to a maximum of 5 ms.
Jitter, which quantifies the variability in latency between consecutive transmissions, averaged 2 ms with a standard deviation of 0.3 ms. The confidence interval of [1.7, 2.3] ms confirms that jitter remains within a stable range, with observed minimum and maximum values of 1 ms and 4 ms, respectively.
The data transmission rate was measured at an average of 150 kbps, accompanied by a standard deviation of 20 kbps, indicating reliable performance across different load conditions while assuming no packet loss. The confidence interval of [140, 160] kbps suggests a dependable transmission rate, with minimum and maximum values of 120 kbps and 180 kbps, respectively.
Heart rate (HR) accuracy was reported with an average of 85.08 bpm and a substantial standard deviation of 24.20 bpm, reflecting variability in the measurements. The confidence interval of [84.74, 85.62] bpm indicates the system's capability to provide consistent and reliable data compared to reference devices, with heart rate values ranging from a minimum of 70 bpm to a maximum of 100 bpm.
Regarding oxygen saturation (SpO2), the average measurement was 84.44%, with a standard deviation of 32.32%. The confidence interval of [83.72, 85.16]% signifies reliable biosignal transmission, with recorded values extending from 70% to 97%.
Lastly, the system stability was assessed through a skewness and kurtosis analysis, yielding values of −1.36 and 3.66, respectively. The moderate negative skewness, combined with moderate kurtosis, indicates that the system maintains stability across various usage conditions, reflecting robust overall performance.
In addition, a scatter graph was generated to analyze the performance metrics and statistical validation of the proposed system, which can be seen in Figure 9.
This graph reveals that the metric trends show peaks in the series around latency values of 2 to 3 ms. Beyond this latency threshold, most of the variables stabilize and show a more consistent pattern. This suggests that these metrics do not vary significantly at higher latency levels, indicating solid system performance under various conditions.
Notably, the jitter metric, which remains at relatively low values, shows minimal variation, indicating high stability and minimal latency influence. This stability is a crucial advantage, as consistent jitter levels are essential for applications requiring precise timing and data consistency.
In comparison to article [20], which often shows significant performance degradation at even moderate latency levels, these findings underscore the robustness and reliability of the proposed system. By maintaining steady performance across latency variations, our system can better support critical applications, particularly in telemedicine and health monitoring, where consistent data transmission and real-time accuracy are paramount. This analysis not only highlights the system's resilience but also establishes its relevance as a dependable solution in contexts where fluctuating network conditions are common.