Article

Geriatric Care Management System Powered by the IoT and Computer Vision Techniques

by Agne Paulauskaite-Taraseviciene 1, Julius Siaulys 1, Kristina Sutiene 2,*, Titas Petravicius 1, Skirmantas Navickas 1, Marius Oliandra 1, Andrius Rapalis 3,4 and Justinas Balciunas 5

1 Faculty of Informatics, Kaunas University of Technology, Studentu 50, 51368 Kaunas, Lithuania
2 Department of Mathematical Modeling, Kaunas University of Technology, Studentu 50, 51368 Kaunas, Lithuania
3 Biomedical Engineering Institute, Kaunas University of Technology, K. Barsausko 59, 51423 Kaunas, Lithuania
4 Faculty of Electrical and Electronics Engineering, Kaunas University of Technology, Studentu 48, 51367 Kaunas, Lithuania
5 Faculty of Medicine, Vilnius University, Universiteto 3, 01513 Vilnius, Lithuania
* Author to whom correspondence should be addressed.
Healthcare 2023, 11(8), 1152; https://doi.org/10.3390/healthcare11081152
Submission received: 22 February 2023 / Revised: 3 April 2023 / Accepted: 13 April 2023 / Published: 17 April 2023
(This article belongs to the Special Issue Information Technologies Applied on Healthcare)

Abstract:
The digitalisation of geriatric care refers to the use of emerging technologies to manage and provide person-centered care to the elderly by collecting patients’ data electronically and using them to streamline the care process, which improves the overall quality, accuracy, and efficiency of healthcare. In many countries, healthcare providers still rely on the manual measurement of bioparameters, inconsistent monitoring, and paper-based care plans to manage and deliver care to elderly patients. This can lead to a number of problems, including incomplete and inaccurate record-keeping, errors, and delays in identifying and resolving health problems. The purpose of this study is to develop a geriatric care management system that combines signals from various wearable sensors, noncontact measurement devices, and image recognition techniques to monitor and detect changes in the health status of a person. The system relies on deep learning algorithms and the Internet of Things (IoT) to identify the patient and their six most pertinent poses. In addition, an algorithm has been developed to monitor changes in the patient’s position over a longer period of time, which could be important for detecting health problems in a timely manner and taking appropriate measures. Finally, based on expert knowledge and a priori rules integrated in a decision-tree-based model, an automated final decision on the status of the nursing care plan is generated to support nursing staff.

1. Introduction

Geriatric care is a field of healthcare that focuses on the physical, mental, and social needs of older adults. As people age, they may experience physical, cognitive, and social changes that require special care and support. Geriatric care is based on the specific needs of older adults and aims to improve their health and well-being as well as manage age-related diseases and conditions so that they can maintain their independence, quality of life, and overall comfort. Such care often involves a multidisciplinary approach with care provided by a team of health care professionals, including physicians, nurses, therapists, and social workers, who are trained in gerontology and geriatrics [1,2]. The estimated number of dependent people in need of some form of long-term care in Europe is 30.8 million, and this is expected to increase to 38 million by 2050. Furthermore, the expected shortage of nurses will reach 2.3 million in 2030. By 2080, the population aged 80 years and older in Europe will have multiplied by 2.5. It should be noted that the majority of dependent patients suffer from Alzheimer’s disease, and those with chronic conditions such as past myocardial infarction, congestive heart failure, cardiac arrhythmia, renal failure, and chronic pulmonary disease have an increased risk of mortality in nursing homes [3].
Currently, the main problems are caused by the absence of tools to design automated care plans. The problems identified are related to the lack of digital evidence-based protocols for different situations and the nonadherence to existing protocols by nursing staff. Typically, an individualised nursing care plan is developed for the elderly patient upon admission to meet their needs. This plan is developed based on a thorough assessment of the person’s medical history and evidence-based care practices. As elderly individuals reside in nursing homes, it is common for their health to decline, which makes it crucial to monitor their health status while they are there. Thus, caregivers must regularly check important biometric data, such as blood pressure, heart rate, body temperature, and respiratory rate. Collecting and documenting patient vital signs data manually is a relatively slow and therefore inefficient process. Depending on the types of vital signs, it usually takes up to five minutes to assess three to six vital signs [4]. Moreover, this information is usually documented in paper form separately from the nursing care plans, and therefore, the whole process takes up to 13 min per patient [5]. Furthermore, care plans have to be regularly re-evaluated by comparing current and historical health records to look for abnormalities and changes that could have clinical significance. However, biometric data are documented separately from nursing care plans and records of doctors. With such fragmented data sources, the process is human-dependent, highly inefficient, and cumbersome and can take up to 37 min per patient [5,6]. Moreover, in the absence of a systematic approach in geriatric care management, it becomes challenging to quickly capture monitoring data and act on them. This can cause caregivers to miss any unusual changes in the biometric data, leading to delays in administering treatment.
During the course of our research, several hospices from Latvia, Estonia, and Poland (e.g., Orpea) were contacted, and it was concluded that geriatric care management systems with a digital care plan and remote monitoring solutions are currently not available in these markets. Facilities rely on outdated software that was developed for inpatient hospital services without taking into account the nursing care plan. In particular, in Scandinavian and UK markets (e.g., Appva), some tools have been developed that include a simple digitised nursing care plan without remote monitoring or decision support capabilities; however, none of these companies have shown an interest in providing the service in the Baltic States. Therefore, in many countries, including the Baltic States, nurses use paper-based care plan templates and manually prepare time-consuming documents. Consequently, data loss and missing information in care plans are common problems. Based on the problems identified during oral interviews and discussions with various stakeholders in Lithuania, the following needs for long-term care at home and in specialised institutions were identified as the most recurrent and yet feasible to address with limited funding: (1) easily create nursing care plans for new patients with action protocols for nursing staff; (2) ensure adherence to and traceability of the execution of the protocols; (3) automate patient monitoring; (4) reduce manual paper documentation; (5) easily adapt nursing care plans according to changes in the health of the patient; and (6) enable a transition from reactive care to proactive care.
Digitalised care systems could be a solution to meet the multidimensional need to monitor whether elderly patients in geriatric care facilities are receiving optimal care, thus monitoring patients more efficiently and providing personalised care. Digitalisation also helps a relatively small number of healthcare workers to reduce the need for repetitive manual work and use the collected data for proactive decision making. Furthermore, the combination of Internet of Things (IoT) and artificial intelligence (AI) technologies can aid in the analysis of data and ensure continuous monitoring of elderly patients to positively impact their care and outcomes [7,8,9,10]. By collecting data on patient activity and health, advanced AI algorithms can analyse patterns and detect deviations from normal behaviour, allowing caregivers to respond in a timely manner.
In this study, we propose an intelligent geriatric care management system based on AI and IoT to track and detect changes in the health status of elderly patients, thus ensuring efficient digitalisation of personalised care plans. The proposed solution can be used to tackle two of the most urgent problems in the area: nursing staff shortages and the costly and inefficient long-term care process. Although home care for dependent and elderly people is becoming more and more popular, it is still not a viable option for everyone due to the expensive infrastructure required and the difficulties in gaining access to their homes in an emergency. Even if people choose to live in a nursing home, it is still difficult to monitor, care for, and treat elderly residents on a regular basis. With the growing demand for healthcare nurses, fragmented remote health monitoring tools, and the lack of existing solutions for real-time modification of nursing care plans, it is crucial to have a cost-effective and semi-autonomous solution available on the market.

2. Related Works

In recent years, there has been a growing interest in the development of digital health solutions to support older people and promote healthy ageing [11,12]. However, elderly individuals are more likely to develop diseases such as dementia, diabetes, and cataracts, suffer from physical and cognitive impairments, and have low levels of physical activity, all of which lead to a continuous decline in their health. This makes it difficult for staff to keep track of elderly people, to monitor changes in their health, to record and store all readings systematically, and to always react quickly and appropriately to the changes and adjust the care plan. Furthermore, as life expectancy continues to increase, the need for nurses working in geriatrics is also increasing. As such, remote monitoring and wearable devices can be used to measure vital signals, evaluate physical activity, and inform caregivers or physicians about changes in their health, which aids in the early detection of health risks [13,14].

2.1. The Use of Wearable Devices

Recently, wearable technology has benefited from technological progress, as the size of devices has significantly reduced, while the efficiency of energy consumption has improved simultaneously [15]. In particular, wearable technology can be used for a variety of purposes, ranging from keeping track of physical activity to monitoring clinically important health and safety data. Wearable devices provide real-time monitoring of the wearer’s walking speed, respiratory rate, sleep, energy expenditure, blood oxygen, blood pressure, and other related parameters [16]. Such devices can also be useful tools for people living with heart failure to facilitate exercise and recovery [17,18]. Comparatively, a study demonstrated the strong potential for improvement in healthcare through the use of wearable activity monitors in oncology trials [19]. The use of wearable technology to identify gait characteristics is another intriguing example [20], where lower limb joint angles and stride length were measured simultaneously with a prototype wearable sensor system. The study [21] investigated how a wearable device could help physicians to optimize antiepileptic treatment and prevent patients from sudden unexpected death due to epilepsy. For particular groups of individuals that suffer from chronic disease such as diabetes mellitus, cardiac disease, or chronic obstructive pulmonary disease, wearables may be used to monitor changes in health symptoms during treatment and may contribute to the personalisation of healthcare [22,23,24]. The use of wearables among the elderly population brings additional challenges. For example, it is very important to detect falls, which has already become a topic of particular importance in this field. For example, in [25], an edge computing framework was proposed to detect individuals’ falls using real-time monitoring by cost-effective wearable sensors. For this purpose, an IoT-based system that makes use of big data, cloud computing, wireless sensor networks, and smart devices was developed and integrated with an LSTM model, showing very promising results for the detection of falls by elderly people in indoor circumstances. The validity and reliability of wearables have been addressed by many studies focusing on different classes of devices used to measure activity or biometric data [26,27,28,29]. Apparently, there is no consensus among researchers, as findings depend on the manufacturer, device type, and the purpose for which it was used. This is also true because devices are constantly being upgraded to new models, which suggests that their validity and reliability will improve with time.

2.2. Contactless Measurement of Vital Signs

There are still some concerns regarding the reliability and accuracy of wearables to detect physical activity and evaluate health-related outcomes within elderly individuals, as they are generally designed primarily to collect biometric information during activities of daily living in the general population [30,31,32,33]. First, the ability of older people to recognise the need for wearables and properly use them poses new challenges. Second, the high prevalence of different diseases in this population and the heterogeneity associated with their lifestyle, needs, preferences, and health point to the need for wearable devices that are valid and reliable and that can accurately measure and monitor important signals. Additionally, taking into account the problems associated with time-inefficient work in care homes, contactless monitoring of vital signs may be beneficial for healthcare [34,35,36]. In particular, contactless measurement techniques can be applied to measure the respiratory rate, one of the four most important vital signs, and to monitor heart rate variability [37]. Monitoring the respiratory rate is useful for the recognition of psychophysiological conditions, the treatment of chronic diseases of the respiratory system, and the recognition of dangerous conditions [38,39]. Combining respiratory rate and heart rate data provides even more useful information on the condition of the cardiovascular system [40,41]. The most promising method of noncontact monitoring of the respiratory process is through infrared and near-infrared cameras [42,43]. An infrared camera is a device that can capture small temperature changes on the surface of an object and/or in the environment. This device can record the temperature fluctuations of airflow from the mouth or nose. Infrared cameras can successfully measure the respiration rate if advanced computer vision algorithms that are insensitive to constantly varying lighting and temperature conditions are applied.

2.3. Benefits of Computer Vision Techniques

Image recognition is one of the main methods used to determine an individual’s pose and activity. The use of pose estimation technology in geriatric care offers several advantages, including the continuous monitoring of patients, early detection of potential health problems, essential data on the patient’s movements, and, in particular, the detection of emergency situations (e.g., the person is lying on the ground and not moving) [44]. Pose estimation algorithms vary in complexity and accuracy, ranging from simple rule-based algorithms to more complex deep-learning-based algorithms. Simple algorithms may be faster and easier to implement, but they are typically not as accurate as more complex ones. Deep-learning-based algorithms, on the other hand, may provide more accurate results but may be more computationally intensive and require large amounts of training data. Comparatively, deep-learning-based methods have shown great potential for improving the accuracy of human posture recognition, for both single individuals [45,46] and multiple individuals [47,48] in images or videos. In particular, methods such as the multisource deep model [49], the position refinement model [50,51], and the stacked hourglass network [52] have demonstrated the effectiveness of deep learning in human posture recognition. These methods use convolutional neural networks to extract features from input images and estimate the positions of human joints. Notably, the early detection of falls [53,54,55] is one of the most important functions of the geriatric care system as it allows prompt medical assistance to be provided and can prevent further injuries. Human fall detection systems can help to identify when a fall has occurred and alert caregivers or emergency services immediately. Therefore, various types of fall detection and prediction systems suggested in the field not only rely on image recognition techniques [42,56,57] but also employ other information sources, for example, biological factors or signals obtained by wearable devices that are more commonly used for fall risk assessments [58,59]. Although computer vision techniques have been used widely and very successfully in medicine, the monitoring and identification of patients in nursing homes should take into account the fact that image capture devices cannot always be used to track patients (e.g., hygiene rooms) according to privacy and ethical requirements [60,61]. In addition, capturing certain information with cameras may not always be possible due to changes in the environment or, for instance, in cases when the person reappears or is partially obscured by other objects, which poses the additional challenge of re-identifying the same individual. Therefore, it is important to determine which factors may be automatically recorded and tracked over time utilising image processing technology. It is also crucial that the solution is quick. As such, it is essential to carefully assess the trade-off between precision and speed in order to choose a solution that meets the specific requirements of the application.

3. Materials and Methods

The proposed solution includes (1) an IoT module with integrated wearable and contactless devices; (2) an AI module that utilises deep learning architectures for the image recognition of patient posture and activity; and (3) a decision support module for generating the patient-personalised nursing care plan.
An IoT module has been developed to monitor and transfer data in real time. It consists of sensors connected to an Arduino microcontroller to monitor the patient’s vital signs. This module integrates not only body-worn devices that are networked but also a number of remote devices for monitoring health data. In general, such devices can collect and transmit the collected data, such as heart rate, body temperature, and physical activity, to a remote system or application, usually through wireless connectivity (e.g., Bluetooth, Wi-Fi). Some wearable health devices also have built-in sensors and algorithms that can perform basic health assessments, such as tracking sleep patterns, counting steps taken, and estimating calorie expenditure.
In this study, four IoT devices, a Fitbit wristband, smart scale, smart blood pressure device, and a camera, were used to monitor the health of elderly patients in a nursing home (Table 1). Data collected from these devices were sent to the server and processed to obtain the final decision (Figure 1).
A patient room in a hospital for the elderly was equipped with cameras to continuously monitor the status of the patients in real time. The video footage from these cameras was sent directly to a server where it was stored, processed, and analysed using image processing algorithms. This was necessary to monitor patients’ motor activity, changes, or progress in movement and consequently make the necessary changes to the care plan or react in emergency situations such as falls, pressure sores, etc. In parallel to the cameras, the patients were also given Fitbit wristbands for the additional monitoring of physiological parameters. These wristbands were equipped with sensors to monitor the patient’s vital signs, such as heart rate and respiratory rate. The data from the Fitbit bracelets were sent to Google Cloud and then to a server using APIs. The geriatric nurse also used specialised equipment to monitor the patients’ weight and blood pressure. Withings Body+ connected scales make it easy to monitor weight, BMI, body fat, water percentage, and muscle mass, and the readings are automatically synchronised with a smartphone via Wi-Fi or Bluetooth. Monitoring these parameters is particularly important for patients at risk of complications such as high blood pressure, diabetes, etc.
One of the main limitations is that off-the-shelf IoT devices do not offer the option of sending data directly to a third-party server. As a result, all data must first pass through the provider’s cloud services and use their API. This also leads to software limitations, such as only allowing one IoT device of a certain type per account, making the data collection pipeline more complex than is necessary.
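As a sketch of the resulting ingestion pipeline, the snippet below polls a vendor cloud API and persists the readings locally. The endpoint URL, token handling, and payload fields are hypothetical placeholders; real vendor APIs (e.g., the Fitbit Web API used in this study) have their own URLs, OAuth flows, and schemas.

# Hypothetical sketch: poll the vendor cloud, then store readings locally.
import sqlite3
import requests

API_URL = "https://api.example-vendor.com/v1/users/me/heartrate/today"  # placeholder
TOKEN = "..."  # OAuth2 access token obtained out of band

def fetch_and_store(db_path: str = "vitals.db") -> None:
    resp = requests.get(API_URL, headers={"Authorization": f"Bearer {TOKEN}"}, timeout=10)
    resp.raise_for_status()
    readings = resp.json().get("readings", [])  # hypothetical payload shape

    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS heart_rate (ts TEXT PRIMARY KEY, bpm REAL)")
    # Upsert so that re-polling the same window does not duplicate rows.
    con.executemany(
        "INSERT OR REPLACE INTO heart_rate (ts, bpm) VALUES (?, ?)",
        [(r["time"], r["bpm"]) for r in readings],
    )
    con.commit()
    con.close()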
Data captured by all smart devices not only digitalise the tracking of key physiological parameters but also enable the investigation of dependencies between these indicators and a patient’s health status or changes therein, provided that a statistically reliable sample is collected. If computer-vision-based health monitoring is involved, real-time visual information collection must include data storage and analysis [44,62,63]. For the experiment, data collection started on 15 September 2022, and data were uploaded to a Dell PowerEdge R7525 server (AMD EPYC 7452 32-Core Processor/2350 MHz; 512 GB RAM; NVIDIA GA100 [A100 PCIe 40 GB], 2 × 450 GB SSD; 2 × 25 Gbps LAN MT27800 Family [ConnectX-5] 2 × 100 GBps [ConnectX-6]). In total, 1.412 TB of data were accumulated during the observation period between 15 September 2022 and 28 December 2022. In addition to the data collected from the IoT devices, the system also allowed manual input from healthcare personnel. This included additional parameters that were not captured by the IoT devices, such as bedsores, changes in eating habits, changes in bowel movements, etc. These data were entered into an Excel spreadsheet by healthcare professionals and then automatically uploaded to the database.
By continuously monitoring a patient, wearable health devices can provide a more comprehensive view of a patient’s health status. However, it is important to ensure that the system is secure, respects the patient’s privacy, and complies with relevant regulations and standards [64,65]. It has also been observed that wearable gadgets are frequently taken off or discarded, either purposely or unintentionally, so a balance needs to be struck between functionality, dependability, and cost. This is a common issue with wearable health monitoring devices, particularly among patients with dementia, who may forget where they have placed their device or may not understand the importance of wearing it consistently.
Non-contact monitoring of vital signs using cameras and image recognition techniques is a promising area of development in healthcare technology and has the potential to improve the accessibility, efficiency, and cost-effectiveness of vital sign monitoring. The use of AI-based image recognition algorithms, mainly deep learning architectures, allows images to be automatically analysed to assess vital signs.
YOLOv3 (You Only Look Once, Version 3) [66] is a real-time object detection algorithm that allows specific objects to be identified in videos. YOLOv3 uses a variant of the Darknet neural network architecture, specifically Darknet-53, as its backbone network. The backbone consists of 53 convolutional layers and was pretrained on the ImageNet dataset, which was designed for computer vision research [67]. YOLOv3 also contains several key features that help to improve the detection accuracy and performance, including residual skip connections, upsampling, and multiscale detection. The most important feature of the algorithm is that it performs detection at three different scales by downsampling the dimensions of the input image by factors of 32, 16, and 8 (see Figure 2).
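For illustration, the following sketch runs person detection with a pretrained YOLOv3 model through OpenCV’s DNN module. The cfg/weights file names refer to the standard public Darknet release, not to artifacts from this study, and the thresholds are illustrative assumptions.

# Sketch: person detection with pretrained YOLOv3 via OpenCV's DNN module.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
layer_names = net.getUnconnectedOutLayersNames()  # the three scale outputs

def detect_people(frame, conf_thr=0.5, nms_thr=0.4):
    h, w = frame.shape[:2]
    # 416x416 input; YOLOv3 detects at strides 32, 16, and 8 internally.
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    boxes, scores = [], []
    for out in net.forward(layer_names):
        for det in out:
            class_scores = det[5:]
            cls = int(np.argmax(class_scores))
            conf = float(det[4] * class_scores[cls])  # objectness x class probability
            if cls == 0 and conf > conf_thr:  # class 0 = "person" in COCO
                cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
                boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
                scores.append(conf)
    keep = cv2.dnn.NMSBoxes(boxes, scores, conf_thr, nms_thr)
    return [boxes[i] for i in np.array(keep).flatten()]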
The AlphaPose algorithm allows us to detect keypoints in the bodies of several people with high accuracy in real-time video or images. The 17 keypoints detected by AlphaPose include the nose, eyes, ears, shoulders, elbows, wrists, hips, knees, and ankles (see Figure 3). As Figure 3 shows, the algorithm successfully detects these keypoints in video footage of a patient in motion. All of these keypoints are used to construct a human body skeleton representation, which can be used for various applications such as activity estimation [68], process recognition [69], and human fall detection in different environments [54,55,70,71].
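The 17-keypoint layout follows the COCO convention. A minimal representation of the skeleton, together with an occlusion heuristic of the kind discussed later (patients under blankets), might look as follows; the edge list and the confidence threshold are illustrative assumptions.

# The 17 keypoints in COCO order, as output by AlphaPose.
COCO_KEYPOINTS = [
    "nose", "left_eye", "right_eye", "left_ear", "right_ear",
    "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
    "left_wrist", "right_wrist", "left_hip", "right_hip",
    "left_knee", "right_knee", "left_ankle", "right_ankle",
]

# Limb connectivity (index pairs) used to draw/reason about the skeleton.
SKELETON_EDGES = [
    (5, 7), (7, 9), (6, 8), (8, 10),          # arms
    (11, 13), (13, 15), (12, 14), (14, 16),   # legs
    (5, 6), (11, 12), (5, 11), (6, 12),       # torso
    (0, 5), (0, 6),                           # head to shoulders
]

def visible_fraction(keypoints, conf_thr=0.3):
    """Share of keypoints detected above a confidence threshold; a crude
    occlusion signal (e.g., patient covered by a blanket). Each keypoint
    is an (x, y, confidence) triple."""
    return sum(1 for (_, _, c) in keypoints if c >= conf_thr) / len(keypoints)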
In particular, the decision support system relies primarily on the expert knowledge of geriatric staff nurses who are experienced in developing nursing plans for patients with different health problems. Their expertise has been used to create the rules that guide the decisions made by nursing professionals, which, in this case, are mapped onto an output stating how to proceed with the nursing plan. Individual experts suggest different decisions based on critical factors in certain cases, so it would seem reasonable to use fuzzy logic or neuro-fuzzy models, which are closer to human reasoning. However, given that most of the input variables are of the verbal and integer type, the use of such models would not be efficient. In addition, we do not have enough statistical data to create mappings between numerical values and verbal estimates (e.g., Breathing: Increased → X breaths per minute) and to create fuzzy sets based on them. Therefore, we decided to rely on the Decision Tree supervised learning approach, which can handle both numeric and non-numeric values, has fast decision times, enables parameter optimisation, and offers the possibility of refinement if the accuracy of the result is not satisfactory (e.g., Random Forest). In the decision support module, a Decision Tree with the Gini impurity criterion was used, and a prepruning process was applied to prevent overfitting. The Gini impurity is given by
$Gini = 1 - \sum_{j=1}^{c} p_j^2,$
where $p_j$ is the proportion of observations that belong to class $j$ at a particular node, and $c$ is the number of classes.
The fine-tuning of the Decision Tree hyperparameters resulted in a maximum depth of 3 and a minimum of 6 samples per terminal (leaf) node. An average classification accuracy of 92% was achieved.
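A minimal scikit-learn sketch of this configuration is shown below; the synthetic features and labels merely stand in for the ordinal-encoded patient state variables and the four care plan outputs, which are not public.

# Sketch: pre-pruned decision tree with the hyperparameters stated above.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.integers(0, 4, size=(300, 8))  # stand-in for ordinal-encoded state variables
y = rng.choice(["Continue", "Monitor", "Adjust", "Extra"], size=300)

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
clf = DecisionTreeClassifier(
    criterion="gini",     # Gini impurity, as in the formula above
    max_depth=3,          # pre-pruning: depth limit
    min_samples_leaf=6,   # pre-pruning: minimum samples per leaf node
    random_state=0,
)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))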
For patient reidentification, the study made use of the Bag of Visual Words (BOVW) approach, since it has been proven to be successful in a number of computer vision tasks, including human reidentification and human action classification [72,73,74]. With the BOVW approach, local features (such as SIFT descriptors) are first extracted from images and then grouped into a visual vocabulary. Each image is then represented as a histogram of visual words, which may be used for classification or retrieval tasks using machine learning algorithms. More specifically, the K-means algorithm was trained using the final list of features that were retrieved from patient images. As a result, the features were grouped into visual words. Finally, an ML-based classifier was used to generate a categorisation of images based on the newly created vocabulary.
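A condensed sketch of this pipeline, under stated assumptions (SIFT features, a K-means vocabulary, histogram encoding, and an SVM as the final classifier, as used in Section 4), is given below; the random images merely keep the example self-contained.

# Sketch: Bag of Visual Words with SIFT + K-means + SVM (illustrative only).
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)
train_imgs = [rng.integers(0, 255, (128, 128), dtype=np.uint8) for _ in range(20)]
labels = ["First", "Second", "Third", "None"] * 5  # patient classes as in Section 4

sift = cv2.SIFT_create()
feats, kept_labels = [], []
for img, lab in zip(train_imgs, labels):
    _, d = sift.detectAndCompute(img, None)  # local SIFT descriptors
    if d is not None:
        feats.append(d)
        kept_labels.append(lab)

all_desc = np.vstack(feats)
k = min(50, len(all_desc))  # vocabulary size (illustrative)
vocab = KMeans(n_clusters=k, n_init=4, random_state=0).fit(all_desc)

def bovw_histogram(desc):
    # Each image becomes a normalised histogram over the visual words.
    words = vocab.predict(desc)
    hist, _ = np.histogram(words, bins=k, range=(0, k))
    return hist / max(hist.sum(), 1)

X = np.array([bovw_histogram(d) for d in feats])
clf = SVC(kernel="rbf").fit(X, kept_labels)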

Performance Metrics

The F1 score is a metric that is widely used to evaluate the performance of a classification model. For a multiclass classification, the F1 score for each class is calculated using the one-vs-rest (OvR) method. In this approach, the metric for each class is determined separately. However, rather than assigning multiple F1 scores to each class, it is more common to take an average and obtain a single value to measure the overall performance. Three types of averaging methods are commonly used to calculate F1 scores in a multiclass classification, but only two of them are recommended for unbalanced data, as in our case. More specifically, macroaveraging calculates the F1 score for each class separately and derives an unweighted average of these scores. This means that each class is treated equally, regardless of the number of samples it contains. The macroaveraging F1 score is given by
$\text{Macro-avg } F1 = \frac{1}{n}\sum_{i=1}^{n} F1_i,$
where $n$ is the number of classes. In contrast, weighted averaging calculates the F1 score for each class separately and then takes the weighted average of these scores, where the weight for each class is proportional to the number of samples in that class. In this case, the F1 result is biased towards the larger classes, i.e.,
$\text{Weighted-avg } F1 = \sum_{i=1}^{n} w_i \times F1_i,$
where $w_i = k_i / N$ is the weight of class $i$, $N$ is the total number of samples, and $k_i$ is the number of samples in class $i$.
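Both averages can be obtained directly from scikit-learn, as the short sketch below illustrates on synthetic, deliberately imbalanced labels.

# Sketch: macro vs weighted F1 on an imbalanced multiclass problem.
from sklearn.metrics import f1_score

y_true = ["sitting"] * 50 + ["walking"] * 30 + ["fallen"] * 5
y_pred = (["sitting"] * 48 + ["walking"] * 2      # sitting mostly correct
          + ["walking"] * 28 + ["sitting"] * 2    # walking mostly correct
          + ["fallen"] * 3 + ["sitting"] * 2)     # minority class suffers most

print(f1_score(y_true, y_pred, average="macro"))     # unweighted mean of per-class F1
print(f1_score(y_true, y_pred, average="weighted"))  # weighted by class support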

4. Results

4.1. Implementation of the Geriatric Care System

The geriatric care plan system for end-users, i.e., nursing home staff, was created using the C# programming language and the ASP.NET Core 6.0 framework for the back-end. The front-end was built using Node.js version 19 and the Angular framework, while testing was carried out using Karma. PostgreSQL was used as an open-source relational database management system. The use of these technologies allowed developers to create a robust and scalable system that was able to handle the large amounts of data generated by IoT devices. In addition, Docker was used to containerise the software for deployment by combining the system and all of its dependencies into a single container that could be quickly deployed on any platform that is compatible with Docker. The architecture of the system is shown in Figure 4.
Wearable gadgets synchronise their data with cloud servers, since the data they generate need to be processed and analysed. Once the data have been received by the cloud servers, the company’s server pulls the data from the Google cloud servers using an API, then parses the files and saves the information in the PostgreSQL database. In contrast, the data captured by the cameras are sent directly to the server. This dataset is then processed in the back-end and analysed alongside the wearable data in order to provide a more comprehensive view of a patient’s health status. The main purpose of the .NET back-end is to act as a bridge, passing data between the Angular front-end and the PostgreSQL database. The back-end is written following the REST API methodology to provide a standardised way for different applications or devices to communicate.

4.2. AI-Based Data Analytics and Decision Making

To prepare a nursing care plan, a rich set of data is collected about the patient, as summarised in Table 2. Then, the recommendations for the actions to be taken in a nursing plan are generated from the geriatric care management system.
For demonstration purposes, the collected data were analysed to detect possible dependencies. The radar graph below (see Figure 5) is a single patient’s chart of selected vital signs over a 50-day observation period, displaying SBP (systolic blood pressure), DBP (diastolic blood pressure), HR (heart rate), SPO2 (oxygen saturation), sleeping hours, and weight measurements. The data analysis was carried out on three patients in the ward, but no significant dependencies between variables were identified. It may be assumed that some trends could be determined if the data were gathered over a longer period of time and additional variables, such as pain level, temperature, and even verbal-type indicators, were included.
The geriatric care personnel were responsible for writing the rules for the care plans. These rules were based on best practice and experience in the field and were designed to ensure that patients receive the most appropriate and effective care. More specifically, care plans were tailored to the specific needs of current and future patients, taking into account their medical history, current condition, and other relevant factors. The variables listed in Table 2 were included in the care plan, as they indicate the patient’s medical history, current health status, and other relevant factors that can influence treatment. On the basis of this information, the initial set of rules covered a wide range of scenarios and options, but after optimisation, the patient care plan eventually consisted of 61 rules with four possible outputs: “Continue current treatment”, “Monitor”, “Adjust”, or “Extra situation” (see Figure 6). All remaining cases that were not covered by the rules were assigned to the care plan “Continue current treatment” by default.
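As an illustration of how such a rule layer can be organised, the sketch below maps conditions over the patient state to the four outputs, with the stated default. The two example conditions are invented for illustration and are not actual rules from the system.

# Illustrative rule layer: the real system encodes 61 such (condition, output) pairs.
RULES = [
    (lambda s: s.get("movement") == "Slowed down" and s.get("appetite") == "Decreased",
     "Adjust"),
    (lambda s: s.get("pose") == "fallen on the ground",
     "Extra situation"),
    # ... further rules elicited from geriatric care personnel ...
]

def care_plan_output(state: dict) -> str:
    for condition, output in RULES:
        if condition(state):
            return output
    return "Continue current treatment"  # default for all uncovered cases

print(care_plan_output({"pose": "fallen on the ground"}))  # -> Extra situation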
In particular, the nursing care plan was designed to be flexible and adaptable to allow healthcare professionals to adjust the patient’s geriatric care according to his or her health status and changing needs. Those rules and the output generated by the geriatric care management system help healthcare personnel to respond more quickly to changes in a patient’s health, shape the patient-personalised geriatric care, reduce the risk of human error, and make better use of staff time by concentrating more on essential social support.
Figure 7 shows a schema for the AI-based decision support system. Four of the 21 variables (see Table 2) are registered automatically: three were retrieved from IoT devices, and one (change in movement) was obtained from the camera, its value being generated by the AI-based image recognition module. The remaining variables were taken from the MS Excel spreadsheet file, where all data were entered manually.

Image Recognition Solution

An AI-based image recognition module is a block consisting of several sequential algorithms that detect changes in the movement of a patient. In this project, we used a camera to film nursing home patients, that is, one room with three patients. The video was recorded at 1920 × 1080 pixel resolution with a frequency of 10 FPS, therefore storing 10 unique images per second, i.e., 10 FPS × 60 s = 600 images per minute. An image was analysed every five seconds under the assumption that no significant changes in motion would be detected within that time period.
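This sampling policy amounts to analysing every 50th frame; a minimal sketch, with a placeholder stream URL and a stub analysis hook, could look like this.

# Sketch: take one frame for analysis every five seconds from a 10 FPS stream.
import cv2

FPS = 10
ANALYSIS_PERIOD_S = 5
STRIDE = FPS * ANALYSIS_PERIOD_S  # 50 frames between analysed images

def analyse(frame) -> None:
    pass  # hand-off to the recognition pipeline described below

cap = cv2.VideoCapture("rtsp://camera.local/stream")  # placeholder source
frame_idx = 0
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % STRIDE == 0:
        analyse(frame)
    frame_idx += 1
cap.release()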
The image processing included the following steps (a minimal code sketch follows the list):
  • Brightening: to increase the overall luminosity of the image, improve visibility, and increase the clarity of the image during low-light conditions;
  • Cropping: to keep only regions of interest in the image;
  • Denoising: to remove noise from the image, typically by applying a low-pass filter; this also improved the quality and clarity of the image, which could be especially useful if the image was taken under poor conditions or with a low-quality camera;
  • Edge detection: to identify edges in the image by finding points of rapid intensity change; it can also be used to identify and extract features or objects in the image, such as lines, shapes, or boundaries.
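A minimal sketch of these four steps using common OpenCV operations is shown below; the exact filters and parameters used in the study are not stated, so those chosen here are assumptions.

# Sketch: brighten, crop, denoise, and extract edges from a BGR frame.
import cv2

def preprocess(frame, roi=(0, 0, 1920, 1080)):
    x, y, w, h = roi
    img = frame[y:y + h, x:x + w]                       # cropping to the region of interest
    img = cv2.convertScaleAbs(img, alpha=1.3, beta=25)  # brightening for low light
    img = cv2.fastNlMeansDenoisingColored(img, None, 7, 7, 7, 21)  # denoising
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 80, 160)                    # edge detection
    return img, edges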
After image processing, the algorithm integrating YOLOv3 and AlphaPose [75] was used to detect human poses. The algorithm includes three main components [76]. First, the Symmetric Spatial Transformer Network (SSTN) takes the detected bounding boxes and generates pose proposals. The SSTN allows the spatial context and correlations between the keypoints to be captured, leading to more accurate pose estimates. Second, Parametric Pose Non-Maximum Suppression (NMS) is used to remove redundant pose detections and improve the overall accuracy of the pose estimation. Finally, the Pose-Guided Proposals Generator is used to create a large sample of training proposals with the same distribution as the output of the human detector.
The next step is to identify and classify patient postures, which in this case included the following six postures: “walking”, “standing”, “sitting”, “fallen on the ground”, “lying in bed”, and “sleeping”. For the verification of all poses, a sequence of three images was taken over a period of 15 s, except for the last two poses. The poses of “sleeping” and “lying in bed” correlate with the parameters of the smart bracelet (sleep time and heart rate), so these parameters were also assessed. If the patient was found to be lying in bed, the assessment time was extended by up to one minute to identify whether the patient was “lying in bed” or “sleeping”. The pose was then assessed every minute until a new pose was captured.
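This bed/sleep disambiguation can be expressed as a small fusion step over the wristband readings collected during the extended assessment window; the thresholds and field names below are illustrative assumptions, not the study’s calibrated values.

# Sketch: fuse the visual pose with ~1 minute of wristband readings.
from dataclasses import dataclass

@dataclass
class WristbandSample:
    heart_rate: float   # bpm
    asleep_flag: bool   # vendor-reported sleep state

def resolve_bed_pose(visual_pose: str, samples: list) -> str:
    if visual_pose != "lying in bed" or not samples:
        return visual_pose
    mean_hr = sum(s.heart_rate for s in samples) / len(samples)
    asleep_share = sum(s.asleep_flag for s in samples) / len(samples)
    # Reduced heart rate plus a sustained sleep flag -> "sleeping".
    return "sleeping" if (asleep_share > 0.5 and mean_hr < 60) else "lying in bed"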
A pose change algorithm was developed to detect differences between adjacent images, that is, to identify that a person was walking rather than standing or that a person was just lying on a bed rather than sleeping. Figure 8 illustrates an example of three assessment frames of the “walking” pose taken five seconds apart. Comparing these frames, we can see that the pose remained the same, although the frames were not identical and the patient’s coordinates varied.
In order to define changes in movement habits, an additional algorithm (see Algorithm 1) was created to evaluate movement changes over a longer period of time $t_{m-n}$, where $m$ is the current time moment and $n$ is the number of days before $t_m$, $1 \le n \le 3$. This algorithm calculates the duration (hours) spent in each pose per day. The percentage change is then evaluated and compared with a threshold value $k_{th}$, and a response is generated that takes one of three possible values: “Unchanged”, “Slowed down”, or “Increased”. The pseudocode of the algorithm is provided below, followed by a Python rendering.
Algorithm 1 Evaluation of changes in movement habits
  • $ActH = Walking(hrs) + Sitting(hrs) + Standing(hrs)$
  • $k_{th} = 12.5\%$
  • $Diff(a, b) = ((a - b)/b) \times 100$
  • if $Diff(ActH(t_{m-1}), ActH(t_{m-2})) \le -k_{th}$ and $Diff(ActH(t_{m-2}), ActH(t_{m-3})) \le -k_{th}$ then
  •      $MoveH$ is slowed down
  • else if $Diff(ActH(t_{m-1}), ActH(t_{m-2})) \ge k_{th}$ and $Diff(ActH(t_{m-2}), ActH(t_{m-3})) \ge k_{th}$ then
  •      $MoveH$ is increased
  • else $MoveH$ is unchanged
  • end if
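A direct Python rendering of Algorithm 1, under the reconstruction above, is as follows.

# Day-over-day change in active hours against the +/-12.5% threshold.
K_TH = 12.5  # percent

def diff_pct(a: float, b: float) -> float:
    return (a - b) / b * 100

def movement_trend(act_h: list) -> str:
    """act_h = [ActH(t_m-1), ActH(t_m-2), ActH(t_m-3)]: walking + sitting +
    standing hours per day, most recent day first."""
    d1 = diff_pct(act_h[0], act_h[1])
    d2 = diff_pct(act_h[1], act_h[2])
    if d1 <= -K_TH and d2 <= -K_TH:
        return "Slowed down"
    if d1 >= K_TH and d2 >= K_TH:
        return "Increased"
    return "Unchanged"

print(movement_trend([5.0, 6.5, 8.0]))  # activity fell two days in a row -> Slowed down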
For demonstration purposes, the identification of challenging poses observed in the real-world environment is shown below. For instance, Figure 9 shows skeleton-based posture recognition in various lighting environments. In well-illuminated areas, patients can be detected by identifying all skeleton keypoints (Figure 9c). It has been observed that at twilight or at night, walking patients can be identified quite accurately with all keypoints (Figure 9a,b), but when patients are sleeping under their blankets, few keypoints are successfully detected (Figure 9d) or none are detected at all (Figure 9a).
Another example demonstrates skeleton-based posture recognition for two different scenarios. In Figure 10a, the keypoints in the patient’s body were detected when the patient was lying on the ground, which corresponds to the status “fallen on the ground”. To correctly recognise this pose, a training dataset with artificially simulated falling poses was created. Comparatively, Figure 10b shows that the keypoints were identified for all persons located in the ward, but the nursing personnel needed to be excluded. Therefore, additional data were collected to train a deep learning algorithm to distinguish staff from patients. Consequently, the nursing personnel were identified by their clothes, more specifically, the white trousers and blue top that they were always required to wear.
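As a simple colour heuristic in place of the trained deep model, distinguishing staff by uniform colour can be approximated with HSV masks over the upper and lower halves of a detected person crop; the thresholds below are assumptions rather than values from this study.

# Sketch: flag a person crop as staff if the top is blue and the trousers white.
import cv2

def looks_like_staff(person_crop_bgr) -> bool:
    hsv = cv2.cvtColor(person_crop_bgr, cv2.COLOR_BGR2HSV)
    h = hsv.shape[0]
    top, bottom = hsv[: h // 2], hsv[h // 2 :]
    blue = cv2.inRange(top, (100, 80, 60), (130, 255, 255))    # blue top
    white = cv2.inRange(bottom, (0, 0, 180), (179, 40, 255))   # white trousers
    return blue.mean() / 255 > 0.25 and white.mean() / 255 > 0.25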

4.3. Experimental Results

A posture detection algorithm of captured video material was tested to identify six different poses. The results are summarised in a confusion matrix to evaluate the performance of the algorithm. More specifically, the confusion matrix provides a visual representation of the number of correct and incorrect predictions made by the classifier: the rows represent the actual class labels, while the columns represent the predicted class labels. The diagonal elements show the number of correct predictions (see Figure 11).
The posture recognition algorithm was trained using 9300 labelled images and tested using 3792 images. An average posture recognition accuracy of 91.63% was achieved for the testing data set (Figure 11). Posture labelling was performed manually on the images obtained from the video stream for training and testing purposes. The Receiver Operating Characteristic (ROC) curve of the stratified testing dataset is provided in Figure 12.
The AUC values for each posture class ranged from 0.8790 to 0.9427, with the highest value obtained for the sitting posture class. The sleeping and lying in bed posture classes resulted in the lowest AUC values, with values of 0.9047 and 0.8790, respectively. These lower values suggest that it might be more difficult for the classifier to distinguish between these postures and others. Comparatively, the AUC value for the fallen on the ground posture class was 0.9177, which is slightly lower than those of the other more successfully recognised posture classes. This could be due to the lack of training data for this posture, which might have led to lower accuracy. Next, Table 3 summarises the estimated values of precision, recall, and F1 score for each class of interest, together with macro and weighted F1 scores for the evaluation of the overall performance of the posture recognition algorithm.
The patient re-identification testing results are summarised in Figure 13. The support vector machine (SVM) method was used to generate categories of images, providing labels for the patient classes. In our case, the maximum number of classes was set to four: three classes represented the maximum number of patients the ward can accommodate, while the separate class “None” referred to individuals other than the ward patients, such as nursing staff, family members, doctors, or others. The class names for patients were labelled “First”, “Second”, and “Third” (see Figure 13).
The confusion matrix in Figure 13 summarises how successfully the algorithm identifies the three ward patients in common areas. One can observe that an accuracy level of 90% was obtained for the “First” class, 88% for the “Second” class, and 91% for the “Third” class. Although the lowest accuracy level of 87% was achieved for the “None” class, considering that there can be around 6–13 people in a single frame, this is a reasonably good accuracy level. It was observed that female patients and nursing home staff were more easily recognised, whereas other patients, nonmedical nursing home staff, and visiting relatives were most often confused with the ward patients.
Finally, to conduct a real-time experiment, patient positioning verification was carried out. This included 16 scenarios with diverse positions. The results are summarised in Table 4. Two prediction errors occurred. More specifically, in one case the patient was “lying in bed” but was detected as “sleeping”, as he was covered up, his heart rate was reduced, he did not move for more than one minute, and it was night time. The other prediction error also related to the sleeping pose: the patient was lying in bed without being covered up; however, it was determined that he was not sleeping based on readings from the smart wristband. It should be noted that the prediction may also be affected by ambient light conditions. From a technical perspective, the proposed system performed pose estimation with an average output time of 182 ms, including the algorithm used to predict the pose from the possible outcomes.
To test the correctness of the output of the geriatric care management system, different scenarios involving nursing home staff were developed. The results revealed that the system provided the correct output in all cases. The system was designed to generate updates to the care plan immediately after any changes in the patient data are registered. When a healthcare professional makes a change to the care plan, the system analyses the data from the patient’s IoT devices and determines the appropriate course of action. The system then automatically updates the recommended action for the individual patient and alerts the healthcare professional. This allows healthcare professionals to stay up-to-date with the patient’s condition and make any necessary adjustments to their treatment in a timely manner.

5. Discussion

There are a few areas for improvement, as the proposed geriatric care management system is still in its initial stage of functioning. Patient identification, which relates to the continuous contactless assessment of the patient, is the most challenging concern. Comparatively, wearable devices do not raise any questions at the moment; their purpose is clear, but elderly people have a problem with wearing them because they find them annoying. The creation of the nursing care plan itself could be fully automated later on, with a follow-up on what action should be taken when the situation changes. However, to fully automate it, a lot of statistical data are needed, including the actions taken by nurses in each individual situation, from which the system could learn. Taking into account the current data (Table 2), there are at least 20,155,392,000 possible combinations of parameters that define the health condition, and this number is likely to increase in the future due to the inclusion of additional parameters. For this purpose, a list of actions is provided in the geriatric care management system, from which the care worker must indicate (select from the list) what they intend to do. In this way, a dataset of situations and decisions with all the actions taken accordingly is continuously accumulated. Once a representative sample of data has been accumulated (say, after at least one year), the correctness of the automated action can be improved.
The challenge of consistent and accurate patient identification makes it reasonable to consider methods of individual identification other than BOVW. As patients usually stay in their own wards, the accuracy of identification is high when the patients are present and nursing staff visit them a certain number of times per day. However, the accuracy drops in common areas (e.g., resting, eating) because more patients and personnel are present. Mainly because of their distinctive clothes, nursing workers are easier to identify (see column “None” in Figure 14). However, the elderly patients themselves are more likely to be confused with each other in common areas, with a best individual identification result of 0.914 (see Figure 14).
As an alternative method, gait recognition (GR) technology can be used for patient identification. This method examines the uniqueness of an individual’s walking or running pattern using machine learning (ML) techniques [77]. More specifically, ML algorithms are trained to recognize subtle differences in a person’s gait and thus can use this information to identify individuals even if their face is obscured in the image [78,79]. An additional benefit of GR technology is that gait information can be used not only for personal identification but also for medical purposes, such as the monitoring, diagnosis, and treatment of various movement disorders [80,81]. For example, gait recognition technology can be used to identify and diagnose various types of neurodegenerative diseases (such as Parkinson’s and Alzheimer’s disease) or to assess the course of a disease [82,83,84,85]. This can help doctors and healthcare professionals to develop more effective nursing care plans and interventions as well as monitor the progression of these conditions over time. However, GR technology usually requires a variety of sources or capture devices to gather data about an individual’s gait, including multiple video cameras, motion sensors, radars, and other specialized equipment [79,86]. In addition, the accuracy of gait recognition technology can be affected by a range of factors, including the angle at which the gait is captured.
Finally, it should be noted that elderly people are choosing to live independently at home for as long as possible. In such cases, an intelligent geriatric care management system adapted to the individual home and operating remotely can be very helpful for ensuring that the elderly person is safe, enabling faster reactions to emergencies (e.g., fall detection) and appropriate care. In the near future, we plan to develop the necessary software and hardware package for home care services (including the prerequisites for the proper functioning of the system, such as a stable internet connection) and to test it in a real-world environment, with the possibility of transmitting the data to the responsible physician for monitoring.

6. Conclusions

In this study, a geriatric care management system based on IoT and AI algorithms was proposed to monitor some of the most important vital signs in a noncontact manner and to facilitate the adjustment of the care plan. The system provides an intelligent assistance function, which suggests how to proceed with the patient’s care plan based on the available data and the decision support module.
A built-in posture recognition algorithm allows staff to react quickly to extreme situations, which are most likely to occur at night or during peak working hours. Another algorithm was developed to monitor changes in a patient’s movement habits over a longer period of time, which can be important for detecting health problems more quickly and taking appropriate action. This is a value-added functionality of the system, since nursing staff cannot monitor every patient 24 h a day without smart technology. During this study, it was observed that the most easily confused poses are “lying in bed” and “sleeping”. Detecting the individual or pose when the patient is fully or partially occluded is also quite challenging. However, capturing the pulse and sleep mode and combining these indicators with the outputs of the image recognition algorithms resulted in better detection of the “sleeping” and “lying in bed” poses; i.e., the accuracy was improved by around 15.48% and 22.06%, respectively. Additionally, the system is resistant to data deficiencies; if certain data are not received at the current time, the value is taken from the last recorded time. In any case, the final decision is made by a human, who has the opportunity to correct any error or incorrect output.
Other concerns include ensuring that smart health monitoring devices are worn and maintained at all times, as patients often want to remove the devices (particularly patients with dementia), and nursing staff do not always notice quickly when the devices need to be charged. Therefore, the involvement of care specialists is crucial to ensure that the system operates effectively and efficiently. In addition, it is equally important to make sure that patients feel comfortable and, moreover, that their privacy and trust in smart technologies are maintained at the appropriate level. By involving nursing staff in the implementation process, they can provide valuable feedback, suggestions, and ideas, leading to a better overall outcome.

Author Contributions

Conceptualization, A.P.-T., J.S. and J.B.; methodology, A.P.-T. and J.B.; software, J.S., S.N., M.O. and T.P.; validation, J.S., M.O., S.N. and T.P.; formal analysis, A.P.-T., J.S., K.S. and A.R.; investigation, A.P.-T., J.S., K.S., A.R. and J.B.; resources, J.B., A.P.-T. and A.R.; data curation, A.P.-T., J.B., J.S., S.N. and A.R.; writing—original draft preparation, A.P.-T., J.S., K.S. and S.N.; writing—review and editing, A.P.-T., K.S. and J.S.; visualization, A.P.-T. and K.S.; supervision, A.P.-T. and J.B.; project administration, J.B.; funding acquisition, J.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the project EIT Regional Innovation Scheme (EIT RIS)-EIT Health-Nursing.AI, 2022, Project ID: 2021-RIS_Innovation-033.

Institutional Review Board Statement

The research study was reviewed and approved by the Kaunas Region Biomedical Research Ethics Committee (No. BE-2-24).

Informed Consent Statement

Informed consent was obtained from patients’ relatives/caregivers involved in the study.

Data Availability Statement

The data are not publicly available due to privacy restrictions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ellis, G.; Sevdalis, N. Understanding and improving multidisciplinary team working in geriatric medicine. Age Ageing 2019, 48, 498–505.
  2. Elliott, M.N.; Beckett, M.K.; Cohea, C.; Lehrman, W.G.; Russ, C.; Cleary, P.D.; Giordano, L.A.; Goldstein, E.; Saliba, D. The hospital care experiences of older patients compared to younger patients. J. Am. Geriatr. Soc. 2022, 70, 3570–3577.
  3. Reber, K.C.; Lindlbauer, I.; Schulz, C.; Rapp, K.; König, H.H. Impact of morbidity on care need increase and mortality in nursing homes: A retrospective longitudinal study using administrative claims data. BMC Geriatr. 2020, 20, 439.
  4. Dall’Ora, C.; Griffiths, P.; Hope, J.; Briggs, J.; Jeremy, J.; Gerry, S.; Redfern, O. How long do nursing staff take to measure and record patients’ vital signs observations in hospital? A time-and-motion study. Int. J. Nurs. Stud. 2021, 118, 103921.
  5. Tang, V.; Choy, K.; Ho, G.; Lam, H.; Tsang, Y.P. An IoMT-based geriatric care management system for achieving smart health in nursing homes. Ind. Manag. Data Syst. 2019, ahead-of-print.
  6. Flores-Martin, D.; Rojo, J.; Moguel, E.; Berrocal, J.; Murillo, J.M.; Cai, Z. Smart Nursing Homes: Self-Management Architecture Based on IoT and Machine Learning for Rural Areas. Wirel. Commun. Mob. Comput. 2021, 2021.
  7. Lu, Z.X.; Qian, P.; Bi, D.; Ye, Z.W.; He, X.; Zhao, Y.H.; Su, L.; Li, S.L.; Zhu, Z.L. Application of AI and IoT in Clinical Medicine: Summary and Challenges. Curr. Med. Sci. 2021, 41, 1134–1150.
  8. Mbunge, E.; Muchemwa, B.; Jiyane, S.; Batani, J. Sensors and healthcare 5.0: Transformative shift in virtual care through emerging digital health technologies. Glob. Health J. 2021, 5, 169–177.
  9. Khan, M.F.; Ghazal, T.M.; Said, R.A.; Fatima, A.; Abbas, S.; Khan, M.A.; Issa, G.F.; Ahmad, M.; Khan, M.A. An IoMT-Enabled Smart Healthcare Model to Monitor Elderly People Using Machine Learning Technique. Comput. Intell. Neurosci. 2021, 2021, 1–10.
  10. Alshamrani, M. IoT and artificial intelligence implementations for remote healthcare monitoring systems: A survey. J. King Saud Univ.—Comput. Inf. Sci. 2022, 34, 4687–4701.
  11. Ienca, M.; Schneble, C.; Kressig, R.W.; Wangmo, T. Digital health interventions for healthy ageing: A qualitative user evaluation and ethical assessment. BMC Geriatr. 2021, 21, 412.
  12. Andreoni, G.; Mambretti, C. Privacy and Security Concerns in IoT-Based Healthcare Systems. In Digital Health Technology for Better Aging; Springer: Cham, Switzerland, 2021; p. 365.
  13. Kekade, S.; Hsieh, C.H.; Islam, M.M.; Atique, S.; Mohammed Khalfan, A.; Li, Y.C.; Abdul, S.S. The usefulness and actual use of wearable devices among the elderly population. Comput. Methods Programs Biomed. 2018, 153, 137–159.
  14. Chandrasekaran, R.; Katthula, V.; Moustakas, E. Too old for technology? Use of wearable healthcare devices by older adults and their willingness to share health data with providers. Health Inform. J. 2021, 27, 14604582211058073.
  15. Prieto-Avalos, G.; Cruz-Ramos, N.A.; Alor-Hernández, G.; Sánchez-Cervantes, J.L.; Rodríguez-Mazahua, L.; Guarneros-Nolasco, L.R. Wearable Devices for Physical Monitoring of Heart: A Review. Biosensors 2022, 12, 292.
  16. Lu, L.; Zhang, J.; Xie, Y.; Gao, F.; Xu, S.; Wu, X.; Ye, Z. Wearable health devices in health care: Narrative systematic review. JMIR mHealth uHealth 2020, 8, e18907.
  17. Singhal, A.; Cowie, M.R. The Role of Wearables in Heart Failure. Curr. Heart Fail. Rep. 2020, 17, 125–132.
  18. Alharbi, M.; Straiton, N.; Gallagher, R. Harnessing the Potential of Wearable Activity Trackers for Heart Failure Self-Care. Curr. Heart Fail. Rep. 2017, 14, 23–29.
  19. Gresham, G.; Schrack, J.; Gresham, L.M.; Shinde, A.M.; Hendifar, A.E.; Tuli, R.; Rimel, B.; Figlin, R.; Meinert, C.L.; Piantadosi, S. Wearable activity monitors in oncology trials: Current use of an emerging technology. Contemp. Clin. Trials 2018, 64, 13–21.
  20. Watanabe, T.; Saito, H.; Koike, E.; Nitta, K. A Preliminary Test of Measurement of Joint Angles and Stride Length with Wireless Inertial Sensors for Wearable Gait Evaluation System. Comput. Intell. Neurosci. 2011, 2011, 1–12.
  21. Ryvlin, P.; Ciumas, C.; Wisniewski, I.; Beniczky, S. Wearable devices for sudden unexpected death in epilepsy prevention. Epilepsia 2018, 59 (Suppl. 1), 61–66.
  22. Takei, K.; Honda, W.; Harada, S.; Arie, T.; Akita, S. Toward flexible and wearable human-interactive health-monitoring devices. Adv. Healthc. Mater. 2015, 4, 487–500.
  23. Kamei, T.; Kanamori, T.; Yamamoto, Y.; Edirippulige, S. The use of wearable devices in chronic disease management to enhance adherence and improve telehealth outcomes: A systematic review and meta-analysis. J. Telemed. Telecare 2022, 28, 342–359.
  24. Yu, S.; Chen, Z.; Wu, X. The Impact of Wearable Devices on Physical Activity for Chronic Disease Patients: Findings from the 2019 Health Information National Trends Survey. Int. J. Environ. Res. Public Health 2023, 20, 887.
  25. Kulurkar, P.; Kumar Dixit, C.; Bharathi, V.; Monikavishnuvarthini, A.; Dhakne, A.; Preethi, P. AI based elderly fall prediction system using wearable sensors: A smart home-care technology with IOT. Meas. Sensors 2023, 25, 100614.
  26. Cudejko, T.; Button, K.; Al-Amri, M. Validity and reliability of accelerations and orientations measured using wearable sensors during functional activities. Sci. Rep. 2022, 12, 14619.
  27. Fuller, D.; Colwell, E.; Low, J.; Orychock, K.; Tobin, M.A.; Simango, B.; Buote, R.; Heerden, D.V.; Luan, H.; Cullen, K.; et al. Reliability and Validity of Commercially Available Wearable Devices for Measuring Steps, Energy Expenditure, and Heart Rate: Systematic Review. JMIR mHealth uHealth 2020, 8, e18694.
  28. Patel, V.; Orchanian-Cheff, A.; Wu, R. Evaluating the Validity and Utility of Wearable Technology for Continuously Monitoring Patients in a Hospital Setting: Systematic Review. JMIR mHealth uHealth 2021, 9, e17411.
  29. Chan, A.; Chan, D.; Lee, H.; Ng, C.C.; Yeo, A.H.L. Reporting adherence, validity and physical activity measures of wearable activity trackers in medical research: A systematic review. Int. J. Med. Inform. 2022, 160, 104696.
  30. Teixeira, E.; Fonseca, H.; Diniz-Sousa, F.; Veras, L.; Boppre, G.; Oliveira, J.; Pinto, D.; Alves, A.J.; Barbosa, A.; Mendes, R.; et al. Wearable Devices for Physical Activity and Healthcare Monitoring in Elderly People: A Critical Review. Geriatrics 2021, 6, 38.
  31. Moore, K.; O’Shea, E.; Kenny, L.; Barton, J.; Tedesco, S.; Sica, M.; Crowe, C.; Alamaki, A.; Condell, J.; Nordstrom, A.; et al. Older Adults’ Experiences With Using Wearable Devices: Qualitative Systematic Review and Meta-synthesis. JMIR mHealth uHealth 2021, 9, e23832.
  32. Koerber, D.; Khan, S.; Shamsheri, T.; Kirubarajan, A.; Mehta, S. Accuracy of Heart Rate Measurement with Wrist-Worn Wearable Devices in Various Skin Tones: A Systematic Review. J. Racial Ethn. Health Disparities 2022.
  33. Ferguson, C.; Hickman, L.D.; Turkmani, S.; Breen, P.; Gargiulo, G.; Inglis, S.C. “Wearables only work on patients that wear them”: Barriers and facilitators to the adoption of wearable cardiac monitoring technologies. Cardiovasc. Digit. Health J. 2021, 2, 137–147.
  34. Kristoffersson, A.; Lindén, M. Wearable Sensors for Monitoring and Preventing Noncommunicable Diseases: A Systematic Review. Information 2020, 11, 521.
  35. Rohmetra, H.; Raghunath, N.; Narang, P.; Chamola, V.; Guizani, M.; Lakkaniga, R. AI-enabled remote monitoring of vital signs for COVID-19: Methods, Prospects and Challenges. Computing 2021, 105, 783–809.
  36. Guo, K.; Zhai, T.; Purushothama, M.H.; Dobre, A.; Meah, S.; Pashollari, E.; Vaish, A.; DeWilde, C.; Islam, M.N. Contactless Vital Sign Monitoring System for In-Vehicle Driver Monitoring Using a Near-Infrared Time-of-Flight Camera. Appl. Sci. 2022, 12, 4416.
  37. Guo, K.; Zhai, T.; Pashollari, E.; Varlamos, C.J.; Ahmed, A.; Islam, M.N. Contactless Vital Sign Monitoring System for Heart and Respiratory Rate Measurements with Motion Compensation Using a Near-Infrared Time-of-Flight Camera. Appl. Sci. 2021, 11, 10913.
  38. Jelinčić, V.; Diest, I.V.; Torta, D.M.; von Leupoldt, A. The breathing brain: The potential of neural oscillations for the understanding of respiratory perception in health and disease. Psychophysiology 2022, 59, e13844.
  39. Nicolò, A.; Massaroni, C.; Schena, E.; Sacchetti, M. The Importance of Respiratory Rate Monitoring: From Healthcare to Sport and Exercise. Sensors 2020, 20, 6396.
  40. Baumert, M.; Linz, D.; Stone, K.; McEvoy, R.D.; Cummings, S.; Redline, S.; Mehra, R.; Immanuel, S. Mean nocturnal respiratory rate predicts cardiovascular and all-cause mortality in community-dwelling older men and women. Eur. Respir. J. 2019, 54, 1802175.
  41. Fox, H.; Rudolph, V.; Munt, O.; Malouf, G.; Graml, A.; Bitter, T.; Oldenburg, O. Early identification of heart failure deterioration through respiratory monitoring with adaptive servo-ventilation. J. Sleep Res. 2023, 32, e13749.
  42. Scebba, G.; Da Poian, G.; Karlen, W. Multispectral Video Fusion for Non-Contact Monitoring of Respiratory Rate and Apnea. IEEE Trans. Biomed. Eng. 2021, 68, 350–359.
  43. Nakagawa, K.; Sankai, Y. Noncontact Vital Sign Monitoring System with Dual Infrared Imaging for Discriminating Respiration Mode. Adv. Biomed. Eng. 2021, 10, 80–89.
  44. Yacchirema, D.C.; de Puga, J.S.; Palau, C.E.; Esteve, M. Fall detection system for elderly people using IoT and ensemble machine learning algorithm. Pers. Ubiquitous Comput. 2019, 23, 801–817.
  45. Esmaeili, B.; AkhavanPour, A.; Bosaghzadeh, A. An Ensemble Model for Human Posture Recognition. In Proceedings of the 2020 International Conference on Machine Vision and Image Processing (MVIP), Tehran, Iran, 18–20 February 2020; pp. 1–7.
  46. Artacho, B.; Savakis, A.E. UniPose: Unified Human Pose Estimation in Single Images and Videos. arXiv 2020, arXiv:2001.08095.
  47. Insafutdinov, E.; Pishchulin, L.; Andres, B.; Andriluka, M.; Schiele, B. DeeperCut: A Deeper, Stronger, and Faster Multi-Person Pose Estimation Model. arXiv 2016, arXiv:1605.03170.
  48. Li, J.; Wang, C.; Zhu, H.; Mao, Y.; Fang, H.; Lu, C. CrowdPose: Efficient Crowded Scenes Pose Estimation and a New Benchmark. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, 16–20 June 2019; pp. 10863–10872.
  49. Ouyang, W.; Chu, X.; Wang, X. Multi-source Deep Learning for Human Pose Estimation. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 2337–2344.
  50. Moon, G.; Chang, J.; Lee, K.M. PoseFix: Model-Agnostic General Human Pose Refinement Network. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 7765–7773.
  51. Nie, X.; Feng, J.; Xing, J.; Xiao, S.; Yan, S. Hierarchical Contextual Refinement Networks for Human Pose Estimation. IEEE Trans. Image Process. 2019, 28, 924–936.
  52. Newell, A.; Yang, K.; Deng, J. Stacked Hourglass Networks for Human Pose Estimation. In Proceedings of the Computer Vision–ECCV 2016, Amsterdam, The Netherlands, 11–14 October 2016; pp. 483–499.
  53. Núñez-Marcos, A.; Azkune, G.; Arganda-Carreras, I. Vision-Based Fall Detection with Convolutional Neural Networks. Wirel. Commun. Mob. Comput. 2017, 2017, 1–16.
  54. Xu, C.; Xu, Y.; Xu, Z.; Guo, B.; Zhang, C.; Huang, J.; Deng, X. Fall Detection in Elevator Cages Based on XGBoost and LSTM. In Proceedings of the 2021 26th International Conference on Automation and Computing (ICAC), Portsmouth, UK, 2–4 September 2021; pp. 1–6.
  55. Ren, X.; Zhang, Y.; Yang, Y. Human Fall Detection Model with Lightweight Network and Tracking in Video. In Proceedings of the 2021 5th International Conference on Computer Science and Artificial Intelligence, CSAI 2021, Beijing, China, 4–6 December 2021; pp. 1–7.
  56. De Miguel, K.; Brunete, A.; Hernando, M.; Gambao, E. Home Camera-Based Fall Detection System for the Elderly. Sensors 2017, 17, 2864.
  57. Sadreazami, H.; Bolic, M.; Rajan, S. Contactless Fall Detection Using Time-Frequency Analysis and Convolutional Neural Networks. IEEE Trans. Ind. Inform. 2021, 17, 6842–6851.
  58. Butt, F.S.; La Blunda, L.; Wagner, M.F.; Schäfer, J.; Medina-Bulo, I.; Gómez-Ullate, D. Fall Detection from Electrocardiogram (ECG) Signals and Classification by Deep Transfer Learning. Information 2021, 12, 63.
  59. Bhattacharya, A.; Vaughan, R. Deep Learning Radar Design for Breathing and Fall Detection. IEEE Sens. J. 2020, 20, 5072–5085.
  60. Martinez-Martin, N.; Luo, Z.; Kaushal, A.; Adeli, E.; Haque, A.; Kelly, S.S.; Wieten, S.; Cho, M.K.; Magnus, D.; Fei-Fei, L.; et al. Ethical issues in using ambient intelligence in health-care settings. Lancet Digit. Health 2021, 3, e115–e123.
  61. Esteva, A.; Chou, K.; Yeung, S.; Naik, N.; Madani, A.; Mottaghi, A.; Liu, Y.; Topol, E.; Dean, J.; Socher, R. Deep learning-enabled medical computer vision. NPJ Digit. Med. 2021, 4, 5.
  62. Babar, M.; Rahman, A.; Arif, F.; Jeon, G. Energy-harvesting based on internet of things and big data analytics for smart health monitoring. Sustain. Comput. Inform. Syst. 2018, 20, 155–164.
  63. Syed, L.; Jabeen, S.; Manimala, S.; Elsayed, H.A. Data Science Algorithms and Techniques for Smart Healthcare Using IoT and Big Data Analytics. In Smart Techniques for a Smarter Planet: Towards Smarter Algorithms; Springer International Publishing: Cham, Switzerland, 2019; pp. 211–241.
  64. Tawalbeh, L.; Muheidat, F.; Tawalbeh, M.; Quwaider, M. IoT Privacy and Security: Challenges and Solutions. Appl. Sci. 2020, 10, 4102.
  65. Awotunde, J.B.; Jimoh, R.G.; Folorunso, S.O.; Adeniyi, E.A.; Abiodun, K.M.; Banjo, O.O. Privacy and Security Concerns in IoT-Based Healthcare Systems. In The Fusion of Internet of Things, Artificial Intelligence, and Cloud Computing in Health Care; Springer International Publishing: Cham, Switzerland, 2021; pp. 105–134.
  66. Redmon, J.; Farhadi, A. YOLOv3: An Incremental Improvement. arXiv 2018, arXiv:1804.02767.
  67. Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.S.; et al. ImageNet Large Scale Visual Recognition Challenge. Int. J. Comput. Vis. 2015, 115, 211–252.
  68. Pan, C.; Cao, H.; Zhang, W.; Song, X.; Li, M. Driver activity recognition using spatial-temporal graph convolutional LSTM networks with attention mechanism. IET Intell. Transp. Syst. 2020, 15, 297–307.
  69. Vasconez, J.; Admoni, H.; Cheein, F.A. A methodology for semantic action recognition based on pose and human-object interaction in avocado harvesting processes. Comput. Electron. Agric. 2021, 184, 106057.
  70. Zhang, C.; Yang, X. Bed-Leaving Action Recognition Based on YOLOv3 and AlphaPose. In Proceedings of the 2022 5th International Conference on Image and Graphics Processing (ICIGP 2022), Beijing, China, 7–9 January 2022; pp. 117–123.
  71. Zhao, X.; Hou, F.; Su, J.; Davis, L. An Alphapose-Based Pedestrian Fall Detection Algorithm. In Proceedings of the Artificial Intelligence and Security, Qinghai, China, 15–20 July 2022; pp. 650–660.
  72. Cortés, X.; Conte, D.; Cardot, H. A new bag of visual words encoding method for human action recognition. In Proceedings of the 2018 24th International Conference on Pattern Recognition (ICPR), Beijing, China, 20–24 August 2018; pp. 2480–2485.
  73. Aslan, M.; Durdu, A.; Sabanci, K. Human action recognition with bag of visual words using different machine learning methods and hyperparameter optimization. Neural Comput. Appl. 2020, 32, 8585–8597.
  74. Nazir, S.; Yousaf, M.H.; Velastin, S.A. Evaluating a bag-of-visual features approach using spatio-temporal features for action recognition. Comput. Electr. Eng. 2018, 72, 660–669.
  75. Fang, H.S.; Li, J.; Tang, H.; Xu, C.; Zhu, H.; Xiu, Y.; Li, Y.L.; Lu, C. AlphaPose: Whole-Body Regional Multi-Person Pose Estimation and Tracking in Real-Time. arXiv 2022, arXiv:2211.03375.
  76. Fang, H.S.; Xie, S.; Tai, Y.W.; Lu, C. RMPE: Regional Multi-person Pose Estimation. arXiv 2016, arXiv:1612.00137.
  77. Wan, C.; Wang, L.; Phoha, V.V. A Survey on Gait Recognition. ACM Comput. Surv. 2018, 51, 1–35.
  78. Semwal, V.B.; Mazumdar, A.; Jha, A.; Gaud, N.; Bijalwan, V. Speed, Cloth and Pose Invariant Gait Recognition-Based Person Identification. In Machine Learning: Theoretical Foundations and Practical Applications; Springer Singapore: Singapore, 2021; pp. 39–56.
  79. Elharrouss, O.; Almaadeed, N.; Al-ma’adeed, S.; Bouridane, A. Gait recognition for person re-identification. J. Supercomput. 2021, 77, 3653–3672.
  80. Sun, F.; Zang, W.; Gravina, R.; Fortino, G.; Li, Y. Gait-based identification for elderly users in wearable healthcare systems. Inf. Fusion 2020, 53, 134–144.
  81. Liu, X.; Zhao, C.; Zheng, B.; Guo, Q.; Duan, X.; Wulamu, A.; Zhang, D. Wearable Devices for Gait Analysis in Intelligent Healthcare. Front. Comput. Sci. 2021, 3, 661676.
  82. Zhao, A.; Li, J.; Dong, J.; Qi, L.; Zhang, Q.; Li, N.; Wang, X.; Zhou, H. Multimodal Gait Recognition for Neurodegenerative Diseases. IEEE Trans. Cybern. 2022, 52, 9439–9453.
  83. Din, S.; Elshehabi, M.; Galna, B.; Hobert, M.; Warmerdam, E.; Sünkel, U.; Brockmann, K.; Metzger, F.; Hansen, C.; Berg, D.; et al. Gait analysis with wearables predicts conversion to Parkinson disease. Ann. Neurol. 2019, 86, 357–367.
  84. Rucco, R.; Agosti, V.; Jacini, F.; Sorrentino, P.; Varriale, P.; De Stefano, M.; Milan, G.; Montella, P.; Sorrentino, G. Spatio-temporal and kinematic gait analysis in patients with Frontotemporal dementia and Alzheimer’s disease through 3D motion capture. Gait Posture 2017, 52, 312–317.
  85. de Oliveira Silva, F.; Ferreira, J.V.; Plácido, J.; Chagas, D.; Praxedes, J.; Guimarães, C.; Batista, L.A.; Laks, J.; Deslandes, A.C. Gait analysis with videogrammetry can differentiate healthy elderly, mild cognitive impairment, and Alzheimer’s disease: A cross-sectional study. Exp. Gerontol. 2020, 131, 110816.
  86. Yamada, H.; Ahn, J.; Mozos, O.; Iwashita, Y.; Kurazume, R. Gait-based person identification using 3D LiDAR and long short-term memory deep networks. Adv. Robot. 2020, 34, 1201–1211.
Figure 1. Data collection pipeline of the GCM system.
Figure 2. The architecture of the YOLOv3 algorithm.
Figure 3. AlphaPose algorithm illustration: keypoints on patients’ bodies in video footage.
Figure 4. UML deployment diagram of the geriatric care system architecture.
Figure 5. Six different health parameters collected for a single patient.
Figure 6. Examples of different care plans with IF-THEN rules defined by the staff.
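The rule base behind Figure 6 is defined by the nursing staff rather than fixed in the paper. As a minimal sketch of how such IF-THEN care plan rules could be encoded, consider the snippet below: the variable names (MoveH, Con, Sat, PL, Pain, Sleep) follow Table 2, but the specific conditions, their priority order, and the function evaluate_care_plan are hypothetical illustrations, not the study's actual rules.

```python
# Minimal sketch of staff-defined IF-THEN care plan rules.
# Variable names follow Table 2; the conditions and their priority
# order are hypothetical, not the rule base used in the study.

def evaluate_care_plan(obs: dict) -> str:
    """Map one observation record to a nursing plan status (Table 2 output)."""
    # Extra situation: events needing immediate attention.
    if obs.get("MoveH") == "falling on the ground" or obs.get("Con") == "unconscious":
        return "extra situation"
    # Adjust: vital signs outside the normal range.
    if obs.get("Sat") == "<94%" or obs.get("PL") in ("bradycardia", "tachycardia"):
        return "adjust"
    # Monitor: milder deviations worth closer observation.
    if obs.get("Pain") in ("moderate", "severe") or obs.get("Sleep") == "<4 h":
        return "monitor"
    return "continue current plan"

print(evaluate_care_plan({"Sat": "<94%", "PL": "normal"}))  # -> adjust
```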
Figure 7. Schematic diagram of the proposed geriatric care management system.
Figure 8. Three iterations of the assessment frames of the patient in the “walking” pose, taken every five seconds.
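Figure 8 shows the same pose being re-assessed every five seconds. One plausible way to exploit such repeated assessments is a majority vote over the per-frame predictions, sketched below; this smoothing rule is an assumption for illustration, not necessarily the authors' exact procedure.

```python
from collections import Counter

# Illustrative sketch: stabilise per-frame pose labels by a majority vote
# over frames sampled every five seconds (cf. Figure 8). The voting rule
# is an assumption, not the paper's documented procedure.

def smoothed_pose(frame_predictions: list[str]) -> str:
    """Return the most frequent pose label across the assessment frames."""
    return Counter(frame_predictions).most_common(1)[0][0]

print(smoothed_pose(["walking", "walking", "standing"]))  # -> walking
```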
Figure 9. Examples of skeleton-based posture recognition in various ambient light conditions: (a) patient walks in a semi-lit environment; (b) patient walks during the night; (c) two patients sit in a fully lit environment; (d) patient is lying down at night.
Figure 10. Examples of skeleton-based posture recognition in different scenarios: (a) the patient is lying on the ground; (b) patients are visited by nursing staff.
Figure 11. Testing results: confusion matrix of posture classification.
Figure 12. Testing results: ROC curve of the posture recognition algorithm.
Figure 13. Confusion matrix showing the re-identification of three patients (referred to by the class labels “First”, “Second”, and “Third”).
Figure 14. Confusion matrix of the re-identification of three patients (referred to by the class labels “First”, “Second”, and “Third”) in the two common areas of the nursing home.
Table 1. Types of IoT devices used in the research.

IoT Device Type | Device Name
Camera | EZVIZ CS-C3TN 1920 × 1080
Wrist band | Fitbit Charge 5
Blood pressure | Withings BPM Connect
Scales | Withings Body+
Table 2. Patient information.

No | Variable | Definition | Instances of Possible Values/Range
1 | FN | First name | -
2 | LN | Last name | -
3 | BD | Birth date | yyyy/mm/dd
4 | HE | Height | 1.20 m–2.20 m

Input data
1 | MoveC | Movement capabilities | Lying; sitting in a wheelchair; with assistive devices; etc.
2 | RiskC | Risk of collapse | None; low; medium; high
3 | Bedsores | Bedsores | Yes; no
4 | Diseases | All patient’s diseases | Heart failure; Alzheimer’s; dementia; cancer; etc.
5 | Med | Taken medications | Antibiotics; antihypertensives; antidepressants; etc.
6 | BMI | BMI unit change per week | <0.5 plus; <0.5 minus; 0.5–1 plus; etc.
7 | MoveH | Movement habits | Unchanged; slowed down; increased; falling on the ground
8 | EatH | Eating habits | Parenteral nutrition; fed by another person; independent eating; etc.
9 | EatC | Eating capabilities | Swallows solid food; swallows only mashed food; swallows only liquids; etc.
10 | Bowel | Bowel habits | Regular bowel movements; diarrhoea; constipation; faecal incontinence
11 | Sleep | Sleeping | <4 h; 4–6 h; 6–8 h; >8 h; apnoea
12 | Breath | Breathing | Increased; slowing down; with apnoeas
13 | PL | Pulse | Normal; bradycardia; tachycardia
14 | BP | Blood pressure | Normotension; hypotension; hypertension mild; hypertension moderate; hypertension severe; etc.
15 | Temp | Temperature | <36.0 °C; 36.0–37.4 °C; etc.
16 | Sat | Saturation | ≥94%; <94%
17 | Urine | Daily urine output | Concentrated urine; very frequent; etc.
18 | Fluid | Fluid tracking | <500 mL; ≥500 mL
19 | Gly | Glycaemia | <2.5 mmol/L; ≥2.5 mmol/L
20 | Con | Consciousness | Unchanged; changed; unconscious
21 | Pain | Perceived level of pain | None; mild; moderate; severe; unbearable

Output data
1 | Plan | Nursing plan | Continue current plan; monitor; adjust; extra situation
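The abstract describes a decision tree-based model, combined with expert a priori rules, that maps these variables to the nursing plan status in the last row of Table 2. The sketch below shows one way such a model could be set up; the toy records, the one-hot encoding via DictVectorizer, and the max_depth value are illustrative assumptions, not the study's actual data or configuration.

```python
# Hypothetical sketch: one-hot encode categorical Table 2 inputs and fit
# a decision tree predicting the nursing plan status. The toy records
# and hyperparameters are assumptions for illustration only.
from sklearn.feature_extraction import DictVectorizer
from sklearn.tree import DecisionTreeClassifier

records = [
    {"RiskC": "low", "PL": "normal", "Sat": ">=94%", "Pain": "none"},
    {"RiskC": "high", "PL": "tachycardia", "Sat": "<94%", "Pain": "severe"},
    {"RiskC": "medium", "PL": "normal", "Sat": ">=94%", "Pain": "moderate"},
]
plans = ["continue current plan", "extra situation", "monitor"]

vec = DictVectorizer(sparse=False)  # one-hot encodes the categorical values
X = vec.fit_transform(records)
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, plans)

new_obs = {"RiskC": "high", "PL": "tachycardia", "Sat": "<94%", "Pain": "severe"}
print(clf.predict(vec.transform([new_obs]))[0])  # -> extra situation
```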
Table 3. Testing results: performance metrics of the posture recognition algorithm.

Class | Precision | Recall | F1 Score
Walking (WAL) | 0.9554 | 0.9374 | 0.9463
Standing (STA) | 0.8722 | 0.9163 | 0.8937
Sitting (SIT) | 0.9406 | 0.9427 | 0.9416
Fallen on the ground (FOG) | 0.9354 | 0.8333 | 0.8814
Lying in bed (LIB) | 0.8951 | 0.8878 | 0.8914
Sleeping (SLE) | 0.8844 | 0.9047 | 0.8944
Macro F1 score: 0.9082
Weighted F1 score: 0.9125
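As a quick consistency check on Table 3: the macro F1 score is the unweighted mean of the six per-class F1 values, which the short sketch below reproduces. The weighted F1 additionally requires per-class support counts that the table does not list, so it cannot be recomputed from the table alone.

```python
# Recompute the macro F1 score in Table 3 as the unweighted mean of the
# per-class F1 values. The small deviation from the reported 0.9082 comes
# from the per-class values themselves being rounded to four decimals.
f1_per_class = {
    "WAL": 0.9463, "STA": 0.8937, "SIT": 0.9416,
    "FOG": 0.8814, "LIB": 0.8914, "SLE": 0.8944,
}
macro_f1 = sum(f1_per_class.values()) / len(f1_per_class)
print(round(macro_f1, 4))  # -> 0.9081
```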
Table 4. Real-time scenario testing results of posture recognition.

No. | Actual Pose | Predicted Pose | Ambient Lighting | Confidence
1 | Walking | Walking | Day time (well-lit) | 98.0%
2 | Sitting | Sitting | Day time (well-lit) | 97.5%
3 | Sitting | Sitting | Day time (well-lit) | 98.2%
4 | Lying in bed | Sleeping | Night time (poorly lit) | 89.3%
5 | Standing | Standing | Day time (perfect) | 99.7%
6 | Lying in bed | Lying in bed | Evening time (semi-lit) | 87.9%
7 | Sleeping | Lying in bed | Evening time (semi-lit) | 88.6%
8 | Standing | Standing | Day time (perfect) | 99.1%
9 | Sleeping | Sleeping | Night time (poorly lit) | 85.4%
10 | Walking | Walking | Day time (perfect) | 93.6%
11 | Lying in bed | Lying in bed | Day time (perfect) | 94.2%
12 | Standing | Standing | Day time (perfect) | 99.3%
13 | Walking | Walking | Night time (poorly lit) | 96.0%
14 | Sitting | Sitting | Day time (perfect) | 98.5%
15 | Sleeping | Sleeping | Day time (perfect) | 91.0%
16 | Fallen on the ground | Fallen on the ground | Day time (perfect) | 99.8%
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
