Deep-cARe: Projection-Based Home Care Augmented Reality System with Deep Learning for Elderly
Abstract
1. Introduction
2. Related Work
3. Material and Methods
3.1. Deep-cARe System Implementation
3.1.1. Hardware Configuration
3.1.2. Software Configuration
3.1.3. PAR Module
(a) 3D space reconstruction. To present dynamic information to the elderly, a 3D space should be constructed from the real space. To receive spatial information from the actual environment and reconstruct a 3D map, custom-made hardware and map-construction technology based on a 3D point cloud are applied. A feature-matching method is used: features are extracted from the input images, and the pose is calculated by comparing them with the features of the previous frame. The surrounding spatial information is obtained from an RGB-depth camera by rotating the servo motor in the pan direction in fixed angular increments. Features are detected in the acquired color and depth input frames, and matching is performed by creating feature descriptors; the FAST [25] and BRISK [26] algorithms are applied to increase computational speed. The final pose of the current frame is obtained by applying the RANdom SAmple Consensus (RANSAC) algorithm [27] to the 3D world-coordinate point values of the previous depth image and the two-dimensional (2D) matching points of the current color image. The point-cloud information of the current frame is then rotated and translated into 3D world coordinates. The resulting 3D space map reconstruction is shown in Figure 4a.
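The paper does not name its implementation libraries, so the following is a minimal sketch of this pose-estimation step using OpenCV: FAST keypoints are described with BRISK, matched between consecutive frames, and the pose is recovered with `cv2.solvePnPRansac`. The intrinsic matrix `K` and the millimetre depth scale are illustrative assumptions.

```python
import cv2
import numpy as np

# Hypothetical intrinsics; replace with the RGB-D camera's calibration.
K = np.array([[525.0, 0.0, 319.5],
              [0.0, 525.0, 239.5],
              [0.0, 0.0, 1.0]])

fast = cv2.FastFeatureDetector_create()            # FAST keypoints [25]
brisk = cv2.BRISK_create()                         # BRISK descriptors [26]
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def backproject(kp, depth):
    """Lift a 2D keypoint to a 3D camera-space point via the depth image."""
    u, v = int(kp.pt[0]), int(kp.pt[1])
    z = depth[v, u] * 0.001                        # assumed: depth in millimetres
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    return np.array([x, y, z], dtype=np.float32)

def estimate_pose(prev_gray, prev_depth, curr_gray):
    """Pose of the current frame from 3D points of the previous frame and
    2D matches in the current color image, filtered by RANSAC [27]."""
    kp_prev, des_prev = brisk.compute(prev_gray, fast.detect(prev_gray, None))
    kp_curr, des_curr = brisk.compute(curr_gray, fast.detect(curr_gray, None))
    matches = matcher.match(des_prev, des_curr)

    pts3d = np.array([backproject(kp_prev[m.queryIdx], prev_depth) for m in matches])
    pts2d = np.array([kp_curr[m.trainIdx].pt for m in matches], dtype=np.float32)
    valid = pts3d[:, 2] > 0                        # drop points with missing depth
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(pts3d[valid], pts2d[valid], K, None)
    # rvec/tvec rotate and translate the frame's point cloud into world coordinates.
    return rvec, tvec
```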
(b) Optimal plane selection. In this step, the regions onto which content can be projected are extracted to recommend the optimal projection region in the reconstructed 3D map to the user. Depth segmentation is first performed using the depth information of the reconstructed 3D map, and the plane areas are detected by executing the randomized Hough transform algorithm [28] on the segmented areas. In the hypothesis step, a model that satisfies the sample data is created by randomly selecting three points at certain distances from the detected points. In the verification step, the plane model is evaluated after its inliers are extracted; if the number of inliers is greater than that of the existing plane, the model is updated, so the plane of the largest area is eventually found. The final projection location is selected by minimizing projection distortion using the normal vector of the maximum-area plane. Through this plane-detection process, meaningful plane information for projection is extracted from the 3D environment point cloud. The final optimal planes are marked by red boxes in Figure 4b.
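A sketch of the hypothesis/verification loop described above (the hypothesis count and the 2 cm inlier threshold are illustrative choices; a full randomized Hough transform [28] would additionally accumulate votes in plane-parameter space):

```python
import numpy as np

def detect_dominant_plane(points, n_hypotheses=500, dist_thresh=0.02):
    """Find the plane with the most inliers in an (N, 3) point cloud.
    Returns ((normal, d), inlier_count) for the plane n.x + d = 0."""
    best_count, best_plane = 0, None
    rng = np.random.default_rng()
    for _ in range(n_hypotheses):
        # Hypothesis: a plane through three randomly sampled points.
        p1, p2, p3 = points[rng.choice(len(points), size=3, replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(normal)
        if norm < 1e-8:                   # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal.dot(p1)
        # Verification: count points within dist_thresh of the plane.
        count = np.count_nonzero(np.abs(points @ normal + d) < dist_thresh)
        if count > best_count:            # keep the largest planar region
            best_count, best_plane = count, (normal, d)
    return best_plane, best_count
```

The returned normal vector is what orients the projection so that distortion on the selected plane is minimized.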
3.1.4. Deep-Learning Module
(a) Pose estimation. To facilitate the monitoring system and perform projection on the optimal plane, the user's location should be recognized in real time. To estimate the location and pose of the user, PoseNet [29], an open-source deep-learning model, was used. PoseNet tracks joints and uses 2D images as input. The recognized joint information is converted into 2D coordinates, and the z-axis value (depth) corresponding to those 2D coordinates is obtained from the depth camera. There are two methods for recognizing specific user states, such as falling and tripping, from the obtained joint information. First, a study on action recognition in 3D video [30] and a study on skeleton extraction [31] implemented deep-learning models that can recognize falling. However, for such models it can be challenging to crop only the target motion and use it as input; although motions can be recognized with high accuracy in a laboratory environment, it can be difficult to apply the model directly to a real environment.

In this study, the proposed system was therefore constructed using the second, rule-based method, operating on the joint information detected by the PoseNet deep-learning model. The rule-based method configures the system by defining cases in which the user's joints are in unusual states. Abnormal states are recognized from these defined joint states and used for urgent-situation alerts, for example, when the head is lower than other body parts and remains so, or when most joints stay near the floor (Figure 5, rule-based user state). An abnormal state is also identified when the user's location is the living room or kitchen rather than the bedroom, where lying down normally occurs (Figure 5, space recognition). After an abnormal state is distinguished, an alert is sent to the family or a rescue team. However, the accuracy of user-state recognition can be low when only the joint information detected from the RGB-D camera's vision input is used. To prevent misreporting caused by erroneous recognition, the actual state is confirmed through the user's response to a feedback UI after the abnormal state is recognized (Figure 5, user feedback check state).
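A minimal sketch of such rule-based state checks, assuming the joints have already been lifted to 3D with the depth camera; the joint indices, the 70% near-floor ratio, and the 10 s persistence requirement are illustrative assumptions, not values from the paper:

```python
# Illustrative joint indices; the actual mapping depends on the pose model's output.
HEAD, L_HIP, R_HIP = 0, 11, 12

def is_abnormal_state(joints_3d, floor_y, location, hold_seconds,
                      near_floor_thresh=0.3):
    """Rule-based abnormal-state check on 3D joints (dict: id -> [x, y, z]).
    Assumes y increases downward, with the floor plane at floor_y (metres)."""
    head_y = joints_3d[HEAD][1]
    hip_y = (joints_3d[L_HIP][1] + joints_3d[R_HIP][1]) / 2

    # Rule 1: the head is lower than other body parts (here, the hips).
    head_below_body = head_y > hip_y

    # Rule 2: most joints stay near the floor.
    near_floor = sum(1 for p in joints_3d.values()
                     if abs(floor_y - p[1]) < near_floor_thresh)
    mostly_on_floor = near_floor > len(joints_3d) * 0.7

    # Rule 3: lying down outside the bedroom (living room/kitchen) is unusual.
    unusual_place = location in ("living_room", "kitchen")

    # The state must persist before alerting; the feedback UI (Figure 5) is
    # then used to confirm before notifying family or a rescue team.
    return (head_below_body or mostly_on_floor) and unusual_place and hold_seconds > 10
```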
(b) Face recognition. Ordinary elderly people do not suffer severe memory decline, as in dementia or Alzheimer's disease, but their ability to identify faces deteriorates [32]. In particular, they have difficulty identifying people quickly. Erroneous identification of people can result in missed visiting benefits and delivery services for elderly welfare and may lead to home-invasion crimes. To prevent this and support long-term memory, deep learning is applied in the face recognition module. The deep-learning model used was FaceNet [33], which can perform near-real-time face recognition at 12 fps. An intercom environment was set up to support remote door opening and closing in this scenario. Visitor identification is performed in real time using the image input from the RGB camera attached to the intercom. The face images of likely visitors, such as family, acquaintances, and hospital personnel, are labeled and stored in advance. For previous irregular visitors, such as delivery personnel, the face images are saved as unlabeled data. The face recognition results are classified into three types: family and acquaintances whose data are labeled and stored are classified by their "label name," people corresponding to unlabeled data are classified as "undefined," and those who have never visited and do not exist in the data are classified as "unknown." This distinguishes regular, irregular, and new visitors, and is described in detail in Section 5.1.
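FaceNet [33] maps a face crop to an embedding vector, and identity is decided by embedding distance. A sketch of the three-way classification under that assumption follows; the distance threshold and the database layout (lists of `(name, embedding)` pairs) are hypothetical:

```python
import numpy as np

MATCH_THRESH = 0.8   # illustrative embedding-distance cutoff

def classify_visitor(embedding, labeled_db, unlabeled_db):
    """Three-way classification: 'label name' / 'undefined' / 'unknown'."""
    def nearest(db):
        if not db:
            return None, np.inf
        names, dists = zip(*[(n, np.linalg.norm(embedding - e)) for n, e in db])
        i = int(np.argmin(dists))
        return names[i], dists[i]

    name, d = nearest(labeled_db)
    if d < MATCH_THRESH:
        return name                       # registered family/acquaintance
    _, d = nearest(unlabeled_db)
    if d < MATCH_THRESH:
        return "undefined"                # previous irregular visitor
    return "unknown"                      # first-time visitor
```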
(c) Object detection. There are many challenges to using the Deep-cARe system with real-time information in a real (non-predefined) environment. This problem is resolved by applying deep-learning object detection. The proposed system performs object detection for real-world objects that may exist inside the house in which the elderly live. To provide information, content, and an appropriate UI for real-world objects in real time, YOLO v3 [34], with a fast processing time (45 fps on a GTX 1080 Ti), was used, trained on the MS COCO [35] and Open Images datasets. Object detection is performed in real time on the color image from the camera. In this study, the scenario was designed around detecting medicines, doors, and windows, which are objects that exist in a house and can provide the necessary functions to the elderly. This is described in detail in Section 5.1.
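A minimal YOLO v3 inference sketch using OpenCV's DNN module (the paper does not state its inference pipeline); the file paths and target class names are assumptions. Note that doors and windows are not MS COCO classes, which is presumably why Open Images data was also used:

```python
import cv2
import numpy as np

TARGETS = {"bottle", "door", "window"}   # assumed classes for medicines, doors, windows

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")  # hypothetical paths
layer_names = net.getUnconnectedOutLayersNames()

def detect_objects(frame, class_names, conf_thresh=0.5):
    """Run YOLO v3 on a color frame and keep only the scenario's target objects."""
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    detections = []
    h, w = frame.shape[:2]
    for output in net.forward(layer_names):
        for det in output:
            scores = det[5:]
            cls = int(np.argmax(scores))
            if scores[cls] > conf_thresh and class_names[cls] in TARGETS:
                # Convert relative centre/size to a pixel bounding box.
                cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
                box = (int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh))
                detections.append((class_names[cls], float(scores[cls]), box))
    return detections  # (label, confidence, bounding box) per target object
```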
4. Performance Experiment of Deep Learning with PAR
4.1. Pose Estimation Performance
4.2. Face Recognition Performance
4.3. Object Detection Performance
5. Deep-cARe System Application
5.1. Scenario Design
(a) Monitoring. The elderly can experience sudden injuries caused by accidental falls or tripping owing to the deterioration of their physical abilities, and emergencies that threaten their lives can occasionally occur. This problem is more serious for those who live alone and spend considerable time indoors, so there is a growing need for a monitoring system that can take appropriate action in the event of an emergency inside a residence. Therefore, we developed a monitoring system that can identify such emergencies using a pan-tilt system and an RGB-depth camera. After the 3D geometric environment around the user is reconstructed, the system continuously tracks the user's location with the camera. The user location is denoted by 3D-space coordinates from the 3D camera, which are used to augment information through projection when the tracked user approaches a registered plane (Figure 13). Furthermore, the system can recognize a fall or an emergency by identifying abnormal situations through the deep-learning pose-estimation module. The situation of elderly people living alone often worsens because they cannot request medical support in an emergency. Therefore, the proposed system continuously monitors the location and state of the elderly while rotating the pan-tilt system and provides information to linked users so that they can respond to emergencies. Accordingly, elderly users can receive proper support.
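A sketch of the proximity trigger implied here, choosing the nearest registered plane once the tracked user comes within range; the 1.5 m trigger distance and the plane representation are illustrative assumptions:

```python
import numpy as np

def nearest_registered_plane(user_pos, planes, trigger_dist=1.5):
    """Pick the registered plane to project onto when the tracked user comes
    within trigger_dist metres; returns None otherwise.
    planes: list of (plane_id, center, normal) produced by the PAR module."""
    best = None
    for plane_id, center, normal in planes:
        d = np.linalg.norm(np.asarray(user_pos) - np.asarray(center))
        if d < trigger_dist and (best is None or d < best[1]):
            best = (plane_id, d)
    return best[0] if best else None
```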
(b) Smart IoT intercom. Elderly people who experience problems with long-term memory exhibit a reduced ability to recognize faces [32]; in particular, they have difficulty identifying people quickly. The proposed system can resolve this problem. We propose an Internet of Things (IoT) application that can identify visitors, as shown in Figure 14. The scenario is as follows. When a visitor rings the doorbell, the elderly person receives information about the visitor, whose identification is performed by the deep-learning face recognition module. If the person on the screen is registered, his/her name is provided to the user; if the person is an unregistered previous visitor, "undefined" is displayed; and if the person is a new visitor, "unknown" is displayed. In addition to classifying visitors and providing the relevant information, as shown in the data-management flow in Figure 15, if someone is classified as "unknown," his/her input images are saved under the "undefined" class, and an "undefined" face can later be registered by saving the user's label information. The system was designed to open or close the door using an MUI (Figure 14c) or SUI (Figure 14, front wall). Therefore, for the elderly with mobility difficulties, physical support is provided through remote control. Furthermore, long-term memory support is provided, and the elderly are alerted to unregistered visitors so that illicit activities may be prevented.
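A sketch of the visitor data-management flow of Figure 15 under assumed storage structures (in-memory lists of `(id, embedding)` pairs and a `notify` callback): an "unknown" face is saved so that it is recognized as "undefined" on a revisit, and an "undefined" face can later be promoted to a labeled entry.

```python
import time

def handle_visitor(result, embedding, labeled_db, unlabeled_db, notify):
    """Route a face recognition result and update the visitor databases."""
    if result == "unknown":
        # Save the new face so the visitor becomes "undefined" next time.
        unlabeled_db.append((f"visitor_{int(time.time())}", embedding))
        notify("New visitor at the door (unknown).")
    elif result == "undefined":
        notify("Previous unregistered visitor at the door (undefined).")
    else:
        notify(f"{result} is at the door.")

def register_label(visitor_id, label, labeled_db, unlabeled_db):
    """Promote an 'undefined' face to a named, labeled entry."""
    for i, (vid, emb) in enumerate(unlabeled_db):
        if vid == visitor_id:
            labeled_db.append((label, emb))
            del unlabeled_db[i]
            return True
    return False
```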
(c) Daily alarm. The elderly also experience short-term and long-term memory problems, which result in forgetting important schedules or the state of the house. In addition to memory decline, the elderly often suffer from chronic diseases such as hypertension and diabetes. Elderly patients can easily forget a dose or the time for taking a medicine, so their medication adherence (i.e., properly taking medicine according to the medical professional's instructions) is poor. Medication notification can be provided through the proposed platform without support devices such as smartphones or smart medicine containers. The medication alert system intuitively and effectively signals the time for taking medicines through audiovisual effects via the projector, as shown in Figure 16. A UI that recognizes the medication through object detection and prompts its administration is projected, and a sound alarm reminds the user to take the medicine at the proper time.

In addition to medication notifications, further daily alarm scenarios were created for real-world objects that the elderly frequently lose track of in daily life. As shown in Figure 17, the Deep-cARe system provides various types of information, such as the internal condition of the house and weather information. Door and window recognition can suggest actions to take before leaving. First, if the user is estimated to be near the door, the current behavior of the user is recognized as the "leaving preparation state." At this time, information such as the power state of various devices (e.g., electric lights and electronic devices) and the opening/closing of windows can be provided through projections near the door, as shown in Figure 18. Consequently, the user can promptly review the internal condition of the house and use a control center for the entire house before leaving. Furthermore, the system supports short-term memory by suggesting clothes and accessories, such as a hat to prevent heatstroke or an umbrella. Moreover, through object detection, the elderly may be notified of the need to ventilate or open/close windows according to the weather, using projections and sound alarms. This daily alarm prompts the elderly to check the inside of the house and prevents accidents that can occur inside and outside the house.
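Returning to the medication notification above, a sketch of how the alarm could tie the schedule to object detection; the schedule, the 30 min window, the detector class for medicine bottles, and the `project_ui`/`play_sound` callbacks are illustrative assumptions:

```python
import datetime as dt

# Illustrative medication schedule; times and labels are assumptions.
SCHEDULE = [(dt.time(8, 0), "blood pressure medicine"),
            (dt.time(20, 0), "diabetes medicine")]

def medication_alarm(detections, project_ui, play_sound, window_min=30):
    """Fire an audiovisual reminder when a scheduled dose is due and the
    medicine is visible to the object detector (cf. Figure 16)."""
    now = dt.datetime.now()
    for dose_time, medicine in SCHEDULE:
        due = dt.datetime.combine(now.date(), dose_time)
        if 0 <= (now - due).total_seconds() <= window_min * 60:
            for label, conf, box in detections:
                if label == "bottle":            # assumed detector class
                    # Project the prompt next to the detected medicine.
                    project_ui(box, f"Time to take your {medicine}")
                    play_sound("medication_alarm.wav")
```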
5.2. User Interface and Interaction Design
(a) Spatial user interface. To use an SUI in a PAR space with no touch sensor or panel, touch interaction can be implemented using a depth-sensing camera. The formulation of Wilson et al. [37] was applied to detect touch motions between the user and the PAR space. When the user touches the surface of the projection space, the user's depth pixels appear nearer to the camera than the pixels of the projection-space surface. In Equation (1), $d_s(x, y)$ is the surface depth value of the projection space, which stores the space-surface depth information (Figure 19b), and $d(x, y)$ is the current depth value at the same pixel:

$$ d_{\min} < d_s(x, y) - d(x, y) < d_{\max} \tag{1} $$

Elements considered to be in contact with the surface of the projection space are filtered out by the lower threshold $d_{\min}$: pixels whose depth difference is smaller than $d_{\min}$ belong to the surface or other static elements and are not treated as user contact. When a change in depth information is detected in the band between the two thresholds, it is considered a touch (Figure 19c). Interactions with the system are thus possible by directly touching the projected UI, with no separate touch sensor, using this spatial touch-detection equation. However, in the case of the SUI, when the distance between the projected space and the user is greater than a certain value, the user must move to the projected space, which can cause unnecessary movement for elderly people with mobility difficulties. When the user is near the projection, the SUI can be used simply by directly touching the projected UI, as shown in Figure 19d.
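Equation (1) translates directly into a per-pixel mask; a minimal sketch, with the threshold band (1–4 cm) chosen for illustration:

```python
import numpy as np

def touch_mask(depth, surface_depth, d_min=0.01, d_max=0.04):
    """Per-pixel spatial touch detection after Wilson [37] (Equation (1)).
    surface_depth stores the projection-surface depth captured at setup;
    the 1-4 cm band is an illustrative threshold choice."""
    diff = surface_depth - depth          # height of each pixel above the surface
    return (diff > d_min) & (diff < d_max)

# Pixels inside the band are touch candidates; clustering them (e.g., with
# connected components) yields fingertip contact points on the projected UI.
```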
(b) Mobile user interface. Smartphone operation is no longer restricted to young people, because smart devices such as smartphones and tablets are readily available and ubiquitous, and IoT technology for controlling the inside of a house via smart devices has been developed. Nevertheless, the elderly still experience difficulty with small icons and complex functions [38]. Instead of a complex interface that requires separate learning for mobile-device operation, a simple mobile interface is provided. Even elderly users who have difficulty moving and are far from the system can actively use it, because the mobile interface is provided in accordance with the user's situation and the application.
6. Conclusions and Future Work
6.1. Conclusions
6.2. Limitation and Future Work
(a) Context-aware accuracy. The proposed system improved ease of installation and use. However, it is difficult to perform context-aware operations according to the elderly's behavior and environment using a single camera. In the future, an environment for highly accurate context awareness of the elderly's behavior inside a house should be constructed by installing multiple cameras and sensors.
(b) Connection with IoT devices. This study described an IoT intercom for remote door opening and closing. However, home appliances include ever more functions, and their control methods are becoming increasingly complex, which can make them difficult for the elderly to use. The elderly may also find it physically burdensome to open or close windows and doors. These could be controlled simply and intuitively through interaction between various IoT devices and the proposed platform.
(c) Application of broader deep-learning technologies. An AR environment that extends the abilities of the elderly and supports their lives can be provided by applying technologies with development potential beyond the deep-learning technologies used in this study. It is also possible to analyze correlations with the diseases that afflict individuals by analyzing the behavior of the elderly through activity recognition, which can be considered an extension of pose estimation. Furthermore, emotional care and physical support can be provided through feedback and therapy in line with the user's current emotion, using emotion-recognition technology.
(d) System expansion. When a care system is designed, usability and user experience are important factors that must be considered. User experience can be enhanced by analyzing usability through a study involving multiple users. The application of the proposed system can then be expanded beyond homes to sanatoriums.
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
- McNicoll, G. World Population Ageing 1950–2050. Popul. Dev. Rev. 2002, 28, 814–816. [Google Scholar]
- Kim, K. Policy Considerations for the Living Conditions of Older Koreans. Health Welf. Forum 2015, 223, 68–78. [Google Scholar]
- Jacoby, R.; Oppenheimer, C. Psychiatry in the Elderly; Oxford University Press: New York, NY, USA, 1991. [Google Scholar]
- Lee, S.J.; Kwon, H.J.; Lee, Y.S.; Min, B.A. Analysis of Universal Design Bathroom and Its Products Supporting Aging in Place of the Elderly. Archit. Inst. Korea 2007, 23, 125–134. [Google Scholar]
- Rashidi, P.; Mihailidis, A. A survey on ambient-assisted living tools for older adults. IEEE J. Biomed. Health Inform. 2012, 17, 579–590. [Google Scholar] [CrossRef]
- Uddin, M.; Khaksar, W.; Torresen, J. Ambient sensors for elderly care and independent living: A survey. Sensors 2018, 18, 2027. [Google Scholar] [CrossRef] [PubMed]
- Kawamoto, A.L.S.; da Silva, F.S.C. Depth-sensor applications for the elderly: A viable option to promote a better quality of life. IEEE Consum. Electron. Mag. 2017, 7, 47–56. [Google Scholar] [CrossRef]
- Lu, N.; Wu, Y.; Feng, L.; Song, J. Deep learning for fall detection: Three-dimensional CNN combined with LSTM on video kinematic data. IEEE J. Biomed. Health Inform. 2018, 23, 314–323. [Google Scholar] [CrossRef] [PubMed]
- Zhu, W.; Lan, C.; Xing, J.; Zeng, W.; Li, Y.; Shen, L.; Xie, X. Co-Occurrence Feature Learning for Skeleton Based Action Recognition Using Regularized Deep LSTM Networks. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, Phoenix, AZ, USA, 12–17 February 2016. [Google Scholar]
- Khanal, S.; Reis, A.; Barroso, J.; Filipe, V. Using Emotion Recognition in Intelligent Interface Design for Elderly Care. In Proceedings of the World Conference on Information Systems and Technologies, Naples, Italy, 27–29 March 2018; pp. 240–247. [Google Scholar]
- Mezgec, S.; Koroušić Seljak, B. Nutrinet: A deep learning food and drink image recognition system for dietary assessment. Nutrients 2017, 9, 657. [Google Scholar] [CrossRef] [PubMed]
- Wolf, D.; Besserer, D.; Sejunaite, K.; Riepe, M.; Rukzio, E. cARe: An Augmented Reality Support System for Dementia Patients. In Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology Adjunct Proceedings, Berlin, Germany, 14–17 October 2018; pp. 42–44. [Google Scholar]
- Aruanno, B.; Garzotto, F. MemHolo: Mixed reality experiences for subjects with Alzheimer’s disease. Multimed. Tools Appl. 2019, 78, 1–21. [Google Scholar] [CrossRef]
- Vovk, A.; Patel, A.; Chan, D. Augmented Reality for Early Alzheimer’s Disease Diagnosis. In Proceedings of the Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019; p. LBW0212. [Google Scholar]
- Ingeson, M.; Blusi, M.; Nieves, J.C. Microsoft Hololens-A mHealth Solution for Medication Adherence. In Proceedings of the International Workshop on Artificial Intelligence in Health, Stockholm, Sweden, 13–14 July 2018; pp. 99–115. [Google Scholar]
- Hockett, P.; Ingleby, T. Augmented reality with HoloLens: Experiential architectures embedded in the real world. arXiv 2016, arXiv:1610.04281. [Google Scholar]
- Lera, F.J.; Rodríguez, V.; Rodríguez, C.; Matellán, V. Augmented reality in robotic assistance for the elderly. In International Technology Robotics Applications; Springer: Berlin/Heidelberg, Germany, 2014; pp. 3–11. [Google Scholar]
- Chae, S.; Yang, Y.; Choi, H.; Kim, I.J.; Byun, J.; Jo, J.; Han, T.D. Smart Advisor: Real-Time Information Provider with Mobile Augmented Reality. In Proceedings of the 2016 IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV, USA, 7–11 January 2016; pp. 97–98. [Google Scholar]
- Schlomann, A.; Rasche, P.; Seifert, A.; Schäfer, K.; Wille, M.; Bröhl, C.; Theis, S.; Mertens, A. Augmented Reality Games for Health Promotion in Old Age. In Augmented Reality Games II; Springer: Berlin/Heidelberg, Germany, 2019; pp. 159–177. [Google Scholar]
- Bianco, M.L.; Pedell, S.; Renda, G. Augmented Reality and Home Modifications: A Tool to Empower Older Adults in Fall Prevention. In Proceedings of the 28th Australian Conference on Computer-Human Interaction, Launceston, Australia, 29 November–2 December 2016; pp. 499–507. [Google Scholar]
- Yamamoto, G.; Hyry, J.; Krichenbauer, M.; Taketomi, T.; Sandor, C.; Kato, H.; Pulli, P. A User Interface Design for the Elderly Using a Projection Tabletop System. In Proceedings of the 2015 3rd IEEE VR International Workshop on Virtual and Augmented Assistive Technology (VAAT), Arles, France, 23 March 2015; pp. 29–32. [Google Scholar]
- Pizzagalli, S.; Spoladore, D.; Arlati, S.; Sacco, M.; Greci, L. HIC: An Interactive and Ubiquitous Home Controller System for the Smart Home. In Proceedings of the 2018 IEEE 6th International Conference on Serious Games and Applications for Health (SeGAH), Vienna, Austria, 16–18 May 2018; pp. 1–6. [Google Scholar]
- Vogiatzaki, E.; Krukowski, A. Maintaining Mental Wellbeing of Elderly at Home. In Enhanced Living Environments; Springer: Berlin/Heidelberg, Germany, 2019; pp. 177–209. [Google Scholar]
- Petsani, D.; Kostantinidis, E.I.; Diaz-Orueta, U.; Hopper, L.; Bamidis, P.D. Extending Exergame-Based Physical Activity for Older Adults: The e-Coaching Approach for Increased Adherence. In Proceedings of the International Conference on Information and Communication Technologies for Ageing Well and e-Health, Las Vegas, NV, USA, 15–20 July 2018; pp. 108–125. [Google Scholar]
- Montemerlo, M.; Thrun, S.; Koller, D.; Wegbreit, B. FastSLAM 2.0: An Improved Particle Filtering Algorithm for Simultaneous Localization and Mapping that Provably Converges. In Proceedings of the IJCAI, Acapulco, Mexico, 9–15 August 2003; pp. 1151–1156. [Google Scholar]
- Leutenegger, S.; Chli, M.; Siegwart, R. BRISK: Binary Robust Invariant Scalable Keypoints. In Proceedings of the 2011 IEEE International Conference on Computer Vision (ICCV), Barcelona, Spain, 6–13 November 2011; pp. 2548–2555. [Google Scholar]
- Whelan, T.; Kaess, M.; Johannsson, H.; Fallon, M.; Leonard, J.J.; McDonald, J. Real-time large-scale dense RGB-D SLAM with volumetric fusion. Int. J. Robot. Res. 2015, 34, 598–626. [Google Scholar] [CrossRef]
- Xu, L.; Oja, E.; Kultanen, P. A new curve detection method: randomized Hough transform (RHT). Pattern Recognit. Lett. 1990, 11, 331–338. [Google Scholar] [CrossRef]
- Kendall, A.; Grimes, M.; Cipolla, R. Posenet: A Convolutional Network for Real-Time 6-dof Camera Relocalization. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 2938–2946. [Google Scholar]
- Zhao, R.; Ali, H.; Van der Smagt, P. Two-Stream RNN/CNN for Action Recognition in 3D Videos. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 4260–4267. [Google Scholar]
- Hai, P.T.; Kha, H.H. An Efficient Star Skeleton Extraction for Human Action Recognition Using Hidden Markov Models. In Proceedings of the 2016 IEEE Sixth International Conference on Communications and Electronics (ICCE), Ha Long, Vietnam, 27–29 July 2016; pp. 351–356. [Google Scholar]
- Kawagoe, T.; Matsushita, M.; Hashimoto, M.; Ikeda, M.; Sekiyama, K. Face-specific memory deficits and changes in eye scanning patterns among patients with amnestic mild cognitive impairment. Sci. Rep. 2017, 7, 14344. [Google Scholar] [CrossRef]
- Schroff, F.; Kalenichenko, D.; Philbin, J. Facenet: A Unified Embedding for Face Recognition and Clustering. In Proceedings of the IEEE conference on computer vision and pattern recognition, Boston, MA, USA, 7–12 June 2015; pp. 815–823. [Google Scholar]
- Redmon, J. Darknet: Open Source Neural Networks in C. 2013–2016. Available online: http://pjreddie.com/darknet/ (accessed on 30 June 2016).
- Lin, T.Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft COCO: Common Objects in Context. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; pp. 740–755. [Google Scholar]
- Andriluka, M.; Pishchulin, L.; Gehler, P.; Schiele, B. 2d Human Pose Estimation: New Benchmark and State of the Art Analysis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 3686–3693. [Google Scholar]
- Wilson, A.D. Using a Depth Camera as a Touch Sensor. In Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces, Saarbrücken, Germany, 7–10 November 2010; pp. 69–72. [Google Scholar]
- Mohadisdudis, H.M.; Ali, N.M. A Study of Smartphone Usage and Barriers Among the Elderly. In Proceedings of the 2014 3rd International Conference on User Science and Engineering (i-USEr), Shah Alam, Malaysia, 2–5 September 2014; pp. 109–114. [Google Scholar]
Research Authors (Year) | Display | Purpose | Application | UI | Characteristic
---|---|---|---|---|---
Wolf et al. [12] (2018) | HMD (Wearable) | Daily support | Cooking | Hand gesture; Host PC | 3D space reconstruction; Spatial mapping
Aruanno et al. [13] (2019) | HMD (Wearable) | Mental care | Game | Hand gesture; Voice; Controller | 3D space reconstruction; Spatial mapping
Vovk et al. [14] (2019) | HMD (Wearable) | Mental care | Game | Hand gesture | 3D space reconstruction; Spatial mapping
Ingeson et al. [15] (2018) | HMD (Wearable) | Memory assistant | Medication management | None | Cloud-based image recognition
Lera et al. [17] (2014) | Mobile device (Hand-held) | Memory assistant | Medication management | Mobile UI (MUI) | Feature-matching-based image recognition
Chae et al. [18] (2016) | Mobile device (Hand-held) | Memory assistant | Medication management | MUI | Feature-matching-based image recognition
Schlomann et al. [19] (2019) | Mobile device (Hand-held) | Physical care | Game | MUI | Location-based AR
Bianco et al. [20] (2016) | Mobile device (Hand-held) | Safety | Augmented safety bar | MUI | Feature-matching-based image recognition
Yamamoto et al. [21] (2015) | Projector (Fixed table-top) | Daily support | Home control | Spatial UI (SUI) with finger ring | Spatial touch detection
Pizzagalli et al. [22] (2018) | Projector (Fixed table-top) | Daily support | Home control | SUI | Cloud-based image recognition; Spatial touch detection
Vogiatzaki et al. [23] (2019) | Projector and monitor | Mental and physical care | Game | Voice; Body gesture; Host PC | EMG sensing; Body-movement recognition
Petsani et al. [24] (2018) | Projector and monitor | Mental and physical care | Tele-stretching | Voice; Body gesture | Body-movement recognition
Ours, Park et al. (2019) | Projector (Portable) | Safety; Daily support; Memory assistant | Monitoring; Home control; Medication management | MUI; SUI | 3D space reconstruction; Spatial mapping; Deep-learning-based pose estimation, face recognition, and object detection; Spatial touch detection
Class | Terms | TP | FP | Miss
---|---|---|---|---
"Label Name" | Angle | 157 | 0 | 23
"Label Name" | Occlusion | 29 | 0 | 31
"Label Name" | Glasses | 56 | 1 | 3
"Undefined" | Angle | 113 | 22 | 46
"Undefined" | Occlusion | 2 | 20 | 38
"Undefined" | Glasses | 17 | 43 | 0
"Unknown" | Angle | 136 | 1 | 43
"Unknown" | Occlusion | 30 | 4 | 27
"Unknown" | Glasses | 60 | 0 | 0
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).