2.1.1. Multimodal Human–Computer Interaction
Multimodal human–computer interaction is a newer class of technology that differs from traditional unimodal human–computer interaction [4]. HCI is also an emotional interaction. Multimodal HCI can effectively recognize and integrate multiple streams of information into a comprehensive interaction signal, providing users with a more natural and efficient interaction experience [5]. In practice, a multimodal human–computer interaction system usually consists of multiple recognition devices. Human interaction involves not only single information channels such as vision, touch, hearing, smell, and taste, but also multiple input/output modalities used in combination. The study of multimodal information input from human to computer, and of multimodal information presentation from computer to human, is a comprehensive discipline closely related to cognitive psychology, ergonomics, multimedia technology, virtual reality technology, etc. [6]. The specific presentation is shown in Figure 1.
As shown in Figure 2, a multimodal user interface adds speech recognition, line-of-sight tracking, gesture input, and other new technologies to the multimedia interface, so that users can interact through multiple forms or channels in a natural, parallel, and collaborative way. By integrating both precise and imprecise information across channels, the system can quickly capture the user's intention, which effectively improves the naturalness and efficiency of human–computer interaction.
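The multi-channel integration described above can be sketched as a simple late-fusion step, in which each recognizer scores candidate intents and the scores are combined with per-modality reliability weights. The modalities, intents, and weights below are purely illustrative, not drawn from any cited system.

```python
# Toy late fusion of multimodal intent scores. All names and numbers
# are hypothetical examples, not values from the systems discussed.

def fuse_intents(modality_scores, weights):
    """Combine per-modality intent confidences into one normalized ranking.

    modality_scores: {modality: {intent: confidence in [0, 1]}}
    weights:         {modality: reliability weight}
    """
    fused = {}
    for modality, scores in modality_scores.items():
        w = weights.get(modality, 1.0)
        for intent, conf in scores.items():
            fused[intent] = fused.get(intent, 0.0) + w * conf
    total = sum(fused.values()) or 1.0
    return {intent: s / total for intent, s in fused.items()}

scores = {
    "speech":  {"open_menu": 0.7, "close_menu": 0.2},
    "gesture": {"open_menu": 0.5, "close_menu": 0.4},
    "gaze":    {"open_menu": 0.3, "close_menu": 0.1},
}
weights = {"speech": 0.5, "gesture": 0.3, "gaze": 0.2}

fused = fuse_intents(scores, weights)
best = max(fused, key=fused.get)  # "open_menu"
```

Even this minimal scheme shows the key property of multimodal input: an imprecise channel (gaze) can still tip the balance when the precise channels disagree.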
- 1. Body Interaction
In most cases, interactivity is implemented through sensors that sense changes in a person's position or body movements. The feedback from waving an arm or stretching the body is projected onto a specific interface, usually mounted on a wall, a floor, or an installation of a particular shape. This is also the most common type of interaction; Kuflex's Quantum Space, for example, makes the viewer part of the work. As shown in Figure 3, when the viewer approaches the installation, it responds one-to-one to the viewer's movements, creating a "silhouette" that corresponds to the viewer. Each movement causes the image to change, simulating the properties of elementary particles (gravity, magnetism, and viscosity).
- 2. Gesture Interaction
In addition to using overall body movement as a means of interaction, artists also narrow the medium of interaction down to the hands. For example, Design I/O's "Mimicry" allows participants to interact with the installation through gestures: a wave of the hands may produce a change in the installation, which greatly enhances its playfulness. Gesture recognition is one of the most important forms of interaction input in multimodal human–computer interaction, and it is an important means of multi-view image recognition [7]. As shown in Figure 4, this method uses multiple cameras to capture images simultaneously, compares the differences between the images captured at the same moment, and calculates depth information to obtain a three-dimensional image.
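The depth calculation at the heart of this multi-camera approach is standard stereo triangulation: for two parallel cameras, depth is the focal length times the baseline divided by the disparity between the two images. The camera parameters and pixel coordinates below are made-up example values, not taken from the cited system.

```python
# Depth from disparity for a two-camera (parallel stereo) rig.
# Z = f * B / d, where d is the horizontal pixel offset of the same
# scene point between the left and right images.

def depth_from_disparity(focal_px, baseline_m, x_left, x_right):
    """Triangulate depth in metres for a point seen by both cameras.

    focal_px:   focal length expressed in pixels
    baseline_m: horizontal distance between the two camera centres
    x_left, x_right: horizontal pixel coordinate of the same point
    """
    disparity = x_left - x_right  # shrinks as the point moves away
    if disparity <= 0:
        raise ValueError("point must appear further left in the left image")
    return focal_px * baseline_m / disparity

# A hand point imaged at x=640 in the left camera and x=600 in the
# right, with f=800 px and a 10 cm baseline, lies 2 m away:
z = depth_from_disparity(800, 0.10, 640, 600)  # 2.0
```

Repeating this for many matched points yields the three-dimensional image the text describes; the hard part in practice is the matching, not the triangulation.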
- 3. Touch Interaction
The popularity of touchscreen technology has made touch a common mode of interaction. Users can interact with the interface through touch, slide, and pinch-to-zoom gestures. Touch interaction is simple and intuitive and well suited to touchscreen devices such as phones and tablets. It offers a direct method of operation, enabling users to accomplish tasks such as zooming, dragging and dropping, and turning pages more quickly. As shown in Figure 5, such devices are dominated by touch sensors: when the viewer's hand touches the work, the corresponding feedback is projected on the touchscreen. In Karina Smigla-Bobinski's work "Kaleidoscope", for example, pressing firmly onto the screen, whether with a finger, a foot, or the entire body, creates an ever-changing image.
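Of the gestures just listed, pinch-to-zoom is the easiest to make precise: the zoom factor is simply the ratio of the current to the initial distance between the two touch points. The coordinates below are illustrative screen pixels.

```python
import math

# Minimal pinch-to-zoom sketch: scale = current finger spacing
# divided by initial finger spacing. Points are (x, y) pixels.

def pinch_scale(p1_start, p2_start, p1_now, p2_now):
    """Return the scale factor implied by two moving touch points."""
    d0 = math.dist(p1_start, p2_start)
    d1 = math.dist(p1_now, p2_now)
    if d0 == 0:
        return 1.0  # degenerate touch; leave the view unchanged
    return d1 / d0

# Fingers move from 100 px apart to 200 px apart: zoom in 2x.
scale = pinch_scale((100, 300), (200, 300), (50, 300), (250, 300))  # 2.0
```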
- 4. Facial Interaction
Facial expression recognition technology acquires facial expression information from the human face through computer vision and then analyzes and recognizes that information. Modeled on how humans understand and recognize expressions, it extracts and analyzes facial features through computer programs to identify the expression being shown. Facial expression recognition has been applied extensively in computer vision, image processing, human–computer interaction, and other fields. As shown in Figure 6, blinking is a behavioral characteristic of living organisms: when the device captures the blinking moment of the viewer, the electronic eye on the wall instantly repeats and learns that blinking behavior.
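One commonly used cue for detecting a blink from facial landmarks is the eye aspect ratio (EAR), which drops sharply when the eye closes. The landmark coordinates and threshold below are illustrative; a real installation would obtain the six eye landmarks from a face-tracking model, which is not shown here.

```python
import math

# Eye aspect ratio from six eye landmarks:
#   EAR = (|p2 - p6| + |p3 - p5|) / (2 * |p1 - p4|)
# p1/p4 are the eye corners; p2, p3, p5, p6 lie on the upper and
# lower lids. A sustained drop below a threshold signals a blink.

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    vertical = math.dist(p2, p6) + math.dist(p3, p5)
    horizontal = math.dist(p1, p4)
    return vertical / (2.0 * horizontal)

BLINK_THRESHOLD = 0.2  # illustrative; published values are around 0.2-0.25

# Hypothetical landmark positions for an open and a nearly closed eye:
open_eye = eye_aspect_ratio((0, 0), (3, 4), (6, 4), (10, 0), (6, -4), (3, -4))
closed_eye = eye_aspect_ratio((0, 0), (3, 1), (6, 1), (10, 0), (6, -1), (3, -1))
# open_eye = 0.8, closed_eye = 0.2
```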
- 5. Voice Interaction
Voice interaction means that users can interact with computers by voice through speech recognition and speech synthesis technologies. Users can issue voice commands to perform operations such as voice search, voice control, and voice navigation. Voice interaction makes computers more accessible, allowing users to complete tasks verbally without a keyboard or mouse. As shown in Figure 7, the device converts the captured audio signals into text through speech recognition. This process parses and compares the voice signals to identify the user's intentions and commands.
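Before any decoding to text, a recognition pipeline typically segments the captured signal into frames and decides which frames contain speech at all. The toy energy-based check below illustrates that first step on a synthetic signal; real systems use trained acoustic models rather than a fixed energy threshold, and the signal and threshold here are invented for the example.

```python
import math

# Toy voice-activity detection: split the signal into fixed-length
# frames and flag frames whose mean squared amplitude exceeds a
# threshold. Purely illustrative; not a production VAD.

def frame_energies(samples, frame_len):
    """Mean squared amplitude of each non-overlapping frame."""
    return [
        sum(s * s for s in samples[i:i + frame_len]) / frame_len
        for i in range(0, len(samples) - frame_len + 1, frame_len)
    ]

def voiced_frames(samples, frame_len, threshold):
    return [e > threshold for e in frame_energies(samples, frame_len)]

# 100 samples of near-silence followed by 100 samples of a loud tone:
silence = [0.01 * math.sin(i) for i in range(100)]
tone = [0.8 * math.sin(0.3 * i) for i in range(100)]
flags = voiced_frames(silence + tone, 100, 0.01)  # [False, True]
```

Only the frames flagged `True` would be passed on to the acoustic and language models that produce the final text.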
2.1.2. Intelligent Multimodal Human–Computer Interaction Models Applied to the Elderly
The application of technologies in elderly care has long been a focus of researchers' attention. Research in this area starts from the physiological and cognitive aging patterns of the elderly, summarizes the evolution of research methodologies for elderly-friendly design, and incorporates elements of technological elderly-friendly design with greater humanistic care. Three scenarios of intelligent technology supporting the elderly (taking photos, exercising, and outdoor activities) have been designed and analyzed to identify the directions that deserve attention in the design process. The analysis shows that interaction design should fully consider the sensory state of the elderly and adopt an easy-to-use, user-friendly interface as well as a multimodal interaction approach. In terms of humanistic care, attention should be paid to the elderly's acceptance of technology and their emotional experience in using it, which will help them integrate better into the digital era. In the smart home, a new multimodal human–computer interaction mode supporting the elderly is an avatar-based prototype system that combines speech processing and line-of-sight tracking, enabling dual-channel visual and auditory interaction [8,9]. Such interaction can effectively improve the interaction experience of the elderly.
The concept of aging experience design was originally derived from the idea of barrier-free design, conceived at the United Nations International Conference on the Physically and Mentally Challenged in 1974 [10], which included disabled and elderly people in the target group of barrier-free design. Before that, researchers in Europe and the United States had attempted to incorporate the idea of accessibility into design by studying human performance and human–machine relationships, which is considered the origin of human factors engineering.
By the 1980s, European, American, and Japanese researchers had begun to focus on the relationship and balance between humans and the environment. Ronald L. Mace of North Carolina State University in the United States advocated that design should make no distinction by age or gender. He argued that design should be inclusive and that people with disabilities and older people should not be treated unequally. He introduced the concept of universal design and suggested that products, including architectural designs, should satisfy the needs of all people to the greatest extent possible [11]. Around 1990, Mace and other designers proposed the seven principles of universal design (Principle 1: Equitable Use; Principle 2: Flexibility in Use; Principle 3: Simple and Intuitive Use; Principle 4: Perceptible Information; Principle 5: Tolerance for Error; Principle 6: Low Physical Effort; Principle 7: Size and Space for Approach and Use), accompanied by 3 bylaws and 37 evaluation indicators.
The introduction of universal design has had a profound impact on aging experience design. In 1985, American professor James Joseph Pirkl coined the concept of transgenerational design, at a time when the notion of an aging population was just emerging in the West [12]. Transgenerational design holds that products should be designed for ease of use at every stage of a person's life. Pirkl proposed that the social responsibility of designers should encompass aesthetics, technology, and humanistic care. This design concept emphasizes the elderly, and more specifically the process of aging.
Building on barrier-free design, accessible design was first described in the Americans with Disabilities Act of 1990. Accessibility is more concerned with people with severe disabilities and hearing impairments, a group that also includes part of the elderly population. Early on, these designs were referred to as disability design, a term later judged negative and discriminatory [13]. The ISO/IEC Guide 71 (2001) standard interprets accessibility as a design standard that extends to people with limited abilities and facilitates their use of products [14]. Accessibility has contributed greatly to the social inclusion of some people with disabilities and has driven developments in areas such as building technology. Since then, barrier-free design has advanced the study of elderly-friendly design and integrated further factors, such as social support, into design research.
In 1994, British scholar Roger Coleman proposed the concept of inclusive design for the first time. Similar to universal design, inclusive design aims to take into account and reach as many people as possible [15]. Unlike universal design, inclusive design was initially conceived, out of a concern for social justice, as a way to help designers and suppliers work together to ensure that their products and services meet as many different needs as possible. Inclusive design has both social and commercial value and is both a philosophy and a methodology. The University of Cambridge, together with other universities in the UK, has developed inclusive design tools and criteria to help designers think and act more effectively. Although the concept of inclusive design is not targeted specifically at the elderly, it has had a profound impact on elderly-friendly design, especially in the digital age.
The International Society for Gerontechnology, founded in 1997, formally proposed gerontechnology, dedicated to supporting the elderly in integrating into society in a healthy and comfortable state [16]. Although gerontechnology depends on technology, it is demand driven. In addition to the elderly themselves, other stakeholders who care for the elderly are considered in the design of gerontechnology. It also proposes to take care of the self-esteem of the elderly, attending not only to products and services but also to their emotional needs.
In 2004, the concept of design for all was presented and explained by the European Institute for Design and Disability as an equal and inclusive approach to designing for human diversity [17]. Subsequently, the concept of user-centered design was introduced. Aging experience design, as it has developed to date, is an interdisciplinary, multifaceted field involving design, psychology, sociology, human factors engineering, computing, and automation. These design concepts and methodologies were initially intended to provide inclusive and friendly living environments for an aging society.
Figure 8 illustrates the development process of elderly-friendly design.
- 2. Barriers to Older People's Use of Smart Products
Much research has examined older adults' acceptance of technologies, the complex factors that influence acceptance, and the relationships between those factors. However, far more detail is needed to capture differences in older people's ability to use the Internet. The theory of digital inequality suggests that even among people who do go online, differences persist in critical areas such as surfing skills. Data from a survey of Internet skills among older Americans, which explored whether skills vary within this group, reveal considerable differences in Internet know-how. These differences are related to socioeconomic status and autonomy of use. The findings suggest that efforts to inform older adults about Internet use must take into account these users' socioeconomic backgrounds and available access points.
Most current home appliances, with their multifunctional, digitally integrated operating systems, lack features that are friendly to older people. A number of factors affect the usability of products for older people, chiefly declines in perceptual ability and motor control, such as blurred vision and hand tremors.
In terms of intelligent sports products, many studies have addressed social and sports games for older adults. Some try to persuade older adults to increase their physical activity through such games, and several have examined the feasibility of older adults exercising with multimedia devices. Paas et al. propose a multimedia design approach that addresses information overload by presenting selected segments of information before a holistic presentation. Related studies have shown that multimedia design often fails to fully consider the special needs, preferences, and skills of elderly users, which further hinders their learning and use of technologies.
2.1.3. Recognition of Emotions by Artificial Intelligence
With the rapid development of AI technologies, emotion recognition, as one of their key components, has received extensive attention from experts and scholars [18]. An EEG (electroencephalogram) records the electrical activity of neurons in the cerebral cortex. EEG signals vary with human emotions and cannot be consciously controlled. For this reason, recognizing human emotions from physiological signals such as EEG is highly reliable and feasible.
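A typical first step in EEG-based emotion recognition is extracting band-power features in the classic delta, theta, alpha, and beta frequency ranges, which then feed a classifier. The sketch below computes band powers for a synthetic 10 Hz signal; the band limits follow common convention, while the signal, sampling rate, and two-second window are illustrative choices.

```python
import numpy as np

# Band-power feature extraction for EEG-style signals via the FFT.
# Band boundaries follow the conventional delta/theta/alpha/beta split.

def band_powers(signal, fs, bands):
    """Average spectral power of `signal` within each (lo, hi) Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    return {
        name: power[(freqs >= lo) & (freqs < hi)].mean()
        for name, (lo, hi) in bands.items()
    }

bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
fs = 128  # Hz, an illustrative sampling rate
t = np.arange(fs * 2) / fs  # two seconds of samples
eeg = np.sin(2 * np.pi * 10 * t)  # pure 10 Hz oscillation, i.e. alpha-band

powers = band_powers(eeg, fs, bands)
dominant = max(powers, key=powers.get)  # "alpha"
```

In a real system, these per-band powers (per electrode) would form the feature vector handed to a trained emotion classifier; the synthetic sine stands in for recorded cortical activity.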
For modern landscape design, "emotion" is akin to the "soul" of the design. This "soul" refers not only to the habitual connection formed between the landscape and its users during use, but also places the landscape in a particular cultural and emotional setting. The landscape may become a medium of expression, serving as an emotional support and spiritual expression closely related to the user, through devices such as correlation, progression, and deduction. It may also serve as a physical carrier, bearing related content for elaboration, analysis, and derivation [19].
The inclusion of this content constitutes the basis for evaluating the connotation of the product in landscape design. The concrete form (the shape, meaning, semantics, and structural expression of the landscape) is the base point that connects the user and the landscape through the diverse expressions of perceptual elements, beyond basic usage behaviors such as visual sensory services [20]. Thus, from the user's point of view, these "points" serve as a solid foundation for realizing spiritual values, and the root of that foundation is the fulfillment of spiritual values, as shown in Figure 9.