
Affective and Immersive Human Computer Interaction via Effective Sensor and Sensing (AI-HCIs) II

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (30 June 2023) | Viewed by 10849

Special Issue Editors


Guest Editors
School of Communication and Electrical Engineering, East China Normal University, Shanghai, China
Interests: data mining; machine learning; computer vision; image processing

Department of Electrical and Computer Engineering, Ajman University, Ajman, United Arab Emirates
Interests: light/image sensors; temperature sensors; computing devices/systems

Special Issue Information

Dear Colleagues,

Human–computer interaction (HCI) is crucial for user-friendly interaction between human users and computer systems, where the "computer" may differ from a conventional desktop machine and instead take the form of a (portable) hardware device or a software package. HCI is therefore required not only to provide effective input/output, but also to understand users' intentions and their environment in order to deliver better service-oriented interactions. As a result, new challenges have arisen beyond conventional multimodal HCI, which relies on audio, images, video and graphics as well as the keyboard and mouse. To this end, AI-guided recognition of speech instructions and visual signs, such as gestures and gaze, has been widely adopted as a natural way to communicate in HCI.

Recently, thanks to emerging sensors and sensing techniques, HCI has been further developed for immersive and affective communication between human users and computer systems. Examples can be found in virtual-reality-based experiences, electroencephalogram-enabled brain–computer interfaces, and smart interactions between humans and robots. In addition to auditory and visual cues, touch, taste and smell have also been explored in this context. How to effectively use these individual sources of information, and how to fuse several of them for tasks at different levels, still requires exploration.

In this Special Issue, we aim to provide a forum for colleagues to report the most up-to-date results of newly developed models, algorithms, approaches and techniques, as well as comprehensive surveys of the state of the art in relevant fields. Both original contributions with theoretical novelty and practical solutions to particular problems in HCI are solicited. Rather than merely reporting the results of HCI in particular applications, submissions should answer questions such as how, why and when relevant HCI techniques should be applied in a specific context.

Prof. Dr. Jinchang Ren
Dr. Pourya Shamsolmoali
Dr. Maher Assaad
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • effective models and algorithms for HCI
  • emerging sensing techniques for HCI
  • systematic design and solutions for multimodal fusion in HCI
  • brain–computer interaction
  • HCI for human–robot interactions
  • novel applications and case studies for gaming, education, healthcare, etc.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (3 papers)


Research

17 pages, 3329 KiB  
Article
The Application of Deep Learning for the Evaluation of User Interfaces
by Ana Keselj, Mario Milicevic, Krunoslav Zubrinic and Zeljka Car
Sensors 2022, 22(23), 9336; https://doi.org/10.3390/s22239336 - 30 Nov 2022
Cited by 7 | Viewed by 3616
Abstract
In this study, we tested the ability of a machine-learning model (ML) to evaluate different user interface designs within the defined boundaries of some given software. Our approach used ML to automatically evaluate existing and new web application designs and provide developers and designers with a benchmark for choosing the most user-friendly and effective design. The model is also useful for any other software in which the user has different options to choose from or where choice depends on user knowledge, such as quizzes in e-learning. The model can rank accessible designs and evaluate the accessibility of new designs. We used an ensemble model with a custom multi-channel convolutional neural network (CNN) and an ensemble model with a standard architecture with multiple versions of down-sampled input images and compared the results. We also describe our data preparation process. The results of our research show that ML algorithms can estimate the future performance of completely new user interfaces within the given elements of user interface design, especially for color/contrast and font/layout.
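The ensemble described above combines a custom multi-channel CNN with a standard architecture fed multiple down-sampled versions of the input screenshot. The sketch below illustrates, in Keras, how such a multi-scale model could be wired; the branch widths, the 256×256 input resolution and the single sigmoid usability score are illustrative assumptions rather than the authors' exact configuration.

```python
# Minimal sketch of a multi-scale CNN for scoring UI screenshots.
# Layer sizes, input resolution and the single-score output are assumptions;
# the paper's exact architecture and training setup are not reproduced here.
import tensorflow as tf
from tensorflow.keras import layers, Model

def cnn_branch(inputs, name):
    """A small convolutional tower applied to one scale of the screenshot."""
    x = layers.Conv2D(32, 3, activation="relu", padding="same", name=f"{name}_conv1")(inputs)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation="relu", padding="same", name=f"{name}_conv2")(x)
    x = layers.GlobalAveragePooling2D()(x)
    return x

full_res = layers.Input(shape=(256, 256, 3), name="full_res")       # original screenshot
half_res = layers.AveragePooling2D(2, name="downsample")(full_res)  # down-sampled copy

features = layers.concatenate([cnn_branch(full_res, "full"),
                               cnn_branch(half_res, "half")])
output = layers.Dense(1, activation="sigmoid", name="usability_score")(features)

model = Model(full_res, output)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```

Feeding the same screenshot at several resolutions lets one branch attend to layout-level structure while another retains fine detail such as font rendering and contrast, which matches the design elements the study highlights.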

16 pages, 1424 KiB  
Article
Graph Theoretical Analysis of EEG Functional Connectivity Patterns and Fusion with Physiological Signals for Emotion Recognition
by Vasileios-Rafail Xefteris, Athina Tsanousa, Nefeli Georgakopoulou, Sotiris Diplaris, Stefanos Vrochidis and Ioannis Kompatsiaris
Sensors 2022, 22(21), 8198; https://doi.org/10.3390/s22218198 - 26 Oct 2022
Cited by 5 | Viewed by 2991
Abstract
Emotion recognition is a key attribute for realizing advances in human–computer interaction, especially when using non-intrusive physiological sensors, such as electroencephalograph (EEG) and electrocardiograph. Although functional connectivity of EEG has been utilized for emotion recognition, the graph theory analysis of EEG connectivity patterns has not been adequately explored. The exploitation of brain network characteristics could provide valuable information regarding emotions, while the combination of EEG and peripheral physiological signals can reveal correlation patterns of human internal state. In this work, a graph theoretical analysis of EEG functional connectivity patterns along with fusion between EEG and peripheral physiological signals for emotion recognition has been proposed. After extracting functional connectivity from EEG signals, both global and local graph theory features are extracted. Those features are concatenated with statistical features from peripheral physiological signals and fed to different classifiers and a Convolutional Neural Network (CNN) for emotion recognition. The average accuracy on the DEAP dataset using CNN was 55.62% and 57.38% for subject-independent valence and arousal classification, respectively, and 83.94% and 83.87% for subject-dependent classification. Those scores went up to 75.44% and 78.77% for subject-independent classification and 88.27% and 90.84% for subject-dependent classification using a feature selection algorithm, exceeding the current state-of-the-art results.
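As a rough illustration of the pipeline described in the abstract: functional connectivity is estimated from the EEG channels, global and local graph-theory features are computed on the resulting graph, and these are concatenated with statistical features from peripheral physiological signals before classification. The correlation-based connectivity measure, the specific graph metrics and the SVM classifier below are assumptions made for this sketch, not the authors' exact choices.

```python
# Minimal sketch of graph-theory EEG features fused with peripheral-signal
# statistics for emotion classification. Connectivity measure, graph metrics
# and classifier are illustrative assumptions.
import numpy as np
import networkx as nx
from sklearn.svm import SVC

def connectivity_matrix(eeg):
    """eeg: (channels, samples) array -> absolute channel-wise correlation."""
    return np.abs(np.corrcoef(eeg))

def graph_features(conn, threshold=0.5):
    """Global and local graph metrics from a thresholded connectivity graph."""
    adj = (conn > threshold).astype(int)
    np.fill_diagonal(adj, 0)
    g = nx.from_numpy_array(adj)
    global_feats = [nx.density(g), nx.average_clustering(g)]
    local_feats = list(nx.degree_centrality(g).values())   # one value per electrode
    return np.array(global_feats + local_feats)

def peripheral_features(signal):
    """Simple statistical descriptors of one peripheral physiological channel."""
    return np.array([signal.mean(), signal.std(), signal.min(), signal.max()])

def fused_vector(eeg, peripheral):
    return np.concatenate([graph_features(connectivity_matrix(eeg)),
                           peripheral_features(peripheral)])

# Usage with random stand-in data (32 EEG channels, one peripheral channel):
X = np.stack([fused_vector(np.random.randn(32, 512), np.random.randn(512))
              for _ in range(20)])
y = np.random.randint(0, 2, size=20)            # e.g. low/high valence labels
clf = SVC(kernel="rbf").fit(X, y)
```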

17 pages, 3148 KiB  
Article
Few-Shot Fine-Grained Image Classification via GNN
by Xiangyu Zhou, Yuhui Zhang and Qianru Wei
Sensors 2022, 22(19), 7640; https://doi.org/10.3390/s22197640 - 9 Oct 2022
Cited by 7 | Viewed by 3189
Abstract
Traditional deep learning methods such as convolutional neural networks (CNN) have a high requirement for the number of labeled samples. In some cases, the cost of obtaining labeled samples is too high to obtain enough samples. To solve this problem, few-shot learning (FSL) is used. Currently, typical FSL methods work well on coarse-grained image data, but not as well on fine-grained image classification work, as they cannot properly assess the in-class similarity and inter-class difference of fine-grained images. In this work, an FSL framework based on graph neural network (GNN) is proposed for fine-grained image classification. Particularly, we use the information transmission of GNN to represent subtle differences between different images. Moreover, feature extraction is optimized by the method of meta-learning to improve the classification. The experiments on three datasets (CIFAR-100, CUB, and DOGS) have shown that the proposed method yields better performances. This indicates that the proposed method is a feasible solution for fine-grained image classification with FSL.
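The framework above builds a graph over the images in a few-shot episode and lets a GNN pass information between them, so that subtle inter-image differences shape the final prediction. The following sketch shows one such message-passing step over CNN embeddings of support and query images; the cosine-similarity adjacency, the single propagation layer and the embedding dimension are illustrative assumptions, not the paper's exact model.

```python
# Minimal sketch of GNN-style message passing over an episode of image
# embeddings for few-shot classification. Adjacency construction, layer
# count and dimensions are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EpisodeGNN(nn.Module):
    def __init__(self, feat_dim=64, n_classes=5):
        super().__init__()
        self.update = nn.Linear(2 * feat_dim, feat_dim)   # combines node + neighbourhood
        self.classify = nn.Linear(feat_dim, n_classes)

    def forward(self, x):
        """x: (n_images, feat_dim) embeddings of support + query images."""
        # Dense adjacency from pairwise cosine similarity between images.
        adj = F.softmax(F.cosine_similarity(x.unsqueeze(1), x.unsqueeze(0), dim=-1), dim=-1)
        neighbours = adj @ x                               # aggregate messages from all images
        x = F.relu(self.update(torch.cat([x, neighbours], dim=-1)))
        return self.classify(x)                            # per-image class logits

# Usage: a 5-way episode with 5 support + 5 query embeddings from a CNN backbone.
embeddings = torch.randn(10, 64)
logits = EpisodeGNN()(embeddings)
print(logits.shape)   # torch.Size([10, 5])
```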
