
Recent Advances in Smart Mobile Sensing Technology

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensor Networks".

Deadline for manuscript submissions: 30 November 2024 | Viewed by 3063

Special Issue Editors


Dr. Chien Aun Chan
Guest Editor
Department of Electrical and Electronic Engineering, The University of Melbourne, Melbourne, VIC 3010, Australia
Interests: future wireless networks; 5G; mobile edge networks; distributed computing; Internet of Things; big data analytics; cloud computing; network service virtualization; optical networks

Dr. Ming Yan
Guest Editor
School of Information and Communications Engineering, Communication University of China, Beijing 100024, China
Interests: computer vision; convolutional neural networks; machine learning; object detection; 5G mobile communication; cache storage; feature extraction; mobile computing; object recognition; Markov processes

Dr. Chunguo Li
Guest Editor
School of Information Science and Engineering, Southeast University, Nanjing, China
Interests: artificial intelligence-based image/video signal processing; algorithm design; wireless communications; cyberspace security theories and techniques

Special Issue Information

Dear Colleagues,

The convergence of mobile sensing technologies with artificial intelligence (AI) and machine learning (ML) is unlocking a transformative range of applications that can radically improve productivity, safety, health, and efficiency across diverse sectors such as smart industry, healthcare, sports, and education.

Imagine a smart industry transformed by mobile wearables and sensing technologies that seamlessly sense and monitor processes and workers, creating a safer and more productive environment. Machine learning can elevate wireless sensor tracking and healthcare monitoring, particularly for vulnerable populations such as the elderly in aged care. Remote learning and sports analytics can likewise benefit from AI-powered mobile sensors, and advanced mobile sensing integrated with augmented reality (AR) and virtual reality (VR) promises to revolutionize user experiences in both industrial and consumer settings.

This Special Issue seeks groundbreaking research that addresses critical challenges at the forefront of smart mobile sensing powered by AI/ML. We invite submissions exploring innovative applications and use cases with the potential to significantly transform our daily lives.

The key topics of interest include (but are not limited to):

  • AI-enabled smart mobile sensing technology
  • Hardware, material, and signal processing of smart mobile sensing technology
  • Smart mobile sensing applications and use cases
  • Machine learning, deep learning, and big data analytics
  • Privacy preservation and security of smart mobile sensing
  • Intelligent AR/VR applications with machine/deep learning

Dr. Chien Aun Chan
Dr. Ming Yan
Dr. Chunguo Li
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • AI-enabled smart mobile sensing technology
  • hardware, material, and signal processing of smart mobile sensing technology
  • smart mobile sensing applications and use cases
  • machine learning, deep learning, and big data analytics
  • privacy preservation and security of smart mobile sensing
  • intelligent AR/VR applications with machine/deep learning

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (3 papers)


Research

15 pages, 12596 KiB  
Article
ARMNet: A Network for Image Dimensional Emotion Prediction Based on Affective Region Extraction and Multi-Channel Fusion
by Jingjing Zhang, Jiaying Sun, Chunxiao Wang, Zui Tao and Fuxiao Zhang
Sensors 2024, 24(21), 7099; https://doi.org/10.3390/s24217099 - 4 Nov 2024
Viewed by 683
Abstract
Compared with discrete emotion space, image emotion analysis based on dimensional emotion space can more accurately represent fine-grained emotion. Meanwhile, this high-precision representation of emotion requires dimensional emotion prediction methods to sense and capture emotional information in images as accurately and richly as possible. However, the existing methods mainly focus on emotion recognition by extracting the emotional regions where salient objects are located while ignoring the joint influence of objects and background on emotion. Furthermore, in the existing literature, when fusing multi-level features, no consideration has been given to the varying contributions of features from different levels to emotional analysis, which makes it difficult to distinguish valuable and useless features and cannot improve the utilization of effective features. This paper proposes an image emotion prediction network named ARMNet. In ARMNet, a unified affective region extraction method that integrates eye fixation detection and attention detection is proposed to enhance the combined influence of objects and backgrounds. Additionally, the multi-level features are fused with the consideration of their different contributions through an improved channel attention mechanism. In comparison to the existing methods, experiments conducted on the CGnA10766 dataset demonstrate that the performance of valence and arousal, as measured by Mean Squared Error (MSE), Mean Absolute Error (MAE), and Coefficient of Determination (R²), has improved by 4.74%, 3.53%, 3.62%, 1.93%, 6.29%, and 7.23%, respectively. Furthermore, the interpretability of the network is enhanced through the visualization of attention weights corresponding to emotional regions within the images. Full article
(This article belongs to the Special Issue Recent Advances in Smart Mobile Sensing Technology)
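
To make the fusion step concrete, the following is a minimal sketch of a squeeze-and-excitation style channel attention applied to concatenated multi-level features, which is the general mechanism the abstract describes; the module name, dimensions, and usage below are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class ChannelAttentionFusion(nn.Module):
    """Fuse multi-level feature maps, re-weighting channels by learned importance."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)      # squeeze: global context per channel
        self.fc = nn.Sequential(                 # excitation: per-channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, feats):
        # feats: list of feature maps already resized to a common spatial
        # size, e.g. [(B, C1, H, W), (B, C2, H, W)]
        x = torch.cat(feats, dim=1)              # (B, C1+C2, H, W)
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                             # channels scaled by learned weights

# Hypothetical usage with two 256-channel feature levels:
fusion = ChannelAttentionFusion(channels=512)
fused = fusion([torch.randn(2, 256, 14, 14), torch.randn(2, 256, 14, 14)])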

24 pages, 3388 KiB  
Article
An Audiovisual Correlation Matching Method Based on Fine-Grained Emotion and Feature Fusion
by Zhibin Su, Yiming Feng, Jinyu Liu, Jing Peng, Wei Jiang and Jingyu Liu
Sensors 2024, 24(17), 5681; https://doi.org/10.3390/s24175681 - 31 Aug 2024
Cited by 1 | Viewed by 1102
Abstract
Most existing intelligent editing tools for music and video rely on the cross-modal matching technology of the affective consistency or the similarity of feature representations. However, these methods are not fully applicable to complex audiovisual matching scenarios, resulting in low matching accuracy and suboptimal audience perceptual effects due to ambiguous matching rules and associated factors. To address these limitations, this paper focuses on both the similarity and integration of affective distribution for the artistic audiovisual works of movie and television video and music. Based on the rich emotional perception elements, we propose a hybrid matching model based on feature canonical correlation analysis (CCA) and fine-grained affective similarity. The model refines KCCA fusion features by analyzing both matched and unmatched music–video pairs. Subsequently, the model employs XGBoost to predict relevance and to compute similarity by considering fine-grained affective semantic distance as well as affective factor distance. Ultimately, the matching prediction values are obtained through weight allocation. Experimental results on a self-built dataset demonstrate that the proposed affective matching model balances feature parameters and affective semantic cognitions, yielding relatively high prediction accuracy and better subjective experience of audiovisual association. This paper is crucial for exploring the affective association mechanisms of audiovisual objects from a sensory perspective and improving related intelligent tools, thereby offering a novel technical approach to retrieval and matching in music–video editing. Full article
(This article belongs to the Special Issue Recent Advances in Smart Mobile Sensing Technology)
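
As a rough illustration of the hybrid pipeline described above, the sketch below combines a learned relevance prediction on CCA-projected features with an affective-distance similarity through a fixed weight allocation. It substitutes scikit-learn's linear CCA for the paper's kernel CCA (KCCA), and all data shapes, emotion features, and weights are synthetic placeholders rather than the authors' settings.

import numpy as np
from sklearn.cross_decomposition import CCA
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X_video = rng.normal(size=(200, 64))    # per-clip video features (synthetic)
X_music = rng.normal(size=(200, 48))    # per-clip music features (synthetic)
y_match = rng.integers(0, 2, size=200)  # 1 = matched pair, 0 = unmatched

# Project both modalities into a shared space (linear CCA stands in for KCCA).
cca = CCA(n_components=8).fit(X_video, X_music)
Zv, Zm = cca.transform(X_video, X_music)
fused = np.hstack([Zv, Zm])

# Learn relevance from matched and unmatched pairs.
clf = XGBClassifier(n_estimators=100, max_depth=4, eval_metric="logloss")
clf.fit(fused, y_match)
relevance = clf.predict_proba(fused)[:, 1]

# Affective similarity from distance in an emotion space; two dimensions
# (e.g. valence/arousal) stand in for the paper's fine-grained features.
emo_video = rng.normal(size=(200, 2))
emo_music = rng.normal(size=(200, 2))
affective_sim = 1.0 / (1.0 + np.linalg.norm(emo_video - emo_music, axis=1))

# Final matching score via weight allocation; the 0.6/0.4 split is arbitrary.
score = 0.6 * relevance + 0.4 * affective_sim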

15 pages, 33766 KiB  
Article
EmotionCast: An Emotion-Driven Intelligent Broadcasting System for Dynamic Camera Switching
by Xinyi Zhang, Xinran Ba, Feng Hu and Jin Yuan
Sensors 2024, 24(16), 5401; https://doi.org/10.3390/s24165401 - 21 Aug 2024
Viewed by 683
Abstract
Traditional broadcasting methods often result in fatigue and decision-making errors when dealing with complex and diverse live content. Current research on intelligent broadcasting primarily relies on preset rules and model-based decisions, which have limited capabilities for understanding emotional dynamics. To address these issues, this study proposed and developed an emotion-driven intelligent broadcasting system, EmotionCast, to enhance the efficiency of camera switching during live broadcasts through decisions based on multimodal emotion recognition technology. Initially, the system employs sensing technologies to collect real-time video and audio data from multiple cameras, utilizing deep learning algorithms to analyze facial expressions and vocal tone cues for emotion detection. Subsequently, the visual, audio, and textual analyses were integrated to generate an emotional score for each camera. Finally, the score for each camera shot at the current time point was calculated by combining the current emotion score with the optimal scores from the preceding time window. This approach ensured optimal camera switching, thereby enabling swift responses to emotional changes. EmotionCast can be applied in various sensing environments such as sports events, concerts, and large-scale performances. The experimental results demonstrate that EmotionCast excels in switching accuracy, emotional resonance, and audience satisfaction, significantly enhancing emotional engagement compared to traditional broadcasting methods. Full article
(This article belongs to the Special Issue Recent Advances in Smart Mobile Sensing Technology)
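
The windowed scoring rule described in the abstract can be sketched as follows; the blending weight, window length, and per-camera score format are assumptions chosen for illustration rather than the authors' implementation.

from collections import deque

def make_switcher(n_cameras: int, window: int = 5, alpha: float = 0.7):
    """Return a step function that picks the camera to cut to at each time point."""
    history = [deque(maxlen=window) for _ in range(n_cameras)]

    def step(current_scores):
        # current_scores: per-camera emotion score at this time point, e.g. a
        # fused visual/audio/text emotion estimate in [0, 1].
        combined = []
        for cam, s in enumerate(current_scores):
            past_best = max(history[cam], default=s)  # optimal score in the window
            combined.append(alpha * s + (1 - alpha) * past_best)
            history[cam].append(s)
        return max(range(n_cameras), key=combined.__getitem__)

    return step

switcher = make_switcher(n_cameras=3)
print(switcher([0.2, 0.8, 0.5]))  # -> 1: the camera with the highest blended score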
