
Machine Learning and Signal Processing for IOT Applications

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (20 December 2021) | Viewed by 29240

Special Issue Editor


Prof. Shih-Hau Fang
Guest Editor
Department of Electrical Engineering, Yuan Ze University, Taoyuan, Taiwan
Interests: mobile positioning and applications; machine learning and pattern recognition; wireless network and communications; signal processing and applications

Special Issue Information

Dear Colleagues,

This Special Issue focuses on advanced signal processing and machine learning technologies for novel IoT (Internet of Things) applications. Successful examples include environmental monitoring, artificial intelligence, health care, indoor localization, wireless networks, and multimedia interaction. One objective of this Special Issue is to present IoT applications that employ state-of-the-art signal processing and machine learning technologies. The other is to promote interdisciplinary collaboration between researchers in signal processing and machine learning for novel IoT applications.

Prof. Shih-Hau Fang
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Signal processing
  • Machine learning
  • Internet of Things
  • Artificial intelligence

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (8 papers)


Research

29 pages, 4912 KiB  
Article
A CSI-Based Multi-Environment Human Activity Recognition Framework
by Baha A. Alsaify, Mahmoud M. Almazari, Rami Alazrai, Sahel Alouneh and Mohammad I. Daoud
Appl. Sci. 2022, 12(2), 930; https://doi.org/10.3390/app12020930 - 17 Jan 2022
Cited by 24 | Viewed by 3948
Abstract
Passive human activity recognition (HAR) systems, in which no sensors are attached to the subject, offer great potential compared with conventional systems. One recent technique showing tremendous promise is channel state information (CSI)-based HAR. In this work, we present a multi-environment human activity recognition system based on observing changes in the CSI values of exchanged wireless packets carried by OFDM subcarriers. In essence, we introduce a five-stage CSI-based human activity recognition approach. First, the acquired CSI values associated with each recorded activity instance are processed to remove noise from the recorded data. A novel segmentation algorithm is then presented to identify and extract the portion of the signal that contains the activity. Next, the extracted activity segment is processed using the procedure proposed in the first stage. After that, the relevant features are extracted and the most important features are selected. Finally, the selected features are used to train a support vector machine (SVM) classifier to identify the different performed activities. To validate the performance of the proposed approach, we collected data in two different environments; in each environment, several activities were performed by multiple subjects. The experiments showed that our proposed approach achieved an average activity recognition accuracy of 91.27%.
(This article belongs to the Special Issue Machine Learning and Signal Processing for IOT Applications)
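The five-stage pipeline maps naturally onto a small scikit-learn workflow. The following is a minimal illustrative sketch, not the authors' implementation: a median filter, variance-based segmentation, and summary statistics stand in for the paper's denoising, segmentation, and feature extraction, and all helper names are hypothetical.

```python
# Illustrative sketch of a five-stage CSI-based HAR pipeline (hypothetical
# helpers; the paper's exact filtering, segmentation, and features differ).
import numpy as np
from scipy.signal import medfilt
from sklearn.svm import SVC
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline

def denoise(csi):
    # Stages 1 and 3: median filtering per OFDM subcarrier stream.
    return np.apply_along_axis(medfilt, 0, csi, 5)

def segment_activity(csi, win=64):
    # Stage 2: keep the highest-variance window as a crude stand-in for
    # the paper's segmentation algorithm.
    var = np.array([csi[i:i + win].var() for i in range(len(csi) - win)])
    return csi[var.argmax():var.argmax() + win]

def extract_features(segment):
    # Stage 4: basic per-subcarrier statistics.
    return np.concatenate([segment.mean(0), segment.std(0),
                           segment.max(0) - segment.min(0)])

# Stages 4-5: feature selection followed by an SVM classifier
# (assumes >= 30 features, i.e. >= 10 subcarriers).
clf = make_pipeline(SelectKBest(f_classif, k=30), SVC(kernel="rbf"))

def fit(recordings, labels):
    # recordings: list of raw CSI arrays (time x subcarriers).
    X = np.stack([extract_features(denoise(segment_activity(denoise(r))))
                  for r in recordings])
    clf.fit(X, labels)
```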

18 pages, 6963 KiB  
Article
Unsupervised Clustering Pipeline to Obtain Diversified Light Spectra for Subject Studies and Correlation Analyses
by Stefan Klir, Reda Fathia, Sebastian Babilon, Simon Benkner and Tran Quoc Khanh
Appl. Sci. 2021, 11(19), 9062; https://doi.org/10.3390/app11199062 - 28 Sep 2021
Cited by 4 | Viewed by 1903
Abstract
Current subject studies and data-driven approaches in lighting research often use manually selected light spectra, which usually exhibit a large bias due to the applied selection criteria. This paper therefore presents a novel approach to minimizing this bias by using a data-driven framework to select the most diverse candidates from a given larger set of possible light spectra. The spectral information per wavelength is first reduced by applying a convolutional autoencoder. The relevant features are then selected based on Laplacian scores and transformed into a two-dimensional embedded space for subsequent clustering. The low-dimensional embedding, from which the required diversity follows, is performed with respect to the locality of the features. In a second step, photometric parameters are considered and a second clustering is performed. As a result of this algorithmic pipeline, the most diverse selection of light spectra complying with a given set of relevant photometric parameters can be extracted and used for further experiments or applications.
(This article belongs to the Special Issue Machine Learning and Signal Processing for IOT Applications)
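As a rough illustration of the two-stage selection pipeline, the sketch below substitutes PCA for the paper's convolutional autoencoder and t-SNE for its locality-preserving 2D embedding, and omits the Laplacian-score feature selection; the function name and parameters are hypothetical.

```python
# Minimal two-stage diversity-selection sketch under the substitutions
# noted above; not the authors' pipeline.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans

def select_diverse_spectra(spectra, photometrics, n_groups=8):
    """spectra: (n, wavelengths); photometrics: (n, params)."""
    # Step 1: compress spectra, embed them in 2D, then cluster.
    codes = PCA(n_components=16).fit_transform(spectra)
    embedded = TSNE(n_components=2, init="pca").fit_transform(codes)
    labels = KMeans(n_clusters=n_groups, n_init=10).fit_predict(embedded)

    # Step 2: within each spectral cluster, cluster again on photometric
    # parameters and keep the sample closest to each centroid.
    picks = []
    for c in range(n_groups):
        idx = np.flatnonzero(labels == c)
        km = KMeans(n_clusters=min(2, len(idx)), n_init=10).fit(photometrics[idx])
        for center in km.cluster_centers_:
            d = np.linalg.norm(photometrics[idx] - center, axis=1)
            picks.append(idx[d.argmin()])
    return sorted(set(picks))
```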

16 pages, 8210 KiB  
Article
A Speech Command Control-Based Recognition System for Dysarthric Patients Based on Deep Learning Technology
by Yu-Yi Lin, Wei-Zhong Zheng, Wei Chung Chu, Ji-Yan Han, Ying-Hsiu Hung, Guan-Min Ho, Chia-Yuan Chang and Ying-Hui Lai
Appl. Sci. 2021, 11(6), 2477; https://doi.org/10.3390/app11062477 - 10 Mar 2021
Cited by 22 | Viewed by 3952
Abstract
Voice control is an important way of controlling mobile devices; however, using it remains a challenge for dysarthric patients. Many approaches, such as automatic speech recognition (ASR) systems, are currently used to help dysarthric patients control mobile devices, but the large computational power required by ASR systems increases implementation costs. To alleviate this problem, this study proposed a convolutional neural network (CNN) with phonetic posteriorgram (PPG) speech features, called CNN–PPG, to recognize speech commands; a CNN with Mel-frequency cepstral coefficients (CNN–MFCC) and an ASR-based system were used for comparison. The experimental results show that the CNN–PPG system achieved 93.49% accuracy, better than the CNN–MFCC (65.67%) and ASR-based (89.59%) systems. Additionally, CNN–PPG used a smaller model, with only 54% of the parameters of the ASR-based system; hence, the proposed system could reduce implementation costs for users. These findings suggest that the CNN–PPG system could augment communication devices to help dysarthric patients control mobile devices via speech commands in the future.
(This article belongs to the Special Issue Machine Learning and Signal Processing for IOT Applications)
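For readers unfamiliar with the setup, a hedged sketch of a CNN command classifier over PPG inputs follows; the layer sizes, phone-class count, and command count are illustrative assumptions, not the paper's architecture.

```python
# Toy CNN over PPG "images" (time x phone classes); sizes are assumptions.
import torch
import torch.nn as nn

class CommandCNN(nn.Module):
    def __init__(self, n_commands=19):
        super().__init__()
        # Input: (batch, 1, time, n_phones) posteriorgram.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_commands)

    def forward(self, ppg):
        return self.classifier(self.features(ppg).flatten(1))

# Example: 100-frame PPG with 72 phone classes per frame -> (8, 19) logits.
logits = CommandCNN()(torch.randn(8, 1, 100, 72))
```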

9 pages, 3108 KiB  
Article
Personal Atmosphere: Estimation of Air Conditioner Parameters for Personalizing Thermal Comfort
by Tomohiro Mashita, Tetsuya Kanayama and Photchara Ratsamee
Appl. Sci. 2020, 10(22), 8067; https://doi.org/10.3390/app10228067 - 13 Nov 2020
Cited by 3 | Viewed by 2001
Abstract
Air conditioners enable a comfortable environment for people in a variety of scenarios. However, in a room with multiple people, the comfort of a particular person depends strongly on their clothing, metabolism, preference, and so on, and the ideal conditions for the people in a room can conflict with each other. An ideal way to resolve such conflicts is an intelligent air conditioning system that can independently control air temperature and flow in different areas of a room and thus produce thermal comfort, which we define as the personal preference of air flow and temperature, for multiple users. In this paper, we propose Personal Atmosphere, a machine learning based method for obtaining air conditioner parameters that generate non-uniform distributions of air temperature and flow in a room. Two-dimensional distributions of air temperature and flow in the room, which can be considered a summary of each user's preference, are used as input to a machine learning model, and the model outputs a parameter set for the air conditioners in that room. We used ResNet-50 as the model and generated a dataset of air temperature and flow distributions using computational fluid dynamics (CFD) software. Evaluations on two rooms with two and four ceiling-mounted air conditioners confirmed that the estimated parameters generate air temperature and flow distributions close to those required in simulation. We also evaluated a fine-tuned ResNet-50; fine-tuning significantly reduced the training time but also degraded performance.
(This article belongs to the Special Issue Machine Learning and Signal Processing for IOT Applications)
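The regression setup lends itself to a short sketch: a stock torchvision ResNet-50 whose input layer is widened to two channels (one temperature field, one flow field) and whose head regresses an air-conditioner parameter vector. The channel count and 12-dimensional output below are assumptions for illustration, not the paper's exact configuration.

```python
# ResNet-50 adapted for field-to-parameters regression (assumed shapes).
import torch
import torch.nn as nn
from torchvision.models import resnet50

model = resnet50(weights=None)
# Two input channels: one temperature map, one air-flow map.
model.conv1 = nn.Conv2d(2, 64, kernel_size=7, stride=2, padding=3, bias=False)
# Hypothetical head: 3 parameters per unit x 4 ceiling units = 12 outputs.
model.fc = nn.Linear(model.fc.in_features, 12)

fields = torch.randn(1, 2, 224, 224)   # CFD-simulated fields as input
ac_params = model(fields)              # -> (1, 12) parameter vector
loss = nn.functional.mse_loss(ac_params, torch.zeros(1, 12))
```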

16 pages, 2862 KiB  
Article
Utility-Based Wireless Routing Algorithm for Massive MIMO Heterogeneous Networks
by Wei Zhao and Wen-Hsing Kuo
Appl. Sci. 2020, 10(20), 7261; https://doi.org/10.3390/app10207261 - 17 Oct 2020
Cited by 1 | Viewed by 1720
Abstract
With the development of 5G communication, massive multiple-input multiple-output (MIMO) technology is attracting increasing attention. Massive MIMO uses a large number of antennas transmitting and receiving simultaneously to reduce power consumption and raise transmission quality. Meanwhile, the diversification of user equipment (UE) in the 5G environment also makes heterogeneous networks (HetNets) more prevalent. HetNets allow UE of different network standards to access small cells, while the base stations (BSs) of the small cells access a macro BS to form a multihop wireless heterogeneous backhaul network. However, how to combine these two technologies by efficiently allocating the antennas of each BS during route construction in heterogeneous wireless backhaul networks remains an open issue. In this paper, we propose an algorithm called preallocated sequential routing (PSR). Based on the links' channel conditions and the available antennas and locations of the BSs, it builds a wireless heterogeneous backhaul topology and adjusts each link's transmitting and receiving antennas to maximize total utility. Simulation results show that the proposed algorithm significantly improves both the overall utility and the utility of the outer area of heterogeneous networks.
(This article belongs to the Special Issue Machine Learning and Signal Processing for IOT Applications)
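The paper's preallocation and utility model are not reproduced here, but a toy greedy construction in the same spirit, attaching each small-cell BS to the backhaul tree by the currently best link utility under an antenna budget, can be sketched as follows; link_utility is a placeholder.

```python
# Toy utility-driven backhaul tree construction (not the PSR algorithm).
import heapq

def link_utility(parent, child, antennas_left, gain):
    # Placeholder: channel gain scaled by antennas still available.
    return gain[parent][child] * min(antennas_left[parent], antennas_left[child])

def build_backhaul(macro, stations, gain, antennas):
    """Greedily attach each small-cell BS to the tree by best utility."""
    antennas_left = dict(antennas)
    parent_of, tree = {}, {macro}
    heap = [(-link_utility(macro, s, antennas_left, gain), macro, s)
            for s in stations]
    heapq.heapify(heap)
    while heap and len(tree) < len(stations) + 1:
        _, p, c = heapq.heappop(heap)
        if c in tree or antennas_left[p] <= 0:
            continue
        parent_of[c] = p
        tree.add(c)
        antennas_left[p] -= 1   # one antenna committed at each endpoint
        antennas_left[c] -= 1
        for s in stations:      # re-offer links from the newly joined BS
            if s not in tree:
                heapq.heappush(
                    heap, (-link_utility(c, s, antennas_left, gain), c, s))
    return parent_of
```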

14 pages, 984 KiB  
Article
Color Classification of Wooden Boards Based on Machine Vision and the Clustering Algorithm
by Ye Lin, Dan Chen, Shijia Liang, Zhezhuang Xu, Yang Qiu, Jiahao Zhang and Xinxiang Liu
Appl. Sci. 2020, 10(19), 6816; https://doi.org/10.3390/app10196816 - 29 Sep 2020
Cited by 12 | Viewed by 3032
Abstract
Color classification of wooden boards helps improve the appearance of wooden furniture spliced from multiple boards. Because the colors of wooden boards are similar to one another, manual color classification is inaccurate and unstable, and supervised learning algorithms can hardly be used in this scenario. Moreover, wooden boards are long and their images have a high resolution, which increases computational complexity. To overcome these challenges, this paper proposes a new mechanism for color classification of wooden boards based on machine vision. The image of the wooden board is preprocessed to subtract irrelevant colors, and a feature vector is extracted from a 3D color histogram to reduce the computational complexity. In offline clustering, the feature vector sets are partitioned into clusters using the K-means algorithm; the clustering result is then used in online classification to classify new wood images. Furthermore, to handle abnormal images of wooden boards, we propose an improved algorithm with centroid improvement and image filtering. Experimental results verify the effectiveness of the proposed mechanism.
(This article belongs to the Special Issue Machine Learning and Signal Processing for IOT Applications)
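The offline-clustering/online-classification split described above can be sketched in a few lines: a coarse 3D RGB histogram as the feature vector, K-means for the offline grades, and nearest-centroid assignment online. The bin and grade counts are assumptions, and the paper's preprocessing and abnormal-image handling are omitted.

```python
# 3D color histogram features + K-means grading (illustrative sketch).
import numpy as np
from sklearn.cluster import KMeans

def color_histogram(image, bins=8):
    """image: (H, W, 3) uint8 RGB -> normalized (bins**3,) feature."""
    hist, _ = np.histogramdd(image.reshape(-1, 3),
                             bins=(bins, bins, bins),
                             range=((0, 256),) * 3)
    return hist.ravel() / hist.sum()

def fit_grades(images, n_grades=4):
    # Offline: cluster reference boards into color grades.
    feats = np.stack([color_histogram(im) for im in images])
    return KMeans(n_clusters=n_grades, n_init=10).fit(feats)

def classify(km, image):
    # Online: assign a new board to the nearest grade centroid.
    return int(km.predict(color_histogram(image)[None])[0])
```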

21 pages, 6116 KiB  
Article
Noise Prediction Using Machine Learning with Measurements Analysis
by Po-Jiun Wen and Chihpin Huang
Appl. Sci. 2020, 10(18), 6619; https://doi.org/10.3390/app10186619 - 22 Sep 2020
Cited by 8 | Viewed by 8313
Abstract
Noise prediction using machine learning has recently received increased attention, particularly in workplaces with noise pollution, where general laborers face increased noise exposure. This study analyzes the equivalent noise level (Leq) at the National Synchrotron Radiation Research Center (NSRRC) facility and establishes a machine learning model for noise prediction. The study uses a gradient boosting model (GBM) in which past noise measurement records and many other features are integrated when the model makes a prediction. We analyzed the time duration and frequency of the collected Leq and also investigated the impact of training data selection. The results indicate that the proposed prediction model works well for almost all noise sensors and frequencies, and it performed especially well at sensor 8 (125 Hz), which past noise measurements had identified as a serious noise zone. The root-mean-square error (RMSE) of the predicted harmful noise was less than 1 dBA and the coefficient of determination (R2) was greater than 0.7; that is, the working field showed favorable noise prediction performance using the proposed method. This result demonstrates the ability of the proposed approach to predict future noise pollution, which is essential for laborers in high-noise environments: notifying workers in advance helps them avoid harmful noise zones and prevent long-term exposure.
(This article belongs to the Special Issue Machine Learning and Signal Processing for IOT Applications)
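The modeling setup, a gradient boosting regressor evaluated by RMSE and R2, is standard and easy to reproduce in outline. The sketch below uses synthetic data, and the feature names in the comment are illustrative stand-ins for the paper's inputs.

```python
# GBM regression with the reported metrics (RMSE, R2) on synthetic data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical features: hour of day, sensor id, frequency band, past Leq.
X = rng.normal(size=(2000, 4))
y = 60 + 3 * X[:, 3] + rng.normal(scale=0.8, size=2000)   # synthetic Leq (dBA)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
gbm = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                max_depth=3).fit(X_tr, y_tr)

pred = gbm.predict(X_te)
rmse = mean_squared_error(y_te, pred) ** 0.5   # paper reports RMSE < 1 dBA
r2 = r2_score(y_te, pred)                      # paper reports R2 > 0.7
print(f"RMSE = {rmse:.2f} dBA, R2 = {r2:.2f}")
```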

15 pages, 1311 KiB  
Article
Phonocardiography Signals Compression with Deep Convolutional Autoencoder for Telecare Applications
by Ying-Ren Chien, Kai-Chieh Hsu and Hen-Wai Tsao
Appl. Sci. 2020, 10(17), 5842; https://doi.org/10.3390/app10175842 - 24 Aug 2020
Cited by 12 | Viewed by 3088
Abstract
Phonocardiography (PCG) signals, which can be recorded using electronic stethoscopes, play an essential role in detecting heart valve abnormalities and assisting in the diagnosis of heart disease. However, transmitting these PCG signals to remote sites for telecare applications consumes considerable bandwidth. This paper presents a deep convolutional autoencoder to compress PCG signals. At the encoder side, seven convolutional layers compress the PCG signals, collected from patients in rural areas, into feature maps. At the decoder side, doctors at the remote hospital use another seven convolutional layers to decompress the feature maps and reconstruct the original PCG signals. To confirm the effectiveness of our method, we used an open-access dataset from PhysioNet. The achievable compression ratio (CR) is 32 when the percent root-mean-square difference (PRD) is less than 5%.
(This article belongs to the Special Issue Machine Learning and Signal Processing for IOT Applications)
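A reduced sketch of the encoder/decoder split is shown below. The paper uses seven convolutional layers per side and reaches CR = 32; this three-layer version only illustrates the structure and a stride-based temporal compression of 8x, with all layer sizes assumed.

```python
# Reduced 1D convolutional autoencoder for PCG compression (illustrative).
import torch
import torch.nn as nn

class PCGAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Each stride-2 layer halves the time axis: 3 layers -> 8x smaller.
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, 9, stride=2, padding=4), nn.ReLU(),
            nn.Conv1d(16, 32, 9, stride=2, padding=4), nn.ReLU(),
            nn.Conv1d(32, 1, 9, stride=2, padding=4),   # compact feature map
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(1, 32, 9, stride=2, padding=4,
                               output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(32, 16, 9, stride=2, padding=4,
                               output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(16, 1, 9, stride=2, padding=4,
                               output_padding=1),
        )

    def forward(self, pcg):                 # pcg: (batch, 1, samples)
        return self.decoder(self.encoder(pcg))

x = torch.randn(4, 1, 4096)                 # 4096-sample PCG frames
recon = PCGAutoencoder()(x)                 # -> (4, 1, 4096)
prd = 100 * torch.norm(x - recon) / torch.norm(x)   # percent RMS difference
```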
