Electronic Solutions for Artificial Intelligence Healthcare Volume II

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: closed (31 August 2023) | Viewed by 135530

Special Issue Editor

Special Issue Information

Dear Colleagues,

Diverse, innovative technologies are now used across electronics and ubiquitous computing environments. They provide the backbone for remarkable developments in electronics, devices, computer science, and engineering that improve our society. At the same time, AI-driven healthcare and bioelectronics are becoming more complex and sophisticated faster than ever before.

Thus, in this Special Issue, we aim to start a discussion on convergent research that contributes to humanity by respecting human beings and their lives, while aiding and serving neglected or isolated people. To this end, the Special Issue welcomes meaningful and valuable manuscripts on solving healthcare problems with electronic solutions. Suggested topics include, but are not limited to, the following:

> Electronic service respecting human beings and their lives;

> Electronic solutions to artificial intelligence and Big Data;

> Means of aiding and serving neglected people, such as people with disabilities or the elderly;

> Electronic engineering mathematical theories that deeply affect science and industry;

> Intelligent media techniques and services for systems engineering;

> Intelligent Security/Blockchain techniques and services for improved systems engineering;

> A public electronic engineering integration system for future systems.

Prof. Dr. Jun-Ho Huh
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • humanity solution
  • artificial intelligence
  • application
  • big data
  • intelligent media techniques
  • mathematical theories
  • healthcare

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (16 papers)


Research

Jump to: Review

26 pages, 2752 KiB  
Article
A Greedy Heuristic Based on Optimizing Battery Consumption and Routing Distance for Transporting Blood Using Unmanned Aerial Vehicles
by Sumayah Al-Rabiaah, Manar Hosny and Sarab AlMuhaideb
Electronics 2022, 11(20), 3399; https://doi.org/10.3390/electronics11203399 - 20 Oct 2022
Cited by 5 | Viewed by 1702
Abstract
Unmanned Aerial Vehicles (UAVs) play crucial roles in numerous applications, such as healthcare services. For example, UAVs can help in disaster relief and rescue missions, such as by delivering blood samples and medical supplies. In this work, we studied a problem related to the routing of UAVs in healthcare, known as the UAV-based Capacitated Vehicle Routing Problem (UCVRP), which is classified as NP-hard. The problem deals with utilizing UAVs to deliver blood to patients in emergency situations while minimizing the number of UAVs and the total routing distance. The UCVRP is a variant of the well-known capacitated vehicle routing problem, with additional constraints that fit the shipment type and the characteristics of the UAV. To solve this problem, we developed a heuristic known as the Greedy Battery-Distance Optimizing Heuristic (GBDOH). The idea is to assign patients to UAVs in such a way as to minimize battery consumption and the number of UAVs, and then rearrange the patients of each UAV to minimize the total routing distance. We performed extensive experiments on the proposed GBDOH using instances tested by other methods in the literature. The results reveal that GBDOH is more efficient, with lower computational complexity, and provides a better objective value by approximately 27% compared to the best methods in the literature. Full article
(This article belongs to the Special Issue Electronic Solutions for Artificial Intelligence Healthcare Volume II)
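The abstract above does not spell out GBDOH's internals, so the following is only a minimal sketch of the two-phase greedy idea it describes: pack patients onto UAVs under a battery budget, then reorder each UAV's route nearest-neighbor-first to shorten total distance. The battery cost model (distance from the depot) and all names are illustrative assumptions, not the paper's algorithm.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def assign_and_route(depot, patients, battery_budget):
    # Phase 1: greedily assign patients to UAVs under a battery budget.
    routes, current, used = [], [], 0.0
    for p in patients:
        cost = dist(depot, p)  # crude proxy: battery use ~ distance from depot
        if current and used + cost > battery_budget:
            routes.append(current)
            current, used = [], 0.0
        current.append(p)
        used += cost
    if current:
        routes.append(current)
    # Phase 2: nearest-neighbor reordering of each route, starting at the depot.
    ordered = []
    for route in routes:
        pos, left, seq = depot, route[:], []
        while left:
            nxt = min(left, key=lambda q: dist(pos, q))
            left.remove(nxt)
            seq.append(nxt)
            pos = nxt
        ordered.append(seq)
    return ordered

routes = assign_and_route((0, 0), [(1, 1), (5, 0), (1, 2), (6, 1)], battery_budget=8.0)
print(len(routes), routes)
```

A real UCVRP solver would also model payload capacity and in-flight battery drain; this toy version only illustrates the assign-then-reorder structure.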

15 pages, 5372 KiB  
Article
Simulations of the Comparative Study of the Single-Phase Shift and the Dual-Phase Shift-Controlled Triple Active Bridge Converter
by Norbert Njuanyi Koneh, Jae-Sub Ko and Dae-Kyong Kim
Electronics 2022, 11(20), 3274; https://doi.org/10.3390/electronics11203274 - 12 Oct 2022
Cited by 3 | Viewed by 2393
Abstract
This paper presents a comparative study between the traditional phase shift (also referred to as the Single-Phase Shift (SPS)) and the Dual-Phase Shift (DPS) controlled Triple Active Bridge (TAB) converter. Being a multi-port DC-DC converter with flexible power flow control and characterized by high power density, the TAB converter is applicable in almost any situation where a DC-DC converter is needed. With the availability of multiple control schemes, this work highlights the advantages and disadvantages of the most employed control scheme used on the TAB converter, in comparison with the DPS control scheme that has so far been applied only on Dual-Active Bridge (DAB) converters. As an example, for a TAB converter with a 14 kW maximum power capacity, the work sees the comparison of the backflow power, the maximum possible current, the processed power at the different ports of the converter, the transformer voltage and current waveforms, and the Total Harmonic Distortion (THD). Based on the results obtained, we found that the DPS-controlled TAB converter was more efficient when applied to the TAB converter compared to the traditional phase shift control algorithm. Full article
(This article belongs to the Special Issue Electronic Solutions for Artificial Intelligence Healthcare Volume II)
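For readers unfamiliar with phase-shift control, the textbook single-phase-shift power-flow expression for a dual-active bridge illustrates why the phase shift angle controls transferred power; the paper's triple-active-bridge simulations extend this to three ports and are not reproduced here. All parameter values below are illustrative assumptions.

```python
import math

def dab_sps_power(v1, v2, phi, f, L, n=1.0):
    """Textbook SPS power transfer for a DAB, phase shift phi in radians,
    |phi| <= pi/2; f is switching frequency, L the leakage inductance."""
    return (n * v1 * v2 * phi * (1 - abs(phi) / math.pi)) / (2 * math.pi * f * L)

# Power is zero at phi = 0 and peaks at phi = pi/2:
p_max = dab_sps_power(400, 400, math.pi / 2, f=50e3, L=30e-6)
print(round(p_max, 1))
```

With these assumed values the peak is about 13.3 kW, the same order as the 14 kW example converter discussed in the abstract.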

32 pages, 7159 KiB  
Article
Blockchain Smart Contract to Prevent Forgery of Degree Certificates: Artificial Intelligence Consensus Algorithm
by Seong-Kyu Kim
Electronics 2022, 11(14), 2112; https://doi.org/10.3390/electronics11142112 - 6 Jul 2022
Cited by 7 | Viewed by 4555
Abstract
Certificates such as diplomas and transcripts are often falsified. As a result, many schools and educational institutions have begun to issue diplomas online; although these can be issued conveniently anytime and anywhere, they are still frequently forged through hacking. This paper deals with a Blockchain-based diploma designed to prevent such forgery. In addition, we use an automatic translation system incorporating natural language processing to perform verification without an existing public certificate. A hash algorithm is used for security authentication, and we propose security protocols that provide more secure data protection. Each transaction history recording whether a diploma is genuine may vary in length as text, but converting it with a hash function always yields a digest of at least a fixed length (SHA-512 or higher), which is then verified using timestamp values; the chaining codes are designed accordingly. This paper also describes the experimental environment: at least 10 nodes are constructed, Blockchain standardization is applied and referenced in the platform development, and platform, measurement, and performance tests are conducted to assess smart contract development and performance. Averaging 200 runs over a total of 500 nodes, Blockchain-based diploma files were agreed upon simultaneously, showing a performance of about 4100 TPS. In addition, an artificial intelligence distribution analysis using a four-point method showed an even distribution, confirming the diploma with the highest similarity, and the verified values were analyzed. This paper proposes these natural language processing-based Blockchain algorithms. Full article
(This article belongs to the Special Issue Electronic Solutions for Artificial Intelligence Healthcare Volume II)
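A minimal sketch of the hashing-and-timestamping idea described above: each diploma record is hashed with SHA-512 and chained to the previous block's digest, so tampering with any record changes every later digest. This is an illustration of the hash-chain principle only, not the paper's consensus algorithm, node setup, or record format.

```python
import hashlib
import json

def make_block(prev_digest, diploma_record, timestamp):
    # Canonical serialization so the same record always hashes identically.
    payload = json.dumps(
        {"prev": prev_digest, "record": diploma_record, "ts": timestamp},
        sort_keys=True,
    )
    return hashlib.sha512(payload.encode("utf-8")).hexdigest()

genesis = "0" * 128  # SHA-512 digests are 128 hex characters
b1 = make_block(genesis, {"name": "A. Student", "degree": "BSc"}, 1700000000)
b2 = make_block(b1, {"name": "B. Student", "degree": "MSc"}, 1700000100)

# Verification: recomputing the chain reproduces the digests,
# while a forged record produces a mismatch.
forged = make_block(genesis, {"name": "X. Forger", "degree": "BSc"}, 1700000000)
print(forged != b1, len(b2) == 128)
```

Note that fixed-length digests are what make diploma records of any text length comparable, as the abstract points out.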

44 pages, 17214 KiB  
Article
Coronary Artery Disease Detection Model Based on Class Balancing Methods and LightGBM Algorithm
by Shasha Zhang, Yuyu Yuan, Zhonghua Yao, Jincui Yang, Xinyan Wang and Jianwei Tian
Electronics 2022, 11(9), 1495; https://doi.org/10.3390/electronics11091495 - 6 May 2022
Cited by 11 | Viewed by 3174
Abstract
Coronary artery disease (CAD) is a disease with high mortality and disability. By 2019, there were 197 million CAD patients in the world. Additionally, the number of disability-adjusted life years (DALYs) owing to CAD reached 182 million. It is widely known that early and accurate diagnosis of CAD is the most efficient way to reduce its damage. In medical practice, coronary angiography is considered the most reliable basis for CAD diagnosis. Unfortunately, due to limited inspection equipment and expert resources, many low- and middle-income countries are unable to perform coronary angiography, leading to a large loss of life and a heavy medical burden. Therefore, many researchers aim to achieve accurate CAD diagnosis from conventional medical examination data with the help of machine learning and data mining technology. The goal of this study is to propose a model for the early, accurate, and rapid detection of CAD based on common medical test data. The model uses the classical logistic regression algorithm, the classifier most commonly used in medical model research, and exploits the feature selection and feature combination strengths of tree models to avoid manual feature engineering for the logistic regression. At the same time, to address the class imbalance in the Z-Alizadeh Sani dataset, five different class balancing methods were applied, and appropriate preprocessing methods were adopted according to the characteristics of the dataset. These methods significantly improved the classification performance of the logistic regression classifier in terms of accuracy, recall, precision, F1 score, specificity, and AUC when used for CAD detection. The best accuracy, recall, F1 score, precision, specificity, and AUC were 94.7%, 94.8%, 94.8%, 95.3%, 94.5% and 0.98, respectively.
Experiments and results have confirmed that, according to common medical examination data, our proposed model can accurately identify CAD patients in the early stage of CAD. Our proposed model can be used to help clinicians make diagnostic decisions in clinical practice. Full article
(This article belongs to the Special Issue Electronic Solutions for Artificial Intelligence Healthcare Volume II)
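The class balancing step described above is usually done with a library such as imbalanced-learn; the from-scratch sketch below shows only the core SMOTE idea under simplifying assumptions: synthesize new minority samples by interpolating between a minority point and one of its nearest minority neighbors. The toy data and the choice of k are illustrative, not the paper's setup.

```python
import random

def smote(minority, n_new, k=3, seed=0):
    """Generate n_new synthetic minority samples by neighbor interpolation."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest minority neighbors of x (excluding x itself)
        neigh = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)),
        )[:k]
        nb = rng.choice(neigh)
        lam = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + lam * (b - a) for a, b in zip(x, nb)))
    return synthetic

minority = [(1.0, 2.0), (1.2, 1.9), (0.9, 2.2), (1.1, 2.1)]
new_pts = smote(minority, n_new=4)
print(len(new_pts))
```

Because each synthetic point lies on a segment between two real minority points, the oversampled class stays inside the region the minority data already occupies.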

19 pages, 2847 KiB  
Article
A Deep Learning-Based Action Recommendation Model for Cryptocurrency Profit Maximization
by Jaehyun Park and Yeong-Seok Seo
Electronics 2022, 11(9), 1466; https://doi.org/10.3390/electronics11091466 - 3 May 2022
Cited by 9 | Viewed by 2849
Abstract
Research on the prediction of cryptocurrency prices has been actively conducted, as cryptocurrencies have attracted considerable attention. Recently, researchers have aimed to improve the performance of price prediction methods by applying deep learning-based models. However, most studies have focused on predicting cryptocurrency prices for the following day. Therefore, clients are inconvenienced by the necessity of rapidly making complex decisions on actions that support maximizing their profit, such as “Sell”, “Buy”, and “Wait”. Furthermore, very few studies have explored the use of deep learning models to make recommendations for these actions, and the performance of such models remains low. Therefore, to solve these problems, we propose a deep learning model and three input features: sellProfit, buyProfit, and maxProfit. Through these concepts, clients are provided with criteria on which action would be most beneficial at a given current time. These criteria can be used as decision-making indices to facilitate profit maximization. To verify the effectiveness of the proposed method, daily price data of six representative cryptocurrencies were used to conduct an experiment. The results confirm that the proposed model showed approximately 13% to 21% improvement over existing methods and is statistically significant. Full article
(This article belongs to the Special Issue Electronic Solutions for Artificial Intelligence Healthcare Volume II)
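The abstract names the sellProfit and buyProfit features but does not define them, so the sketch below encodes one plausible reading as an assumption: compare today's price with the best prices inside a short look-ahead window and recommend whichever of "Sell", "Buy", or "Wait" offers the larger potential profit. The window length and tie-breaking are illustrative choices, not the paper's definitions.

```python
def recommend(prices, t, horizon=3):
    """Recommend an action at time t from a short look-ahead window."""
    window = prices[t + 1 : t + 1 + horizon]
    if not window:
        return "Wait"
    sell_profit = prices[t] - min(window)  # sell now, rebuy at the window low
    buy_profit = max(window) - prices[t]   # buy now, sell at the window high
    if max(sell_profit, buy_profit) <= 0:
        return "Wait"
    return "Sell" if sell_profit > buy_profit else "Buy"

prices = [100, 97, 103, 101, 96]
print([recommend(prices, t) for t in range(len(prices))])
```

In the paper such labels would come from the learned deep model rather than a look-ahead oracle; the oracle version is only a way to make the three-action decision criterion concrete.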

19 pages, 1811 KiB  
Article
Improvement of the Performance of Models for Predicting Coronary Artery Disease Based on XGBoost Algorithm and Feature Processing Technology
by Shasha Zhang, Yuyu Yuan, Zhonghua Yao, Xinyan Wang and Zhen Lei
Electronics 2022, 11(3), 315; https://doi.org/10.3390/electronics11030315 - 20 Jan 2022
Cited by 12 | Viewed by 2671
Abstract
Coronary artery disease (CAD) is one of the diseases with the highest morbidity and mortality in the world. In 2019, the number of deaths caused by CAD reached 9.14 million. The detection and treatment of CAD in the early stage is crucial to save lives and improve prognosis. Therefore, the purpose of this research is to develop a machine-learning system that can be used to help diagnose CAD accurately in the early stage. In this paper, two classical ensemble learning algorithms, namely, XGBoost algorithm and Random Forest algorithm, were used as the classification model. In order to improve the classification accuracy and performance of the model, we applied four feature processing techniques to process features respectively. In addition, synthetic minority oversampling technology (SMOTE) and adaptive synthetic (ADASYN) were used to balance the dataset, which included 71.29% CAD samples and 28.71% normal samples. The four feature processing technologies improved the performance of the classification models in terms of classification accuracy, precision, recall, F1 score and specificity. In particular, the XGBboost algorithm achieved the best prediction performance results on the dataset processed by feature construction and the SMOTE method. The best classification accuracy, recall, specificity, precision, F1 score and AUC were 94.7%, 96.1%, 93.2%, 93.4%, 94.6% and 98.0%, respectively. The experimental results prove that the proposed method can accurately and reliably identify CAD patients from suspicious patients in the early stage and can be used by medical staff for auxiliary diagnosis. Full article
(This article belongs to the Special Issue Electronic Solutions for Artificial Intelligence Healthcare Volume II)
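ADASYN, mentioned above alongside SMOTE, differs mainly in where it synthesizes samples: minority points surrounded by more majority-class neighbors receive proportionally more synthetic samples. The sketch below shows only that density weighting; the interpolation step would then mirror SMOTE, and all data values are toy assumptions.

```python
def adasyn_weights(minority, majority, k=3):
    """Fraction of synthetic samples to allocate to each minority point,
    proportional to how many of its k nearest neighbors are majority-class."""
    weights = []
    for x in minority:
        allpts = [(p, 0) for p in minority if p is not x] + [(p, 1) for p in majority]
        neigh = sorted(
            allpts, key=lambda pc: sum((a - b) ** 2 for a, b in zip(x, pc[0]))
        )[:k]
        weights.append(sum(c for _, c in neigh) / k)  # majority fraction
    total = sum(weights)
    return [w / total for w in weights] if total else [1 / len(weights)] * len(weights)

minority = [(0.0, 0.0), (0.5, 0.0), (5.0, 5.0)]
majority = [(4.5, 5.0), (5.0, 4.5), (5.5, 5.5)]
w = adasyn_weights(minority, majority)
print([round(v, 2) for v in w])
```

The minority point sitting inside the majority cluster gets the largest weight, which is exactly the "harder" borderline region ADASYN is designed to reinforce.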

23 pages, 4561 KiB  
Article
GAN-Based ROI Image Translation Method for Predicting Image after Hair Transplant Surgery
by Do-Yeon Hwang, Seok-Hwan Choi, Jinmyeong Shin, Moonkyu Kim and Yoon-Ho Choi
Electronics 2021, 10(24), 3066; https://doi.org/10.3390/electronics10243066 - 9 Dec 2021
Cited by 3 | Viewed by 3996
Abstract
In this paper, we propose a new deep learning-based image translation method to predict and generate images after hair transplant surgery from images before hair transplant surgery. Since existing image translation models use a naive strategy that trains the whole distribution of translation, the image translation models using the original image as the input data result in converting not only the hair transplant surgery region, which is the region of interest (ROI) for image translation, but also the other image regions, which are not the ROI. To solve this problem, we proposed a novel generative adversarial network (GAN)-based ROI image translation method, which converts only the ROI and retains the image for the non-ROI. Specifically, by performing image translation and image segmentation independently, the proposed method generates predictive images from the distribution of images after hair transplant surgery and specifies the ROI to be used for generated images. In addition, by applying the ensemble method to image segmentation, we propose a more robust method through complementing the shortages of various image segmentation models. From the experimental results using a real medical image dataset, e.g., 1394 images before hair transplantation and 896 images after hair transplantation, to train the GAN model, we show that the proposed GAN-based ROI image translation method performed better than the other GAN-based image translation methods, e.g., by 23% in SSIM (Structural Similarity Index Measure), 452% in IoU (Intersection over Union), and 42% in FID (Frechet Inception Distance), on average. Furthermore, the ensemble method that we propose not only improves ROI detection performance but also shows consistent performances in generating better predictive images from preoperative images taken from diverse angles. Full article
(This article belongs to the Special Issue Electronic Solutions for Artificial Intelligence Healthcare Volume II)
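At inference time, the ROI-preserving translation described above reduces to a compositing step: keep the original pixels outside the segmented ROI and take the GAN output inside it. This toy grayscale sketch shows only that blending; the GAN and segmentation-ensemble models themselves are not reproduced, and the arrays are illustrative.

```python
def composite(original, generated, mask):
    """mask[i][j] == 1 selects the generated (post-surgery) pixel,
    0 keeps the original pixel."""
    return [
        [g if m else o for o, g, m in zip(orow, grow, mrow)]
        for orow, grow, mrow in zip(original, generated, mask)
    ]

orig = [[10, 10, 10],
        [10, 10, 10]]
gen  = [[99, 99, 99],
        [99, 99, 99]]
mask = [[0, 1, 1],
        [0, 0, 1]]
print(composite(orig, gen, mask))   # → [[10, 99, 99], [10, 10, 99]]
```

This separation is why segmentation quality matters so much in the paper: the mask alone decides which regions the translation is allowed to change.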

20 pages, 3039 KiB  
Article
Small-Scale Depthwise Separable Convolutional Neural Networks for Bacteria Classification
by Duc-Tho Mai and Koichiro Ishibashi
Electronics 2021, 10(23), 3005; https://doi.org/10.3390/electronics10233005 - 2 Dec 2021
Cited by 9 | Viewed by 2762
Abstract
Bacterial recognition and classification play a vital role in diagnosing disease by determining the presence of large bacteria in the specimens and the symptoms. Artificial intelligence and computer vision, widely applied in the medical domain, improve accuracy and reduce bacterial recognition and classification time, which aids clinical decision-making and the choice of proper treatment. This paper provides an approach for the automated classification of 33 bacteria strains from the Digital Images of Bacteria Species (DIBaS) dataset based on small-scale depthwise separable convolutional neural networks. Our five-layer architecture has significant advantages due to its compact model, low computational cost, and reliable recognition accuracy. The experimental results show that the proposed design reaches an accuracy of 96.28% on a total of 6600 images and can be executed on limited-resource devices, requiring only 3.23 million parameters and 40.02 million multiply–accumulate operations (MACs). The number of parameters in this architecture is seven times smaller than that of the smallest model listed in the literature. Full article
(This article belongs to the Special Issue Electronic Solutions for Artificial Intelligence Healthcare Volume II)
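The compactness claim above rests on a standard parameter-count argument: a depthwise separable convolution replaces one Dk x Dk x M x N convolution with a Dk x Dk depthwise step plus a 1 x 1 pointwise step. The layer sizes below are illustrative, not the paper's architecture.

```python
def standard_conv_params(dk, m, n):
    # Dk x Dk kernel over M input channels, producing N output channels.
    return dk * dk * m * n

def depthwise_separable_params(dk, m, n):
    # Depthwise (Dk x Dk per input channel) + pointwise (1 x 1, M -> N).
    return dk * dk * m + m * n

dk, m, n = 3, 64, 128
std = standard_conv_params(dk, m, n)
sep = depthwise_separable_params(dk, m, n)
print(std, sep, round(std / sep, 1))
```

For a 3 x 3 kernel with 64 input and 128 output channels, the separable form needs roughly 8x fewer parameters, which is the effect the five-layer architecture exploits.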

21 pages, 4216 KiB  
Article
Word Sense Disambiguation Using Prior Probability Estimation Based on the Korean WordNet
by Minho Kim and Hyuk-Chul Kwon
Electronics 2021, 10(23), 2938; https://doi.org/10.3390/electronics10232938 - 26 Nov 2021
Cited by 2 | Viewed by 2267
Abstract
Supervised disambiguation using a large amount of corpus data delivers better performance than other word sense disambiguation methods. However, it is not easy to construct large-scale, sense-tagged corpora since this requires high cost and time. On the other hand, implementing unsupervised disambiguation is relatively easy, although most of the efforts have not been satisfactory. A primary reason for the performance degradation of unsupervised disambiguation is that the semantic occurrence probability of ambiguous words is not available. Hence, a data deficiency problem occurs while determining the dependency between words. This paper proposes an unsupervised disambiguation method using a prior probability estimation based on the Korean WordNet. This performs better than supervised disambiguation. In the Korean WordNet, all the words have similar semantic characteristics to their related words. Thus, it is assumed that the dependency between words is the same as the dependency between their related words. This resolves the data deficiency problem by determining the dependency between words by calculating the χ2 statistic between related words. Moreover, in order to have the same effect as using the semantic occurrence probability as prior probability, which is used in supervised disambiguation, semantically related words of ambiguous vocabulary are obtained and utilized as prior probability data. An experiment was conducted with Korean, English, and Chinese to evaluate the performance of our proposed lexical disambiguation method. We found that our proposed method had better performance than supervised disambiguation methods even though our method is based on unsupervised disambiguation (using a knowledge-based approach). Full article
(This article belongs to the Special Issue Electronic Solutions for Artificial Intelligence Healthcare Volume II)
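The dependency test mentioned above relies on a chi-squared statistic computed over co-occurrence counts of (related) word pairs. The sketch below is the standard 2x2 chi-squared formula; the counts are made up for illustration, and the Korean WordNet lookup of related words is not modeled here.

```python
def chi2_2x2(a, b, c, d):
    """2x2 chi-squared statistic from co-occurrence counts:
    a: both words occur, b: only w1, c: only w2, d: neither."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Words that co-occur far more often than chance give a large statistic,
# while independent words give a value near zero:
print(round(chi2_2x2(30, 10, 10, 950), 1), chi2_2x2(10, 90, 90, 810))
```

Computing this statistic over *related* words rather than the ambiguous words themselves is, per the abstract, what lets the method sidestep the data deficiency problem.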

20 pages, 24808 KiB  
Article
Classical Music Specific Mood Automatic Recognition Model Proposal
by Suyeon Lee, Haemin Jeong and Hyeyoung Ko
Electronics 2021, 10(20), 2489; https://doi.org/10.3390/electronics10202489 - 13 Oct 2021
Cited by 5 | Viewed by 2868
Abstract
The purpose of this study was to propose an effective model for recognizing the detailed mood of classical music. First, in this study, the subject classical music was segmented via MFCC analysis by tone, which is one of the acoustic features. Short segments of 5 s or under, which are not easy to use in mood recognition or service, were merged with the preceding or rear segment using an algorithm. In addition, 18 adjective classes that can be used as representative moods of classical music were defined. Finally, after analyzing 19 kinds of acoustic features of classical music segments using XGBoost, a model was proposed that can automatically recognize the music mood through learning. The XGBoost algorithm that is proposed in this study, which uses the automatic music segmentation method according to the characteristics of tone and mood using acoustic features, was evaluated and shown to improve the performance of mood recognition. The result of this study will be used as a basis for the production of an affect convergence platform service where the mood is fused with similar visual media when listening to classical music by recognizing the mood of the detailed section. Full article
(This article belongs to the Special Issue Electronic Solutions for Artificial Intelligence Healthcare Volume II)
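The merging rule described above (segments under 5 s are merged with the preceding or rear segment) can be sketched as follows. The paper does not specify how to pick between the two neighbors, so this sketch merges into whichever neighbor is shorter, which is an assumption for illustration.

```python
MIN_LEN = 5.0  # seconds; segments shorter than this get merged

def merge_short_segments(durations):
    """Repeatedly merge the shortest sub-minimum segment into a neighbor."""
    segs = list(durations)
    while len(segs) > 1 and min(segs) < MIN_LEN:
        i = segs.index(min(segs))
        if i == 0:
            j = 1                      # first segment: only a rear neighbor
        elif i == len(segs) - 1:
            j = i - 1                  # last segment: only a preceding neighbor
        else:
            j = i - 1 if segs[i - 1] <= segs[i + 1] else i + 1
        segs[j] += segs[i]
        del segs[i]
    return segs

print(merge_short_segments([12.0, 3.0, 2.0, 20.0, 4.5]))
```

Total duration is preserved; only segment boundaries move, which is what makes the merged segments usable for mood recognition.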

18 pages, 6228 KiB  
Article
A Novel on Conditional Min Pooling and Restructured Convolutional Neural Network
by Jun Park, Jun-Yeong Kim, Jun-Ho Huh, Han-Sung Lee, Se-Hoon Jung and Chun-Bo Sim
Electronics 2021, 10(19), 2407; https://doi.org/10.3390/electronics10192407 - 2 Oct 2021
Cited by 5 | Viewed by 2492
Abstract
There is no doubt that CNNs have made remarkable technological advances as the core technology of computer vision, but the pooling techniques used in CNNs have their own issues. This study set out to solve these issues by proposing conditional min pooling and a restructured convolutional neural network whose improved pooling structure makes efficient use of the conditional min pooling. Caltech 101 data and crawled data were used to test the performance of the conditional min pooling and the restructured network. In the pooling performance test based on Caltech 101, accuracy increased by 0.16~0.52% and loss decreased by 19.98~28.71% compared with the old pooling technique. The restructured convolutional neural network did not show a large performance improvement over the old algorithm, but it delivered significant outcomes with comparable results. Overall, the loss was reduced rather than the accuracy improved, and this was achieved without modifying the convolution itself. Full article
(This article belongs to the Special Issue Electronic Solutions for Artificial Intelligence Healthcare Volume II)
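The abstract does not detail the condition in conditional min pooling, so the sketch below shows plain 2x2 min pooling plus one assumed condition: fall back to max pooling when the window minimum is zero (e.g. to avoid propagating dead activations). That fallback rule is an illustrative assumption, not the paper's definition.

```python
def conditional_min_pool(x, size=2):
    """Min-pool non-overlapping size x size windows, with an assumed
    fallback to max pooling when the window minimum is zero."""
    h, w = len(x), len(x[0])
    out = []
    for i in range(0, h - size + 1, size):
        row = []
        for j in range(0, w - size + 1, size):
            window = [x[i + di][j + dj] for di in range(size) for dj in range(size)]
            m = min(window)
            row.append(m if m > 0 else max(window))  # assumed condition
        out.append(row)
    return out

x = [[1, 2, 0, 5],
     [3, 4, 6, 7],
     [8, 8, 1, 1],
     [9, 7, 2, 3]]
print(conditional_min_pool(x))   # → [[1, 7], [7, 1]]
```

Whatever the exact condition, the structure is the same: a per-window reduction that can switch its rule based on the window's contents.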

20 pages, 543 KiB  
Article
Automated Identification of Sleep Disorder Types Using Triplet Half-Band Filter and Ensemble Machine Learning Techniques with EEG Signals
by Manish Sharma, Jainendra Tiwari, Virendra Patel and U. Rajendra Acharya
Electronics 2021, 10(13), 1531; https://doi.org/10.3390/electronics10131531 - 25 Jun 2021
Cited by 31 | Viewed by 5633
Abstract
A sleep disorder is a medical condition that affects an individual’s regular sleeping pattern and routine, hence negatively affecting the individual’s health. The traditional procedures of identifying sleep disorders by clinicians involve questionnaires and polysomnography (PSG), which are subjective, time-consuming, and inconvenient; hence, automated sleep disorder identification is required to overcome these limitations. In this study, we propose a method using electroencephalogram (EEG) signals for the automated identification of six sleep disorders, namely insomnia, nocturnal frontal lobe epilepsy (NFLE), narcolepsy, rapid eye movement behavior disorder (RBD), periodic leg movement disorder (PLM), and sleep-disordered breathing (SDB). To the best of our knowledge, this is one of the first studies to identify sleep disorders using EEG signals employing the cyclic alternating pattern (CAP) sleep database. After sleep-scoring EEG epochs, we created eight different data subsets of EEG epochs to develop the proposed model. A novel optimal triplet half-band filter bank (THFB) is used to obtain the subbands of the EEG signals, and Hjorth parameters are extracted from the subbands of the EEG epochs. The selected features are fed to various supervised machine learning algorithms for the automated classification of sleep disorders. The proposed system obtained the highest accuracy of 99.2%, 98.2%, 96.2%, 98.3%, 98.8%, and 98.8% for the insomnia, narcolepsy, NFLE, PLM, RBD, and SDB classes against normal healthy subjects, respectively, using an ensemble boosted trees classifier, and attained the highest accuracy of 91.3% in identifying the type of sleep disorder. The proposed method is simple, fast, and efficient, and may reduce the challenges faced by medical practitioners in accurately diagnosing various sleep disorders in less time at sleep clinics and homes. Full article
(This article belongs to the Special Issue Electronic Solutions for Artificial Intelligence Healthcare Volume II)

Review


48 pages, 12306 KiB  
Review
A Survey of Recommendation Systems: Recommendation Models, Techniques, and Application Fields
by Hyeyoung Ko, Suyeon Lee, Yoonseo Park and Anna Choi
Electronics 2022, 11(1), 141; https://doi.org/10.3390/electronics11010141 - 3 Jan 2022
Cited by 238 | Viewed by 48062
Abstract
This paper reviews the research trends that link the advanced technical aspects of recommendation systems used in various service areas with the business aspects of these services. First, for a reliable analysis of recommendation models, data mining technology, and related research by application service, more than 135 articles from top-ranking journals and top-tier conferences indexed in Google Scholar between 2010 and 2021 were collected and reviewed. On this basis, studies on recommendation system models and the technologies used in recommendation systems were systematized, and research trends were analyzed by year. In addition, the application service fields in which recommendation systems are used were classified, and the recommendation models and techniques used in each field were analyzed. Furthermore, a large amount of application-service-related data used by recommendation systems from 2010 to 2021 was collected without regard to journal ranking and reviewed alongside various recommendation system studies, as well as industry data from the applied service fields. This study found that the flow and quantitative growth of detailed recommendation system research interact with the business growth of the corresponding service fields. While providing a comprehensive summary of recommendation systems, this study offers insight to researchers interested in recommendation systems through its analysis of the technologies involved and the trends in the service fields to which they are applied. Full article
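Among the recommendation models such surveys cover, item-based collaborative filtering is one of the simplest: score an unrated item by a similarity-weighted average of the user's existing ratings. The sketch below is illustrative only; the toy rating matrix and function names are assumptions, not content from the survey:

```python
import numpy as np

# Toy user-item rating matrix (0 = unrated); rows are users, columns are items.
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 0, 5, 4],
], dtype=float)

def cosine_sim(a, b):
    """Cosine similarity between two item rating columns."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return a @ b / denom if denom else 0.0

def predict(user, item):
    """Item-based CF: similarity-weighted average of the user's other ratings."""
    sims = np.array([cosine_sim(R[:, item], R[:, j]) for j in range(R.shape[1])])
    rated = R[user] > 0
    rated[item] = False  # never use the target item itself
    w = sims[rated]
    return (w @ R[user, rated]) / w.sum() if w.sum() else 0.0

score = predict(user=1, item=1)  # predicted rating for user 1 on item 1
```

Model-based approaches (matrix factorization, deep models) replace this neighborhood computation with learned latent factors, which is the axis along which much of the surveyed work evolves.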
(This article belongs to the Special Issue Electronic Solutions for Artificial Intelligence Healthcare Volume II)

26 pages, 7936 KiB  
Review
A Survey to Reduce STDs Infection in Mongolia and Big Data Virtualization Propagation
by Woo-Hyuk Choi and Jun-Ho Huh
Electronics 2021, 10(24), 3101; https://doi.org/10.3390/electronics10243101 - 13 Dec 2021
Cited by 1 | Viewed by 5378
Abstract
Sexually transmitted diseases (STDs) refer to clinical syndromes and infections that are acquired and transmitted through sexual activity. Worldwide, more than 340 million cases of sexually transmitted disease occur each year, placing a great burden on individuals as well as communities and countries. The proportion of STDs in Mongolia is relatively high due to inadequate treatment technologies, religious and local customs, and regional differences. It is difficult to determine the exact number of patients because these diseases are considered ones that should not be disclosed to others. Therefore, this study aims to accurately identify sexually transmitted diseases in Mongolia and to reduce infection through an analytic approach based on big data virtualization propagation. Full article
(This article belongs to the Special Issue Electronic Solutions for Artificial Intelligence Healthcare Volume II)

26 pages, 105158 KiB  
Review
Overview of Smart Aquaculture System: Focusing on Applications of Machine Learning and Computer Vision
by Thi Thu Em Vo, Hyeyoung Ko, Jun-Ho Huh and Yonghoon Kim
Electronics 2021, 10(22), 2882; https://doi.org/10.3390/electronics10222882 - 22 Nov 2021
Cited by 47 | Viewed by 27933
Abstract
Smart aquaculture is one of the current sustainable development trends in the aquaculture industry, combining intelligence and automation. Modern intelligent technologies have brought huge benefits to many fields, including aquaculture, by reducing labor, enhancing production, and being friendly to the environment. Machine learning is a subdivision of artificial intelligence (AI) in which trained algorithmic models recognize and learn traits from the data they observe. To date, several studies have applied machine learning to smart aquaculture, including measuring size and weight, grading, disease detection, and species classification. This review provides an overview of the development of smart aquaculture and intelligent technology. We collected and summarized 100 articles on machine learning in smart aquaculture from nearly 10 years, covering the methodologies, results, and recent technologies that should be used for the development of smart aquaculture. We hope that this review will give readers interested in this field useful information. Full article
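One of the size-and-weight tasks mentioned above is commonly framed as fitting the classic length-weight relationship W = a * L^b, which becomes a linear regression in log space. The sketch below is illustrative only; the measurements are synthetic and the coefficients are assumptions, not results from the review (a vision system would supply the lengths from images):

```python
import numpy as np

# Toy fish lengths (cm) and weights (g); generated from W = a * L^b so the
# fit below can recover the assumed coefficients a = 0.012, b = 3.05 exactly.
length = np.array([10.0, 14.0, 18.0, 22.0, 26.0, 30.0])
weight = 0.012 * length ** 3.05

# Fit log W = log a + b * log L by ordinary least squares.
X = np.column_stack([np.ones_like(length), np.log(length)])
coef, *_ = np.linalg.lstsq(X, np.log(weight), rcond=None)
a, b = np.exp(coef[0]), coef[1]
```

In practice the exponent b is species-specific (often near 3 for isometric growth), and the regression would be fit on measured, noisy data rather than the clean synthetic values used here.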
(This article belongs to the Special Issue Electronic Solutions for Artificial Intelligence Healthcare Volume II)

40 pages, 35280 KiB  
Review
Review on Generative Adversarial Networks: Focusing on Computer Vision and Its Applications
by Sung-Wook Park, Jae-Sub Ko, Jun-Ho Huh and Jong-Chan Kim
Electronics 2021, 10(10), 1216; https://doi.org/10.3390/electronics10101216 - 20 May 2021
Cited by 61 | Viewed by 13789
Abstract
The emergence of the deep learning model GAN (Generative Adversarial Networks) is an important turning point in generative modeling. GAN is more powerful in feature and expression learning than machine learning-based generative model algorithms. Nowadays, it is also used to generate non-image data, such as voice and natural language. Typical technologies include BERT (Bidirectional Encoder Representations from Transformers), GPT-3 (Generative Pre-trained Transformer-3), and MuseNet. GAN differs from machine learning-based generative models in its training structure and objective function. Training is conducted by two networks: a generator and a discriminator. The generator converts random noise into a true-to-life image, whereas the discriminator distinguishes whether the input image is real or synthetic. As training continues, the generator learns more sophisticated synthesis techniques, and the discriminator grows into a more accurate differentiator. GAN has problems such as mode collapse, training instability, and a lack of evaluation metrics, and many researchers have tried to solve them; solutions such as one-sided label smoothing, instance normalization, and minibatch discrimination have been proposed. Its field of application has also expanded. This paper provides an overview of GAN and application solutions for researchers in computer vision and artificial intelligence healthcare. The structure and operating principle of GAN, the core GAN models proposed to date, and the theory of GAN are analyzed. Application examples of GAN, such as image classification and regression, image synthesis and inpainting, image-to-image translation, super-resolution, and point registration, are then presented. The discussion tackles GAN’s problems and solutions, and a future research direction is finally proposed. Full article
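The adversarial objective described above can be made concrete with a small numerical sketch: the discriminator is trained to push scores on real images toward 1 and on generated images toward 0, while the generator is trained (non-saturating form) to push its fakes' scores toward 1. This is illustrative only, not code from the review; the score values and function name are assumptions:

```python
import numpy as np

def bce(pred, target):
    """Binary cross-entropy between discriminator scores and a target label."""
    return -(target * np.log(pred) + (1.0 - target) * np.log(1.0 - pred)).mean()

d_real = np.array([0.9, 0.8])  # discriminator scores on real images
d_fake = np.array([0.2, 0.1])  # discriminator scores on generated images

# Discriminator loss: real scores toward the real label, fake scores toward 0.
# One-sided label smoothing (mentioned in the abstract) replaces the real
# target 1.0 with e.g. 0.9 to keep the discriminator from growing overconfident.
d_loss = bce(d_real, 0.9) + bce(d_fake, 0.0)

# Non-saturating generator loss: push the scores of fakes toward 1.
g_loss = bce(d_fake, 1.0)
```

In an actual GAN, these two losses drive alternating gradient updates of the discriminator and generator networks; the tug-of-war between them is what produces the training instability and mode collapse the review discusses.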
(This article belongs to the Special Issue Electronic Solutions for Artificial Intelligence Healthcare Volume II)
