Information, Volume 12, Issue 5 (May 2021) – 40 articles

Cover Story: As the Android platform continues to dominate the market, malware writers consider it their preferred target. State-of-the-art malware detection solutions capitalize on machine learning (ML) to detect pieces of malware. Nevertheless, our findings clearly indicate that the majority of existing works utilize different metrics and models and employ diverse datasets and classification features. This complicates cross-comparison and may also raise doubts about the derived results. This work attempts to schematize the ML-powered malware detection approaches proposed so far by organizing them under four axes: the age of the dataset, the analysis type, the employed ML techniques, and the chosen performance metrics. Moreover, we introduce a converging scheme that can guide future detection techniques and provide a solid baseline for ML practices in this field.
11 pages, 227 KiB  
Article
Autonomous Weapons Systems and the Contextual Nature of Hors de Combat Status
by Steven Umbrello and Nathan Gabriel Wood
Information 2021, 12(5), 216; https://doi.org/10.3390/info12050216 - 20 May 2021
Cited by 6 | Viewed by 4653
Abstract
Autonomous weapons systems (AWS), sometimes referred to as “killer robots”, are receiving ever more attention, both in public discourse as well as by scholars and policymakers. Much of this interest is connected to emerging ethical and legal problems linked to increasing autonomy in weapons systems, but there is a general underappreciation for the ways in which existing law might impact on these new technologies. In this paper, we argue that as AWS become more sophisticated and increasingly more capable than flesh-and-blood soldiers, it will increasingly be the case that such soldiers are “in the power” of those AWS which fight against them. This implies that such soldiers ought to be considered hors de combat, and not targeted. In arguing for this point, we draw out a broader conclusion regarding hors de combat status, namely that it must be viewed contextually, with close reference to the capabilities of combatants on both sides of any discrete engagement. Given this point, and the fact that AWS may come in many shapes and sizes, and can be made for many different missions, we argue that each particular AWS will likely need its own standard for when enemy soldiers are deemed hors de combat. We conclude by examining how these nuanced views of hors de combat status might impact on meaningful human control of AWS.
17 pages, 2689 KiB  
Article
Network Traffic Anomaly Detection via Deep Learning
by Konstantina Fotiadou, Terpsichori-Helen Velivassaki, Artemis Voulkidis, Dimitrios Skias, Sofia Tsekeridou and Theodore Zahariadis
Information 2021, 12(5), 215; https://doi.org/10.3390/info12050215 - 19 May 2021
Cited by 44 | Viewed by 10562
Abstract
Network intrusion detection is a key pillar of the sustainability and normal operation of information systems. Complex threat patterns and malicious actors are able to cause severe damage to cyber-systems. In this work, we propose novel Deep Learning formulations for detecting threats and alerts on network logs acquired by pfSense, an open-source firewall software suite running on the FreeBSD operating system. pfSense integrates several powerful security services, such as firewalling, URL filtering, and virtual private networking, among others. The main goal of this study is to analyse the logs acquired by a local installation of pfSense, in order to provide a powerful and efficient solution that controls traffic flow based on patterns that are automatically learnt via the proposed DL architectures. For this purpose, we exploit Convolutional Neural Networks (CNNs) and Long Short-Term Memory networks (LSTMs) to construct robust multi-class classifiers able to assign each new network log instance that reaches our system to its corresponding category. The performance of our scheme is evaluated by conducting several quantitative experiments and by comparing it to state-of-the-art formulations.
(This article belongs to the Section Information and Communications Technology)
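The abstract names CNNs and LSTMs as multi-class classifiers over pfSense log instances but gives no architecture details. A minimal PyTorch sketch of one plausible combination (a 1D convolution for local token patterns feeding an LSTM for sequence context); the vocabulary size, embedding width, sequence length, and class count are assumed placeholders, not values from the paper:

```python
import torch
import torch.nn as nn

class LogClassifier(nn.Module):
    """Hypothetical CNN+LSTM multi-class classifier for tokenized log lines."""
    def __init__(self, vocab_size=5000, embed_dim=64, n_classes=5):  # assumed sizes
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.conv = nn.Conv1d(embed_dim, 64, kernel_size=3, padding=1)  # local token patterns
        self.lstm = nn.LSTM(64, 64, batch_first=True)                   # sequence context
        self.head = nn.Linear(64, n_classes)

    def forward(self, tokens):                    # tokens: (batch, seq_len) int64
        x = self.embed(tokens).transpose(1, 2)    # (batch, embed_dim, seq_len)
        x = torch.relu(self.conv(x)).transpose(1, 2)
        _, (h, _) = self.lstm(x)                  # h: (1, batch, 64)
        return self.head(h.squeeze(0))            # unnormalized class scores

logits = LogClassifier()(torch.randint(0, 5000, (8, 40)))  # 8 logs, 40 tokens each
print(logits.shape)  # torch.Size([8, 5])
```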
16 pages, 487 KiB  
Article
Missing Link Prediction Using Non-Overlapped Features and Multiple Sources of Social Networks
by Pokpong Songmuang, Chainarong Sirisup and Aroonwan Suebsriwichai
Information 2021, 12(5), 214; https://doi.org/10.3390/info12050214 - 18 May 2021
Cited by 4 | Viewed by 2834
Abstract
The current methods for missing link prediction in social networks focus on using data from overlapping users of two social network sources to recommend links between unconnected users. To improve prediction of missing links, this paper presents the use of information from non-overlapping users as additional features when training a prediction model with a machine-learning approach. The proposed features are designed to be used together with the common features as extra features that help tune a better classification model. The social network data sources used in this paper are Twitter and Facebook, where Twitter is the main data source for prediction and Facebook is a supporting source. For evaluation, a comparison across different machine-learning techniques, feature settings, and network-density levels of the data sources is studied. The experimental results show that the prediction model combining the proposed features and the common features with the Random Forest technique achieved the best performance in terms of the percentage of recovered missing links and the F1 score. The model with combined features recovers missing links at a rate higher by an average of 23.25%, and improves the F1-measure by an average of 19.80%, over the multi-social-network-source baseline.
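The training step combines common link-prediction features with the proposed non-overlapped-user features before fitting a Random Forest. The exact feature definitions are not given in the abstract, so this scikit-learn sketch simply concatenates two placeholder feature groups:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_pairs = 1000
common = rng.random((n_pairs, 4))        # common features, e.g., neighbourhood overlap (placeholder)
non_overlap = rng.random((n_pairs, 3))   # proposed features from non-overlapping users (placeholder)
y = rng.integers(0, 2, n_pairs)          # 1 = link exists, 0 = no link

X = np.hstack([common, non_overlap])     # the combined feature setting
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("F1:", f1_score(y_te, clf.predict(X_te)))
```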
12 pages, 2816 KiB  
Article
Design Guidelines for the Size and Length of Chinese Characters Displayed in the Intelligent Vehicle’s Central Console Interface
by Fang You, Yi-Fan Yang, Meng-Ting Fu, Jun Zhang and Jian-Min Wang
Information 2021, 12(5), 213; https://doi.org/10.3390/info12050213 - 18 May 2021
Cited by 2 | Viewed by 2809
Abstract
To ensure safe driving, the human–computer interaction interface of an intelligent vehicle needs to convey important information, and text is an important carrier of this kind of information. The design criteria for English characters have been widely discussed, including their color, meaning, size, and length. However, design guidelines for Chinese characters in the central consoles of vehicles have rarely been discussed from a human–computer interaction perspective. In this paper, we investigated the size and length of Chinese characters on the intelligent vehicle's central control screen, based on international design guidelines and standards. The experiment involved 30 participants performing simulated in-vehicle secondary tasks. The results show that the usability of characters increases and the driver's workload decreases as the characters get larger and shorter. We also propose a set of recommended values for the size and length of Chinese characters in this context. Future work will focus on providing design guidelines for other aspects of HMI design in intelligent vehicles.
12 pages, 330 KiB  
Review
A Survey on Old and New Approximations to the Function ϕ(x) Involved in LDPC Codes Density Evolution Analysis Using a Gaussian Approximation
by Francesca Vatta, Alessandro Soranzo, Massimiliano Comisso, Giulia Buttazzoni and Fulvio Babich
Information 2021, 12(5), 212; https://doi.org/10.3390/info12050212 - 17 May 2021
Cited by 3 | Viewed by 2170
Abstract
Low Density Parity Check (LDPC) codes are currently being deeply analyzed through algorithms that require the capability of addressing their iterative decoding convergence performance. Since it has been observed that the probability distribution function of the decoder's log-likelihood ratio messages is roughly Gaussian, a multiplicity of approximation strategies for this analysis has been suggested. The first of them was proposed in Chung et al.'s 2001 paper, where the recurrent sequence characterizing the passage of messages between variable and check nodes involves the function ϕ(x), therein specified, and its inverse. In this paper, we review this old approximation to the function ϕ(x), one variant on it obtained in the same period (proposed in Ha et al.'s 2004 paper), and some new ones, recently published in two 2019 papers by Vatta et al. The objective of this review is to analyze the differences among them and their characteristics in terms of accuracy and computational complexity. In particular, the explicitly invertible, non-piecewise approximation of the function ϕ(x), published in the second of the two abovementioned 2019 papers, is shown to have a smaller relative error for any x than most of the other approximations. Moreover, its use leads to an important complexity reduction and allows better Gaussian approximated thresholds to be obtained.
(This article belongs to the Section Review)
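For reference, the function under discussion and the original Chung et al. (2001) exponential fit are quoted below from the density-evolution literature (not from this paper's text):

```latex
\phi(x) =
\begin{cases}
1 - \dfrac{1}{\sqrt{4\pi x}} \displaystyle\int_{-\infty}^{\infty}
    \tanh\!\left(\tfrac{u}{2}\right)\,
    e^{-\frac{(u - x)^2}{4x}}\,\mathrm{d}u, & x > 0,\\[2ex]
1, & x = 0,
\end{cases}
\qquad
\phi(x) \approx e^{\,\alpha x^{\gamma} + \beta},
```

with the fitted constants α ≈ −0.4527, β ≈ 0.0218, γ ≈ 0.86, valid for 0 < x < 10 in Chung et al.'s fit; note that this fit is not explicitly invertible in closed form over its whole domain, which motivates the newer approximations the survey reviews.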
20 pages, 1333 KiB  
Article
Sub-Tree-Based Approach for Reconfiguration of Light-Tree Pair without Flow Interruption in Sparse Wavelength Converter Network
by Amanvon Ferdinand Atta, Joël Christian Adépo, Bernard Cousin and Souleymane Oumtanaga
Information 2021, 12(5), 211; https://doi.org/10.3390/info12050211 - 17 May 2021
Viewed by 1946
Abstract
Network reconfiguration is an important mechanism for network operators to optimize network performance and optical flow transfer. It concerns unicast and multicast connections. Multicast connections are required to meet the bandwidth requirements of multicast applications, such as Internet Protocol-based TeleVision (IPTV), distance learning, and telemedicine. In optical networks, a multicast connection is made possible by the creation of an optical tree-shaped path called a light-tree. The problem of light-tree pair reconfiguration is addressed in this study. Given an initial light-tree used to transfer an optical flow and a final light-tree computed by the network operator to optimize network performance, the goal is to migrate the optical flow from the initial light-tree to the final light-tree without flow interruption. Flow interruption is not desirable for network operators because it forces them to pay financial penalties to their customers. To solve this problem, existing methods use a branch approach that is inefficient if some network nodes do not have wavelength conversion capability. We therefore propose a sub-tree-based method. This approach selects and configures sub-tree pairs from the light-tree pair (initial light-tree, final light-tree) to be reconfigured and then produces a sequence of configurations. The performance study confirms that our method is efficient in solving the problem of light-tree pair reconfiguration because it does not cause flow interruption.
(This article belongs to the Section Information and Communications Technology)
12 pages, 3771 KiB  
Article
A Hybrid Model for Air Quality Prediction Based on Data Decomposition
by Shurui Fan, Dongxia Hao, Yu Feng, Kewen Xia and Wenbiao Yang
Information 2021, 12(5), 210; https://doi.org/10.3390/info12050210 - 15 May 2021
Cited by 15 | Viewed by 3446
Abstract
Accurate and reliable air quality predictions are critical to the ecological environment and public health, but traditional models fail to make full use of the high- and low-frequency information obtained after wavelet decomposition, which easily leads to poor prediction performance. This paper proposes a hybrid prediction model based on data decomposition: wavelet decomposition (WD) generates high-frequency detail sequences WD(D) and low-frequency approximate sequences WD(A); the WD(D) sequences are reconstructed using a sliding window; and a long short-term memory (LSTM) neural network and an autoregressive moving average (ARMA) model are used to predict the WD(D) and WD(A) sequences, respectively. The final air quality prediction is obtained by accumulating the predicted values of each sub-sequence; this reduces the root mean square error (RMSE) by 52% and the mean absolute error (MAE) by 47%, and increases the goodness of fit (R2) by 18%, compared with a single prediction model. Compared with a mixed model, it reduces the RMSE by 3% and the MAE by 3%, and increases the R2 by 0.5%. Experimental verification found that the proposed model overcomes the lagging predictions of single models, making it a feasible air quality prediction method.
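A minimal sketch of the decompose-predict-accumulate pipeline, assuming PyWavelets and statsmodels. The LSTM stage is replaced by a naive persistence forecast to keep the sketch self-contained, and summing coefficient-level forecasts is a simplification of the paper's accumulation step; the wavelet family, decomposition level, and ARMA order are assumptions:

```python
import numpy as np
import pywt
from statsmodels.tsa.arima.model import ARIMA

aqi = np.sin(np.linspace(0, 20, 256)) + 0.3 * np.random.default_rng(0).standard_normal(256)

# One-level wavelet decomposition: approximate (low-freq) and detail (high-freq) parts.
cA, cD = pywt.dwt(aqi, "db4")

# Low-frequency WD(A): ARMA model (ARIMA with d = 0).
arma = ARIMA(cA, order=(2, 0, 1)).fit()
pred_A = arma.forecast(steps=5)

# High-frequency WD(D): the paper uses a sliding-window LSTM; a naive
# persistence forecast stands in for it here.
pred_D = np.repeat(cD[-1], 5)

# Final prediction: accumulate the predicted values of each sub-sequence.
print(pred_A + pred_D)
```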
21 pages, 3125 KiB  
Article
Digital Learning Support for Makers: Integrating Technical Development and Educational Design
by Claudia Kaar and Christian Stary
Information 2021, 12(5), 209; https://doi.org/10.3390/info12050209 - 15 May 2021
Cited by 3 | Viewed by 2558
Abstract
Makerspaces have gained momentum, due not only to novel manufacturing technologies but also to the need for a qualified workforce in production industries. Capacity building for digital production should not follow ad hoc procedures or arbitrary project designs, but should still leave room for creativity. As such, the quest has arisen for structured yet empowering guidance in additive manufacturing. This can benefit timely education, not only for qualifying the existing workforce in production industries but also for attracting students to production-related domains. In this paper, we aim to develop an integrated understanding of technical development and capacity-building support activities. We exemplify the proposed design science approach with a regional makerspace. This provides us with a user-centered evaluation of structuring additive manufacturing along an individualized education scheme. Thereby, additive manufacturing capacity building starts with individual goal setting and structuring requirements for an envisioned solution, which becomes part of a learning contract for a specific project. Learning steps are framed by design science and its stages and cycles, since artifacts can be of various kinds, stemming from construction, modeling, material selection, or manufacturing. The evaluation study revealed essential benefits in terms of structured planning and individualization of capacity-building processes.
13 pages, 238 KiB  
Article
Measuring Awareness of Social Engineering in the Educational Sector in the Kingdom of Saudi Arabia
by Majid H. Alsulami, Fawaz D. Alharbi, Hamdan M. Almutairi, Bandar S. Almutairi, Mohammed M. Alotaibi, Majdi E. Alanzi, Khaled G. Alotaibi and Sultan S. Alharthi
Information 2021, 12(5), 208; https://doi.org/10.3390/info12050208 - 13 May 2021
Cited by 8 | Viewed by 5432
Abstract
Social engineering is one of the most inventive methods of gaining unauthorized access to information systems and obtaining sensitive information. This type of cybersecurity threat requires minimal technical knowledge because it relies on the organization’s human element. Social engineers use various techniques, such as phishing, to manipulate users into either granting them access to various systems or disclosing their private data and information. Social engineering attacks can cost organizations more than 100,000 USD per instance. Therefore, it is necessary for organizations to increase their users’ awareness of social engineering attacks to mitigate the problem. The aim of this study is to provide a measurement of social engineering awareness in the Saudi educational sector. To achieve the aim of this study, a questionnaire was developed and evaluated. A total of 465 respondents completed the survey and answered questions related to measuring their knowledge of social engineering. The results show that 34% of participants (158 participants) had previous knowledge of social engineering approaches. The results also indicate that there are significant differences between participants with prior knowledge of social engineering and those with no such knowledge in terms of their security practices and skills. The implication of this study is that training is an essential factor in increasing the awareness of social engineering attacks in the Saudi educational sector.
(This article belongs to the Section Information Systems)
13 pages, 1336 KiB  
Article
Multi-Task Learning for Sentiment Analysis with Hard-Sharing and Task Recognition Mechanisms
by Jian Zhang, Ke Yan and Yuchang Mo
Information 2021, 12(5), 207; https://doi.org/10.3390/info12050207 - 12 May 2021
Cited by 16 | Viewed by 3840
Abstract
In the era of big data, multi-task learning has become one of the crucial technologies for sentiment analysis and classification. Most existing multi-task learning models for sentiment analysis are developed on the soft-sharing mechanism, which causes less interference between different tasks than the hard-sharing mechanism. However, the soft-sharing method also lets the model extract fewer essential features, resulting in unsatisfactory classification performance. In this paper, we propose a multi-task learning framework based on a hard-sharing mechanism for sentiment analysis in various fields. The hard-sharing mechanism is achieved by a shared layer that builds the interrelationship among multiple tasks. We then design a task recognition mechanism to reduce interference in the hard-shared feature space and to enhance the correlation between multiple tasks. Experiments on two real-world sentiment classification datasets show that our approach achieves the best results and significantly improves classification accuracy over existing methods. The task recognition training process enables a unique representation of the features of different tasks in the shared feature space, providing a new way to reduce interference in the shared feature space for sentiment analysis.
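Hard parameter sharing is a standard pattern: one shared encoder feeds one head per task. A PyTorch sketch of that pattern plus a task-recognition head over the shared features; the layer sizes, task count, and joint-loss weighting are illustrative assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

class HardSharedSentiment(nn.Module):
    """Hard-sharing: one shared layer, per-task sentiment heads, a task-recognition head."""
    def __init__(self, in_dim=128, hidden=64, n_tasks=3):  # assumed sizes
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())  # shared layer
        self.sentiment_heads = nn.ModuleList(
            [nn.Linear(hidden, 2) for _ in range(n_tasks)])  # pos/neg per task
        self.task_head = nn.Linear(hidden, n_tasks)          # task recognition mechanism

    def forward(self, x, task_id):
        h = self.shared(x)
        return self.sentiment_heads[task_id](h), self.task_head(h)

model = HardSharedSentiment()
x = torch.randn(4, 128)                      # a batch from task 1 (placeholder)
sent_logits, task_logits = model(x, task_id=1)
targets = torch.randint(0, 2, (4,))          # placeholder sentiment labels
# Joint objective: sentiment loss plus task-recognition loss on the shared features.
loss = (nn.functional.cross_entropy(sent_logits, targets)
        + nn.functional.cross_entropy(task_logits, torch.full((4,), 1, dtype=torch.long)))
```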
16 pages, 4460 KiB  
Article
Vectorization of Floor Plans Based on EdgeGAN
by Shuai Dong, Wei Wang, Wensheng Li and Kun Zou
Information 2021, 12(5), 206; https://doi.org/10.3390/info12050206 - 12 May 2021
Cited by 18 | Viewed by 5128
Abstract
A 2D floor plan (FP) often contains structural, decorative, and functional elements and annotations. Vectorization of floor plans (VFP) is an object detection task that involves the localization and recognition of different structural primitives in 2D FPs. The detection results can be used to generate 3D models directly. The conventional VFP pipeline often consists of a series of carefully designed, complex algorithms that have insufficient generalization ability and suffer from low computing speed. Since VFP does not fit deep-learning-based object detection frameworks well, this paper proposes a new VFP framework based on a generative adversarial network (GAN). First, a private dataset called ZSCVFP is established. Unlike current public datasets, which contain no more than 5000 black-and-white samples, ZSCVFP contains 10,800 colorful samples disturbed by decorative textures in different styles. Second, a new edge-extracting GAN (EdgeGAN) is designed for this task by innovatively formulating VFP as an image-translation task that projects the original 2D FPs into a primitive space. The output of EdgeGAN is a primitive feature map (PFM), each channel of which contains only one category of the detected primitives in the form of lines. A self-supervising term is introduced into the generative loss of EdgeGAN to ensure the quality of the generated images. EdgeGAN is faster than conventional and object-detection-framework-based pipelines with minimal performance loss. Lastly, two inspection modules, which are also suitable for conventional pipelines, are proposed to check the connectivity and consistency of the PFM based on the subspace connective graph (SCG). The first module contains four criteria that correspond to the sufficient conditions of a fully connected graph. The second module classifies the category of all subspaces via a single graph neural network (GNN) and checks consistency with the text annotations in the original FP (if available). Because the GNN treats the adjacency matrix of the SCG directly as weights, it can utilize the global layout information and achieve higher accuracy than other common classification methods. Experimental results illustrate the efficiency of the proposed EdgeGAN and inspection approaches.
(This article belongs to the Special Issue Machine Learning and Accelerator Technology)
16 pages, 1236 KiB  
Article
A Study of Multilingual Toxic Text Detection Approaches under Imbalanced Sample Distribution
by Guizhe Song, Degen Huang and Zhifeng Xiao
Information 2021, 12(5), 205; https://doi.org/10.3390/info12050205 - 12 May 2021
Cited by 11 | Viewed by 5691
Abstract
Multilingual characteristics, lack of annotated data, and imbalanced sample distribution are the three main challenges for toxic comment analysis in a multilingual setting. This paper proposes a multilingual toxic text classifier that adopts a novel fusion strategy combining different loss functions and multiple pre-training models. Specifically, the proposed learning pipeline starts with a series of pre-processing steps, including translation, word segmentation, purification, text digitization, and vectorization, to convert word tokens into a vectorized form suitable for the downstream tasks. Two models, multilingual bidirectional encoder representations from transformers (MBERT) and XLM-RoBERTa (XLM-R), are employed for pre-training through Masked Language Modeling (MLM) and Translation Language Modeling (TLM), which incorporate semantic and contextual information into the models. We train six base models and fuse them to obtain three fusion models using the F1 scores as the weights. The models are evaluated on the Jigsaw Multilingual Toxic Comment dataset. Experimental results show that the best fusion model outperforms the two state-of-the-art models, MBERT and XLM-R, in F1 score by 5.05% and 0.76%, respectively, verifying the effectiveness and robustness of the proposed fusion strategy.
(This article belongs to the Section Artificial Intelligence)
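The fusion step weights each base model by its F1 score. The abstract does not spell out the formula, so this numpy sketch assumes a simple normalized weighted average of predicted probabilities, with placeholder scores and predictions:

```python
import numpy as np

# Predicted toxicity probabilities from three base models for five comments
# (placeholder values), plus each model's validation F1 score.
probs = np.array([[0.9, 0.2, 0.6, 0.1, 0.8],
                  [0.8, 0.3, 0.5, 0.2, 0.7],
                  [0.7, 0.1, 0.7, 0.1, 0.9]])
f1 = np.array([0.82, 0.78, 0.75])

weights = f1 / f1.sum()            # F1 scores as fusion weights
fused = weights @ probs            # weighted average per comment
print((fused >= 0.5).astype(int))  # final toxic / non-toxic labels
```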
16 pages, 3592 KiB  
Article
Twitter Sentiment Analysis towards COVID-19 Vaccines in the Philippines Using Naïve Bayes
by Charlyn Villavicencio, Julio Jerison Macrohon, X. Alphonse Inbaraj, Jyh-Horng Jeng and Jer-Guang Hsieh
Information 2021, 12(5), 204; https://doi.org/10.3390/info12050204 - 11 May 2021
Cited by 133 | Viewed by 18795
Abstract
A year into the COVID-19 pandemic and one of the longest recorded lockdowns in the world, the Philippines received its first delivery of COVID-19 vaccines on 1 March 2021 through the WHO's COVAX initiative. A month into the inoculation of frontline health professionals and other priority groups, the authors of this study gathered data on the sentiment of Filipinos regarding the Philippine government's efforts using the social networking site Twitter. Natural language processing techniques were applied to understand the general sentiment, which can help the government analyze its response. The sentiments were annotated and trained using the Naïve Bayes model to classify English- and Filipino-language tweets into positive, neutral, and negative polarities through the RapidMiner data science software. The results yielded an accuracy of 81.77%, which exceeds the accuracy of recent sentiment analysis studies using Twitter data from the Philippines.
(This article belongs to the Special Issue News Research in Social Networks and Social Media)
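The study ran Naïve Bayes inside RapidMiner; an equivalent minimal pipeline in scikit-learn (bag-of-words counts feeding a multinomial Naïve Bayes over the three polarity classes) is sketched below, with the tweets and labels as invented stand-ins for the annotated corpus:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

tweets = ["vaccines are finally here, thank you",   # placeholder annotated tweets
          "no comment on the rollout schedule",
          "slow delivery, very disappointed"]
labels = ["positive", "neutral", "negative"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(tweets, labels)
print(model.predict(["thankful for the vaccine delivery"]))
```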
20 pages, 5669 KiB  
Article
BCoT Sentry: A Blockchain-Based Identity Authentication Framework for IoT Devices
by Liangqin Gong, Daniyal M. Alghazzawi and Li Cheng
Information 2021, 12(5), 203; https://doi.org/10.3390/info12050203 - 10 May 2021
Cited by 33 | Viewed by 5543
Abstract
In Internet of Things (IoT) environments, privacy and security are among the most significant challenges. Recently, several studies have attempted to apply blockchain technology to increase IoT network security. However, the lightweight nature of IoT devices commonly fails to meet the computationally intensive requirements of blockchain-based security models. In this work, we propose a mechanism to address this issue. We design an IoT blockchain architecture that stores device identity information in a distributed ledger. We propose a Blockchain of Things (BCoT) Gateway to facilitate the recording of authentication transactions in a blockchain network without modifying existing device hardware or applications. Furthermore, we introduce a new device recognition model suitable for blockchain-based identity authentication, in which we employ a novel feature selection method for device traffic flow. Finally, we develop the BCoT Sentry framework as a reference implementation of our proposed method. Experiment results verify the feasibility of the proposed framework.
(This article belongs to the Section Information and Communications Technology)
19 pages, 793 KiB  
Article
TraceAll: A Real-Time Processing for Contact Tracing Using Indoor Trajectories
by Louai Alarabi, Saleh Basalamah, Abdeltawab Hendawi and Mohammed Abdalla
Information 2021, 12(5), 202; https://doi.org/10.3390/info12050202 - 6 May 2021
Cited by 14 | Viewed by 3324
Abstract
The rapid spread of infectious diseases is a major public health problem. Recent developments in fighting these diseases have heightened the need for a contact tracing process. Contact tracing can be considered an ideal method for controlling the transmission of infectious diseases: it leads to diagnostic testing, self-isolation or treatment of suspected cases, and treatment of infected persons, which eventually limits the spread of disease. This paper proposes a technique named TraceAll that traces all contacts exposed to an infected patient and produces a list of these contacts to be considered potentially infected patients. Initially, it treats the infected patient as the querying user and starts to fetch the contacts exposed to them. Secondly, it obtains all the trajectories belonging to objects that moved near the querying user. Next, it investigates these trajectories, considering the social distance and exposure period, to identify whether these objects have become infected. The experimental evaluation of the proposed technique on real datasets illustrates the effectiveness of this solution. Comparative analysis experiments confirm that TraceAll outperforms baseline methods by 40% in terms of the efficiency of answering contact tracing queries.
(This article belongs to the Special Issue Big Spatial Data Management)
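TraceAll's core test, whether another trajectory stayed within the social distance for at least the exposure period, can be sketched as follows. The thresholds, sampling interval, and point format are assumptions, and the real system indexes trajectories for real-time processing rather than scanning them pairwise:

```python
from math import hypot

SOCIAL_DISTANCE = 2.0      # metres (assumed threshold)
EXPOSURE_PERIOD = 15 * 60  # seconds (assumed threshold)
SAMPLE_INTERVAL = 60       # seconds between samples (assumed)

def is_contact(patient_traj, other_traj):
    """Trajectories are time-aligned lists of (t_seconds, x, y) samples."""
    exposed = 0
    for (t, x1, y1), (_, x2, y2) in zip(patient_traj, other_traj):
        if hypot(x1 - x2, y1 - y2) <= SOCIAL_DISTANCE:
            exposed += SAMPLE_INTERVAL
            if exposed >= EXPOSURE_PERIOD:
                return True
        else:
            exposed = 0  # exposure must be continuous
    return False
```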
25 pages, 3106 KiB  
Article
Combatting Visual Fake News with a Professional Fact-Checking Tool in Education in France, Romania, Spain and Sweden
by Thomas Nygren, Mona Guath, Carl-Anton Werner Axelsson and Divina Frau-Meigs
Information 2021, 12(5), 201; https://doi.org/10.3390/info12050201 - 6 May 2021
Cited by 31 | Viewed by 9620
Abstract
Educational and technical resources are regarded as central in combating disinformation and safeguarding democracy in an era of ‘fake news’. In this study, we investigated whether a professional fact-checking tool could be utilised in curricular activity to make pupils more skilled in determining the credibility of digital news and to inspire them to use digital tools to further their transliteracy and technocognition. In addition, we explored how pupils’ performance and attitudes regarding digital news and tools varied across four countries (France, Romania, Spain, and Sweden). Our findings showed that a two-hour intervention had a statistically significant impact on teenagers’ abilities to determine the credibility of fake images and videos. We also found that the intervention inspired pupils to use digital tools in information credibility assessments. Importantly, the intervention did not make pupils more sceptical of credible news. The impact of the intervention was greater in Romania and Spain than among pupils in Sweden and France. The greater impact in these two countries, we argue, is due to cultural context and the fact that pupils in Romania and Spain learned to focus less on ’gut feelings’, increased their use of digital tools, and had a more positive attitude toward the use of the fact-checking tool than pupils in Sweden and France.
(This article belongs to the Special Issue Evaluating Methods and Decision Making)
21 pages, 964 KiB  
Review
Ontology-Based Approach to Semantically Enhanced Question Answering for Closed Domain: A Review
by Ammar Arbaaeen and Asadullah Shah
Information 2021, 12(5), 200; https://doi.org/10.3390/info12050200 - 1 May 2021
Cited by 9 | Viewed by 5866
Abstract
For many users of natural language processing (NLP), it can be challenging to obtain concise, accurate and precise answers to a question. Systems such as question answering (QA) enable users to ask questions and receive feedback in the form of quick answers to questions posed in natural language, rather than in the form of lists of documents delivered by search engines. This task is challenging and involves complex semantic annotation and knowledge representation. This study reviews the literature on ontology-based methods that semantically enhance QA for a closed domain, covering the relevant studies published between 2000 and 2020. The review reports that 83 of the 124 papers considered acknowledge the QA approach and recommend its development and evaluation using different methods. These methods are evaluated according to accuracy, precision, and recall. An ontological approach to semantically enhancing QA is found to be adopted only in a limited way, as many of the studies reviewed concentrated instead on NLP and information retrieval (IR) processing. While the majority of the studies reviewed focus on open domains, this study investigates the closed domain.
21 pages, 307 KiB  
Article
Exploring Reusability and Reproducibility for a Research Infrastructure for L1 and L2 Learner Corpora
by Alexander König, Jennifer-Carmen Frey and Egon W. Stemle
Information 2021, 12(5), 199; https://doi.org/10.3390/info12050199 - 30 Apr 2021
Cited by 3 | Viewed by 2475
Abstract
Up until today, research in various educational and linguistic domains, such as learner corpus research, writing research, and second language acquisition, has produced a substantial amount of research data in the form of L1 and L2 learner corpora. However, the multitude of individual solutions, combined with domain-inherent obstacles to data sharing, has so far hampered the comparability, reusability and reproducibility of data and research results. In this article, we present work on creating a digital infrastructure for L1 and L2 learner corpora and populating it with data collected in the past. We embed our infrastructure efforts in the broader field of infrastructures for scientific research, drawing on technical solutions and frameworks from research data management, among them the FAIR guiding principles for data stewardship. We share our experiences from integrating some L1 and L2 learner corpora from concluded projects into the infrastructure while trying to ensure compliance with the FAIR principles and the standards we established for reproducibility, discussing how far research data collected in the past can be made comparable, reusable and reproducible. Our results show that some basic needs for providing comparable and reusable data are covered by existing general infrastructure solutions and can be exploited for domain-specific infrastructures such as the one presented in this article. Other aspects need genuinely domain-driven approaches. The solutions found for the corpora in the presented infrastructure can only be a preliminary attempt, and further community involvement would be needed to provide templates and models acknowledged and promoted by the community. Furthermore, forward-looking data management would be needed from the beginning of new corpus creation projects to ensure that all requirements for FAIR data can be met.
(This article belongs to the Special Issue ICT Enhanced Social Sciences and Humanities)
17 pages, 491 KiB  
Article
Exploring Clustering-Based Reinforcement Learning for Personalized Book Recommendation in Digital Library
by Xinhua Wang, Yuchen Wang, Lei Guo, Liancheng Xu, Baozhong Gao, Fangai Liu and Wei Li
Information 2021, 12(5), 198; https://doi.org/10.3390/info12050198 - 30 Apr 2021
Cited by 6 | Viewed by 3596
Abstract
The digital library, as one of the most important ways of helping students acquire professional knowledge and improve their professional level, has gained great attention in recent years. However, its large collection (especially of book resources) hinders students from finding the resources they are interested in. To overcome this challenge, many researchers have turned to recommendation algorithms. Compared with traditional recommendation tasks, book recommendation in the digital library poses two challenges. The first is that users may borrow books that they are not interested in (i.e., noisy borrowing behaviours), such as borrowing books for classmates. The second is that the number of books in a digital library is usually very large, which means one student can only ever borrow a small set of books (i.e., the data sparsity issue). As the noisy interactions in students' borrowing sequences may harm the recommendation performance of a book recommender, we focus on refining recommendations by filtering out data noise. Moreover, due to the lack of direct supervision information, we treat noise filtering in sequences as a decision-making process and innovatively introduce a reinforcement learning method as our recommendation framework. Furthermore, to overcome the sparsity of students' borrowing behaviours, a clustering-based reinforcement learning algorithm is further developed. Experimental results on two real-world datasets demonstrate the superiority of our proposed method compared with several state-of-the-art recommendation methods.
19 pages, 1862 KiB  
Article
A Machine Learning Approach for the Tune Estimation in the LHC
by Leander Grech, Gianluca Valentino and Diogo Alves
Information 2021, 12(5), 197; https://doi.org/10.3390/info12050197 - 29 Apr 2021
Cited by 1 | Viewed by 2377
Abstract
The betatron tune in the Large Hadron Collider (LHC) is measured using a Base-Band Tune (BBQ) system. The processing of these BBQ signals is often perturbed by 50 Hz noise harmonics present in the beam. This causes the tune measurement algorithm, currently based on peak detection, to provide incorrect tune estimates during the acceleration cycle, with values that oscillate between neighbouring harmonics. The LHC tune feedback (QFB) cannot be used to its full extent in these conditions, as it relies on stable and reliable tune estimates. In this work, we propose new tune estimation algorithms designed to mitigate this problem through different techniques. As no ground truth for the real tune measurement exists, we developed a surrogate model, which allowed us to perform a comparative analysis of a simple weighted moving average, Gaussian Processes, and different deep learning techniques. The simulated dataset used to train the deep models was also improved using a variant of Generative Adversarial Networks (GANs) called SimGAN. In addition, we demonstrate how these methods perform with respect to the present tune estimation algorithm.
(This article belongs to the Special Issue Machine Learning and Accelerator Technology)
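Of the compared estimators, the simple weighted moving average is the easiest to reproduce: it smooths the noisy per-measurement tune values so that jumps toward a neighbouring 50 Hz harmonic average out. A numpy sketch with an assumed window shape and synthetic data (the real signal processing is far more involved):

```python
import numpy as np

rng = np.random.default_rng(1)
raw_tune = 0.31 + 0.002 * rng.standard_normal(200)  # synthetic tune estimates
raw_tune[::7] += 0.01                # occasional jumps toward a neighbouring harmonic

window = 15                          # assumed window length
weights = np.arange(1, window + 1.0) # linearly favour recent measurements
weights /= weights.sum()

# Causal weighted moving average: output[i] = sum_j raw[i+j] * weights[j].
smoothed = np.convolve(raw_tune, weights[::-1], mode="valid")
print(raw_tune.std(), smoothed.std())  # the smoothed estimates fluctuate less
```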
15 pages, 8954 KiB  
Article
Edge Detecting Method for Microscopic Image of Cotton Fiber Cross-Section Using RCF Deep Neural Network
by Defeng He and Quande Wang
Information 2021, 12(5), 196; https://doi.org/10.3390/info12050196 - 29 Apr 2021
Cited by 7 | Viewed by 3115
Abstract
Currently, analyzing microscopic images of the cotton fiber cross-section is the most accurate and effective way to measure its grade of maturity and thereby evaluate the quality of cotton samples. However, existing methods cannot extract the edge of the cross-section intact, which affects the measurement accuracy of the maturity grade. In this paper, a new edge detection algorithm based on the RCF convolutional neural network (CNN) is proposed. On the microscopic image dataset of cotton fiber cross-sections constructed in this paper, the original RCF was first used to extract the edges of the cotton fiber cross-sections. After analyzing the output images of RCF at each convolution stage, two conclusions are drawn: (1) the shallow layers contain much important edge information of the cotton fiber cross-section; and (2) because the cotton fiber cross-sections in the images are relatively small and the receptive field of the convolutional layers gradually increases with depth, the edge information detected by the deeper layers becomes increasingly coarse. In view of these two points, the following improvements are proposed: (1) the network supervision model and loss calculation structure are modified; and (2) the dilated convolutions in the deeper layers are removed, reducing the receptive field of the deeper layers to suit the detection of small objects. The experimental results show that the proposed method effectively improves the accuracy of edge extraction of the cotton fiber cross-section.
10 pages, 6802 KiB  
Article
Edge-Based Missing Data Imputation in Large-Scale Environments
by Davide Andrea Guastella, Guilhem Marcillaud and Cesare Valenti
Information 2021, 12(5), 195; https://doi.org/10.3390/info12050195 - 29 Apr 2021
Cited by 12 | Viewed by 2620
Abstract
Smart cities leverage large amounts of data acquired in the urban environment in the context of decision support tools. These tools enable monitoring of the environment to improve the quality of services offered to citizens. The increasing diffusion of personal Internet of Things devices capable of sensing the physical environment allows for low-cost solutions to acquire a large amount of information within the urban environment. On the one hand, the use of mobile and intermittent sensors implies new scenarios of large-scale data analysis; on the other hand, it involves challenges such as sensor intermittency and the integrity of acquired data. To this end, edge computing emerges as a methodology to distribute computation among different IoT devices and analyze data locally. We present here a new methodology for imputing environmental information at acquisition time, when sensors are missing or otherwise out of order, by distributing the computation among a variety of fixed and mobile devices. Numerous experiments have been carried out on real data to confirm the validity of the proposed method.
(This article belongs to the Special Issue Smart IoT Systems)
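The abstract does not fix the imputation rule each edge device applies. Inverse-distance weighting from nearby working sensors is one lightweight choice that fits the described setting; the sketch below uses that stand-in technique with hypothetical sensor records and should not be read as the paper's actual algorithm:

```python
def impute_idw(missing_pos, neighbours, power=2):
    """Estimate a missing reading via inverse-distance weighting.
    neighbours: list of ((x, y), value) from sensors that did report."""
    num = den = 0.0
    for (x, y), value in neighbours:
        d2 = (x - missing_pos[0]) ** 2 + (y - missing_pos[1]) ** 2
        w = 1.0 / (d2 ** (power / 2) + 1e-9)  # guard against division by zero
        num += w * value
        den += w
    return num / den

# Temperature at a silent sensor, estimated from three nearby ones (made-up data).
print(impute_idw((0.0, 0.0), [((1, 0), 21.0), ((0, 2), 20.2), ((3, 3), 19.5)]))
```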
29 pages, 77707 KiB  
Article
Identification of Driving Safety Profiles in Vehicle to Vehicle Communication System Based on Vehicle OBD Information
by Hussein Ali Ameen, Abd Kadir Mahamad, Sharifah Saon, Rami Qays Malik, Zahraa Hashim Kareem, Mohd Anuaruddin Bin Ahmadon and Shingo Yamaguchi
Information 2021, 12(5), 194; https://doi.org/10.3390/info12050194 - 29 Apr 2021
Cited by 5 | Viewed by 4923
Abstract
Driver behavior is a determining factor in more than 90% of road accidents. Previous research on the relationship between speeding behavior and crashes suggests that drivers who engage in frequent and extreme speeding are overinvolved in crashes. Consequently, there is significant benefit in identifying drivers who engage in unsafe driving practices to enhance road safety. The proposed method uses continuously logged driving data to collect vehicle operation information, including vehicle speed, engine revolutions per minute (RPM), throttle position, and calculated engine load, via the on-board diagnostics (OBD) interface. It then uses severity stratification of acceleration to create a driving behavior classification model that determines whether the current driving behavior is safe. Safe driving behavior is characterized by acceleration values within about ±2 m/s²; the risk of collision starts from about ±4 m/s², which in this study characterizes aggressive drivers. By measuring in-vehicle accelerations, it is possible to categorize driving behavior into four main classes based on real-time experiments: safe, normal, aggressive, and dangerous. Subsequently, the driver characteristics derived from the driver model are embedded into advanced driver assistance systems. When the vehicle is in a risk situation, a system based on an nRF24L01+ PA/LNA (power amplifier/low-noise amplifier) module, GPS, and OBD-II passes a signal to the driver using a dedicated liquid-crystal display (LCD) and a light signal. Experimental results show that the proposed driving behavior analysis method achieves an average accuracy of 90% in various driving scenarios.
(This article belongs to the Special Issue Recent Advances in IoT and Cyber/Physical Security)
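The acceleration bands quoted in the abstract translate directly into a threshold classifier. Only the ±2 m/s² safe band and the ±4 m/s² risk onset are stated, so the "normal" band and the cut-off for "dangerous" in this sketch are assumptions:

```python
def classify_driver(accel_ms2):
    """Map an acceleration sample (m/s^2) to a driving-behaviour class."""
    a = abs(accel_ms2)
    if a <= 2.0:        # safe band stated in the paper
        return "safe"
    if a < 4.0:         # between the stated bands (assumed to be 'normal')
        return "normal"
    if a < 6.0:         # risk of collision starts at 4 m/s^2 (aggressive)
        return "aggressive"
    return "dangerous"  # assumed upper boundary for the fourth class

print([classify_driver(a) for a in (1.2, 3.1, 4.5, 7.0)])
# ['safe', 'normal', 'aggressive', 'dangerous']
```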
9 pages, 252 KiB  
Article
New Generalized Cyclotomic Quaternary Sequences with Large Linear Complexity and a Product of Two Primes Period
by Jiang Ma, Wei Zhao, Yanguo Jia and Haiyang Jiang
Information 2021, 12(5), 193; https://doi.org/10.3390/info12050193 - 28 Apr 2021
Cited by 1 | Viewed by 1788
Abstract
Linear complexity is an important criterion for characterizing the unpredictability of pseudo-random sequences, and large linear complexity corresponds to high cryptographic strength. Pseudo-random sequences with large linear complexity are important in many domains. In this paper, based on the theory of the inverse Gray mapping, two classes of new generalized cyclotomic quaternary sequences with period pq are constructed, where pq is a product of two large distinct primes. In addition, we give the linear complexity over the residue class ring Z4 via the Hamming weights of their Fourier spectral sequences. The results show that these two kinds of sequences have large linear complexity.
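The inverse Gray mapping used in such constructions combines two binary sequences into one quaternary sequence over Z4. The standard map sends the bit pair (0,0)→0, (0,1)→1, (1,1)→2, (1,0)→3; the sketch below applies it position-wise to two illustrative binary sequences (the paper's cyclotomic constructions themselves are not reproduced here):

```python
INV_GRAY = {(0, 0): 0, (0, 1): 1, (1, 1): 2, (1, 0): 3}  # Z2 x Z2 -> Z4

def inverse_gray(seq_a, seq_b):
    """Combine two binary sequences into a quaternary sequence over Z4."""
    return [INV_GRAY[(a, b)] for a, b in zip(seq_a, seq_b)]

print(inverse_gray([0, 1, 1, 0, 1], [1, 1, 0, 0, 1]))  # [1, 2, 3, 0, 2]
```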
15 pages, 836 KiB  
Article
Understanding the Effects of eWOM Antecedents on Online Purchase Intention in China
by Muhammad Bilal, Zeng Jianqiu, Suad Dukhaykh, Mingyue Fan and Aleš Trunk
Information 2021, 12(5), 192; https://doi.org/10.3390/info12050192 - 28 Apr 2021
Cited by 33 | Viewed by 12044
Abstract
Drawing on social identity theory, this study examines the impact of the antecedents of eWOM on the online purchase intention (OPI) for fashion-related products, with social media usage moderating the relationship between eWOM and OPI. A structured questionnaire was completed by a sample of 477 Chinese WeChat users, via an online survey conducted in two metropolitan cities (Beijing and Shanghai). The hypotheses were tested using structural equation modeling (SEM) in AMOS 22. The findings show that all five antecedents of eWOM (fashion involvement, sense of belonging, trust, tie strength, and informational influence) are positively related to the OPI for fashion products in China. Furthermore, eWOM significantly mediates the relationship between fashion involvement, sense of belonging, trust, informational influence, and OPI. The current study is considered the first to examine the role of eWOM in stimulating OPI through social media usage for fashion-oriented products in China. As such, it enriches the online buying literature by exploring the OPI mechanism through eWOM antecedents and validating the importance of social media factors in the development of online buying intention. Furthermore, it provides several theoretical and practical implications, along with future research opportunities.
18 pages, 6129 KiB  
Article
Face Recognition Based on Lightweight Convolutional Neural Networks
by Wenting Liu, Li Zhou and Jie Chen
Information 2021, 12(5), 191; https://doi.org/10.3390/info12050191 - 28 Apr 2021
Cited by 20 | Viewed by 5784
Abstract
Face recognition algorithms based on deep learning methods have become increasingly popular. Most of these are based on highly precise but complex convolutional neural networks (CNNs), which require significant computing resources and storage, and are difficult to deploy on mobile devices or embedded terminals. In this paper, we propose several methods to improve the algorithms for face recognition based on a lightweight CNN, which is further optimized in terms of the network architecture and training pattern on the basis of MobileFaceNet. Regarding the network architecture, we introduce the Squeeze-and-Excitation (SE) block and propose three improved structures via a channel attention mechanism—the depthwise SE module, the depthwise separable SE module, and the linear SE module—which are able to learn the correlation of information between channels and assign them different weights. In addition, a novel training method for the face recognition task combined with an additive angular margin loss function is proposed that performs the compression and knowledge transfer of the deep network for face recognition. Finally, we obtained high-precision and lightweight face recognition models with fewer parameters and calculations that are more suitable for applications. Through extensive experiments and analysis, we demonstrate the effectiveness of the proposed methods.
(This article belongs to the Section Artificial Intelligence)
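The Squeeze-and-Excitation block the authors build on is standard: global average pooling "squeezes" each channel to a scalar, a two-layer bottleneck "excites" per-channel weights, and the input is rescaled. A PyTorch sketch of the vanilla block (the paper's three depthwise variants differ in where and how it is inserted, which is not reproduced here):

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Vanilla Squeeze-and-Excitation channel attention."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                        # x: (batch, C, H, W)
        w = x.mean(dim=(2, 3))                   # squeeze: global average pool per channel
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)
        return x * w                             # excite: reweight channels

out = SEBlock(64)(torch.randn(2, 64, 7, 7))
print(out.shape)  # torch.Size([2, 64, 7, 7])
```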
18 pages, 3913 KiB  
Article
The Visualization of ISO/IEC29110 on SCRUM under EPF Composer
by Kittitouch Suteeca and Sakgasit Ramingwong
Information 2021, 12(5), 190; https://doi.org/10.3390/info12050190 - 28 Apr 2021
Viewed by 2709
Abstract
In the midst of an increasingly competitive software industry, very small entities (VSEs) inevitably face many challenges. High user expectations, frequent changes in user requirements, and the need for rapid deployment are classic examples of these challenges. Many software companies attempt to implement measures to prevent or solve these problems. The use of agile methodologies and the implementation of software development standards are usually perceived as promising ways to improve the quality of the software development process. Nevertheless, there are several strong incompatibilities between standards and the Agile approach to software development. For example, the need identified in the standards to create many quality artifacts does not conform to agility philosophies; since Agile values working software over documentation, using Agile together with standards can be difficult. Additionally, there have been no guidelines for VSEs, so an external consultant is usually required. This research analyzes various cases of implementing ISO/IEC29110, a software development standard developed especially for VSEs, in Scrum environments. The results of this study provide an Eclipse Process Framework (EPF) model for effectively and conveniently implementing this standard in Scrum software development.
15 pages, 5829 KiB  
Article
Exploring the Effects of Normative Beliefs toward Citizen Engagement on eParticipation Technologies
by Muhammad Rifki Shihab, Achmad Nizar Hidayanto and Panca Hadi Putra
Information 2021, 12(5), 189; https://doi.org/10.3390/info12050189 - 26 Apr 2021
Cited by 8 | Viewed by 3046
Abstract
This research evaluates the effects of normative beliefs toward citizen engagement on eParticipation. Normative beliefs were assessed from the perspective of citizenship norms, which include engaged-citizenship norms and duty-based norms, as well as from the perspective of subjective norms, namely civic norms. A questionnaire was devised as the research instrument, and a survey was conducted as a means of data collection. The respondents were citizens residing in the Greater Jakarta Region in Indonesia who have had previous experience with eParticipation. A total of 172 valid responses were collected. Data were analyzed using Partial Least Squares Structural Equation Modeling (PLS-SEM), with SmartPLS 3 as the tool. The results confirmed that perceived public value and perceived public satisfaction concertedly shape citizens' engagement in eParticipation. Furthermore, perceived public value, as a pre-transactional norm, also served as an antecedent to the post-transactional norm of perceived public satisfaction. The results also revealed that perceived public value was affected by a sole citizenship norm, namely the duty-based norm, whereas perceived public satisfaction was affected by neither the engaged-citizenship norm nor the duty-based norm. Conversely, civic norms showed significant effects on both perceived public value and perceived public satisfaction.
(This article belongs to the Section Information Applications)
17 pages, 748 KiB  
Article
A Semi-Automatic Semantic Consistency-Checking Method for Learning Ontology from Relational Database
by Chuangtao Ma, Bálint Molnár and András Benczúr
Information 2021, 12(5), 188; https://doi.org/10.3390/info12050188 - 26 Apr 2021
Cited by 2 | Viewed by 3167
Abstract
To tackle the issues of semantic collision and inconsistency between ontologies and the original data model when learning an ontology from a relational database (RDB), a semi-automatic semantic consistency-checking method based on a graph intermediate representation and model checking is presented. Initially, the W-Graph, an intermediate model between databases and ontologies, is utilized to formalize the semantic correspondences between databases and ontologies, which are then transformed into a Kripke structure and eventually encoded as an SMV program. Meanwhile, description logics (DLs) are employed to formalize the semantic specifications of the learned ontologies, since OWL DL shows good semantic compatibility and DLs offer excellent expressivity. Thereafter, the specifications are converted into computation tree logic (CTL) formulas to improve machine readability. The task of checking semantic consistency can thus be converted into a global model checking problem that is solved automatically by a symbolic model checker. Moreover, an example is given to demonstrate the specific process of formalizing and checking the semantic consistency between learned ontologies and the RDB, and a verification experiment was conducted to confirm the feasibility of the presented method. The results show that the presented method can correctly check and identify different kinds of inconsistencies between learned ontologies and their original data model.
(This article belongs to the Special Issue Semantic Web and Information Systems)
16 pages, 549 KiB  
Article
Classification of Relaxation and Concentration Mental States with EEG
by Shingchern D. You
Information 2021, 12(5), 187; https://doi.org/10.3390/info12050187 - 26 Apr 2021
Cited by 12 | Viewed by 6492
Abstract
In this paper, we study the use of EEG (electroencephalography) to classify between concentrated and relaxed mental states. In the literature, most EEG recording systems are expensive, medical-grade devices, which limits their availability in the consumer market. Here, the EEG signals are obtained from a toy-grade EEG device with one channel of output data. The experiments are conducted in two runs, with 7 and 10 subjects, respectively. Each subject is asked to silently recite, backwards, a five-digit number given by the tester. The recorded EEG signals are converted to time-frequency representations by the software accompanying the device. A simple average is used to aggregate multiple spectral components into EEG bands, such as the α, β, and γ bands. The chosen classifiers are an SVM (support vector machine) and a multi-layer feedforward network, trained individually for each subject. Experimental results show that, with features based on the α+β+γ bands and a bandwidth of 4 Hz, the average accuracy over all subjects in both runs can reach more than 80%, and up to 90+% for some subjects, with the SVM classifier. The results suggest that a brain–machine interface could be implemented based on the mental states of the user, even with a cheap EEG device.
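The feature step, averaging spectral components into 4 Hz-wide bands spanning the α/β/γ range before an SVM, can be sketched as follows. The sampling rate, epoch length, exact band edges, and synthetic signals are assumptions:

```python
import numpy as np
from sklearn.svm import SVC

FS = 128  # assumed sampling rate (Hz)
BANDS = [(8, 12), (12, 16), (16, 20), (20, 24), (24, 28), (28, 32)]  # 4 Hz sub-bands

def band_features(epoch):
    """Average FFT magnitudes of a 1-second epoch into 4 Hz bands."""
    mags = np.abs(np.fft.rfft(epoch))
    freqs = np.fft.rfftfreq(len(epoch), 1 / FS)
    return [mags[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in BANDS]

rng = np.random.default_rng(2)
X = np.array([band_features(rng.standard_normal(FS)) for _ in range(40)])
y = rng.integers(0, 2, 40)  # 0 = relaxed, 1 = concentrated (placeholder labels)
clf = SVC().fit(X[:30], y[:30])
print(clf.score(X[30:], y[30:]))  # held-out accuracy (meaningless on random data)
```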