Information, Volume 11, Issue 5 (May 2020) – 47 articles

Cover Story: We explore big data-driven artificial intelligence (AI) applied to social systems, i.e., social computing, the concept of artificial intelligence as an enabler of novel social solutions. Through a critical analysis of the literature, we elaborate on the social and human interaction aspects of technology that must be in place to achieve such enablement and address the limitations of the current state of the art in this regard. We review cultural, political, and other societal impacts of social computing, its impact on vulnerable groups, and the ethically aligned design of social computing systems. We show that this is not merely an engineering problem, but rather the intersection of engineering with health sciences, social sciences, psychology, policy, and law.
16 pages, 3337 KiB  
Article
Machine Learning Based Sentiment Text Classification for Evaluating Treatment Quality of Discharge Summary
by Samer Abdulateef Waheeb, Naseer Ahmed Khan, Bolin Chen and Xuequn Shang
Information 2020, 11(5), 281; https://doi.org/10.3390/info11050281 - 23 May 2020
Cited by 23 | Viewed by 6529
Abstract
Patients’ discharge summaries (documents) are health sensors that are used for measuring the quality of treatment in medical centers. However, extracting information automatically from discharge summaries with unstructured natural language is considered challenging. These kinds of documents include various aspects of patient information that could be used to test the treatment quality for improving medical-related decisions. One of the significant techniques in the literature for discharge summary classification is feature extraction from the domain of natural language processing on text data. We propose a novel sentiment analysis method for discharge summary classification that relies on vector space models, statistical methods, association rules, and an extreme learning machine autoencoder (ELM-AE). Our novel hybrid model is based on statistical methods that build the lexicon in a domain related to health and medical records. Meanwhile, our method examines treatment quality based on an idea inspired by sentiment analysis. Experiments prove that our proposed method obtains a higher F1 value of 0.89 with good TPR (True Positive Rate) and FPR (False Positive Rate) values compared with various well-known state-of-the-art methods with different sizes of training and testing datasets. The results also prove that our method provides a flexible and effective technique to examine treatment quality based on positive, negative, and neutral terms at the sentence level in each discharge summary. Full article
(This article belongs to the Special Issue Natural Language Processing in Healthcare and Medical Informatics)
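As a toy illustration of sentence-level polarity scoring of the kind this abstract describes (the paper's actual pipeline builds its lexicon statistically from medical records and adds an ELM autoencoder), a minimal sketch with a hypothetical hand-made lexicon might look as follows:

```python
# Illustrative sketch only: sentence-level polarity scoring with a small
# hand-made lexicon. The lexicon entries and the example summary below are
# hypothetical stand-ins, not the paper's statistically built lexicon.
LEXICON = {"improved": 1.0, "stable": 0.5, "worsened": -1.0, "pain": -0.5}

def score_sentence(sentence: str) -> float:
    tokens = sentence.lower().split()
    hits = [LEXICON[t] for t in tokens if t in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0  # neutral when no hits

summary = "Patient condition improved after therapy. Mild pain persists."
for sent in summary.split(". "):
    polarity = score_sentence(sent)
    label = "positive" if polarity > 0 else "negative" if polarity < 0 else "neutral"
    print(f"{label:8s} {polarity:+.2f}  {sent}")
```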

15 pages, 4491 KiB  
Article
Emotion-Semantic-Enhanced Bidirectional LSTM with Multi-Head Attention Mechanism for Microblog Sentiment Analysis
by Shaoxiu Wang, Yonghua Zhu, Wenjing Gao, Meng Cao and Mengyao Li
Information 2020, 11(5), 280; https://doi.org/10.3390/info11050280 - 22 May 2020
Cited by 19 | Viewed by 5450
Abstract
The sentiment analysis of microblog text has always been a challenging research field due to the limited and complex contextual information. However, most of the existing sentiment analysis methods for microblogs focus on classifying the polarity of emotional keywords while ignoring the transition or progressive impact of words in different positions in the Chinese syntactic structure on global sentiment, as well as the utilization of emojis. To this end, we propose the emotion-semantic-enhanced bidirectional long short-term memory (BiLSTM) network with the multi-head attention mechanism model (EBILSTM-MH) for sentiment analysis. This model uses BiLSTM to learn the feature representation of input texts, given the word embedding. Subsequently, the attention mechanism is used to assign attentive weights to each word based on the impact of emojis. The attentive weights can be combined with the output of the hidden layer to obtain the feature representation of posts. Finally, the sentiment polarity of a microblog can be obtained through the dense connection layer. The experimental results show the feasibility of our proposed model on microblog sentiment analysis when compared with other baseline models. Full article
(This article belongs to the Section Artificial Intelligence)
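A minimal Keras sketch of the architecture family this abstract describes (embedding, BiLSTM, multi-head self-attention, dense classifier); the vocabulary size, sequence length, and all hyperparameters are illustrative assumptions, not the paper's settings:

```python
# Hedged sketch: embeddings -> BiLSTM -> multi-head self-attention -> classifier.
import tensorflow as tf
from tensorflow.keras import layers

VOCAB, MAXLEN, CLASSES = 20000, 64, 2  # assumed sizes

inp = layers.Input(shape=(MAXLEN,))
x = layers.Embedding(VOCAB, 128)(inp)
x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
# Self-attention over the BiLSTM states (query = value = hidden states).
x = layers.MultiHeadAttention(num_heads=4, key_dim=32)(x, x)
x = layers.GlobalAveragePooling1D()(x)
out = layers.Dense(CLASSES, activation="softmax")(x)

model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```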

11 pages, 1804 KiB  
Article
Intrusion Detection in IoT Networks Using Deep Learning Algorithm
by Bambang Susilo and Riri Fitri Sari
Information 2020, 11(5), 279; https://doi.org/10.3390/info11050279 - 21 May 2020
Cited by 133 | Viewed by 14345
Abstract
The internet has become an inseparable part of human life, and the number of devices connected to the internet is increasing sharply. In particular, Internet of Things (IoT) devices have become a part of everyday human life. However, some challenges are increasing, and their solutions are not well defined. More and more challenges related to technology security concerning the IoT are arising. Many methods have been developed to secure IoT networks, but many more can still be developed. One proposed way to improve IoT security is to use machine learning. This research discusses several machine-learning and deep-learning strategies, as well as standard datasets for improving the security performance of the IoT. We developed an algorithm for detecting denial-of-service (DoS) attacks using a deep-learning algorithm. This research used the Python programming language with packages such as scikit-learn, TensorFlow, and Seaborn. We found that a deep-learning model could increase accuracy so that the mitigation of attacks that occur on an IoT network is as effective as possible. Full article
(This article belongs to the Section Artificial Intelligence)
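The abstract names Python, scikit-learn, and TensorFlow; a hedged sketch of such a pipeline might look as follows, with synthetic flow features standing in for a real IoT intrusion dataset:

```python
# Hedged sketch of a DoS classifier: scikit-learn preprocessing plus a small
# TensorFlow dense network. Features and labels below are synthetic stand-ins.
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))          # 10 hypothetical flow features
y = (X[:, 0] + X[:, 3] > 1).astype(int)  # toy "DoS vs. benign" label rule

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_tr)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(10,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # attack probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(scaler.transform(X_tr), y_tr, epochs=5, verbose=0)
print("test accuracy:", model.evaluate(scaler.transform(X_te), y_te, verbose=0)[1])
```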

20 pages, 435 KiB  
Article
Capturing the Silences in Digital Archaeological Knowledge
by Jeremy Huggett
Information 2020, 11(5), 278; https://doi.org/10.3390/info11050278 - 21 May 2020
Cited by 20 | Viewed by 5842
Abstract
The availability and accessibility of digital data are increasingly significant in the creation of archaeological knowledge with, for example, multiple datasets being brought together to perform extensive analyses that would not otherwise be possible. However, this makes capturing the silences in those data—what is absent as well as present, what is unknown as well as what is known—a critical challenge for archaeology in terms of the suitability and appropriateness of data for subsequent reuse. This paper reverses the usual focus on knowledge and considers the role of ignorance—the lack of knowledge, or nonknowledge—in archaeological data and knowledge creation. Examining aspects of archaeological practice in the light of different dimensions of ignorance, it proposes ways in which the silences, the range of unknowns, can be addressed within a digital environment and the benefits which may accrue. Full article
(This article belongs to the Special Issue Digital Humanities)
13 pages, 770 KiB  
Article
Mode Awareness and Automated Driving—What Is It and How Can It Be Measured?
by Christina Kurpiers, Bianca Biebl, Julia Mejia Hernandez and Florian Raisch
Information 2020, 11(5), 277; https://doi.org/10.3390/info11050277 - 21 May 2020
Cited by 20 | Viewed by 5147
Abstract
In SAE (Society of Automotive Engineers) Level 2, the driver has to monitor the traffic situation and system performance at all times, whereas the system assumes responsibility within a certain operational design domain in SAE Level 3. The different responsibility allocation in these automation modes requires the driver to always be aware of the currently active system and its limits to ensure a safe drive. For that reason, current research focuses on identifying factors that might promote mode awareness. There is, however, no gold standard for measuring mode awareness and different approaches are used to assess this highly complex construct. This circumstance complicates the comparability and validity of study results. We thus propose a measurement method that combines the knowledge and the behavior pillar of mode awareness. The latter is represented by the relational attention ratio in manual, Level 2 and Level 3 driving as well as the controllability of a system limit in Level 2. The knowledge aspect of mode awareness is operationalized by a questionnaire on the mental model for the automation systems after an initial instruction as well as an extensive enquiry following the driving sequence. Further assessments of system trust, engagement in non-driving related tasks and subjective mode awareness are proposed. Full article

17 pages, 294 KiB  
Article
Media Education in the ICT Era: Theoretical Structure for Innovative Teaching Styles
by José Gómez-Galán
Information 2020, 11(5), 276; https://doi.org/10.3390/info11050276 - 20 May 2020
Cited by 31 | Viewed by 7944
Abstract
The era of information and communication technologies (ICTs) in which we live has transformed the foundations of education. This article starts from the premise that there is a convergence between technologies and media that makes ICTs adopt strategies and forms similar to traditional media, especially in their quest to influence citizens. For this reason, curricular objectives should include a critical analysis of this new reality in order to train new generations. We propose, based on the traditional parameters of media education, a new theoretical framework for their development that includes innovative teaching styles to achieve these goals. We used a critical pedagogy methodology with a qualitative and descriptive approach, analyzing the content of theoretical studies and field work, through which to establish an innovative pedagogical structure. The main result is that the influence that ICTs have on children and young people today is as strong as, or stronger than, that traditionally exerted by the classical media, and that there is a lack of an adequate framework to address the problem. In this sense, and as a conclusion, we consider that critical attitudes toward the power of influence that ICTs exert must be fostered from very early ages, since this influence generates problems like consumerism, addiction, cyber-bullying, and ignorance of reality. This requires new teaching styles in line with the current social context. Full article
(This article belongs to the Special Issue Cultural Studies of Digital Society)
19 pages, 9381 KiB  
Article
Optimization of a Pre-Trained AlexNet Model for Detecting and Localizing Image Forgeries
by Soad Samir, Eid Emary, Khaled El-Sayed and Hoda Onsi
Information 2020, 11(5), 275; https://doi.org/10.3390/info11050275 - 20 May 2020
Cited by 52 | Viewed by 6642
Abstract
With the advance of many image manipulation tools, carrying out image forgery and concealing the forgery is becoming easier. In this paper, the convolutional neural network (CNN) innovation for image forgery detection and localization is discussed. A novel image forgery detection model using the AlexNet framework is introduced. We propose a modified model that optimizes the AlexNet model by using batch normalization instead of local response normalization, a maxout activation function instead of a rectified linear unit, and a softmax activation function in the last layer to act as a classifier. As a consequence, the proposed AlexNet model can carry out feature extraction as well as detection of forgeries without the need for further manipulations. Throughout a number of experiments, we examine and differentiate the impacts of several important AlexNet design choices. The proposed network model is applied to the CASIA v2.0, CASIA v1.0, DVMM, and NIST Nimble Challenge 2017 datasets. We also apply k-fold cross-validation on the datasets to divide them into training and test data samples. The experimental results prove that the proposed model achieves strong performance in detecting different sorts of forgeries. Quantitative performance analysis shows that the proposed model can detect image forgeries with 98.176% accuracy. Full article
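The two architectural tweaks named in the abstract can be sketched in Keras as below; the layer sizes are illustrative, and this shows a single convolutional block rather than the authors' full network:

```python
# Hedged sketch of the two AlexNet modifications: batch normalization in
# place of local response normalization, and a maxout unit in place of ReLU.
import tensorflow as tf
from tensorflow.keras import layers

def maxout_conv(x, filters, pieces=2):
    """Maxout: elementwise max over `pieces` linear convolutions."""
    feats = [layers.Conv2D(filters, 3, padding="same")(x) for _ in range(pieces)]
    return layers.maximum(feats)

inp = layers.Input(shape=(227, 227, 3))
x = layers.Conv2D(96, 11, strides=4)(inp)   # AlexNet-style first conv
x = layers.BatchNormalization()(x)          # replaces local response norm
x = maxout_conv(x, filters=96)              # replaces the ReLU activation
x = layers.GlobalAveragePooling2D()(x)
out = layers.Dense(2, activation="softmax")(x)  # authentic vs. forged

model = tf.keras.Model(inp, out)
model.summary()
```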

18 pages, 18197 KiB  
Article
Quantitative Evaluation of Dense Skeletons for Image Compression
by Jieying Wang, Maarten Terpstra, Jiří Kosinka and Alexandru Telea
Information 2020, 11(5), 274; https://doi.org/10.3390/info11050274 - 20 May 2020
Cited by 5 | Viewed by 3369
Abstract
Skeletons are well-known descriptors used for analysis and processing of 2D binary images. Recently, dense skeletons have been proposed as an extension of classical skeletons as a dual encoding for 2D grayscale and color images. Yet, their encoding power, measured by the quality and size of the encoded image, and how these metrics depend on selected encoding parameters, has not been formally evaluated. In this paper, we fill this gap with two main contributions. First, we improve the encoding power of dense skeletons by effective layer selection heuristics, a refined skeleton pixel-chain encoding, and a postprocessing compression scheme. Second, we propose a benchmark to assess the encoding power of dense skeletons for a wide set of natural and synthetic color and grayscale images. We use this benchmark to derive optimal parameters for dense skeletons. Our method, called Compressing Dense Medial Descriptors (CDMD), achieves higher compression ratios at similar quality compared to the well-known JPEG technique and, thereby, shows that skeletons can be an interesting option for lossy image encoding. Full article

22 pages, 419 KiB  
Article
SAVTA: A Hybrid Vehicular Threat Model: Overview and Case Study
by Mohammad Hamad and Vassilis Prevelakis
Information 2020, 11(5), 273; https://doi.org/10.3390/info11050273 - 19 May 2020
Cited by 18 | Viewed by 6093
Abstract
In recent years, significant developments have been introduced within the vehicular domain, evolving vehicles to become a network of many embedded systems which depend on a set of sensors to interact with each other and with the surrounding environment. While these improvements have increased the safety and incontestability of the automotive system, they have opened the door to new potential security threats which need to be defined, assessed, and mitigated. The SAE J3061 standard has defined threat modeling as a critical step toward the secure development process for vehicle systems, but it did not determine which method could be used to achieve this process. Therefore, many threat modeling approaches have been adopted. However, using one individual approach will not identify all the threats which could target the system, and may lead to insufficient mitigation mechanisms. Thus, complete security requires the use of a comprehensive threat model which identifies all the potential threats and vulnerabilities. In this work, we revise the existing threat modeling efforts in the vehicular domain. We also propose a hybrid method called the Software, Asset, Vulnerability, Threat, and Attacker (SAVTA)-centric method to support security analysis for vehicular systems. SAVTA combines different existing threat modeling approaches to create a comprehensive and hybridized threat model. The model is used as an aid to construct general attack trees which illustrate attack vectors that threaten a particular vehicle asset and classify these attacks under different sub-trees. Full article

21 pages, 5582 KiB  
Article
Multi-Vehicle Simulation in Urban Automated Driving: Technical Implementation and Added Benefit
by Alexander Feierle, Michael Rettenmaier, Florian Zeitlmeir and Klaus Bengler
Information 2020, 11(5), 272; https://doi.org/10.3390/info11050272 - 19 May 2020
Cited by 12 | Viewed by 4186
Abstract
This article investigates the simultaneous interaction between an automated vehicle (AV) and its passenger, and between the same AV and a human driver of another vehicle. For this purpose, we have implemented a multi-vehicle simulation consisting of two driving simulators, one for the AV and one for the manual vehicle. The considered scenario is a road bottleneck with a double-parked vehicle either on one side of the road or on both sides of the road, where an AV and a simultaneously oncoming human driver negotiate the right of way. The AV communicates to its passenger via the internal automation human–machine interface (HMI) and concurrently displays the right of way to the human driver via an external HMI. In addition to the regular encounters, this paper analyzes the effect of an automation failure, where the AV first communicates that it will yield the right of way and then changes its strategy and passes through the bottleneck first despite oncoming traffic. The research questions the study aims to answer are what methods should be used for the implementation of multi-vehicle simulations with one AV, and whether there is an added benefit of this multi-vehicle simulation compared to single-driver simulator studies. The results show an acceptable synchronicity for using traffic lights as the basic synchronization method and a distance control as the detail synchronization method. The participants had similar passing times in the multi-vehicle simulation compared to a previously conducted single-driver simulation. Moreover, there was a lower crash rate in the multi-vehicle simulation during the automation failure. In conclusion, the proposed method seems to be an appropriate solution for implementing multi-vehicle simulation with one AV. Additionally, multi-vehicle simulation offers a benefit if more than one human affects the interaction within a scenario. Full article

21 pages, 1642 KiB  
Article
A Social Multi-Agent Cooperation System Based on Planning and Distributed Task Allocation
by Atef Gharbi
Information 2020, 11(5), 271; https://doi.org/10.3390/info11050271 - 18 May 2020
Cited by 2 | Viewed by 4721
Abstract
Planning and distributed task allocation are considered challenging problems. To address them, autonomous agents called planning agents situated in a multi-agent system should cooperate to achieve planning and complete distributed tasks. We propose a solution for distributed task allocation where agents dynamically allocate the tasks while they are building the plans. We model and verify some properties using computation tree logic (CTL) with the model checker its-ctl. Lastly, simulations are performed to verify the effectiveness of our proposed solution. The result proves that it is very efficient as it requires little message exchange and computational time. A benchmark production system is used as a running example to explain our contribution. Full article
(This article belongs to the Special Issue Modeling Distributed Information Systems)

23 pages, 8936 KiB  
Article
Modeling Road Accident Severity with Comparisons of Logistic Regression, Decision Tree and Random Forest
by Mu-Ming Chen and Mu-Chen Chen
Information 2020, 11(5), 270; https://doi.org/10.3390/info11050270 - 18 May 2020
Cited by 57 | Viewed by 7059
Abstract
To reduce the damage caused by road accidents, researchers have applied different techniques to explore correlated factors and develop efficient prediction models. The main purpose of this study is to use one statistical and two nonparametric data mining techniques, namely, logistic regression (LR), classification and regression tree (CART), and random forest (RF), to compare their prediction capability, identify the significant variables (identified by LR) and important variables (identified by CART or RF) that are strongly correlated with road accident severity, and distinguish the variables that have significant positive influence on prediction performance. In this study, three prediction performance evaluation measures, accuracy, sensitivity and specificity, are used to find the best integrated method which consists of the most effective prediction model and the input variables that have higher positive influence on accuracy, sensitivity and specificity. Full article
(This article belongs to the Section Information Applications)
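A compact sketch of this three-model comparison using scikit-learn, with a synthetic binary "severity" dataset standing in for the real accident records:

```python
# Hedged sketch: logistic regression, a decision tree (CART) and a random
# forest scored on accuracy, sensitivity and specificity. Data is synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "CART": DecisionTreeClassifier(random_state=0),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, clf in models.items():
    tn, fp, fn, tp = confusion_matrix(y_te, clf.fit(X_tr, y_tr).predict(X_te)).ravel()
    print(f"{name:4s} accuracy={(tp+tn)/(tp+tn+fp+fn):.3f} "
          f"sensitivity={tp/(tp+fn):.3f} specificity={tn/(tn+fp):.3f}")
```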

13 pages, 2501 KiB  
Article
MNCE: Multi-Hop Node Localization Algorithm for Wireless Sensor Network Based on Error Correction
by Yinghui Meng, Yuewen Chen, Qiuwen Zhang and Weiwei Zhang
Information 2020, 11(5), 269; https://doi.org/10.3390/info11050269 - 18 May 2020
Cited by 1 | Viewed by 3317
Abstract
Considering the problems of large error and high localization costs of current range-free localization algorithms, an MNCE algorithm based on error correction is proposed in this study. This algorithm decomposes the multi-hop distance between nodes into several small hops. The distance of each small hop is estimated by using the connectivity information of adjacent nodes; small hops are accumulated to obtain the initial estimated distance. Then, an error-correction rate based on the error-correction concept is proposed to correct the initial estimated distance. Finally, the location of the target node is resolved by the total least squares method, according to the information on the anchor nodes and the estimated distances. Simulation experiments show that the MNCE algorithm is superior to similar localization algorithms. Full article
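The final positioning step can be illustrated as follows; for brevity this sketch uses standard linearized least-squares multilateration rather than the paper's total least squares variant, and the anchor layout and distance noise are made up:

```python
# Hedged sketch: solve for an unknown node position from anchor positions
# and (hop-estimated) distances via linearized least squares.
import numpy as np

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
target = np.array([3.0, 7.0])
d = np.linalg.norm(anchors - target, axis=1) + 0.1  # noisy estimated distances

# Subtract the last anchor's equation to linearize ||x - a_i||^2 = d_i^2.
A = 2 * (anchors[:-1] - anchors[-1])
b = (d[-1] ** 2 - d[:-1] ** 2
     + np.sum(anchors[:-1] ** 2, axis=1) - np.sum(anchors[-1] ** 2))
est, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated position:", np.round(est, 2), "true:", target)
```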

20 pages, 308 KiB  
Article
Fully-Unsupervised Embeddings-Based Hypernym Discovery
by Maurizio Atzori and Simone Balloccu
Information 2020, 11(5), 268; https://doi.org/10.3390/info11050268 - 18 May 2020
Cited by 8 | Viewed by 5559
Abstract
The hypernymy relation is the one occurring between an instance term and its general term (e.g., “lion” and “animal”, “Italy” and “country”). This paper addresses Hypernym Discovery, the NLP task that aims at finding valid hypernyms for words in a given text, proposing HyperRank, an unsupervised approach that therefore does not require manually-labeled training sets, as most approaches in the literature do. The proposed algorithm exploits the cosine distance of points in the vector space of word embeddings, as already proposed by previous state-of-the-art approaches, but the ranking is then corrected by also weighting word frequencies and the absolute level of similarity, which is expected to be similar when measuring co-hyponyms and their common hypernym. This brings two major advantages over other approaches: (1) we correct the inadequacy of semantic similarity, which is known to cause a significant performance drop, and (2) we take into account multiple words if provided, allowing common hypernyms to be found for a set of co-hyponyms, a task ignored in other systems but very useful when coupled with set expansion (which finds co-hyponyms automatically). We then evaluate HyperRank against the SemEval 2018 Hypernym Discovery task and show that, regardless of the language or domain, our algorithm significantly outperforms all the existing unsupervised algorithms and some supervised ones as well. We also evaluate the algorithm on a new dataset to measure the improvements when finding hypernyms for sets of words instead of singletons. Full article
(This article belongs to the Special Issue Advances in Computational Linguistics)
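The core ranking idea, scoring candidate hypernyms by cosine similarity to a set of co-hyponyms, can be sketched with hand-made 2-D vectors (HyperRank additionally applies frequency and similarity-level corrections not shown here):

```python
# Toy sketch: rank candidate hypernyms by average cosine similarity to the
# query co-hyponyms. The 2-D embeddings are fabricated for illustration.
import numpy as np

emb = {
    "lion":    np.array([0.9, 0.1]),
    "tiger":   np.array([0.85, 0.15]),
    "animal":  np.array([0.7, 0.3]),
    "country": np.array([0.1, 0.9]),
}

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

queries = ["lion", "tiger"]          # co-hyponyms sharing one hypernym
candidates = ["animal", "country"]
scores = {c: np.mean([cos(emb[c], emb[q]) for q in queries]) for c in candidates}
for cand, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{cand:8s} {s:.3f}")      # "animal" should rank first
```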

15 pages, 1785 KiB  
Article
Reliable Estimation of Urban Link Travel Time Using Multi-Sensor Data Fusion
by Yajuan Guo and Licai Yang
Information 2020, 11(5), 267; https://doi.org/10.3390/info11050267 - 16 May 2020
Cited by 9 | Viewed by 3055
Abstract
Travel time is one of the most critical indexes to describe urban traffic operating states. How to obtain accurate and robust travel time estimates, so as to facilitate traffic control decision-making for administrators and trip planning for travelers, is an urgent issue of wide concern. This paper proposes a reliable estimation method for urban link travel time using multi-sensor data fusion. Utilizing the characteristic analysis of each individual traffic sensor's data, we first extract link travel time from license plate recognition data, geomagnetic detector data and floating car data, respectively, and find that their distribution patterns are similar and follow a log-normal distribution. Then, a support degree algorithm based on a similarity function and a credibility algorithm based on a membership function are developed, aiming to overcome the conflicts among multi-sensor traffic data and the uncertainties of single-sensor traffic data. The reliable fusion weights for each type of traffic sensor data are further determined by integrating the corresponding support degree with credibility. A case study was conducted using real-world data from a link of Jingshi Road in Jinan, China and demonstrated that the proposed method can effectively improve the accuracy and reliability of link travel time estimations in urban road systems. Full article
(This article belongs to the Section Information Processes)
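A hedged sketch of the fusion step: the paper derives weights from support-degree and credibility functions, whereas this illustration, purely for simplicity, weights each source's log-normal estimate by its inverse variance:

```python
# Hedged sketch of multi-sensor travel-time fusion with synthetic data.
# The inverse-variance weighting below is a standard stand-in for the
# paper's support-degree/credibility weighting.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sources = {                       # synthetic per-vehicle travel times (s)
    "plate_recognition": rng.lognormal(mean=4.0, sigma=0.20, size=300),
    "geomagnetic":       rng.lognormal(mean=4.1, sigma=0.35, size=300),
    "floating_car":      rng.lognormal(mean=4.05, sigma=0.25, size=300),
}

estimates, weights = [], []
for name, samples in sources.items():
    shape, loc, scale = stats.lognorm.fit(samples, floc=0)
    mean_tt = stats.lognorm.mean(shape, loc, scale)
    var_tt = stats.lognorm.var(shape, loc, scale)
    estimates.append(mean_tt)
    weights.append(1.0 / var_tt)
    print(f"{name:18s} mean={mean_tt:6.1f}s var={var_tt:8.1f}")

w = np.array(weights) / np.sum(weights)
print("fused link travel time:", round(float(w @ np.array(estimates)), 1), "s")
```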

19 pages, 9104 KiB  
Article
Gear Fault Diagnosis through Vibration and Acoustic Signal Combination Based on Convolutional Neural Network
by Liya Yu, Xuemei Yao, Jing Yang and Chuanjiang Li
Information 2020, 11(5), 266; https://doi.org/10.3390/info11050266 - 14 May 2020
Cited by 13 | Viewed by 5396
Abstract
Equipment condition monitoring and diagnosis is an important means to detect and eliminate mechanical faults in real time, thereby ensuring the safe and reliable operation of equipment. Traditionally, fault diagnosis is performed using vibration signals obtained by contact measurement. However, industrial environments often involve high temperatures and strong corrosion, in which contact measurement cannot meet industrial needs. Mechanical equipment with complex working conditions has various types of faults and different fault characterizations. The sound signal captured by a microphone, a non-contact measuring device, can effectively adapt to the complex environment and also reflect the operating state of the device. If vibration and sound signals can be collected simultaneously for the same workpiece, the two complement each other, which is beneficial for fault diagnosis. A limitation of single signal sources and sensors is the difficulty of assessing the gear state under different working conditions. This study proposes a method based on an improved evidence theory (IDS theory), which uses a convolutional neural network to combine vibration and sound signals for gear fault diagnosis. Experimental results show that our fusion method based on IDS theory obtains a more accurate and reliable diagnostic rate than the other fusion methods. Full article

22 pages, 3070 KiB  
Article
Methodological Considerations Concerning Motion Sickness Investigations during Automated Driving
by Dominik Mühlbacher, Markus Tomzig, Katharina Reinmüller and Lena Rittger
Information 2020, 11(5), 265; https://doi.org/10.3390/info11050265 - 13 May 2020
Cited by 26 | Viewed by 6173
Abstract
Automated driving vehicles will allow all occupants to spend their time on various non-driving related tasks like relaxing, working, or reading during the journey. However, a significant percentage of people are susceptible to motion sickness, which limits the comfort of engaging in those tasks during automated driving. Therefore, it is necessary to investigate the phenomenon of motion sickness during automated driving and to develop countermeasures. As most existing studies concerning motion sickness are fundamental research studies, a methodology for driving studies is still missing. This paper discusses methodological aspects of investigating motion sickness in the context of driving, including measurement tools, test environments, samples, and ethical restrictions. Additionally, methodological considerations guided by different underlying research questions and hypotheses are provided. Selected results from our own studies concerning motion sickness during automated driving, which were conducted in a motion-based driving simulation and a real vehicle, are used to support the discussion. Full article

16 pages, 633 KiB  
Article
Characterizing the Nature of Probability-Based Proof Number Search: A Case Study in the Othello and Connect Four Games
by Anggina Primanita, Mohd Nor Akmal Khalid and Hiroyuki Iida
Information 2020, 11(5), 264; https://doi.org/10.3390/info11050264 - 13 May 2020
Cited by 1 | Viewed by 4369
Abstract
Variants of best-first search algorithms and their expansions have continuously been introduced to solve challenging problems. The probability-based proof number search (PPNS) is a best-first search algorithm that can be used to solve positions in AND/OR game tree structures. It combines information from explored (based on winning status) and unexplored (through Monte Carlo simulation) nodes of a game tree using an indicator called the probability-based proof number (PPN). In this study, PPNS is employed to solve randomly generated positions in Connect Four and Othello, and the results are compared with two well-known best-first search algorithms (proof number search (PNS) and Monte Carlo proof number search). Adopting a simple improvement parameter in PPNS reduces the number of nodes that need to be explored by up to 57%. Moreover, further observation showed the varying importance of information from explored and unexplored nodes, where PPNS relies critically on the combination of such information in the earlier stages of the Othello game. Discussion and insights from these findings are provided, and potential future work is briefly described. Full article
(This article belongs to the Section Information Theory and Methodology)

37 pages, 3305 KiB  
Article
Modeling Popularity and Reliability of Sources in Multilingual Wikipedia
by Włodzimierz Lewoniewski, Krzysztof Węcel and Witold Abramowicz
Information 2020, 11(5), 263; https://doi.org/10.3390/info11050263 - 13 May 2020
Cited by 19 | Viewed by 24353
Abstract
One of the most important factors impacting the quality of content in Wikipedia is the presence of reliable sources. By following references, readers can verify facts or find more details about a described topic. A Wikipedia article can be edited independently in any of over 300 languages, even by anonymous users; therefore, information about the same topic may be inconsistent. This also applies to the use of references in different language versions of a particular article, so the same statement can have different sources. In this paper we analyzed over 40 million articles from the 55 most developed language versions of Wikipedia to extract information about over 200 million references and find the most popular and reliable sources. We presented 10 models for the assessment of the popularity and reliability of sources, based on the analysis of meta-information about the references in Wikipedia articles, page views and the authors of the articles. Using DBpedia and Wikidata, we automatically identified the alignment of the sources to a specific domain. Additionally, we analyzed the changes in popularity and reliability over time and identified growth leaders in each of the considered months. The results can be used for quality improvements of the content in different language versions of Wikipedia. Full article
(This article belongs to the Special Issue Quality of Open Data)

21 pages, 1032 KiB  
Article
Algorithmic Improvements of the KSU-STEM Method Verified on a Fund Portfolio Selection
by Adam Borovička
Information 2020, 11(5), 262; https://doi.org/10.3390/info11050262 - 12 May 2020
Cited by 2 | Viewed by 2627
Abstract
The topic of this article is inspired by a problem faced by many people around the world: investment portfolio selection. Apart from the standardly used methods and approaches, non-traditional multiple objective programming methods can also be significant, providing even more efficient support for making a satisfactory investment decision. A suitable method for this purpose is a concept working with an interactive procedure through which the portfolio may gradually be adapted to the investor's preferences. Such a method is the Step Method (STEM), or its more suitable improved version, KSU-STEM. This method is still burdened by some algorithmic weaknesses and methodological questions, though fewer than other methods. The potentially stronger application power of the KSU-STEM concept motivates its revision. Firstly, an unnecessarily negative principle for determining the basal values of the objectives is revised. Further, the fuzzy goals are specified, which leads to a reformulation of the defuzzified multi-objective model. Finally, the imperfect re-setting of the weights (importance) of unsatisfactory objectives is revealed, and alternative approaches are proposed. The interventions in the algorithm are empirically verified through a real-life selection of a portfolio of open unit trusts offered by CONSEQ Investment Management and traded on the Czech capital market. This application confirms the significant supporting power of the revised multiple objective programming approach KSU-STEM in the portfolio-making process. Full article
(This article belongs to the Special Issue Selected Papers from ESM 2019)

27 pages, 1482 KiB  
Article
Selecting a Secure Cloud Provider—An Empirical Study and Multi Criteria Approach
by Sebastian Pape, Federica Paci, Jan Jürjens and Fabio Massacci
Information 2020, 11(5), 261; https://doi.org/10.3390/info11050261 - 11 May 2020
Cited by 6 | Viewed by 4830
Abstract
Security has become one of the primary factors that cloud customers consider when they select a cloud provider for migrating their data and applications into the Cloud. To this end, the Cloud Security Alliance (CSA) has provided the Consensus Assessment Questionnaire (CAIQ), which consists of a set of questions that providers should answer to document which security controls their cloud offerings support. In this paper, we adopted an empirical approach to investigate whether the CAIQ facilitates the comparison and ranking of the security offered by competitive cloud providers. We conducted an empirical study to investigate whether comparing and ranking the security posture of a cloud provider based on CAIQ answers is feasible in practice. Since the study revealed that manually comparing and ranking cloud providers based on the CAIQ is too time-consuming, we designed an approach that semi-automates the selection of cloud providers based on the CAIQ. The approach uses the providers' answers to the CAIQ to assign a value to the different security capabilities of cloud providers. Tenants have to prioritize their security requirements. With that input, our approach uses an Analytical Hierarchy Process (AHP) to rank the providers' security based on their capabilities and the tenants' requirements. Our implementation shows that this approach is computationally feasible and, once the providers' answers to the CAIQ are assessed, they can be used for multiple CSP selections. To the best of our knowledge, this is the first approach for cloud provider selection that provides a way to assess the security posture of a cloud provider in practice. Full article
(This article belongs to the Special Issue Cloud Security Risk Management)
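The AHP step used for ranking can be sketched as follows: priority weights are derived from the principal eigenvector of a pairwise-comparison matrix. The 3x3 matrix below is a made-up tenant preference over three security categories, not data from the paper:

```python
# Minimal AHP sketch: priority weights from a pairwise-comparison matrix.
import numpy as np

# A[i, j] = how much more important criterion i is than criterion j
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
principal = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, principal].real)
w /= w.sum()
print("priority weights:", np.round(w, 3))

# Consistency check (random index RI = 0.58 for n = 3); CR < 0.1 is acceptable.
ci = (eigvals.real[principal] - 3) / (3 - 1)
print("consistency ratio:", round(ci / 0.58, 3))
```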

13 pages, 336 KiB  
Article
A New Approach to Keep the Privacy Information of the Signer in a Digital Signature Scheme
by Dung Hoang Duong, Willy Susilo and Viet Cuong Trinh
Information 2020, 11(5), 260; https://doi.org/10.3390/info11050260 - 11 May 2020
Viewed by 2809
Abstract
In modern applications, such as Electronic Voting, e-Health and e-Cash, there is a need for the validity of a signature to be verified by only one responsible person. This is opposite to the traditional digital signature scheme, where anybody can verify a signature. There have been several solutions for this problem: the first is to combine a signature scheme with an encryption scheme; the second is to use a group signature; and the last is to use a strong designated verifier signature scheme with the undeniable property. In this paper, we extend the traditional digital signature scheme to propose a new solution for the aforementioned problem. Our extension is in the sense that only a designated verifier (responsible person) can verify a signer's signature, and if necessary (in case the signer refuses to admit his/her signature) the designated verifier, without revealing his/her secret key, is able to prove to anybody that the signer has actually generated the signature. The comparison between our proposed solution and the three existing solutions shows that our proposed solution is the best one in terms of both security and efficiency. Full article
14 pages, 6690 KiB  
Article
Heuristic Analysis for In-Plane Non-Contact Calibration of Rulers Using Mask R-CNN
by Michael Telahun, Daniel Sierra-Sossa and Adel S. Elmaghraby
Information 2020, 11(5), 259; https://doi.org/10.3390/info11050259 - 9 May 2020
Cited by 3 | Viewed by 4022
Abstract
Determining an object's measurement is a challenging task without a well-defined reference. When a ruler is placed in the same plane as an object being measured, it can serve as a metric reference; thus, a measurement system can be defined and calibrated to correlate actual dimensions with the pixels contained in an image. This paper describes a system for non-contact object measurement by sensing and assessing the distinct spatial frequency of the graduations on a ruler. The approach presented leverages Deep Learning methods, specifically Mask Region proposal based Convolutional Neural Networks (Mask R-CNN), for ruler recognition and segmentation, as well as several other computer vision (CV) methods such as adaptive thresholding and template matching. We developed a heuristic analytical method for calibrating an image by applying several filters to extract the spatial frequencies corresponding to the ticks on a given ruler. We propose an automated in-plane optical scaling calibration system for non-contact measurement. Full article
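The spatial-frequency idea can be illustrated with a 1-D sketch: recover the tick spacing from the dominant frequency of an intensity profile. The profile below is synthesized with a known 12-pixel period; on real images it would come from a row of the Mask R-CNN-segmented ruler:

```python
# Hedged sketch: estimate ruler tick spacing from the dominant spatial
# frequency of a (synthetic) 1-D intensity profile.
import numpy as np

period_px = 12                           # ground-truth tick spacing
x = np.arange(600)
profile = (np.sin(2 * np.pi * x / period_px) > 0.6).astype(float)  # tick marks

spectrum = np.abs(np.fft.rfft(profile - profile.mean()))  # remove DC first
freqs = np.fft.rfftfreq(profile.size, d=1.0)              # cycles per pixel
dominant = freqs[np.argmax(spectrum)]
print(f"estimated spacing: {1 / dominant:.1f} px (true: {period_px} px)")
```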

13 pages, 6667 KiB  
Review
Telecommunication Systems for Small Satellites Operating at High Frequencies: A Review
by Alessandra Babuscia
Information 2020, 11(5), 258; https://doi.org/10.3390/info11050258 - 8 May 2020
Cited by 8 | Viewed by 4571
Abstract
Small Satellites and in particular CubeSats are becoming extremely popular platforms with which to perform space research. They allow for rapid prototyping with considerable cost savings with respect to traditional missions. However, as small satellite missions become more ambitious in terms of destinations to reach (from Low Earth Orbit to interplanetary) and in terms of the amount of data to transmit, new technologies need to be developed to provide adequate telecommunication support. This paper aims to review the telecommunication systems that have been developed at the Jet Propulsion Laboratory for some of the most recent CubeSat missions operating at different frequency bands: ASTERIA (S-Band), MarCO (X-Band and UHF) and ISARA (Ka-Band and UHF). For each of these missions: the telecommunication challenges and requirements are listed; the final system design is presented; the characteristics of the different hardware components are shown; and the lessons learned through operations are discussed. Full article
(This article belongs to the Special Issue Satellite Communication at Ka and Q/V Frequency Bands)

20 pages, 8737 KiB  
Article
Comparative Analysis among Discrete Fourier Transform, K-Means and Artificial Neural Networks Image Processing Techniques Oriented on Quality Control of Assembled Tires
by Alessandro Massaro, Giovanni Dipierro, Emanuele Cannella and Angelo Maurizio Galiano
Information 2020, 11(5), 257; https://doi.org/10.3390/info11050257 - 8 May 2020
Cited by 19 | Viewed by 5114
Abstract
The present paper discusses a comparative application of image processing techniques, i.e., the Discrete Fourier Transform, K-Means clustering and an Artificial Neural Network, for the detection of defects in the industrial context of assembled tires. The Artificial Neural Network technique used is based on Long Short-Term Memory and Fully Connected neural networks. The investigations focus on the monitoring and quality control of defects which may appear on the external surface of tires after assembly. Such defects arise when tires are not properly assembled onto their respective metallic wheel rims, generating undesired deformations and scrapes. The proposed image processing techniques are applied to raw high-resolution images, which are acquired by in-line imaging and optical instruments. All the described techniques, i.e., Discrete Fourier Transform, K-Means clustering and Long Short-Term Memory, were able to distinguish defective from acceptable external tire surfaces. The proposed research is set in the context of an industrial project which focuses on the development of automated quality control and monitoring methodologies within the field of Industry 4.0 facilities. The image processing techniques are thus meant to be adopted in production processes, giving strong support to the in-line quality control phase. Full article
(This article belongs to the Section Information Applications)
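As an illustration of the K-Means branch of such a comparison (the patch size and features below are assumptions, not the paper's settings), patches far from their cluster center can be flagged as candidate surface defects:

```python
# Hedged sketch: cluster patch statistics with K-Means and flag outliers.
# The "image" is synthetic noise with one bright scrape added.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
img = rng.normal(0.5, 0.05, (64, 64))
img[30:34, 10:40] += 0.4                 # synthetic scrape

P = 8  # assumed patch size
patches = img.reshape(64 // P, P, 64 // P, P).swapaxes(1, 2).reshape(-1, P * P)
feats = np.column_stack([patches.mean(axis=1), patches.std(axis=1)])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(feats)
dist = np.linalg.norm(feats - km.cluster_centers_[km.labels_], axis=1)
suspect = np.argsort(dist)[-3:]          # most anomalous patches
print("suspect patch indices:", suspect)
```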

20 pages, 1863 KiB  
Article
Software Support for Discourse-Based Textual Information Analysis: A Systematic Literature Review and Software Guidelines in Practice
by Patricia Martin-Rodilla and Miguel Sánchez
Information 2020, 11(5), 256; https://doi.org/10.3390/info11050256 - 7 May 2020
Cited by 6 | Viewed by 4552
Abstract
The intrinsic characteristics of humanities research require technological support and software assistance that necessarily includes the analysis of textual narratives. When these narratives become increasingly complex, pragmatics analysis (i.e., at the discourse or argumentation level) assisted by software is a great ally in the digital humanities. In recent years, solutions have been developed from the information visualization domain to support discourse analysis or argumentation analysis of textual sources via software, with applications in political speeches, debates and online forums, but also in written narratives, literature and historical sources. This paper presents a wide and interdisciplinary systematic literature review (SLR), covering both software-related areas and humanities areas, of the information visualization and software solutions adopted to support pragmatics textual analysis. As a result of this review, the paper identifies weaknesses in existing works in the field, especially related to the availability of solutions, dependence on a particular pragmatic framework, and the lack of software mechanisms for sharing and reusing information. The paper also provides software guidelines for addressing the detected weaknesses, exemplifying some guidelines in practice through their implementation in a new web tool, Viscourse. Viscourse is conceived as a complementary tool to assist textual analysis and to facilitate the reuse of informational pieces from discourse and argumentation text analysis tasks. Full article
(This article belongs to the Special Issue Digital Humanities)

12 pages, 674 KiB  
Article
A Diverse Data Augmentation Strategy for Low-Resource Neural Machine Translation
by Yu Li, Xiao Li, Yating Yang and Rui Dong
Information 2020, 11(5), 255; https://doi.org/10.3390/info11050255 - 6 May 2020
Cited by 24 | Viewed by 5163
Abstract
One important issue that affects the performance of neural machine translation is the scale of the available parallel data. For low-resource languages, the amount of parallel data is not sufficient, which results in poor translation quality. In this paper, we propose a diversity data augmentation method that does not use extra monolingual data. We expand the training data by generating diverse pseudo-parallel data on both the source and target sides. To generate diverse data, a restricted sampling strategy is employed at the decoding steps. Finally, we filter and merge the original data and the synthetic parallel corpus to train the final model. In the experiments, the proposed approach achieved an improvement of 1.96 BLEU points in the IWSLT2014 German–English translation task, which was used to simulate a low-resource language. Our approach also consistently and substantially obtained improvements of 1.0 to 2.0 BLEU points in three other low-resource translation tasks: English–Turkish, Nepali–English, and Sinhala–English. Full article
(This article belongs to the Special Issue Advances in Computational Linguistics)
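The restricted sampling idea at a single decoding step can be sketched as top-k sampling; the probability vector below is a made-up stand-in for an NMT decoder's softmax output:

```python
# Hedged sketch: sample the next token from only the top-k most probable
# candidates to get diverse but plausible pseudo-parallel data.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["cat", "dog", "house", "runs", "blue"]
probs = np.array([0.45, 0.30, 0.15, 0.07, 0.03])  # hypothetical softmax output

def topk_sample(probs, k=3):
    top = np.argsort(probs)[-k:]          # indices of the k best tokens
    p = probs[top] / probs[top].sum()     # renormalize within the top-k
    return int(rng.choice(top, p=p))

for _ in range(5):
    print(vocab[topk_sample(probs)])
```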

15 pages, 269 KiB  
Article
Communication Strategies in Social Media in the Example of ICT Companies
by Anna Losa-Jonczyk
Information 2020, 11(5), 254; https://doi.org/10.3390/info11050254 - 5 May 2020
Cited by 9 | Viewed by 4584
Abstract
This article aims to present the results of pilot studies on the involvement of the four largest Information and Communication Technology (ICT) companies in promoting the Sustainable Development Goals (SDGs) through social media. The studies examine which communication strategies the companies use in social media. The research was carried out using content analysis of the messages posted on the companies' official Facebook and Twitter accounts. The analysis showed that the companies prefer a corporate-ability communication strategy over a Corporate Social Responsibility (CSR) or hybrid one. Posts rarely concern the companies' activities related to social and environmental responsibility. Although the companies engage in activities supporting the achievement of the SDGs and provide information about this on their corporate websites, the topic of sustainable development appears only rarely in the posts examined. Full article
15 pages, 1489 KiB  
Article
Knowledge Graphs for Online Marketing and Sales of Touristic Services
by Anna Fensel, Zaenal Akbar, Elias Kärle, Christoph Blank, Patrick Pixner and Andreas Gruber
Information 2020, 11(5), 253; https://doi.org/10.3390/info11050253 - 4 May 2020
Cited by 11 | Viewed by 6224
Abstract
Direct online marketing and sales are nowadays an essential part of almost any business that addresses end consumers, such as tourism. On the downside, the data and content required for such marketing and sales are typically distributed and difficult to identify and use, especially for small and medium enterprises. Further, a combination of content management and semantics for automated online marketing and sales is becoming practically feasible now, especially with the global adoption of knowledge graphs. A design and feasibility pilot of a solution implementing a semantic content and data value chain for online direct marketing and sales, based on knowledge graphs and efficiently addressing multiple channels and stakeholders, is provided and evaluated with end-users. The implementation is shown to be suitable for use on the Web, social media and mobile channels. The proof of concept addresses the tourism sector, exploring, in particular, the case of touristic service packaging, and is applicable globally. The typically encountered challenges, particularly those related to data quality, are identified, and ways to overcome them are discussed. The paper advances knowledge of the employment of knowledge graphs in online marketing and sales, and showcases a related innovative practical application, co-created by the industry providing marketing and sales solutions for Austria, one of the world's leading touristic regions. Full article
(This article belongs to the Section Information Applications)

12 pages, 852 KiB  
Article
Hedonic Pricing on the Fine Art Market
by Anna Zhukova, Valeriya Lakshina and Liudmila Leonova
Information 2020, 11(5), 252; https://doi.org/10.3390/info11050252 - 4 May 2020
Cited by 2 | Viewed by 4246
Abstract
In conditions of stock market instability, art assets can be considered an attractive investment. The fine art market is very heterogeneous: it is characterized by the uniqueness of its goods, specific costs and risks, various peculiarities of functioning and different effects and, hence, needs special treatment. However, due to the diversity of the fine art market's goods and the absence of systematic information about sales, researchers conducting studies on single segments of the market do not agree about the merits of art assets. We make an attempt to investigate the attractiveness of the fine art market for investors. Extensive data was collected to obtain a complete picture of the market, analyzing it within different segments. We use the Heckman model to estimate art asset returns and find the most influential factors in art price dynamics. Based on the estimates obtained, we construct a monthly art price index and compare it with the S&P 500 benchmark. Full article
(This article belongs to the Special Issue Computer Modelling in Decision Making (CMDM 2019))
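A toy hedonic regression illustrates the general approach: log price regressed on artwork attributes. The data and attributes below are fabricated for shape only, and the paper additionally applies a Heckman correction for the selection of works that actually sell:

```python
# Hedged sketch: hedonic pricing via a plain linear regression on
# fabricated artwork attributes (no selection correction shown).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
n = 500
area = rng.uniform(0.1, 4.0, n)            # canvas area, m^2 (hypothetical)
signed = rng.integers(0, 2, n)             # 1 if signed by the artist
log_price = 8 + 0.6 * np.log(area) + 0.4 * signed + rng.normal(0, 0.3, n)

X = np.column_stack([np.log(area), signed])
model = LinearRegression().fit(X, log_price)
print("hedonic coefficients [log area, signed]:", np.round(model.coef_, 3))
print("baseline log price:", round(model.intercept_, 3))
```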
