Information, Volume 11, Issue 7 (July 2020) – 30 articles

Cover Story: The collection and processing of personal data offer great opportunities for technological advances, but the accumulation of vast amounts of personal data increases the risk of malicious misuse, especially in healthcare. Therefore, personal data are legally protected. Privacy policies transparently inform users about the collection and processing of their data. In addition, various approaches exist to technically protect personal data when processed or shared. In this work, we define a personal privacy workflow, considering the negotiation of privacy policies, privacy-preserving processing, and secondary use of personal data, in the context of healthcare data processing. We survey applicable privacy-enhancing technologies for each step of the workflow to identify open research opportunities.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
12 pages, 1758 KiB  
Article
SVD++ Recommendation Algorithm Based on Backtracking
by Shijie Wang, Guiling Sun and Yangyang Li
Information 2020, 11(7), 369; https://doi.org/10.3390/info11070369 - 21 Jul 2020
Cited by 18 | Viewed by 9043
Abstract
Collaborative filtering (CF) has been successfully applied in personalized recommendation systems. The singular value decomposition (SVD)++ algorithm is an optimized SVD algorithm that enhances prediction accuracy by incorporating implicit feedback. However, the SVD++ algorithm is limited primarily by its low computational efficiency during recommendation. To address this limitation, this study proposes a novel method to accelerate the computation of the SVD++ algorithm, which can help achieve more accurate recommendation results. The core of the proposed method is to conduct a backtracking line search in the SVD++ algorithm, optimizing the recommendation algorithm by finding the optimal solution via a backtracking line search on the local gradient of the objective function. The algorithm is compared with conventional CF algorithms on the FilmTrust, MovieLens 1M, and MovieLens 10M public datasets. The effectiveness of the proposed method is demonstrated by comparing the root mean square error, mean absolute error, and recall simulation results. Full article
(This article belongs to the Special Issue Data Modeling and Predictive Analytics)
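For readers unfamiliar with the technique named in the abstract, the following minimal sketch shows Armijo backtracking line search applied to the gradient step of a plain regularized matrix-factorization objective. It is not the authors' SVD++ implementation; the loss, data, and parameter names are illustrative stand-ins.

```python
import numpy as np

def loss(R, mask, P, Q, lam):
    """Regularized squared error over observed ratings (mask == 1)."""
    err = mask * (R - P @ Q.T)
    return 0.5 * np.sum(err ** 2) + 0.5 * lam * (np.sum(P ** 2) + np.sum(Q ** 2))

def gradients(R, mask, P, Q, lam):
    err = mask * (R - P @ Q.T)
    return -err @ Q + lam * P, -err.T @ P + lam * Q

def backtracking_step(R, mask, P, Q, lam, t0=1.0, beta=0.5, c=1e-4):
    """Armijo backtracking line search along the negative gradient."""
    gP, gQ = gradients(R, mask, P, Q, lam)
    f0 = loss(R, mask, P, Q, lam)
    g_norm2 = np.sum(gP ** 2) + np.sum(gQ ** 2)
    t = t0
    while loss(R, mask, P - t * gP, Q - t * gQ, lam) > f0 - c * t * g_norm2:
        t *= beta          # shrink the step until sufficient decrease holds
    return P - t * gP, Q - t * gQ

# Toy usage: 5 users x 4 items, 2 latent factors, 70% of ratings observed.
rng = np.random.default_rng(0)
R = rng.integers(1, 6, size=(5, 4)).astype(float)
mask = (rng.random((5, 4)) < 0.7).astype(float)
P, Q = rng.normal(size=(5, 2)), rng.normal(size=(4, 2))
for _ in range(50):
    P, Q = backtracking_step(R, mask, P, Q, lam=0.1)
```

The same line-search loop can wrap any differentiable recommendation loss; only `loss` and `gradients` need to change.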
Show Figures

Figure 1

15 pages, 1403 KiB  
Article
Topic Jerk Detector: Detection of Tweet Bursts Related to the Fukushima Daiichi Nuclear Disaster
by Hiroshi Nagaya, Teruaki Hayashi, Hiroyuki A. Torii and Yukio Ohsawa
Information 2020, 11(7), 368; https://doi.org/10.3390/info11070368 - 21 Jul 2020
Cited by 3 | Viewed by 3152
Abstract
In recent disaster situations, social media platforms such as Twitter have played a major role in information sharing and widespread communication. These situations require efficient information sharing; therefore, it is important to better understand the trends in popular topics and the underlying dynamics of information flow on social media. Developing new methods to help in these situations, and testing their effectiveness so that they can be used in future disasters, is an important research problem. In this study, we propose a new model, the "topic jerk detector," which is well suited to identifying topic bursts. The main advantage of this method is that it fits sudden bursts better and detects the timing of topic bursts more accurately than the existing method, topic dynamics. Our model helps capture important topics that have rapidly risen to the top of the agenda over time in the study of specific social issues. It is also useful for tracking the transition of topics and for monitoring tweets related to specific events, such as disasters. We conducted three experiments to verify its effectiveness. First, we present a case study applying the model to a tweet dataset related to the Fukushima disaster to show the outcomes of the proposed method. Next, we performed a comparison experiment with the existing method, showing that the proposed method fits sudden bursts better and accurately detects the timing of topic bursts. Finally, we received expert feedback on the validity of the results and the practicality of the methodology. Full article
(This article belongs to the Special Issue CDEC: Cross-disciplinary Data Exchange and Collaboration)
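As a rough intuition for what a "jerk"-style burst detector does, the sketch below flags time steps where a high-order discrete derivative of a topic's tweet-frequency series spikes. Interpreting "jerk" as the third difference, and the threshold value, are assumptions made for illustration; the paper's actual model is not reproduced here, and the frequency series is hypothetical.

```python
import numpy as np

def detect_bursts(freq, threshold):
    """Flag time steps where the discrete third derivative ("jerk")
    of a topic's frequency series exceeds a threshold."""
    jerk = np.diff(freq, n=3)                       # third-order difference
    burst_idx = np.where(jerk > threshold)[0] + 3   # align with original index
    return jerk, burst_idx

# Hypothetical daily tweet counts for one topic.
freq = np.array([5, 6, 5, 7, 6, 8, 40, 120, 90, 30, 12, 8], dtype=float)
jerk, bursts = detect_bursts(freq, threshold=20.0)
print(bursts)   # indices of days flagged as burst onsets
```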
17 pages, 1539 KiB  
Article
Personality Traits, Gamification and Features to Develop an App to Reduce Physical Inactivity
by Charlotte Meixner, Hannes Baumann and Bettina Wollesen
Information 2020, 11(7), 367; https://doi.org/10.3390/info11070367 - 19 Jul 2020
Cited by 7 | Viewed by 4789
Abstract
Background: Health benefits from physical activity (PA) can be achieved by following the WHO recommendation for PA. To increase PA in inactive individuals, digital interventions can provide cost-effective and low-threshold access. Moreover, gamification elements can raise the motivation for PA. This study analyzed which factors (personality traits, app features, gamification) are relevant to increasing PA within this target group. Methods: N = 808 inactive participants (f = 480; m = 321; age = 48 ± 6) were included in the analysis of the desire for PA, the expression of personality traits, and the resulting interest in app features and gamification. The statistical analysis included chi-squared tests, one-way ANOVA and regression analysis. Results: The main interests in PA were fitness (97%) and outdoor activities (75%). No significant interaction between personality traits, interest in PA goals, app features and gamification was found. The interest in gamification was determined by the PA goal. Participants’ requirements for features included feedback and suggestions for activities. Monetary incentives were reported as relevant gamification aspects. Conclusion: Inactive people can be reached by outdoor activities, interventions to increase an active lifestyle, fitness and health sports. The study highlighted the interest in specific app features and gamification to increase PA in inactive people through an app. Full article
(This article belongs to the Special Issue e-Health Pervasive Wireless Applications and Services (e-HPWAS'19))
24 pages, 2475 KiB  
Article
Methodological Proposal for the Study of Temporal and Spatial Dynamics during the Late Period between the Middle Ebro and the Pyrenees
by Leticia Tobalina Pulido
Information 2020, 11(7), 366; https://doi.org/10.3390/info11070366 - 17 Jul 2020
Cited by 3 | Viewed by 3282
Abstract
The article I present here deals with the methodological approach carried out in my PhD in which I analyzed the spatial and temporal dynamics of late rural settlements during five centuries in the southern Pyrenees area, using geographic information systems, spatial databases, and descriptive statistics to establish models of space occupation and try to determine how these vary over the different centuries. Full article
(This article belongs to the Special Issue Digital Humanities)
22 pages, 8239 KiB  
Article
An Improved Traffic Congestion Monitoring System Based on Federated Learning
by Chenming Xu and Yunlong Mao
Information 2020, 11(7), 365; https://doi.org/10.3390/info11070365 - 16 Jul 2020
Cited by 14 | Viewed by 5529
Abstract
This study introduces a software-based traffic congestion monitoring system. Transportation systems manage traffic between cities all over the world, and congestion occurs not only in cities but also on highways and elsewhere. Current transportation systems perform poorly in areas without monitoring. In order to overcome the limitations of current traffic systems in obtaining road data and to expand their visual range, the proposed system uses remote sensing data as the data source for judging congestion. Since some remote sensing data must be kept confidential, effectively protecting such data during the deep learning training process is a problem that needs to be solved. In contrast to general deep learning training methods, this study provides a federated learning method to identify vehicle targets in remote sensing images, addressing the data privacy problem in training on remote sensing data. The experiment takes the remote sensing image datasets of Los Angeles roads and Washington roads as samples for training; the trained model achieves an accuracy of about 85%, and the estimated processing time per image can be as low as 0.047 s. In the final experimental results, the system can automatically identify vehicle targets in the remote sensing images to achieve the purpose of detecting congestion. Full article
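The federated learning idea referenced above can be illustrated with a generic federated averaging (FedAvg) round: each data holder trains locally and only model weights are shared. This is a sketch under simplifying assumptions, with plain logistic regression standing in for the paper's vehicle-detection model and synthetic data standing in for the confidential imagery.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training (logistic regression as a stand-in
    for the detection model); raw data never leaves the client."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def federated_round(global_w, clients):
    """FedAvg: average locally trained weights, weighted by sample count."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

# Three hypothetical data holders with private datasets.
rng = np.random.default_rng(1)
clients = [(rng.normal(size=(100, 8)), rng.integers(0, 2, 100)) for _ in range(3)]
w = np.zeros(8)
for _ in range(20):
    w = federated_round(w, clients)
```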
13 pages, 543 KiB  
Review
Industry 4.0 Readiness Models: A Systematic Literature Review of Model Dimensions
by Mohd Hizam-Hanafiah, Mansoor Ahmed Soomro and Nor Liza Abdullah
Information 2020, 11(7), 364; https://doi.org/10.3390/info11070364 - 15 Jul 2020
Cited by 129 | Viewed by 14989
Abstract
It is critical for organizations to self-assess their Industry 4.0 readiness to survive and thrive in the age of the Fourth Industrial Revolution. Thereon, the conceptualization or development of an Industry 4.0 readiness model with the fundamental model dimensions is needed. This paper used a systematic literature review (SLR) methodology with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines and a content analysis strategy to review 97 papers in peer-reviewed academic journals and industry reports published from 2000 to 2019. The review identifies 30 Industry 4.0 readiness models with 158 unique model dimensions. Based on this review, there are two theoretical contributions. First, this paper proposes six dimensions (Technology, People, Strategy, Leadership, Process and Innovation) that can be considered the most important dimensions for organizations. Second, this review reveals that 70 (44%) of the 158 total unique dimensions on Industry 4.0 pertain to the assessment of technology alone. This establishes that organizations need to largely improve their technology readiness to strengthen their Industry 4.0 readiness. In summary, these six most common dimensions, and in particular the dominance of the technology dimension, provide a research agenda for future research on Industry 4.0 readiness. Full article
(This article belongs to the Section Review)
15 pages, 1767 KiB  
Article
Applying DevOps Practices of Continuous Automation for Machine Learning
by Ioannis Karamitsos, Saeed Albarhami and Charalampos Apostolopoulos
Information 2020, 11(7), 363; https://doi.org/10.3390/info11070363 - 13 Jul 2020
Cited by 54 | Viewed by 15292
Abstract
This paper proposes DevOps practices for machine learning applications, integrating the development and operation environments seamlessly. The machine learning processes of development and deployment may seem easy during the experimentation phase. However, if not carefully designed, deploying and using such models may lead to complex, time-consuming approaches which may require significant and costly efforts for maintenance, improvement, and monitoring. This paper presents how to apply continuous integration (CI) and continuous delivery (CD) principles, practices, and tools so as to minimize waste, support rapid feedback loops, explore the hidden technical debt, improve value delivery and maintenance, and improve operational functions for real-world machine learning applications. Full article
(This article belongs to the Section Information Theory and Methodology)
23 pages, 5363 KiB  
Article
Design and Execution of Integrated Clinical Pathway: A Simplified Meta-Model and Associated Methodology
by Carmelo Ardito, Danilo Caivano, Lucio Colizzi, Giovanni Dimauro and Loredana Verardi
Information 2020, 11(7), 362; https://doi.org/10.3390/info11070362 - 13 Jul 2020
Cited by 7 | Viewed by 8356
Abstract
Integrated clinical pathways (ICPs) are task-oriented care plans detailing the essential steps of the therapeutic pathway referring to a specific clinical problem with a patient’s expected clinical course. ICPs represent an effective tool for resource management in the public and private health domains. To be automatically executed, the ICP process has to be described by means of complex general purpose description language (GPDL) formalisms. However, GPDLs make the process model difficult to grasp by a human. On the other hand, the adoption of a reduced set of graphical constructs prevents a fully automated process execution due to the lack of information required by a machine. Unfortunately, it is difficult to find a balance between modelling language expressiveness and the automated execution of the modelled processes. In this paper, we present a meta-model based on a GPDL to organize the ICP process knowledge. This meta-model allows the management of ICP information in a way that is independent from the graphic representation of the adopted modelling standard. We also propose a general framework and a methodology that aim to guarantee a high degree of automation in process execution. In particular, the corresponding execution engine is implemented as a chatbot (integrated with social media), which plays a two-fold role: during the actual execution of the entire ICP, it acts as a virtual assistant and gathers the patient’s health data. Tests performed on a real ICP showed that, thanks to the proposed solution, the chatbot engine is able to engage in a dialogue with the patient. We provide discussion about how the system could be extended and how it could be seen as an alternative to Artificial Intelligence (AI) and Natural Language Processing (NLP)-based approaches. Full article
(This article belongs to the Section Information Applications)
21 pages, 399 KiB  
Article
Extraction Patterns to Derive Social Networks from Linked Open Data Using SPARQL
by Raji Ghawi and Jürgen Pfeffer
Information 2020, 11(7), 361; https://doi.org/10.3390/info11070361 - 12 Jul 2020
Cited by 3 | Viewed by 3555
Abstract
Linked Open Data (LOD) refers to freely available data on the World Wide Web that are typically represented using the Resource Description Framework (RDF) and standards built on it. LOD is an invaluable resource of information due to its richness and openness, which create new opportunities for many areas of application. In this paper, we address the exploitation of LOD by utilizing SPARQL queries in order to extract social networks among entities. This enables the application of de-facto techniques from Social Network Analysis (SNA) to study social relations and interactions among entities, providing deep insights into their latent social structure. Full article
(This article belongs to the Special Issue Conceptual Structures 2019)
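To make the LOD-to-network pipeline concrete, here is a small sketch that runs an illustrative SPARQL query against the public DBpedia endpoint and turns the results into a graph for standard SNA measures. The endpoint, the co-membership pattern (players linked when they share a team), and the query are assumptions chosen for illustration, not one of the paper's extraction patterns.

```python
import networkx as nx
from SPARQLWrapper import SPARQLWrapper, JSON

# Illustrative extraction pattern: connect two persons who share a relation
# to the same intermediate entity (here: footballers sharing a team).
QUERY = """
SELECT ?a ?b WHERE {
  ?a a dbo:SoccerPlayer ; dbo:team ?t .
  ?b a dbo:SoccerPlayer ; dbo:team ?t .
  FILTER (?a != ?b)
} LIMIT 500
"""

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery(QUERY)
sparql.setReturnFormat(JSON)
bindings = sparql.query().convert()["results"]["bindings"]

G = nx.Graph()
for row in bindings:
    G.add_edge(row["a"]["value"], row["b"]["value"])

# Apply standard SNA measures to the extracted social network.
degree = nx.degree_centrality(G)
print(sorted(degree, key=degree.get, reverse=True)[:5])
```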
21 pages, 46200 KiB  
Article
Efficient Paradigm to Measure Street-Crossing Onset Time of Pedestrians in Video-Based Interactions with Vehicles
by Stefanie M. Faas, Stefan Mattes, Andrea C. Kao and Martin Baumann
Information 2020, 11(7), 360; https://doi.org/10.3390/info11070360 - 11 Jul 2020
Cited by 11 | Viewed by 4287
Abstract
With self-driving vehicles (SDVs), pedestrians can no longer rely on a human driver. Previous research suggests that pedestrians may benefit from an external Human–Machine Interface (eHMI) displaying information to surrounding traffic participants. This paper introduces a natural methodology to compare eHMI concepts from a pedestrian’s viewpoint. To measure eHMI effects on traffic flow, previous video-based studies instructed participants to indicate their crossing decision with interfering data collection devices, such as pressing a button or slider. We developed a quantifiable concept that allows participants to naturally step off a sidewalk to cross the street. Hidden force-sensitive resistor sensors recorded their crossing onset time (COT) in response to real-life videos of approaching vehicles in an immersive crosswalk simulation environment. We validated our method with an initial study of N = 34 pedestrians by showing (1) that it is able to detect significant eHMI effects on COT as well as subjective measures of perceived safety and user experience. The approach is further validated by (2) replicating the findings of a test track study and (3) participants’ reports that it felt natural to take a step forward to indicate their street crossing decision. We discuss the benefits and limitations of our method with regard to related approaches. Full article
2 pages, 160 KiB  
Editorial
Editorial for the Special Issue on “Digital Humanities”
by Cesar Gonzalez-Perez
Information 2020, 11(7), 359; https://doi.org/10.3390/info11070359 - 10 Jul 2020
Cited by 1 | Viewed by 2932
Abstract
Digital humanities are often described in terms of humanistic work being carried out with the aid of digital tools, usually computer-based [...] Full article
(This article belongs to the Special Issue Digital Humanities)
8 pages, 2460 KiB  
Article
Prediction Framework with Kalman Filter Algorithm
by Janis Peksa
Information 2020, 11(7), 358; https://doi.org/10.3390/info11070358 - 10 Jul 2020
Cited by 7 | Viewed by 3657
Abstract
The article describes an autonomous open data prediction framework, which is in its infancy and is designed to automate predictions using a variety of data sources that are mostly external. The framework has been implemented with the Kalman filter approach, and an experiment with road maintenance weather station data is performed. The framework was written in the Python programming language and is published on GitHub with all currently available results. The experiment is performed with data from 34 weather stations, which are time-series data, and the specific measurements that are predicted are dew points. The framework is published as a Web service so that it can be integrated with ERP systems and reused. Full article
(This article belongs to the Special Issue Cloud Gamification 2019)
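The Kalman filter at the core of such a framework reduces, in the scalar case, to a short predict/update loop. The sketch below is a minimal one-dimensional filter assuming a random-walk state model; the noise variances and the dew-point readings are hypothetical values, not taken from the paper or its repository.

```python
import numpy as np

def kalman_1d(measurements, q=1e-3, r=0.5, x0=0.0, p0=1.0):
    """Scalar Kalman filter with a random-walk state model:
    x_k = x_{k-1} + w, z_k = x_k + v, w~N(0,q), v~N(0,r).
    Returns the one-step-ahead prediction made before each measurement."""
    x, p = x0, p0
    preds = []
    for z in measurements:
        # Predict
        p = p + q
        preds.append(x)            # prediction issued before seeing z
        # Update
        k = p / (p + r)            # Kalman gain
        x = x + k * (z - x)
        p = (1.0 - k) * p
    return np.array(preds)

# Hypothetical hourly dew-point readings from one station (degrees C).
z = np.array([7.2, 7.4, 7.1, 6.8, 6.9, 7.5, 8.0, 8.2])
print(kalman_1d(z, x0=z[0]))
```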
17 pages, 1726 KiB  
Article
Privacy Preservation of Data-Driven Models in Smart Grids Using Homomorphic Encryption
by Dabeeruddin Syed, Shady S. Refaat and Othmane Bouhali
Information 2020, 11(7), 357; https://doi.org/10.3390/info11070357 - 8 Jul 2020
Cited by 19 | Viewed by 4763
Abstract
Deep learning models have been applied for varied electrical applications in smart grids with a high degree of reliability and accuracy. The development of deep learning models requires the historical data collected from several electric utilities during the training of the models. The lack of historical data for training and testing of developed models, considering security and privacy policy restrictions, is considered one of the greatest challenges to machine learning-based techniques. The paper proposes the use of homomorphic encryption, which enables the possibility of training the deep learning and classical machine learning models whilst preserving the privacy and security of the data. The proposed methodology is tested for applications of fault identification and localization, and load forecasting in smart grids. The results for fault localization show that the classification accuracy of the proposed privacy-preserving deep learning model while using homomorphic encryption is 97–98%, which is close to 98–99% classification accuracy of the model on plain data. Additionally, for load forecasting application, the results show that RMSE using the homomorphic encryption model is 0.0352 MWh while RMSE without application of encryption in modeling is around 0.0248 MWh. Full article
(This article belongs to the Special Issue Machine Learning for Cyber-Physical Security)
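To illustrate what computing on encrypted data looks like, the sketch below uses the `phe` library (Paillier, a partially homomorphic scheme) to aggregate meter readings without decrypting them. This is only a sketch of the building block: the abstract does not state which scheme or library the authors used, training full models under homomorphic encryption requires considerably more machinery, and the readings are hypothetical.

```python
from phe import paillier

# Key holder (e.g., a utility) generates the keypair and encrypts readings.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

readings_kwh = [12.4, 9.7, 15.1, 11.3]              # hypothetical smart-meter data
encrypted = [public_key.encrypt(r) for r in readings_kwh]

# Aggregator side: addition and scaling work directly on ciphertexts,
# so the aggregator never sees individual plaintext values.
enc_total = sum(encrypted[1:], encrypted[0])
enc_mean = enc_total * (1.0 / len(readings_kwh))

# Only the private-key holder can decrypt the aggregates.
print(private_key.decrypt(enc_total), private_key.decrypt(enc_mean))
```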
28 pages, 1416 KiB  
Article
Big Picture on Privacy Enhancing Technologies in e-Health: A Holistic Personal Privacy Workflow
by Stefan Becher, Armin Gerl, Bianca Meier and Felix Bölz
Information 2020, 11(7), 356; https://doi.org/10.3390/info11070356 - 8 Jul 2020
Cited by 10 | Viewed by 5842
Abstract
The collection and processing of personal data offers great opportunities for technological advances, but the accumulation of vast amounts of personal data also increases the risk of misuse for malicious intentions, especially in health care. Therefore, personal data are legally protected, e.g., by the European General Data Protection Regulation (GDPR), which states that individuals must be transparently informed and have the right to take control over the processing of their personal data. In real applications, privacy policies, which can be negotiated via user interfaces, are used to fulfill these requirements. The literature proposes privacy languages as an electronic format for privacy policies, while the users' privacy preferences are represented by preference languages. However, this is only the beginning of the personal data life-cycle, which also includes the processing of personal data and its transfer to various stakeholders. In this work we define a personal privacy workflow, considering the negotiation of privacy policies, privacy-preserving processing and secondary use of personal data, in the context of health care data processing, and survey applicable Privacy Enhancing Technologies (PETs) to ensure individuals' privacy. Based on a broad literature review we identify open research questions for each step of the workflow. Full article
(This article belongs to the Special Issue e-Health Pervasive Wireless Applications and Services (e-HPWAS'19))
13 pages, 3158 KiB  
Article
Dual Threshold Self-Corrected Minimum Sum Algorithm for 5G LDPC Decoders
by Rong Chen and Lan Chen
Information 2020, 11(7), 355; https://doi.org/10.3390/info11070355 - 7 Jul 2020
Cited by 3 | Viewed by 3763
Abstract
Fifth generation (5G) is a new generation of mobile communication system developed for the growing demand for mobile communication. Channel coding is an indispensable part of most modern digital communication systems, as it improves transmission reliability and interference resistance. In order to meet the requirements of 5G communication, a dual threshold self-corrected minimum sum (DT-SCMS) algorithm for low-density parity-check (LDPC) decoders is proposed in this paper, and an architecture for LDPC decoders is designed. By setting thresholds to judge the reliability of messages, the DT-SCMS algorithm erases unreliable messages, improving decoding performance and efficiency. Simulation results show that the performance of DT-SCMS is better than that of SCMS. When the code rate is 1/3, the performance of DT-SCMS is improved by 0.2 dB at a bit error rate of 10⁻⁴ compared with SCMS. In terms of convergence, when the code rate is 2/3, the number of iterations of DT-SCMS can be reduced by up to 20.46% compared with SCMS, and the average reduction is 18.68%. Full article
(This article belongs to the Section Information and Communications Technology)
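For context, the sketch below shows the two generic building blocks the abstract builds on: the min-sum check-node update and a self-correction rule that erases a message whose sign flipped between iterations. The magnitude threshold is only a placeholder for the paper's dual-threshold test, the exact DT-SCMS rule is not reproduced, and the message values are made up; erased messages simply propagate as zeros in this sketch.

```python
import numpy as np

def check_node_update(v2c):
    """Min-sum check-node update: for each edge, the outgoing magnitude is
    the minimum of the *other* incoming magnitudes and the sign is the
    product of the *other* incoming signs."""
    v2c = np.asarray(v2c, dtype=float)
    out = np.empty_like(v2c)
    for i in range(len(v2c)):
        others = np.delete(v2c, i)
        out[i] = np.prod(np.sign(others)) * np.min(np.abs(others))
    return out

def self_correct(new_v2c, old_v2c, threshold=0.0):
    """Self-correction: erase (zero out) a message whose sign flipped since
    the previous iteration, or whose magnitude falls below a reliability
    threshold (placeholder for the dual-threshold criterion)."""
    flipped = np.sign(new_v2c) * np.sign(old_v2c) < 0
    unreliable = np.abs(new_v2c) < threshold
    corrected = new_v2c.copy()
    corrected[flipped | unreliable] = 0.0
    return corrected

old = np.array([-1.2, 0.8, 2.5, -0.4])   # previous iteration's messages at one check node
new = np.array([ 1.1, 0.7, 2.4, -0.5])   # current iteration's messages
print(check_node_update(self_correct(new, old, threshold=0.3)))
```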
15 pages, 1422 KiB  
Article
Early Prediction of Quality Issues in Automotive Modern Industry
by Reza Khoshkangini, Peyman Sheikholharam Mashhadi, Peter Berck, Saeed Gholami Shahbandi, Sepideh Pashami, Sławomir Nowaczyk and Tobias Niklasson
Information 2020, 11(7), 354; https://doi.org/10.3390/info11070354 - 6 Jul 2020
Cited by 21 | Viewed by 8084
Abstract
Many industries today are struggling with the early identification of quality issues, given the shortening of product design cycles and the desire to decrease production costs, coupled with the customer requirement for high uptime. The vehicle industry is no exception, as breakdowns often lead to on-road stops and delays in delivery missions. In this paper we consider quality issues to be an unexpected increase in the failure rate of a particular component; these are particularly problematic for original equipment manufacturers (OEMs) since they lead to unplanned costs and can significantly affect brand value. We propose a new approach towards the early detection of quality issues using machine learning (ML) to forecast the failures of a given component across a large population of units. In this study, we combine the usage information of vehicles with the records of their failures. The former is continuously collected, as the usage statistics are transmitted over telematics connections. The latter is based on invoice and warranty information collected in the workshops. We compare two different ML approaches: the first is an auto-regression model of the failure ratios for vehicles based on past information, while the second is the aggregation of individual vehicle failure predictions based on their individual usage. We present experimental evaluations on real data captured from heavy-duty trucks demonstrating how these two formulations have complementary strengths and weaknesses; in particular, they can outperform each other given different volumes of data. The classification approach surpasses the regression model whenever enough data are available, i.e., once the vehicles have been in service for a longer time. On the other hand, the regression shows better predictive performance with a smaller amount of data, i.e., for vehicles that have been deployed recently. Full article
24 pages, 2012 KiB  
Article
Feeling Uncertain—Effects of a Vibrotactile Belt that Communicates Vehicle Sensor Uncertainty
by Matti Krüger, Tom Driessen, Christiane B. Wiebel-Herboth, Joost C. F. de Winter and Heiko Wersing
Information 2020, 11(7), 353; https://doi.org/10.3390/info11070353 - 6 Jul 2020
Cited by 9 | Viewed by 4007
Abstract
With the rise of partially automated cars, drivers are more and more required to judge the degree of responsibility that can be delegated to vehicle assistant systems. This can be supported by utilizing interfaces that intuitively convey real-time reliabilities of system functions such as environment sensing. We designed a vibrotactile interface that communicates spatiotemporal information about surrounding vehicles and encodes a representation of spatial uncertainty in a novel way. We evaluated this interface in a driving simulator experiment with high and low levels of human and machine confidence respectively caused by simulated degraded vehicle sensor precision and limited human visibility range. Thereby we were interested in whether drivers (i) could perceive and understand the vibrotactile encoding of spatial uncertainty, (ii) would subjectively benefit from the encoded information, (iii) would be disturbed in cases of information redundancy, and (iv) would gain objective safety benefits from the encoded information. To measure subjective understanding and benefit, a custom questionnaire, Van der Laan acceptance ratings and NASA TLX scores were used. To measure the objective benefit, we computed the minimum time-to-contact as a measure of safety and gaze distributions as an indicator for attention guidance. Results indicate that participants were able to understand the encoded uncertainty and spatiotemporal information and purposefully utilized it when needed. The tactile interface provided meaningful support despite sensory restrictions. By encoding spatial uncertainties, it successfully extended the operating range of the assistance system. Full article
31 pages, 463 KiB  
Review
Testing the “(Neo-)Darwinian” Principles against Reticulate Evolution: How Variation, Adaptation, Heredity and Fitness, Constraints and Affordances, Speciation, and Extinction Surpass Organisms and Species
by Nathalie Gontier
Information 2020, 11(7), 352; https://doi.org/10.3390/info11070352 - 5 Jul 2020
Cited by 2 | Viewed by 4135
Abstract
Variation, adaptation, heredity and fitness, constraints and affordances, speciation, and extinction form the building blocks of the (Neo-)Darwinian research program, and several of these have been called “Darwinian principles”. Here, we suggest that caution should be taken in calling these principles Darwinian because of the important role played by reticulate evolutionary mechanisms and processes in also bringing about these phenomena. Reticulate mechanisms and processes include symbiosis, symbiogenesis, lateral gene transfer, infective heredity mediated by genetic and organismal mobility, and hybridization. Because the “Darwinian principles” are brought about by both vertical and reticulate evolutionary mechanisms and processes, they should be understood as foundational for a more pluralistic theory of evolution, one that surpasses the classic scope of the Modern and the Neo-Darwinian Synthesis. Reticulate evolution moreover demonstrates that what conventional (Neo-)Darwinian theories treat as intra-species features of evolution frequently involve reticulate interactions between organisms from very different taxonomic categories. Variation, adaptation, heredity and fitness, constraints and affordances, speciation, and extinction therefore cannot be understood as “traits” or “properties” of genes, organisms, species, or ecosystems because the phenomena are irreducible to specific units and levels of an evolutionary hierarchy. Instead, these general principles of evolution need to be understood as common goods that come about through interactions between different units and levels of evolutionary hierarchies, and they are exherent rather than inherent properties of individuals. Full article
(This article belongs to the Section Review)
18 pages, 10881 KiB  
Article
Bit Reduced FCM with Block Fuzzy Transforms for Massive Image Segmentation
by Barbara Cardone and Ferdinando Di Martino
Information 2020, 11(7), 351; https://doi.org/10.3390/info11070351 - 5 Jul 2020
Cited by 2 | Viewed by 2863
Abstract
A novel bit-reduced fuzzy clustering method for segmenting high-resolution massive images is proposed. The image is decomposed into blocks and compressed by using the fuzzy transform method; then adjacent pixels with the same gray level are binned and the fuzzy c-means algorithm is applied to the bins to segment the image. This method has the advantage that it can be applied to massive images, as the compressed image can be stored in memory and the runtime needed to segment the image is reduced. Comparison tests with the fuzzy c-means algorithm on high-resolution images show that, for compression rates that are not too high, the results are comparable to those obtained by applying the fuzzy c-means algorithm to the source image, while the runtimes are reduced by about an eighth with respect to the runtimes of fuzzy c-means. Full article
(This article belongs to the Special Issue New Trends in Massive Data Clustering)
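The binning step can be illustrated with a small from-scratch weighted fuzzy c-means: cluster the distinct gray levels, each weighted by its pixel count, instead of every pixel. This sketch omits the paper's block-wise fuzzy transform compression and uses a random synthetic image; all parameter values are illustrative assumptions.

```python
import numpy as np

def weighted_fcm(values, counts, c=3, m=2.0, iters=100, eps=1e-5):
    """Fuzzy c-means on binned gray levels: each distinct value carries the
    number of pixels in its bin as a weight, which is much cheaper than
    clustering every pixel individually."""
    rng = np.random.default_rng(0)
    centers = rng.choice(values, size=c, replace=False).astype(float)
    for _ in range(iters):
        d = np.abs(values[None, :] - centers[:, None]) + 1e-12          # (c, n)
        u = 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** (2.0 / (m - 1)), axis=1)
        um = (u ** m) * counts[None, :]
        new_centers = np.sum(um * values[None, :], axis=1) / np.sum(um, axis=1)
        converged = np.max(np.abs(new_centers - centers)) < eps
        centers = new_centers
        if converged:
            break
    return centers, u

# Bin an 8-bit image's gray levels, cluster the bins, map labels back to pixels.
img = (np.random.default_rng(1).random((256, 256)) * 255).astype(np.uint8)
values, counts = np.unique(img, return_counts=True)
centers, memberships = weighted_fcm(values.astype(float), counts.astype(float))
labels = np.argmax(memberships, axis=0)[np.searchsorted(values, img.ravel())]
segmented = labels.reshape(img.shape)
```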
19 pages, 1153 KiB  
Article
Consumer Attitudes toward News Delivering: An Experimental Evaluation of the Use and Efficacy of Personalized Recommendations
by Paula Viana, Márcio Soares, Rita Gaio and Amilcar Correia
Information 2020, 11(7), 350; https://doi.org/10.3390/info11070350 - 4 Jul 2020
Viewed by 3563
Abstract
This paper presents an experiment on newsreaders’ behavior and preferences on the interaction with online personalized news. Different recommendation approaches, based on consumption profiles and user location, and the impact of personalized news on several aspects of consumer decision-making are examined on a group of volunteers. Results show a significant preference for reading recommended news over other news presented on the screen, regardless of the chosen editorial layout. In addition, the study also provides support for the creation of profiles taking into consideration the evolution of user’s interests. The proposed solution is valid for users with different reading habits and can be successfully applied even to users with small consumption history. Our findings can be used by news providers to improve online services, thus increasing readers’ perceived satisfaction. Full article
15 pages, 281 KiB  
Article
Prolegomena to an Operator Theory of Computation
by Mark Burgin and Gordana Dodig-Crnkovic
Information 2020, 11(7), 349; https://doi.org/10.3390/info11070349 - 4 Jul 2020
Viewed by 3229
Abstract
Defining computation as information processing (information dynamics) with information as a relational property of data structures (the difference in one system that makes a difference in another system) makes it very suitable to use operator formulation, with similarities to category theory. The concept of the operator is exceedingly important in many knowledge areas as a tool of theoretical studies and practical applications. Here we introduce the operator theory of computing, opening new opportunities for the exploration of computing devices, processes, and their networks. Full article
(This article belongs to the Section Information Theory and Methodology)
21 pages, 2623 KiB  
Article
An Empirical Study on the Evolution of Design Smells
by Lerina Aversano, Umberto Carpenito and Martina Iammarino
Information 2020, 11(7), 348; https://doi.org/10.3390/info11070348 - 4 Jul 2020
Cited by 7 | Viewed by 3564
Abstract
The evolution of software systems often leads to architectural degradation due to the presence of design problems. In the literature, design smells have been defined as indicators of such problems. In particular, the presence of design smells could indicate the use of constructs that are harmful to system maintenance activities. In this work, an investigation of the nature and presence of design smells has been performed. An empirical study has been conducted considering the complete history of eight software systems, commit by commit. The detection of instances of multiple design smell types has been performed at each commit, and the relationships between the detected smells and maintenance activities, specifically refactoring activities, have been investigated. The study shows that classes affected by design smells are more subject to change, especially when multiple smells are detected in the same classes. Moreover, it emerged that in some cases these smells are removed, and that this often involves several smells at the same time. Finally, the results indicate that smell removals are not correlated with refactoring activities. Full article
12 pages, 2148 KiB  
Article
An Internet of Things Approach to Contact Tracing—The BubbleBox System
by Andrea Polenta, Pietro Rignanese, Paolo Sernani, Nicola Falcionelli, Dagmawi Neway Mekuria, Selene Tomassini and Aldo Franco Dragoni
Information 2020, 11(7), 347; https://doi.org/10.3390/info11070347 - 3 Jul 2020
Cited by 20 | Viewed by 6468
Abstract
The COVID-19 pandemic exploded at the beginning of 2020, with over four million cases in five months, overwhelming the healthcare sector. Several national governments decided to adopt containment measures, such as lockdowns, social distancing, and quarantine. Among these measures, contact tracing can contribute to bringing the outbreak under control, as quickly identifying contacts to isolate suspected cases can limit the number of infected people. In this paper we present BubbleBox, a system relying on a dedicated device to perform contact tracing. BubbleBox integrates Internet of Things and software technologies into different components to achieve its goal: providing a tool to quickly react to further outbreaks, by allowing health operators to rapidly reach and test possibly infected people. This paper describes the BubbleBox architecture, presents its prototype implementation, and discusses its pros and cons, also dealing with privacy concerns. Full article
(This article belongs to the Special Issue Ubiquitous Sensing for Smart Health Monitoring)
15 pages, 2218 KiB  
Article
How Much Space Is Required? Effect of Distance, Content, and Color on External Human–Machine Interface Size
by Michael Rettenmaier, Jonas Schulze and Klaus Bengler
Information 2020, 11(7), 346; https://doi.org/10.3390/info11070346 - 3 Jul 2020
Cited by 22 | Viewed by 4938
Abstract
The communication of an automated vehicle (AV) with human road users can be realized by means of an external human–machine interface (eHMI), such as displays mounted on the AV’s surface. For this purpose, the amount of time needed for a human interaction partner to perceive the AV’s message and to act accordingly has to be taken into account. Any message displayed by an AV must satisfy minimum size requirements based on the dynamics of the road traffic and the time required by the human. This paper examines the size requirements of displayed text or symbols for ensuring the legibility of a message. Based on the limitations of available package space in current vehicle models and the ergonomic requirements of the interface design, an eHMI prototype was developed. A study involving 30 participants varied the content type (text and symbols) and content color (white, red, green) in a repeated measures design. We investigated the influence of content type on content size to ensure legibility from a constant distance. We also analyzed the influence of content type and content color on the human detection range. The results show that, at a fixed distance, text has to be larger than symbols in order to maintain legibility. Moreover, symbols can be discerned from a greater distance than text. Color had no content overlapping effect on the human detection range. In order to ensure the maximum possible detection range among human road users, an AV should display symbols rather than text. Additionally, the symbols could be color-coded for better message comprehension without affecting the human detection range. Full article
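The underlying geometry of "how large must displayed content be at a given distance" reduces to a visual-angle calculation. The sketch below computes the character height needed for an assumed minimum visual angle; the 20-arcminute threshold and the distances are illustrative assumptions, not values measured in the study.

```python
import math

def min_height_mm(distance_m, visual_angle_arcmin=20.0):
    """Height needed for content to subtend a given visual angle at the
    observer: h = 2 * d * tan(theta / 2)."""
    theta = math.radians(visual_angle_arcmin / 60.0)
    return 2.0 * distance_m * math.tan(theta / 2.0) * 1000.0   # metres -> mm

for d in (10, 20, 30, 40):   # example pedestrian-vehicle distances in metres
    print(d, "m ->", round(min_height_mm(d), 1), "mm")
```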
14 pages, 333 KiB  
Article
Looking Back to Lower-Level Information in Few-Shot Learning
by Zhongjie Yu and Sebastian Raschka
Information 2020, 11(7), 345; https://doi.org/10.3390/info11070345 - 2 Jul 2020
Cited by 5 | Viewed by 4797
Abstract
Humans are capable of learning new concepts from small numbers of examples. In contrast, supervised deep learning models usually lack the ability to extract reliable predictive rules from limited data scenarios when attempting to classify new examples. This challenging scenario is commonly known as few-shot learning. Few-shot learning has garnered increased attention in recent years due to its significance for many real-world problems. Recently, new methods relying on meta-learning paradigms combined with graph-based structures, which model the relationship between examples, have shown promising results on a variety of few-shot classification tasks. However, existing work on few-shot learning is only focused on the feature embeddings produced by the last layer of the neural network. The novel contribution of this paper is the utilization of lower-level information to improve the meta-learner performance in few-shot learning. In particular, we propose the Looking-Back method, which could use lower-level information to construct additional graphs for label propagation in limited data settings. Our experiments on two popular few-shot learning datasets, miniImageNet and tieredImageNet, show that our method can utilize the lower-level information in the network to improve state-of-the-art classification performance. Full article
(This article belongs to the Section Artificial Intelligence)
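The label-propagation step that such graph-based few-shot methods rely on can be sketched in closed form (Zhou et al.'s F = (I − αS)⁻¹Y). The code below shows only this generic step on a toy episode with synthetic features; it does not reproduce the Looking-Back construction of additional graphs from lower-level layers, and all parameters are illustrative.

```python
import numpy as np

def label_propagation(features, labels, n_classes, alpha=0.99, sigma=1.0):
    """Graph-based label propagation: F = (I - alpha * S)^(-1) Y, where S is
    the symmetrically normalised Gaussian affinity matrix and `labels` uses
    -1 for unlabelled (query) examples."""
    d2 = np.sum((features[:, None, :] - features[None, :, :]) ** 2, axis=-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    Dinv = 1.0 / np.sqrt(W.sum(axis=1) + 1e-12)
    S = Dinv[:, None] * W * Dinv[None, :]
    Y = np.zeros((len(labels), n_classes))
    Y[labels >= 0, labels[labels >= 0]] = 1.0
    F = np.linalg.solve(np.eye(len(labels)) - alpha * S, Y)
    return F.argmax(axis=1)

# Toy episode: 2 classes, 1 labelled support point each, 6 query points.
rng = np.random.default_rng(2)
feats = np.vstack([rng.normal(0, 0.3, (4, 5)), rng.normal(2, 0.3, (4, 5))])
labels = np.array([0, -1, -1, -1, 1, -1, -1, -1])
print(label_propagation(feats, labels, n_classes=2))
```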
15 pages, 4236 KiB  
Article
Ensemble-Based Spam Detection in Smart Home IoT Devices Time Series Data Using Machine Learning Techniques
by Ameema Zainab, Shady S. Refaat and Othmane Bouhali
Information 2020, 11(7), 344; https://doi.org/10.3390/info11070344 - 2 Jul 2020
Cited by 30 | Viewed by 6520
Abstract
The number of Internet of Things (IoT) devices is growing at a fast pace in smart homes, producing large amounts of data, which are mostly transferred over wireless communication channels. However, various IoT devices are vulnerable to different threats, such as cyber-attacks, fluctuating network connections, leakage of information, etc. Statistical analysis and machine learning can play a vital role in detecting the anomalies in the data, which enhances the security level of the smart home IoT system which is the goal of this paper. This paper investigates the trustworthiness of the IoT devices sending house appliances’ readings, with the help of various parameters such as feature importance, root mean square error, hyper-parameter tuning, etc. A spamicity score was awarded to each of the IoT devices by the algorithm, based on the feature importance and the root mean square error score of the machine learning models to determine the trustworthiness of the device in the home network. A dataset publicly available for a smart home, along with weather conditions, is used for the methodology validation. The proposed algorithm is used to detect the spamicity score of the connected IoT devices in the network. The obtained results illustrate the efficacy of the proposed algorithm to analyze the time series data from the IoT devices for spam detection. Full article
(This article belongs to the Special Issue Machine Learning for Cyber-Physical Security)
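The ingredients the abstract mentions (feature importance and RMSE feeding a per-device score) can be combined in many ways; the sketch below is one illustrative combination using scikit-learn, not the paper's scoring formula. The device names and synthetic readings are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

def spamicity_scores(X, y, feature_names):
    """Illustrative scoring: combine each device's feature importance in a
    joint model with the error of predicting the target from that device's
    readings alone; higher score = less trustworthy ("spammier")."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    joint = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    scores = {}
    for i, name in enumerate(feature_names):
        solo = RandomForestRegressor(n_estimators=50, random_state=0)
        solo.fit(X_tr[:, [i]], y_tr)
        rmse = mean_squared_error(y_te, solo.predict(X_te[:, [i]])) ** 0.5
        scores[name] = rmse / (joint.feature_importances_[i] + 1e-6)
    return scores

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 4))                       # 4 hypothetical IoT devices
y = 2 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)
print(spamicity_scores(X, y, ["hvac", "fridge", "lights", "sensor_x"]))
```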
20 pages, 438 KiB  
Review
Mobile Applications for Training Plan Using Android Devices: A Systematic Review and a Taxonomy Proposal
by Bruno F. Tavares, Ivan Miguel Pires, Gonçalo Marques, Nuno M. Garcia, Eftim Zdravevski, Petre Lameski, Vladimir Trajkovik and Aleksandar Jevremovic
Information 2020, 11(7), 343; https://doi.org/10.3390/info11070343 - 2 Jul 2020
Cited by 14 | Viewed by 7401
Abstract
Fitness and physical exercise are preferred in the pursuit of healthier and more active lifestyles. The number of mobile applications aiming to replace or complement a personal trainer is increasing. However, this also raises questions about the reliability, integrity, and even safety of the information provided by such applications. In this study, we review mobile applications that serve as virtual personal trainers. We present a systematic review of 36 related mobile applications, updated between 2017 and 2020, classifying them according to their characteristics. The selection criteria consider the following combination of keywords: “workout”, “personal trainer”, “physical activity”, “fitness”, “gymnasium”, and “daily plan”. Based on the analysis of the identified mobile applications, we propose a new taxonomy and present detailed guidelines on creating mobile applications for personalised workouts. Finally, we investigate how mobile applications can promote the health and well-being of users and whether the identified applications are used in any scientific studies. Full article
(This article belongs to the Special Issue Ubiquitous Sensing for Smart Health Monitoring)
15 pages, 1776 KiB  
Article
Sleep Inertia Countermeasures in Automated Driving: A Concept of Cognitive Stimulation
by Johanna Wörle, Ramona Kenntner-Mabiala, Barbara Metz, Samantha Fritzsch, Christian Purucker, Dennis Befelein and Andy Prill
Information 2020, 11(7), 342; https://doi.org/10.3390/info11070342 - 30 Jun 2020
Cited by 8 | Viewed by 4215
Abstract
When highly automated driving is realized, the role of the driver will change dramatically. Drivers will even be able to sleep during the drive. However, when waking from sleep, drivers often experience sleep inertia, meaning they feel groggy and are impaired in their driving performance, which can be an issue with the concept of dual-mode vehicles that allow both manual and automated driving. Proactive methods to avoid sleep inertia, like the widely applied ‘NASA nap’, are not immediately practicable in automated driving. Therefore, a reactive countermeasure, the sleep inertia counter-procedure for drivers (SICD), has been developed with the aim of activating and motivating the driver as well as measuring the driver’s alertness level. The SICD was evaluated in a study with N = 21 drivers in a highly automated driving simulator. The SICD was able to activate the driver after sleep and was perceived as “assisting” by the drivers. It was not capable of measuring the driver’s alertness level. The interpretation of the findings is limited due to the lack of a comparative baseline condition. Future research is needed on direct comparisons of different countermeasures to sleep inertia that are effective and accepted by drivers. Full article
23 pages, 1571 KiB  
Article
Real-Time Tweet Analytics Using Hybrid Hashtags on Twitter Big Data Streams
by Vibhuti Gupta and Rattikorn Hewett
Information 2020, 11(7), 341; https://doi.org/10.3390/info11070341 - 30 Jun 2020
Cited by 14 | Viewed by 6845
Abstract
Twitter is a microblogging platform that generates large volumes of data with high velocity. This daily generation of unbounded and continuous data leads to Big Data streams that often require real-time distributed and fully automated processing. Hashtags, hyperlinked words in tweets, are widely used for tweet topic classification, retrieval, and clustering. Hashtags are used widely for analyzing tweet sentiments where emotions can be classified without contexts. However, regardless of the wide usage of hashtags, general tweet topic classification using hashtags is challenging due to its evolving nature, lack of context, slang, abbreviations, and non-standardized expression by users. Most existing approaches, which utilize hashtags for tweet topic classification, focus on extracting hashtag concepts from external lexicon resources to derive semantics. However, due to the rapid evolution and non-standardized expression of hashtags, the majority of these lexicon resources either suffer from the lack of hashtag words in their knowledge bases or use multiple resources at once to derive semantics, which make them unscalable. Along with scalable and automated techniques for tweet topic classification using hashtags, there is also a requirement for real-time analytics approaches to handle huge and dynamic flows of textual streams generated by Twitter. To address these problems, this paper first presents a novel semi-automated technique that derives semantically relevant hashtags using a domain-specific knowledge base of topic concepts and combines them with the existing tweet-based-hashtags to produce Hybrid Hashtags. Further, to deal with the speed and volume of Big Data streams of tweets, we present an online approach that updates the preprocessing and learning model incrementally in a real-time streaming environment using the distributed framework, Apache Storm. Finally, to fully exploit the batch and stream environment performance advantages, we propose a comprehensive framework (Hybrid Hashtag-based Tweet topic classification (HHTC) framework) that combines batch and online mechanisms in the most effective way. Extensive experimental evaluations on a large volume of Twitter data show that the batch and online mechanisms, along with their combination in the proposed framework, are scalable, efficient, and provide effective tweet topic classification using hashtags. Full article
(This article belongs to the Special Issue Big Data Research, Development, and Applications––Big Data 2018)
32 pages, 1747 KiB  
Article
Methodological Approach towards Evaluating the Effects of Non-Driving Related Tasks during Partially Automated Driving
by Cornelia Hollander, Nadine Rauh, Frederik Naujoks, Sebastian Hergeth, Josef F. Krems and Andreas Keinath
Information 2020, 11(7), 340; https://doi.org/10.3390/info11070340 - 30 Jun 2020
Cited by 4 | Viewed by 3758
Abstract
Partially automated driving (PAD, Society of Automotive Engineers (SAE) level 2) features provide steering and brake/acceleration support, while the driver must constantly supervise the support feature and intervene if needed to maintain safety. PAD could potentially increase comfort, road safety, and traffic efficiency. As during manual driving, users might engage in non-driving related tasks (NDRTs). However, studies systematically examining NDRT execution during PAD are rare and most importantly, no established methodologies to systematically evaluate driver distraction during PAD currently exist. The current project’s goal was to take the initial steps towards developing a test protocol for systematically evaluating NDRT’s effects during PAD. The methodologies used for manual driving were extended to PAD. Two generic take-over situations addressing system limits of a given PAD regarding longitudinal and lateral control were implemented to evaluate drivers’ supervisory and take-over capabilities while engaging in different NDRTs (e.g., manual radio tuning task). The test protocol was evaluated and refined across the three studies (two simulator and one test track). The results indicate that the methodology could sensitively detect differences between the NDRTs’ influences on drivers’ take-over and especially supervisory capabilities. Recommendations were formulated regarding the test protocol’s use in future studies examining the effects of NDRTs during PAD. Full article