Information, Volume 12, Issue 8 (August 2021) – 54 articles

Cover Story: Knowledge Graphs (KGs) represent a network of real-world entities, i.e., objects, events, or concepts, and illustrate their relationships. They are widely used in Question Answering and Dialogue systems, Recommendation engines, etc. KGs, built in an automated fashion, utilize NLP techniques to construct an extensive view of nodes, edges, and labels. Expanding their size and coverage is often tricky as it introduces noise that requires cleaning, usually via a manual process. This work introduces a fully automated system to extend a Knowledge Graph (KG) that uses external information from web-scale corpora. Our work utilizes global structure information of the induced KG to refine the confidence of the newly discovered relations, with the intention of minimizing the error rate in the expanded KG.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; the PDF is the official format. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
15 pages, 38546 KiB  
Article
The Expectations of the Residents of Szczecin in the Field of Telematics Solutions after the Launch of the Szczecin Metropolitan Railway
by Agnieszka Barczak
Information 2021, 12(8), 339; https://doi.org/10.3390/info12080339 - 23 Aug 2021
Cited by 5 | Viewed by 2008
Abstract
Transport is integral to every city, having a crucial impact on its functioning and development. As road infrastructure does not keep pace with the constantly growing number of vehicles on roads, new solutions are required. Fast urban railway systems are a solution that can reduce transport congestion, with environmental protection issues also taken into account. Contemporary public transport cannot function without modern communication and information technologies. The use of telematics in public transport allows passenger mobility to be sustainable and efficient. Therefore, it seems justified to conduct research on this issue. The aim of the study is to analyze, using multivariate correspondence analysis, the perception of the telematics solutions serving the Szczecin Metropolitan Railway (SKM) in Szczecin, Poland. The results of the research indicate that people living in the catchment area of the SKM have a positive opinion on the application of telematics solutions in the activities of the Szczecin Metropolitan Railway. The results obtained are local in nature, but show the direction that researchers can take in analyzing public transport in other agglomerations. In addition, the article presents a tool that greatly facilitates the analysis of survey data, even with a large number of results. Full article
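For readers unfamiliar with the method named in the abstract: correspondence analysis maps the rows and columns of a survey cross-tabulation into a shared low-dimensional space. The sketch below is purely illustrative, with a made-up contingency table; it is not code or data from the paper.

```python
import numpy as np

def correspondence_analysis(table, n_components=2):
    """Map rows and columns of a contingency table into shared principal axes."""
    N = table / table.sum()                          # correspondence matrix
    r, c = N.sum(axis=1), N.sum(axis=0)              # row and column masses
    # standardized residuals of the independence model
    S = (N - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    rows = U[:, :n_components] * sv[:n_components] / np.sqrt(r)[:, None]
    cols = Vt.T[:, :n_components] * sv[:n_components] / np.sqrt(c)[:, None]
    return rows, cols, sv

# hypothetical cross-tabulation: residence zone (rows) x opinion (columns)
table = np.array([[30, 12,  8],
                  [10, 25, 15],
                  [ 5, 10, 35]], dtype=float)
row_pts, col_pts, sv = correspondence_analysis(table)
```

Rows and columns that land close together on the resulting plane co-occur more often than independence would predict, which is what makes the technique convenient for survey data.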

17 pages, 6601 KiB  
Article
Making Information Measurement Meaningful: The United Nations’ Sustainable Development Goals and the Social and Human Capital Protocol
by John P. Wilson
Information 2021, 12(8), 338; https://doi.org/10.3390/info12080338 - 23 Aug 2021
Cited by 5 | Viewed by 3140
Abstract
Drucker’s saying that “What gets measured gets managed” is examined in the context of corporate social responsibility. The United Nations’ Sustainable Development Goals have encouraged sustainability reporting, and a reporting tool, the Social and Human Capital Protocol, has been developed to assist measurement and provide information to support the achievement of sustainability. This information should be valid and reliable; however, it is not easy to measure social and human capital factors. Additionally, companies use a large number of methodologies and indicators that are difficult to compare, and they may sometimes only present positive outcomes as a form of greenwashing. This lack of full transparency and comparability with other companies has the potential to discredit their reports, thereby supporting the claims of climate change deniers, free-market ideologues and conspiracy theorists who often use social media to spread their perspectives. This paper will describe the development of environmental reporting and CSR, discuss the natural capital protocol, and assess the extent to which the Social and Human Capital Protocol is able to fulfil its purpose of providing SMART objective measurements. It is the first academic article to provide a detailed examination of the Social and Human Capital Protocol. Full article

17 pages, 411 KiB  
Article
Ranking Algorithms for Word Ordering in Surface Realization
by Alessandro Mazzei, Mattia Cerrato, Roberto Esposito and Valerio Basile
Information 2021, 12(8), 337; https://doi.org/10.3390/info12080337 - 23 Aug 2021
Cited by 1 | Viewed by 2353
Abstract
In natural language generation, word ordering is the task of putting the words composing the output surface form in the correct grammatical order. In this paper, we propose to apply general learning-to-rank algorithms to the task of word ordering in the broader context of surface realization. The major contributions of this paper are: (i) the design of three deep neural architectures implementing pointwise, pairwise, and listwise approaches for ranking; (ii) the testing of these neural architectures on a surface realization benchmark in five natural languages belonging to different typological families. Our experiments show promising results, in particular highlighting the performance of the pairwise approach and paving the way for more transparent surface realization from arbitrary tree- and graph-like structures. Full article
(This article belongs to the Special Issue Neural Natural Language Generation)
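The pairwise approach highlighted above can be illustrated with a deliberately tiny sketch: a linear scorer trained on word-pair preferences and then used to order words by score. Everything here (the data, the linear scorer, the margin-perceptron updates) is invented for illustration and is not the paper's deep architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy setup: each word of a sentence is a feature vector, and the gold
# grammatical order is induced by a hidden linear score
n_words, dim = 6, 8
X = rng.normal(size=(n_words, dim))
w_true = rng.normal(size=dim)
gold = np.argsort(-(X @ w_true))        # gold order of the word indices

# pairwise ranking: learn w so that each earlier word outscores each later one
w = np.zeros(dim)
for _ in range(200):
    for i in range(n_words):
        for j in range(i + 1, n_words):
            a, b = gold[i], gold[j]     # word a should precede word b
            if (X[a] - X[b]) @ w < 1.0:            # margin violated
                w += 0.1 * (X[a] - X[b])           # perceptron-style update

pred = np.argsort(-(X @ w))             # predicted word order
```

A pointwise variant would instead regress each word's absolute position, and a listwise variant would optimize a loss over whole permutations.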

16 pages, 1566 KiB  
Article
Prediction of Tomato Yield in Chinese-Style Solar Greenhouses Based on Wavelet Neural Networks and Genetic Algorithms
by Yonggang Wang, Ruimin Xiao, Yizhi Yin and Tan Liu
Information 2021, 12(8), 336; https://doi.org/10.3390/info12080336 - 22 Aug 2021
Cited by 10 | Viewed by 2257
Abstract
Yield prediction for tomatoes in greenhouses is an important basis for making production plans, and yield prediction accuracy directly affects economic benefits. To improve the prediction accuracy of tomato yield in Chinese-style solar greenhouses (CSGs), a wavelet neural network (WNN) model optimized by a genetic algorithm (GA-WNN) is applied. Eight variables are selected as input parameters and the tomato yield is the prediction output. The GA is adopted to optimize the initial weights, thresholds, and translation factors of the WNN. The experiment results show that the mean relative errors (MREs) of the GA-WNN model, WNN model, and backpropagation (BP) neural network model are 0.0067, 0.0104, and 0.0242, respectively. The corresponding root mean square errors (RMSEs) are 1.725, 2.520, and 5.548, respectively. The EC values are 0.9960, 0.9935, and 0.9868, respectively. Therefore, the GA-WNN model has a higher prediction precision and a better fitting ability than the BP and WNN prediction models. This research is useful from both theoretical and technical perspectives for quantitative tomato yield prediction in CSGs. Full article
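The core idea, a genetic algorithm searching over a network's weights before conventional training, can be sketched minimally as follows. The data, network size, and GA settings here are all invented for illustration; they are not the paper's eight-variable model or its wavelet activation.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy regression data standing in for greenhouse factors -> yield
X = rng.uniform(-1, 1, size=(64, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] - 0.3 * X[:, 2]

H = 5                                   # hidden units
n_genes = 3 * H + H                     # input-to-hidden + hidden-to-output weights

def mse(genes):
    """Fitness: mean squared error of the small network encoded by the genes."""
    W = genes[:3 * H].reshape(3, H)
    v = genes[3 * H:]
    pred = np.tanh(X @ W) @ v
    return np.mean((pred - y) ** 2)

# simple GA: elitism, tournament selection, blend crossover, Gaussian mutation
pop = rng.normal(size=(30, n_genes))
fit0 = min(mse(p) for p in pop)
for _ in range(40):
    fits = np.array([mse(p) for p in pop])
    new = [pop[fits.argmin()].copy()]            # keep the best unchanged
    while len(new) < len(pop):
        i, j = rng.integers(len(pop), size=2)
        pa = pop[i] if fits[i] < fits[j] else pop[j]
        i, j = rng.integers(len(pop), size=2)
        pb = pop[i] if fits[i] < fits[j] else pop[j]
        alpha = rng.random()
        child = alpha * pa + (1 - alpha) * pb    # blend crossover
        mask = rng.random(n_genes) < 0.2
        child = child + mask * rng.normal(scale=0.1, size=n_genes)
        new.append(child)
    pop = np.array(new)
best = min(mse(p) for p in pop)
```

In the GA-WNN setting, the evolved genes would seed the network's initial weights, thresholds, and translation factors before gradient training takes over.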

13 pages, 484 KiB  
Article
Interference Alignment Inspired Opportunistic Communications in Multi-Cluster MIMO Networks with Wireless Power Transfer
by Yuan Ren, Xuewei Zhang and Meruyert Makhanbet
Information 2021, 12(8), 335; https://doi.org/10.3390/info12080335 - 21 Aug 2021
Cited by 2 | Viewed by 1989
Abstract
In this work, we jointly investigate the issues of node scheduling and transceiver design in a sensor network with multiple clusters, which is endowed with simultaneous wireless information and power transfer. In each cluster of the observed network, S out of N nodes are picked, each of which is capable of performing information transmission (IT) via uplink communications. The remaining idle nodes can harvest energy from radio-frequency signals in their ambient wireless environments. Aiming to boost the intra-cluster performance, we advocate an interference alignment enabled opportunistic communication (IAOC) scheme, which can yield better tradeoffs between IT and wireless power transfer (WPT). With the aid of the IAOC scheme, the signal projected onto the direction of the receive combining vector is adopted as an accurate measurement of effective signal strength, from which a high-efficiency scheduling metric for each node is obtained. Additionally, an algorithm based on alternating optimization and dedicated to transceiver design is put forward, which improves the achievable sum rate as well as the total harvested power. Our simulation results verify the effectiveness of the designed IAOC scheme in terms of improving the performance of IT and WPT in multi-cluster scenarios. Full article
(This article belongs to the Section Information and Communications Technology)
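The scheduling metric described, the signal projected onto the receive combining vector, reduces per node to |u^H h_i|. A toy sketch with synthetic Rayleigh channels and an arbitrary (not optimized) combining vector, just to show the node-splitting logic:

```python
import numpy as np

rng = np.random.default_rng(0)

N, S, M = 8, 3, 4                       # nodes per cluster, scheduled nodes, antennas
# synthetic Rayleigh-fading uplink channels and a unit-norm combining vector
h = (rng.normal(size=(N, M)) + 1j * rng.normal(size=(N, M))) / np.sqrt(2)
u = rng.normal(size=M) + 1j * rng.normal(size=M)
u /= np.linalg.norm(u)

# effective signal strength: magnitude of each channel projected onto u
metric = np.abs(h @ u.conj())           # |u^H h_i| for node i
scheduled = np.argsort(-metric)[:S]     # strongest S nodes transmit (IT)
harvesting = np.setdiff1d(np.arange(N), scheduled)  # the rest harvest energy
```

In the actual scheme, the combining vectors themselves would be designed jointly (e.g., by the alternating-optimization transceiver algorithm) rather than fixed at random.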

16 pages, 2036 KiB  
Article
Goal-Driven Visual Question Generation from Radiology Images
by Mourad Sarrouti, Asma Ben Abacha and Dina Demner-Fushman
Information 2021, 12(8), 334; https://doi.org/10.3390/info12080334 - 20 Aug 2021
Cited by 8 | Viewed by 3156
Abstract
Visual Question Generation (VQG) from images is a rising research topic in both natural language processing and computer vision. Although there have been some recent efforts towards generating questions from images in the open domain, the VQG task in the medical domain has not been well studied so far due to the lack of labeled data. In this paper, we introduce a goal-driven VQG approach for radiology images called VQGRaD that generates questions targeting specific image aspects such as modality and abnormality. In particular, we study generating natural language questions based on the visual content of the image and on additional information such as the image caption and the question category. VQGRaD encodes the dense vectors of different inputs into two latent spaces, which allows generating, for a specific question category, relevant questions about the images, with or without their captions. We also explore the impact of domain knowledge incorporation (e.g., medical entities and semantic types) and data augmentation techniques on visual question generation in the medical domain. Experiments performed on the VQA-RAD dataset of clinical visual questions showed that VQGRaD achieves a 61.86% BLEU score and outperforms strong baselines. We also performed a blinded human evaluation of the grammaticality, fluency, and relevance of the generated questions. The human evaluation demonstrated the better quality of VQGRaD outputs and showed that incorporating medical entities improves the quality of the generated questions. Using the test data and evaluation process of the ImageCLEF 2020 VQA-Med challenge, we found that relying on the proposed data augmentation technique to generate new training samples by applying different kinds of transformations can mitigate the lack of data, avoid overfitting, and bring a substantial improvement in medical VQG. Full article
(This article belongs to the Special Issue Neural Natural Language Generation)

23 pages, 2306 KiB  
Article
Geometric Regularization of Local Activations for Knowledge Transfer in Convolutional Neural Networks
by Ilias Theodorakopoulos, Foteini Fotopoulou and George Economou
Information 2021, 12(8), 333; https://doi.org/10.3390/info12080333 - 19 Aug 2021
Cited by 1 | Viewed by 1960
Abstract
In this work, we propose a mechanism for knowledge transfer between Convolutional Neural Networks via the geometric regularization of local features produced by the activations of convolutional layers. We formulate appropriate loss functions, driving a “student” model to adapt such that its local features exhibit similar geometrical characteristics to those of an “instructor” model at corresponding layers. The investigated functions, inspired by manifold-to-manifold distance measures, are designed to compare the neighboring information inside the feature space of the involved activations without any restrictions on the features’ dimensionality, thus enabling knowledge transfer between different architectures. Experimental evidence demonstrates that the proposed technique is effective in different settings, including knowledge transfer to smaller models, transfer between different deep architectures, and harnessing knowledge from external data, producing models with increased accuracy compared to typical training. Furthermore, results indicate that the presented method can work synergistically with methods such as knowledge distillation, further increasing the accuracy of the trained models. Finally, experiments on training with limited data show that a combined regularization scheme can achieve, with 50% of the data, the same generalization as non-regularized training in the CIFAR-10 classification task. Full article
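A distance-geometry loss of the kind described, comparing the neighborhood structure of two activation sets without requiring equal feature dimensionality, might look like the following sketch. This is an assumption-laden stand-in, not the paper's actual loss functions.

```python
import numpy as np

def geometry_loss(F_student, F_teacher):
    """Compare the pairwise-distance geometry of two activation sets.

    The two feature sets may have different dimensionality: only the
    n x n distance structure over the same n samples is compared.
    """
    def norm_dists(F):
        D = np.linalg.norm(F[:, None, :] - F[None, :, :], axis=-1)
        return D / (D.max() + 1e-12)    # normalize for scale invariance
    Ds, Dt = norm_dists(F_student), norm_dists(F_teacher)
    return np.mean((Ds - Dt) ** 2)

rng = np.random.default_rng(0)
teacher = rng.normal(size=(10, 64))               # "instructor" activations
Q, _ = np.linalg.qr(rng.normal(size=(64, 64)))    # random rotation
student_iso = 3.0 * (teacher @ Q)                 # same geometry, rescaled
student_rand = rng.normal(size=(10, 16))          # unrelated, smaller features

loss_iso = geometry_loss(student_iso, teacher)    # near zero: geometry matches
loss_rand = geometry_loss(student_rand, teacher)  # larger: geometry differs
```

Because only the n x n distance matrices are compared, the student can have any feature width, which is what enables transfer between different architectures.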

18 pages, 4760 KiB  
Article
Studying and Clustering Cities Based on Their Non-Emergency Service Requests
by Mahdi Hashemi
Information 2021, 12(8), 332; https://doi.org/10.3390/info12080332 - 19 Aug 2021
Cited by 1 | Viewed by 2023
Abstract
This study offers a new perspective in analyzing 311 service requests (SRs) across the country by representing cities based on the types of their SRs. This not only uncovers temporal patterns of SRs in each city over the years but also detects cities with the most or least similarity to other cities based on their SR types. The first challenge is to gather 311 SRs for different cities and standardize their types since they differ in various cities. Implementing our analyses on close to 42 million SR records in 20 cities from 2006 to 2019 is the second challenge. Representing clusters of cities and outliers effectively, and providing justifications for them, is the last challenge. Our attempt resulted in 79 standardized SR types. We applied principal component analysis to depict cities on a two-dimensional canvas based on their standardized SR types. Among our main findings are the following: many cities are observing a fall in requests regarding the condition of roads and sidewalks but a rise in requests concerning transportation and traffic; requests regarding garbage, cleaning, rodents, and complaints have also been rising in some cities; new types of requests have emerged and soared in recent years, such as requests for information and regarding shared mobility devices; requests about parking meters, information, sidewalks, curbs, graffiti, and missed garbage pickup have the highest variance in their rates across different cities, i.e., they have a high rate in some cities and a low rate in others; the most consistent outliers, in terms of SR types, are Washington DC, Baltimore, Las Vegas, Philadelphia, Chicago, and Baton Rouge. Full article
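The two-dimensional canvas of cities comes from principal component analysis of per-city SR-type rate vectors. A minimal sketch with random stand-in data (the real study used 20 cities and 79 standardized types; nothing below is the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# stand-in data: rows are cities, columns are rates of the standardized SR types
n_cities, n_types = 5, 79
rates = rng.random((n_cities, n_types))

# PCA via SVD of the mean-centered matrix; keep two axes for the 2-D canvas
Xc = rates - rates.mean(axis=0)
U, sv, Vt = np.linalg.svd(Xc, full_matrices=False)
coords = Xc @ Vt[:2].T                  # each city becomes a 2-D point
explained = (sv[:2] ** 2).sum() / (sv ** 2).sum()  # variance on the canvas
```

Cities that land far from the main cloud of points on this canvas are the candidate outliers; SR types with heavy loadings in Vt explain why.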

17 pages, 572 KiB  
Article
A Survey on Sentiment Analysis and Opinion Mining in Greek Social Media
by Georgios Alexandridis, Iraklis Varlamis, Konstantinos Korovesis, George Caridakis and Panagiotis Tsantilas
Information 2021, 12(8), 331; https://doi.org/10.3390/info12080331 - 18 Aug 2021
Cited by 29 | Viewed by 6983
Abstract
As the amount of content that is created on social media is constantly increasing, more and more opinions and sentiments are expressed by people in various subjects. In this respect, sentiment analysis and opinion mining techniques can be valuable for the automatic analysis of huge textual corpora (comments, reviews, tweets, etc.). Despite the advances in text mining algorithms, deep learning techniques, and text representation models, the results in such tasks are very good for only a few high-density languages (e.g., English) that possess large training corpora and rich linguistic resources; nevertheless, there is still room for improvement for the other lower-density languages as well. In this direction, the current work employs various language models for representing social media texts and text classifiers in the Greek language, for detecting the polarity of opinions expressed on social media. The experimental results on a related dataset collected by the authors of the current work are promising, since various classifiers based on the language models (naive Bayes, random forests, support vector machines, logistic regression, deep feed-forward neural networks) outperform those based on word or sentence embeddings (word2vec, GloVe), achieving a classification accuracy of more than 80%. Additionally, a new language model for Greek social media has also been trained on the aforementioned dataset, proving that language models based on domain-specific corpora can improve the performance of generic language models by a margin of 2%. Finally, the resulting models are made freely available to the research community. Full article
(This article belongs to the Special Issue Sentiment Analysis and Affective Computing)

13 pages, 3619 KiB  
Article
A Node Localization Algorithm for Wireless Sensor Networks Based on Virtual Partition and Distance Correction
by Yinghui Meng, Qianying Zhi, Minghao Dong and Weiwei Zhang
Information 2021, 12(8), 330; https://doi.org/10.3390/info12080330 - 16 Aug 2021
Cited by 8 | Viewed by 2445
Abstract
The coordinates of nodes are very important in the application of wireless sensor networks (WSN). The range-free localization algorithm is currently the best method to obtain the coordinates of sensor nodes. Range-free localization algorithms can be divided into two stages: distance estimation and coordinate calculation. To reduce the error in the distance estimation stage, a node localization algorithm for WSN based on virtual partition and distance correction (VP-DC) is proposed in this paper. In the distance estimation stage, firstly, the distance of each hop on the shortest communication path between the unknown node and the beacon node is calculated using the virtual partition algorithm; then, the length of the shortest communication path is obtained by summing the distances of the hops; finally, the unknown distance between nodes is obtained according to the optimal path search algorithm and the distance correction formula. This paper innovatively proposes the virtual partition algorithm and the optimal path search algorithm, which effectively avoid the distance estimation error caused by hop number and hop distance and improve the localization accuracy of unknown nodes. Full article
(This article belongs to the Special Issue 5G Networks and Wireless Communication Systems)
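The hop-count-plus-correction idea underlying such range-free algorithms can be illustrated with a classic DV-Hop-style sketch on a deterministic toy network; the paper's own virtual partition and optimal path search steps are not reproduced here.

```python
import numpy as np
from collections import deque

# toy network: a 6 x 6 grid of nodes, 0.2 units apart, where the
# communication range connects each node to its 8 grid neighbors
pos = np.array([[0.2 * c, 0.2 * r] for r in range(6) for c in range(6)])
n, comm_range = len(pos), 0.3
beacons = [0, 5, 30, 35]                # the four corners know their positions

adj = [[j for j in range(n)
        if j != i and np.linalg.norm(pos[i] - pos[j]) < comm_range]
       for i in range(n)]

def hops_from(src):
    """Minimum hop count from src to every node (breadth-first search)."""
    h = np.full(n, np.inf)
    h[src] = 0
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if np.isinf(h[v]):
                h[v] = h[u] + 1
                queue.append(v)
    return h

hop = np.array([hops_from(b) for b in beacons])

# per-hop distance for each beacon: known beacon-to-beacon distances
# divided by the corresponding hop counts (the correction step)
hop_size = [sum(np.linalg.norm(pos[b] - pos[c]) for c in beacons if c != b)
            / sum(hop[bi][c] for c in beacons if c != b)
            for bi, b in enumerate(beacons)]

unknown = 10                            # a node without a known position
est = np.array([hop_size[bi] * hop[bi][unknown] for bi in range(len(beacons))])
true_d = np.array([np.linalg.norm(pos[b] - pos[unknown]) for b in beacons])
```

The residual error of this naive estimate (hop count times average hop length) is exactly the kind of error that per-hop distance refinements such as VP-DC aim to shrink.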

24 pages, 1303 KiB  
Article
Semantic Systematicity in Connectionist Language Production
by Jesús Calvillo, Harm Brouwer and Matthew W. Crocker
Information 2021, 12(8), 329; https://doi.org/10.3390/info12080329 - 16 Aug 2021
Viewed by 2933
Abstract
Decades of studies trying to define the extent to which artificial neural networks can exhibit systematicity suggest that systematicity can be achieved by connectionist models but not by default. Here we present a novel connectionist model of sentence production that employs rich situation model representations originally proposed for modeling systematicity in comprehension. The high performance of our model demonstrates that such representations are also well suited to model language production. Furthermore, the model can produce multiple novel sentences for previously unseen situations, including in a different voice (active vs. passive) and with words in new syntactic roles, thus demonstrating semantic and syntactic generalization and arguably systematicity. Our results provide yet further evidence that such connectionist approaches can achieve systematicity, in production as well as comprehension. We propose our positive results to be a consequence of the regularities of the microworld from which the semantic representations are derived, which provides a sufficient structure from which the neural network can interpret novel inputs. Full article
(This article belongs to the Special Issue Neural Natural Language Generation)

19 pages, 1187 KiB  
Article
Detecting Cyber Attacks in Smart Grids Using Semi-Supervised Anomaly Detection and Deep Representation Learning
by Ruobin Qi, Craig Rasband, Jun Zheng and Raul Longoria
Information 2021, 12(8), 328; https://doi.org/10.3390/info12080328 - 15 Aug 2021
Cited by 30 | Viewed by 5913
Abstract
Smart grids integrate advanced information and communication technologies (ICTs) into traditional power grids for more efficient and resilient power delivery and management, but also introduce new security vulnerabilities that can be exploited by adversaries to launch cyber attacks, causing severe consequences such as massive blackouts and infrastructure damage. Existing machine learning-based methods for detecting cyber attacks in smart grids are mostly based on supervised learning, which requires instances of both normal and attack events for training. In addition, supervised learning requires that the training dataset include representative instances of various types of attack events to train a good model, which is sometimes hard, if not impossible. This paper presents a new method for detecting cyber attacks in smart grids using PMU data, which is based on semi-supervised anomaly detection and deep representation learning. Semi-supervised anomaly detection only employs the instances of normal events to train detection models, making it suitable for finding unknown attack events. A number of popular semi-supervised anomaly detection algorithms were investigated in our study using publicly available power system cyber attack datasets to identify the best-performing ones. The performance comparison with popular supervised algorithms demonstrates that semi-supervised algorithms are more capable of finding attack events than supervised algorithms. Our results also show that the performance of semi-supervised anomaly detection algorithms can be further improved by augmenting with deep representation learning. Full article
(This article belongs to the Special Issue Information Technology: New Generations (ITNG 2020 & 2021))
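Semi-supervised anomaly detection in this sense fits a model of normal behavior only and flags deviations from it. A minimal Mahalanobis-distance sketch with synthetic stand-in data (not the paper's PMU datasets, deep representations, or specific algorithms):

```python
import numpy as np

rng = np.random.default_rng(0)

# train only on normal events, e.g. measurements under normal grid operation
normal_train = rng.normal(0.0, 1.0, size=(500, 4))
mu = normal_train.mean(axis=0)
cov = np.cov(normal_train, rowvar=False) + 1e-6 * np.eye(4)
cov_inv = np.linalg.inv(cov)

def score(x):
    """Mahalanobis distance to the normal-behavior model (higher = more anomalous)."""
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

# detection threshold chosen from the normal data alone (99th percentile)
thr = np.percentile([score(x) for x in normal_train], 99)

# attack events were never seen in training, yet still stand out
normal_test = rng.normal(0.0, 1.0, size=(200, 4))
attack_test = rng.normal(4.0, 1.0, size=(200, 4))
fp = np.mean([score(x) > thr for x in normal_test])   # false alarm rate
tp = np.mean([score(x) > thr for x in attack_test])   # detection rate
```

Because the threshold never depends on attack samples, the detector generalizes to attack types absent from training, which is the property the abstract emphasizes.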

21 pages, 881 KiB  
Article
Towards a Human Capabilities Conscious Enterprise Architecture
by Ermias Abebe Kassa and Jan C. Mentz
Information 2021, 12(8), 327; https://doi.org/10.3390/info12080327 - 13 Aug 2021
Cited by 5 | Viewed by 3383
Abstract
This conceptual paper argues that enterprise architecture (EA) should move towards a conscious, human-centered conception of the enterprise. Employing the conceptual methodological approach of theory synthesis and drawing on the extant literature in enterprise architecture as well as pertinent social and organizational theories, we suggest foundational propositions that could holistically serve as a theoretical lens for (re)viewing the foundations of EA within a progressive conscious enterprise agenda. The novel contribution of the paper is the introduction of the human capabilities approach (HCA) as a method theory, supplementing systems and stakeholder theories, for the design and evaluation of enterprise architecture in the modern enterprise. The paper concludes by showing the implications of the propositions for practitioners and researchers. Full article
(This article belongs to the Special Issue Enterprise Architecture in the Digital Era)

27 pages, 5627 KiB  
Article
Data Analysis of the Risks of Type 2 Diabetes Mellitus Complications before Death Using a Data-Driven Modelling Approach: Methodologies and Challenges in Prolonged Diseases
by Ming-Yen Lin, Jia-Sin Liu, Tzu-Yang Huang, Ping-Hsun Wu, Yi-Wen Chiu, Yihuang Kang, Chih-Cheng Hsu, Shang-Jyh Hwang and Hsing Luh
Information 2021, 12(8), 326; https://doi.org/10.3390/info12080326 - 12 Aug 2021
Cited by 5 | Viewed by 3687
Abstract
(1) Background: A disease prediction model derived from real-world data is an important tool for managing type 2 diabetes mellitus (T2D). However, an appropriate prediction model for the Asian T2D population has not yet been developed. Hence, this study describes the construction of the T2D Holistic Care model, which estimates the probability of diabetes-related complications and their time to occurrence from a population-based database. (2) Methods: The model was based on the database of a Taiwan pay-for-performance reimbursement scheme for T2D between November 2002 and July 2017. A nonhomogeneous Markov model was applied to simulate multistate (7 main complications and death) transition probabilities after accounting for sequential and repeated events. (3) Results: The Markov model was constructed from clinical care information on 163,452 patients with T2D, with a mean follow-up time of 5.5 years. After simulating a cohort of 100,000 hypothetical patients over a 10-year horizon based on selected baseline patient characteristics, predicted complication and mortality rates were validated against the original cohort with small absolute errors (0.3–3.2%). Predictive performance was further confirmed to be better than that of the UKPDS Outcomes model and to remain adequate when the model was applied to other Asian populations. (4) Contribution: The study provides well-elucidated evidence for applying real-world data to the estimation of the occurrence and timing of major diabetes-related complications over a patient’s lifetime. Further applications in health decision science are encouraged. Full article
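A nonhomogeneous Markov multistate model of this kind advances a cohort through transition matrices that change with follow-up time. The deliberately simplified sketch below uses three states and invented transition probabilities, far smaller than the paper's seven complications:

```python
import numpy as np

states = ["T2D", "complication", "death"]   # drastically simplified state space

def transition_matrix(year):
    """Nonhomogeneous transitions: hypothetical risks that rise with follow-up year."""
    p_comp = 0.05 + 0.01 * year             # T2D -> complication
    p_die_t2d = 0.01 + 0.002 * year         # T2D -> death
    p_die_comp = 0.05 + 0.005 * year        # complication -> death
    return np.array([
        [1 - p_comp - p_die_t2d, p_comp, p_die_t2d],
        [0.0, 1 - p_die_comp, p_die_comp],
        [0.0, 0.0, 1.0],                    # death is absorbing
    ])

# advance a cohort distribution through year-specific matrices (10-year horizon)
dist = np.array([1.0, 0.0, 0.0])            # everyone starts in uncomplicated T2D
for year in range(10):
    dist = dist @ transition_matrix(year)
```

The "nonhomogeneous" part is simply that `transition_matrix(year)` varies with time; a homogeneous chain would reuse one fixed matrix for every year.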

14 pages, 662 KiB  
Article
Agent-Based Simulation Framework for Epidemic Forecasting during Hajj Seasons in Saudi Arabia
by Sultanah Mohammed Alshammari, Mohammed Hassan Ba-Aoum, Nofe Ateq Alganmi and Arwa AbdulAziz Allinjawi
Information 2021, 12(8), 325; https://doi.org/10.3390/info12080325 - 12 Aug 2021
Cited by 2 | Viewed by 2847
Abstract
The religious pilgrimage of Hajj is one of the largest annual gatherings in the world. Every year approximately three million pilgrims travel from all over the world to perform Hajj in Mecca, Saudi Arabia. The high population density of pilgrims in confined settings throughout the Hajj rituals can facilitate infectious disease transmission among the pilgrims and their contacts. Infected pilgrims may enter Mecca without being detected and potentially transmit the disease to other pilgrims. Upon returning home, infected international pilgrims may introduce the disease into their home countries, causing a further spread of the disease. Computational modeling and simulation of social mixing and disease transmission between pilgrims can enhance the prevention of potential epidemics. Computational epidemic models can help public health authorities predict the risk of disease outbreaks and implement necessary intervention measures before or during the Hajj season. In this study, we propose a conceptual framework that uses agent-based modeling to simulate disease transmission during the Hajj season, from the arrival of the international pilgrims to their departure. The epidemic forecasting system simulates the phases and rituals of Hajj in their actual sequence to capture and assess the impact of each stage of the Hajj on the disease dynamics. The proposed framework can also be used to evaluate the effectiveness of the different public health interventions that can be implemented during the Hajj, including size restrictions and screening at entry points. Full article
(This article belongs to the Special Issue Discrete-Event Simulation Modeling)
Show Figures

Figure 1
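As an illustration of the agent-based modeling approach the abstract describes, the core SIR (susceptible, infectious, recovered) mixing loop can be sketched in a few lines of Python. All parameters here (crowd size, contact rate, transmission probability) are invented for illustration and are not taken from the paper:

```python
import random

def simulate_gathering_outbreak(n_agents=1000, n_seed_infections=5,
                                contacts_per_day=8, p_transmit=0.03,
                                infectious_days=7, n_days=30, seed=42):
    """Toy agent-based SIR model of random mixing at a mass gathering.

    State per agent: 0 = susceptible, k > 0 = infectious for k more days,
    -1 = recovered. Returns the daily count of infectious agents.
    """
    random.seed(seed)
    agents = [0] * n_agents
    for i in random.sample(range(n_agents), n_seed_infections):
        agents[i] = infectious_days
    daily_infectious = []
    for _ in range(n_days):
        infectious = [i for i, s in enumerate(agents) if s > 0]
        for i in infectious:
            # each infectious agent meets a few random contacts per day
            for j in random.sample(range(n_agents), contacts_per_day):
                if agents[j] == 0 and random.random() < p_transmit:
                    agents[j] = infectious_days
        for i in infectious:  # progress yesterday's cases toward recovery
            agents[i] -= 1
            if agents[i] == 0:
                agents[i] = -1
        daily_infectious.append(sum(1 for s in agents if s > 0))
    return daily_infectious

history = simulate_gathering_outbreak()
```

A full model along the lines the paper proposes would additionally encode the sequence of Hajj rituals and interventions such as entry-point screening as stages that modify the contact structure.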

13 pages, 767 KiB  
Article
An Empirical Study on the Impact of E-Commerce Live Features on Consumers’ Purchase Intention: From the Perspective of Flow Experience and Social Presence
by Haijian Wang, Jianyi Ding, Umair Akram, Xialei Yue and Yitao Chen
Information 2021, 12(8), 324; https://doi.org/10.3390/info12080324 - 12 Aug 2021
Cited by 54 | Viewed by 16033
Abstract
The COVID-19 pandemic and the continuous advancement of live e-commerce technology have driven the swift growth of live e-commerce in China. Based on the S–O–R theoretical framework, this study investigates, through questionnaires, the impact of live broadcast characteristics on consumers’ social presence and flow experience, along with their impact on consumption intention in live e-commerce scenarios. Structural equation modeling was used for data processing, and involvement was introduced as a moderating variable. Host charm, interaction, and trust in the host exerted a significant positive impact on social presence. In addition, host charm and trust in the host significantly affected flow experience, as did social presence. Both social presence and flow experience significantly affected consumption intention, while involvement affected all paths to some extent. Overall, this study illustrates the significance of the host in live e-commerce and suggests that consumers with low involvement should be the focus of attention in live e-commerce. Full article
Show Figures

Figure 1

12 pages, 2056 KiB  
Article
Personality Traits Affecting Opinion Leadership Propensity in Social Media: An Empirical Examination in Saudi Arabia
by Suad Dukhaykh
Information 2021, 12(8), 323; https://doi.org/10.3390/info12080323 - 11 Aug 2021
Cited by 1 | Viewed by 2652
Abstract
Few studies have examined the personality traits that may predict opinion leadership behavior in social media. This study aims to examine the personality traits of individuals who use social media platforms and engage in social networking in Saudi Arabia. It investigates the extent to which innovativeness, competence in interpersonal relationships, and extraversion affect opinion leadership propensity in social media. The data were collected via an online structured questionnaire completed by a sample of 321 social media users. The results show that people with a high level of innovativeness and interpersonal relationship competency are more likely to be opinion leaders on social media. However, the personality trait of extraversion does not affect the propensity to be an opinion leader. The results also indicate that the effect of innovativeness on opinion leadership propensity is lower for Generation Y than for Generation X. Full article
Show Figures

Figure 1

26 pages, 11164 KiB  
Article
Indigenous Food Recognition Model Based on Various Convolutional Neural Network Architectures for Gastronomic Tourism Business Analytics
by Mohd Norhisham Razali, Ervin Gubin Moung, Farashazillah Yahya, Chong Joon Hou, Rozita Hanapi, Raihani Mohamed and Ibrahim Abakr Targio Hashem
Information 2021, 12(8), 322; https://doi.org/10.3390/info12080322 - 11 Aug 2021
Cited by 23 | Viewed by 3680
Abstract
In gastronomic tourism, food is viewed as the central tourist attraction. Specifically, indigenous food is known to represent the expression of local culture and identity. To promote gastronomic tourism, it is critical to have a model for the food business analytics system. This research empirically evaluates recent transfer learning models as deep-learning feature extractors for a food recognition model. The VIREO-Food172 Dataset and a newly established Sabah Food Dataset are used to evaluate the food recognition model. Afterwards, the model is implemented in a web application system as an attempt to automate food recognition. In this model, a fully connected layer with 11 and 10 Softmax neurons is used as the classifier for food categories in the two datasets. Six pre-trained Convolutional Neural Network (CNN) models are evaluated as feature extractors to extract essential features from food images. The evaluation found that the EfficientNet-based feature extractor with a CNN classifier achieved the highest classification accuracy: 94.01% on the Sabah Food Dataset and 86.57% on the VIREO-Food172 Dataset. EFFNet as a feature representation outperformed Xception in overall performance. However, Xception can still be considered, despite a slight drop in accuracy, if computational speed and memory usage matter more than raw performance. Full article
Show Figures

Figure 1
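To make the classifier setup in the abstract concrete, the sketch below trains a single fully connected Softmax layer on frozen feature vectors, standing in for the features a pre-trained CNN backbone such as EfficientNet would produce. The dimensions and hyperparameters are illustrative assumptions, not values from the paper:

```python
import numpy as np

def train_softmax_head(features, labels, n_classes, lr=0.1, epochs=200):
    """Fit a single fully connected Softmax layer on frozen feature vectors
    (the role the pre-trained CNN models play as feature extractors)."""
    n, d = features.shape
    rng = np.random.default_rng(0)
    W = rng.normal(0.0, 0.01, (d, n_classes))
    b = np.zeros(n_classes)
    onehot = np.eye(n_classes)[labels]
    for _ in range(epochs):
        logits = features @ W + b
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        grad = (probs - onehot) / n                  # cross-entropy gradient
        W -= lr * features.T @ grad
        b -= lr * grad.sum(axis=0)
    return W, b

def predict(features, W, b):
    """Assign each feature vector to its highest-scoring food category."""
    return np.argmax(features @ W + b, axis=1)
```

Swapping the backbone (EfficientNet vs. Xception) only changes the feature vectors fed in; the Softmax head stays the same, which is the trade-off between accuracy and compute the abstract discusses.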

25 pages, 3518 KiB  
Article
Semantically-Aware Retrieval of Oceanographic Phenomena Annotated on Satellite Images
by Vasilis Kopsachilis, Lucia Siciliani, Marco Polignano, Pol Kolokoussis, Michail Vaitis, Marco de Gemmis and Konstantinos Topouzelis
Information 2021, 12(8), 321; https://doi.org/10.3390/info12080321 - 11 Aug 2021
Cited by 2 | Viewed by 2452
Abstract
Scientists in the marine domain process satellite images in order to extract information that can be used for monitoring, understanding, and forecasting of marine phenomena, such as turbidity, algal blooms and oil spills. The growing need for effective retrieval of related information has motivated the adoption of semantically aware strategies on satellite images with different spatio-temporal and spectral characteristics. A big issue of these approaches is the lack of coincidence between the information that can be extracted from the visual data and the interpretation that the same data have for a user in a given situation. In this work, we bridge this semantic gap by connecting the quantitative elements of the Earth Observation satellite images with the qualitative information, modelling this knowledge in a marine phenomena ontology and developing a question answering mechanism based on natural language that enables the retrieval of the most appropriate data for each user’s needs. The main objective of the presented methodology is to realize the content-based search of Earth Observation images related to the marine application domain on an application-specific basis that can answer queries such as “Find oil spills that occurred this year in the Adriatic Sea”. Full article
(This article belongs to the Special Issue Information Retrieval, Recommender Systems and Adaptive Systems)
Show Figures

Figure 1

11 pages, 7665 KiB  
Article
Adaptive Combined Channel-Network Coding for Cooperative Relay Aided Cognitive Radio Networks
by Mohamed S. AbuZeid, Yasmine A. Fahmy and Magdy S. El-Soudani
Information 2021, 12(8), 320; https://doi.org/10.3390/info12080320 - 9 Aug 2021
Cited by 4 | Viewed by 2246
Abstract
Cognitive radio (CR) is one of the emerging technologies for 4G/5G applications. Cooperative relay communications and network coding are techniques that have helped enhance CR applications. This paper considers a primary broadcasting system for multimedia video streaming applications that broadcasts data to the primary users and to an aiding cooperative relay CR secondary system. The cooperative overlay secondary system can use many error control coding techniques for point-to-point data retransmissions, such as channel coding, network coding, and combined coding techniques, to enhance system performance under variable channel conditions. This work proposes a novel adaptive combined channel network coding (AC2NC) technique for data retransmissions. The new AC2NC first analyses the channel feedback information and then selects the best retransmission coding technique based on the targeted bandwidth or transmission time optimization, rather than using a single static channel or network coding technique under dynamic channel conditions. The proposed AC2NC improves the system throughput, decreases the retransmission time, and avails more spectrum access opportunities for the secondary system’s own data transmissions. The relative bandwidth and time savings of AC2NC for CR users can exceed 90% under certain channel conditions compared with some static coding techniques. Full article
(This article belongs to the Section Information and Communications Technology)
Show Figures

Figure 1
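The bandwidth saving that network coding contributes to combined schemes like AC2NC can be seen in a minimal XOR example: one coded retransmission can repair two different packet losses at two receivers. This is a textbook sketch of the general idea, not the paper's AC2NC algorithm:

```python
def xor_encode(pkt_a, pkt_b):
    """Combine two equal-length packets into a single coded retransmission."""
    return bytes(x ^ y for x, y in zip(pkt_a, pkt_b))

def xor_decode(coded, known):
    """A receiver that already holds one of the packets XORs it out of the
    coded packet to recover the other one."""
    return bytes(c ^ k for c, k in zip(coded, known))

# Receiver 1 lost pkt_b but holds pkt_a; receiver 2 lost pkt_a but holds
# pkt_b. A single broadcast of xor_encode(pkt_a, pkt_b) repairs both
# losses, halving the retransmission cost versus resending each packet.
```

An adaptive scheme in the spirit of AC2NC would choose between this kind of coded retransmission and plain channel-coded retransmission per round, based on the channel feedback.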

16 pages, 3416 KiB  
Article
Improving Ruby on Rails-Based Web Application Performance
by Denys Klochkov and Jan Mulawka
Information 2021, 12(8), 319; https://doi.org/10.3390/info12080319 - 9 Aug 2021
Cited by 4 | Viewed by 4529
Abstract
The evolution of web development and web applications has resulted in the creation of numerous tools and frameworks that facilitate the development process. Even though those frameworks make web development faster and more efficient, there are certain downsides to using them. A decrease in application performance when using an “off the shelf” framework can be a crucial disadvantage, especially given the vital role web application response time plays in user experience. This contribution focuses on a particular framework: Ruby on Rails. Once the most popular framework, it has now lost its leading position, partially due to slow performance metrics and response times, especially in larger applications. Improving and expanding upon previous work in this field, an attempt is made to improve the response time of a specially developed benchmark application. This is achieved by performing optimizations that can be roughly divided into two groups. The first group concerns frontend improvements, which include adopting client-side rendering, JavaScript Document Object Model (DOM) manipulation, and asynchronous requests. The second group concerns backend improvements, which include implementing intelligent, granular caching, disabling redundant modules, and profiling and optimizing database requests to reduce database access inefficiencies. These improvements reduced page loading times by up to 74%, with perceived application performance improving beyond this mark due to the adoption of a client-side rendering strategy. Using different metrics of application performance, each improvement step is evaluated with regard to its effect on different aspects of overall performance. In conclusion, this work presents a way to significantly decrease the response time of a particular Ruby on Rails application while providing a better user experience. Even though most of this process is specific to Rails, similar steps can be taken to improve applications built with other, similar frameworks. This work also lays the groundwork for a tool that could assist developers in improving their applications. Full article
(This article belongs to the Section Artificial Intelligence)
Show Figures

Figure 1

13 pages, 2144 KiB  
Article
PocketCTF: A Fully Featured Approach for Hosting Portable Attack and Defense Cybersecurity Exercises
by Stylianos Karagiannis, Christoforos Ntantogian, Emmanouil Magkos, Luís L. Ribeiro and Luís Campos
Information 2021, 12(8), 318; https://doi.org/10.3390/info12080318 - 8 Aug 2021
Cited by 7 | Viewed by 5197
Abstract
Capture the flag (CTF) challenges are broadly used for engaging trainees in the technical aspects of cybersecurity, maintaining hands-on lab exercises, and integrating gamification elements. However, deploying the appropriate digital environment for conducting cybersecurity exercises can be challenging and typically requires significant effort and system resources from educators. In this paper, we present PocketCTF, an extensible and fully independent CTF platform, open to educators to run realistic virtual labs to host cybersecurity exercises in their classrooms. PocketCTF is based on containerization technologies to minimize the deployment effort and use fewer system resources. A proof-of-concept implementation demonstrates the feasibility of deploying CTF challenges that allow trainees to engage not only in offensive security but also in defensive tasks that must be conducted during cybersecurity incidents. With PocketCTF, educators can deploy hands-on labs while spending less time on deployment and without necessarily having the advanced technical background needed to deploy complex labs and scenarios. Full article
(This article belongs to the Special Issue Detecting Attack and Incident Zone System)
Show Figures

Figure 1

29 pages, 8026 KiB  
Article
Design of Generalized Search Interfaces for Health Informatics
by Jonathan Demelo and Kamran Sedig
Information 2021, 12(8), 317; https://doi.org/10.3390/info12080317 - 6 Aug 2021
Viewed by 2501
Abstract
In this paper, we investigate ontology-supported interfaces for health informatics search tasks involving large document sets. We begin by providing background on health informatics, machine learning, and ontologies. We review leading research on health informatics search tasks to help formulate high-level design criteria. We use these criteria to examine traditional design strategies for search interfaces. To demonstrate the utility of the criteria, we apply them to the design of ONTology-supported Search Interface (ONTSI), a demonstrative, prototype system. ONTSI allows users to plug-and-play document sets and expert-defined domain ontologies through a generalized search interface. ONTSI’s goal is to help align users’ common vocabulary with the domain-specific vocabulary of the plug-and-play document set. We describe the functioning and utility of ONTSI in health informatics search tasks through a workflow and a scenario. We conclude with a summary of ongoing evaluations, limitations, and future research. Full article
(This article belongs to the Special Issue The Digital Health New Era: Where We Stand and the Challenges)
Show Figures

Figure 1

13 pages, 1068 KiB  
Article
Populating Web-Scale Knowledge Graphs Using Distantly Supervised Relation Extraction and Validation
by Sarthak Dash, Michael R. Glass, Alfio Gliozzo, Mustafa Canim and Gaetano Rossiello
Information 2021, 12(8), 316; https://doi.org/10.3390/info12080316 - 6 Aug 2021
Cited by 2 | Viewed by 2595
Abstract
In this paper, we propose a fully automated system to extend knowledge graphs using external information from web-scale corpora. The designed system leverages a deep-learning-based technology for relation extraction that can be trained by a distantly supervised approach. In addition, the system uses a deep learning approach for knowledge base completion, utilizing the global structure information of the induced KG to further refine the confidence of the newly discovered relations. The designed system requires no effort for adaptation to new languages and domains, as it does not use any hand-labeled data, NLP analytics, or inference rules. Our experiments, performed on a popular academic benchmark, demonstrate that the suggested system boosts the performance of relation extraction by a wide margin, reporting error reductions of 50%, which correspond to relative improvements of up to 100%. Furthermore, a web-scale experiment conducted to extend DBPedia with knowledge from Common Crawl shows that our system is not only scalable but also requires no adaptation cost, while yielding a substantial accuracy gain. Full article
(This article belongs to the Collection Knowledge Graphs for Search and Recommendation)
Show Figures

Figure 1

12 pages, 274 KiB  
Article
Use Dynamic Scheduling Algorithm to Assure the Quality of Educational Programs and Secure the Integrity of Reports in a Quality Management System
by Yasser Ali Alshehri and Najwa Mordhah
Information 2021, 12(8), 315; https://doi.org/10.3390/info12080315 - 6 Aug 2021
Cited by 1 | Viewed by 2511
Abstract
The implementation of quality processes is essential for an academic institution to meet the standards of different accreditation bodies. However, these processes are complex because they involve several steps and several entities. Manual implementation (i.e., using paperwork), which many institutions use, makes it difficult to follow up on progress and close the cycle. It becomes more challenging when more processes are in place, especially when an academic department runs more than one program: having n programs per department means that the work is replicated n times. Our proposal in this study is to use the concept of the Tomasulo algorithm to schedule all processes of an academic institution dynamically. Because of the similarities between computer tasks and workplace processes, applying this method enhances work efficiency and reduces effort. Further, the method provides a mechanism to secure the integrity of the reports of these processes. In this paper, we provide an educational institution case study to illustrate the mechanism of this method and how it can be applied in an actual workplace. The case study includes operational activities that are implemented to assure the program’s quality. Full article
Show Figures

Figure 1
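The analogy to the Tomasulo algorithm in the abstract is that tasks are dispatched as soon as their operands (prerequisite reports) are ready, rather than in strict paper-trail order. A minimal sketch of such dependency-driven dispatch, with invented task names not taken from the paper, might look like this:

```python
def dynamic_schedule(tasks):
    """Issue tasks in dependency-driven waves, loosely mirroring how
    Tomasulo's algorithm dispatches instructions once their operands are
    ready instead of in strict program order.

    tasks: dict mapping a task name to the set of task names it waits on.
    Returns the issue order; raises on cyclic dependencies.
    """
    done, order = set(), []
    while len(order) < len(tasks):
        # every task whose prerequisites have all completed is ready now
        ready = sorted(n for n, deps in tasks.items()
                       if n not in done and deps <= done)
        if not ready:
            raise ValueError("cyclic dependencies detected")
        for n in ready:
            order.append(n)
            done.add(n)
    return order
```

In this toy form, independent quality processes (e.g. for different programs of one department) proceed in the same wave instead of serializing behind one another.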

17 pages, 333 KiB  
Article
A Study of Analogical Density in Various Corpora at Various Granularity
by Rashel Fam and Yves Lepage
Information 2021, 12(8), 314; https://doi.org/10.3390/info12080314 - 5 Aug 2021
Cited by 3 | Viewed by 2426
Abstract
In this paper, we inspect the theoretical problem of counting the number of analogies between sentences contained in a text. Based on this, we measure the analogical density of the text. We focus on analogy at the sentence level, based on the level of form rather than the level of semantics. Experiments are carried out on two different corpora in six European languages known to have various levels of morphological richness. The corpora are tokenised using several tokenisation schemes: character, sub-word, and word. For the sub-word tokenisation scheme, we employ two popular sub-word models: the unigram language model and byte-pair encoding. The results show that a corpus with a higher Type-Token Ratio tends to have higher analogical density. We also observe that masking tokens based on their frequency helps to increase the analogical density. As for the tokenisation scheme, the results show that analogical density decreases from the character level to the word level. However, this does not hold when tokens are masked based on their frequencies. We find that tokenising the sentences using sub-word models and masking the least frequent tokens increases analogical density. Full article
(This article belongs to the Special Issue Novel Methods and Applications in Natural Language Processing)
Show Figures

Figure 1
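The Type-Token Ratio mentioned in the abstract is straightforward to compute; the sketch below contrasts character- and word-level tokenisation on a toy sentence. The two schemes here are simplified stand-ins, not the paper's sub-word models:

```python
def type_token_ratio(text, scheme="word"):
    """Type-Token Ratio (distinct tokens / total tokens) for a given
    tokenisation scheme; only 'char' and 'word' are sketched here."""
    if scheme == "char":
        # every non-space character is a token
        tokens = [c for c in text if not c.isspace()]
    else:
        # whitespace-separated, case-folded words
        tokens = text.lower().split()
    return len(set(tokens)) / len(tokens)
```

On a toy sentence such as "the cat sat on the mat", the word-level TTR (5/6) is higher than the character-level TTR (9/17), showing how the chosen granularity changes the statistic the paper correlates with analogical density.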

26 pages, 1153 KiB  
Article
Design of an Architecture Contributing to the Protection and Privacy of the Data Associated with the Electronic Health Record
by Edwar Andrés Pineda Rincón and Luis Gabriel Moreno-Sandoval
Information 2021, 12(8), 313; https://doi.org/10.3390/info12080313 - 2 Aug 2021
Cited by 8 | Viewed by 4408
Abstract
The Electronic Health Record (EHR) has brought numerous challenges since its inception that have prevented a unified implementation from being carried out in Colombia. Among these challenges, we find a lack of security, auditability, and interoperability. Moreover, there is no general view of a patient’s history throughout their life, since different systems store the information separately. This lack of a unified history leads to multiple risks for patients’ lives and to the leakage of private data, because each system has different mechanisms to safeguard and protect the information, and in several cases these mechanisms do not exist. Many researchers have tried to build information systems to solve this problem. However, these systems lack a formal and rigorous architectural design that analyzes and derives health needs through architectural drivers in order to construct robust systems. This article describes the process of designing a software architecture that secures the information that makes up the Electronic Health Record (EHR) in Colombia. Once we obtained the architectural drivers, we proposed Blockchain, mainly due to its immutable distributed ledger, consensus algorithms, and smart contracts that securely transport this sensitive information. With this design decision, we carried out the construction of the necessary structures and architectural documentation. We also developed a Proof of Concept (POC) using Hyperledger Fabric, in line with the literature review, to build a primary health network, in addition to a Smart Contract (Chaincode) using the Go programming language, in order to perform a performance evaluation and a security analysis demonstrating that the proposed design is reliable. The proposed design allows us to conclude that it is possible to build a secure architecture that protects patient health data privacy, facilitating the EHR’s construction in Colombia. Full article
(This article belongs to the Special Issue Blockchain-Based Digital Services)
Show Figures

Figure 1

24 pages, 120116 KiB  
Article
AthPPA: A Data Visualization Tool for Identifying Political Popularity over Twitter
by Alexandros Britzolakis, Haridimos Kondylakis and Nikolaos Papadakis
Information 2021, 12(8), 312; https://doi.org/10.3390/info12080312 - 31 Jul 2021
Cited by 2 | Viewed by 4041
Abstract
Sentiment Analysis is an actively growing field with demand in both scientific and industrial sectors. Political sentiment analysis is used when a data analyst wants to determine the opinion of different users on social media platforms regarding a politician or a political event. This paper presents Athena Political Popularity Analysis (AthPPA), a tool for identifying political popularity over Twitter. AthPPA collects tweets in real time and, for each tweet, extracts metadata such as the number of likes and retweets. It then processes the text of each tweet to calculate its overall sentiment. For the sentiment calculation, we implemented a sentiment analyzer that can handle the grammatical issues of a sentence, along with a lexicon of negative and positive words designed specifically for political sentiment analysis. An analytic engine processes the collected data and provides different visualizations that offer additional insights into the collected data. We show how we applied our framework to the three most prominent political leaders in Greece and present our findings. Full article
Show Figures

Figure 1

20 pages, 797 KiB  
Article
What Drives Authorization in Mobile Applications? A Perspective of Privacy Boundary Management
by Jie Tang, Bin Zhang and Umair Akram
Information 2021, 12(8), 311; https://doi.org/10.3390/info12080311 - 30 Jul 2021
Cited by 1 | Viewed by 3327
Abstract
Personal information has been likened to “golden data”, which companies have chased using every means possible. Via mobile apps, the incidents of compulsory authorization and excessive data collection have evoked privacy concerns and strong repercussions among app users. This manuscript proposes a privacy boundary management model, which elaborates how such users can demarcate and regulate their privacy boundaries. The survey data came from 453 users who authorized certain operations through mobile apps. The partial least squares (PLS) analysis method was used to validate the instrument and the proposed model. Results indicate that information relevance and transparency play a significant role in shaping app users’ control–risk perceptions, while government regulation is more effective than industry self-discipline in promoting the formation of privacy boundaries. Unsurprisingly, privacy risk control perceptions significantly affect users’ privacy concerns and trust beliefs, which are two vital factors that ultimately influence their willingness to authorize. The implications of conducting a thorough inquiry into app users’ willingness to authorize their privacy information are far-reaching. In relation to this, app vendors should probe into the privacy-relevant beliefs of their users and enact effective privacy practices to avert the economic and reputational damages induced by improper information collection. More significantly, a comprehensive understanding of users’ willingness to authorize their information can serve as an essential reference for relevant regulatory bodies to formulate reasonable privacy protection policies in the future. Full article
(This article belongs to the Special Issue New Applications in Multiple Criteria Decision Analysis)
Show Figures

Figure 1

19 pages, 11931 KiB  
Article
Image Watermarking Approach Using a Hybrid Domain Based on Performance Parameter Analysis
by Rohit Srivastava, Ravi Tomar, Maanak Gupta, Anuj Kumar Yadav and Jaehong Park
Information 2021, 12(8), 310; https://doi.org/10.3390/info12080310 - 30 Jul 2021
Cited by 9 | Viewed by 2993
Abstract
In today’s scenario, image watermarking has become an integral part of various multimedia applications. Watermarking is an approach for adding additional information to an existing image to protect the data from modification and to provide data integrity. Frequency transform domain techniques are complex and costly and degrade the quality of the image due to the limited number of embedded bits. The proposed work utilizes the original DCT method with some modifications and applies it to the frequency bands of the DWT. Furthermore, the output is used in combination with a pixel modification method for embedding and extraction. The proposed outcome is an improvement in performance in terms of time, imperceptibility, and robustness. Full article
(This article belongs to the Special Issue Secure and Trustworthy Cyber–Physical Systems)
Show Figures

Figure 1
