Big Data Cogn. Comput., Volume 5, Issue 3 (September 2021) – 18 articles

Cover Story: Virtual reality (VR) applications are rich in motion-tracking data that can be used to improve learning beyond standard training interfaces. In this work, we present machine learning classifiers that predict outcomes from a VR training application. Our approach makes use of the data from the tracked head-mounted display and handheld controllers involved during VR training to predict whether a user will exhibit high or low knowledge acquisition, knowledge retention, and performance retention. Our results demonstrate that it is feasible to develop VR training applications that dynamically adapt to a user by using commonly available tracking data to predict learning and retention outcomes.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
16 pages, 2540 KiB  
Article
Diversification of Legislation Editing Open Software (LEOS) Using Software Agents—Transforming Parliamentary Control of the Hellenic Parliament into Big Open Legal Data
by Sotiris Leventis, Fotios Fitsilis and Vasileios Anastasiou
Big Data Cogn. Comput. 2021, 5(3), 45; https://doi.org/10.3390/bdcc5030045 - 18 Sep 2021
Cited by 7 | Viewed by 5100
Abstract
The accessibility and reuse of legal data is paramount for promoting transparency, accountability and, ultimately, trust towards governance institutions. The aggregation of structured and semi-structured legal data inevitably leads to the big data realm and a series of challenges for the generation, handling, and analysis of large datasets. When it comes to data generation, LEOS represents a legal informatics tool that is maturing quickly. Now in its third release, it effectively supports the drafting of legal documents using Akoma Ntoso compatible schemes. However, the tool, originally developed for cooperative legislative drafting, can be repurposed to draft parliamentary control documents. This is achieved through the use of actor-oriented software components, referred to as software agents, which enable system interoperability by interlinking the text editing system with parliamentary control datasets. A validated corpus of written questions from the Hellenic Parliament is used to evaluate the feasibility of the endeavour, that is, of using the tool to author written parliamentary questions and to generate standardised, open legislative data. Systemic integration not only proves the tool’s versatility, but also opens up new ground in interoperability between formerly unrelated legal systems and data sources. Full article
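As a rough illustration of the data-generation idea, the hypothetical agent sketched below wraps one written parliamentary question in a minimal XML envelope. The element names only loosely echo Akoma Ntoso conventions and are not the exact schema used by LEOS; the whole example is an assumption for illustration.

```python
# Hypothetical sketch of a software agent that serializes a written
# parliamentary question into a small XML envelope. Element names are
# illustrative only and do NOT reproduce the exact Akoma Ntoso schema
# or the LEOS document model.
import xml.etree.ElementTree as ET
from datetime import date


def question_to_xml(question: dict) -> bytes:
    """Serialize one written question into an Akoma Ntoso-like envelope."""
    root = ET.Element("akomaNtoso")                 # illustrative root element
    doc = ET.SubElement(root, "questionWritten")
    meta = ET.SubElement(doc, "meta")
    ET.SubElement(meta, "date").text = question["date"].isoformat()
    ET.SubElement(meta, "author").text = question["mp"]
    ET.SubElement(meta, "addressee").text = question["ministry"]
    body = ET.SubElement(doc, "mainBody")
    for paragraph in question["paragraphs"]:
        ET.SubElement(body, "p").text = paragraph
    return ET.tostring(root, encoding="utf-8", xml_declaration=True)


sample = {
    "date": date(2021, 3, 15),                      # placeholder metadata
    "mp": "Member of Parliament (placeholder)",
    "ministry": "Ministry of Finance (placeholder)",
    "paragraphs": ["Text of the written question goes here."],
}
print(question_to_xml(sample).decode("utf-8"))
```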

17 pages, 1764 KiB  
Article
CrowDSL: Platform for Incidents Management in a Smart City Context
by Darío Rodríguez-García, Vicente García-Díaz and Cristian González García
Big Data Cogn. Comput. 2021, 5(3), 44; https://doi.org/10.3390/bdcc5030044 - 16 Sep 2021
Cited by 2 | Viewed by 3118
Abstract
The final objective of smart cities is to optimize services and improve the quality of life of their citizens, who can play important roles due to the information they can provide. This information can be used to enhance many sectors involved in city activity, such as transport, energy or health. Crowd-sourcing initiatives focus their efforts on making cities safer places that are adapted to the population size they host. In this way, citizens are able to report the issues they identify to the relevant body so that they can be fixed and, at the same time, they can provide useful information to other citizens. There are several projects aimed at reporting incidents in a smart city context. In this paper, we propose the use of model-driven engineering by designing a graphical domain-specific language to abstract and improve the incident-reporting process. The use of a domain-specific language offers several benefits for users and cities. For instance, it shortens the time users need to report events and, at the same time, provides greater expressive power than other incident-reporting methodologies. In addition, the language is reusable and, having been designed from a study of this specific domain, remains centered on it. Furthermore, we have evaluated the DSL with different users, obtaining a high satisfaction percentage. Full article
(This article belongs to the Special Issue Internet of Things (IoT) and Ambient Intelligence)

45 pages, 10033 KiB  
Article
AI Based Emotion Detection for Textual Big Data: Techniques and Contribution
by Sheetal Kusal, Shruti Patil, Ketan Kotecha, Rajanikanth Aluvalu and Vijayakumar Varadarajan
Big Data Cogn. Comput. 2021, 5(3), 43; https://doi.org/10.3390/bdcc5030043 - 9 Sep 2021
Cited by 45 | Viewed by 16136
Abstract
Online Social Media (OSM) platforms such as Facebook and Twitter have emerged as powerful tools for people to express, in text, their opinions and feelings about current events. Understanding these expressed thoughts at a fine-grained emotional level is important for system improvement. Such crucial insights cannot be obtained through AI-based big data sentiment analysis alone; hence, text-based emotion detection using AI in social media big data has become an emerging area of Natural Language Processing research. It can be used in various fields such as understanding expressed emotions, human–computer interaction, data mining, online education, recommendation systems, and psychology. Even though research in this domain is ongoing, it still lacks a formal study that gives a qualitative (techniques used) and quantitative (contributions) literature overview. This study considered 827 Scopus and 83 Web of Science research papers from the years 2005–2020 for the analysis. The qualitative review covers the emotion models, datasets, algorithms, and application domains of text-based emotion detection. The quantitative bibliometric review of contributions presents research details such as publications, volume, co-authorship networks, citation analysis, and demographic research distribution. In the end, challenges and probable solutions are showcased, which can provide future research directions in this area. Full article
(This article belongs to the Special Issue Big Data and Internet of Things)

28 pages, 3163 KiB  
Article
Indoor Localization for Personalized Ambient Assisted Living of Multiple Users in Multi-Floor Smart Environments
by Nirmalya Thakur and Chia Y. Han
Big Data Cogn. Comput. 2021, 5(3), 42; https://doi.org/10.3390/bdcc5030042 - 8 Sep 2021
Cited by 22 | Viewed by 4953
Abstract
This paper presents a multifunctional interdisciplinary framework that makes four scientific contributions towards the development of personalized ambient assisted living (AAL), with a specific focus on addressing the different and dynamic needs of the diverse aging population in future smart living environments. First, it presents a probabilistic reasoning-based mathematical approach to model all possible forms of user interactions for any activity arising from the diversity of multiple users in such environments. Second, it presents a system that uses this approach with a machine learning method to model individual user profiles and user-specific interactions for detecting the dynamic indoor location of each specific user. Third, to address the need for highly accurate indoor localization systems that foster trust, reliance, and seamless user acceptance, the framework introduces a novel methodology in which two boosting approaches—Gradient Boosting and the AdaBoost algorithm—are integrated and applied to a decision tree-based learning model to perform indoor localization. Fourth, the framework introduces two novel functionalities that provide semantic context to indoor localization: detecting each user’s floor-specific location and tracking whether a specific user was located inside or outside a given spatial region in a multi-floor indoor setting. These functionalities were tested on a dataset of localization-related Big Data collected from 18 different users who navigated in 3 buildings consisting of 5 floors and 254 indoor spatial regions, with the aim of addressing the limitation of prior works in this field, namely the lack of training data from diverse users. The results show that this approach to indoor localization for personalized AAL, which models each specific user, always achieves higher accuracy than the traditional approach of modeling an average user. The results further demonstrate that the proposed framework outperforms all prior works in this field in terms of functionalities, performance characteristics, and operational features. Full article
(This article belongs to the Special Issue Advanced Data Mining Techniques for IoT and Big Data)
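The abstract does not specify how the two boosting approaches are combined; the sketch below shows one plausible reading, in which AdaBoost over decision trees and Gradient Boosting are fused by soft voting with scikit-learn. The CSV path, column name, and hyperparameters are placeholders rather than the authors' configuration.

```python
# Sketch (assumptions flagged): one way to combine AdaBoost over decision
# trees with Gradient Boosting for indoor-location classification, using
# soft voting. The CSV path and column names are placeholders, not the
# dataset used in the paper.
import pandas as pd
from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("localization_features.csv")            # placeholder path
X, y = df.drop(columns=["spatial_region"]), df["spatial_region"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

ada = AdaBoostClassifier(DecisionTreeClassifier(max_depth=3), n_estimators=200, random_state=0)
gbdt = GradientBoostingClassifier(n_estimators=200, random_state=0)

# Soft voting averages the per-class probabilities of the two boosted models.
model = VotingClassifier([("ada", ada), ("gbdt", gbdt)], voting="soft")
model.fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))
```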

21 pages, 476 KiB  
Review
A Review of Artificial Intelligence, Big Data, and Blockchain Technology Applications in Medicine and Global Health
by Supriya M. and Vijay Kumar Chattu
Big Data Cogn. Comput. 2021, 5(3), 41; https://doi.org/10.3390/bdcc5030041 - 6 Sep 2021
Cited by 64 | Viewed by 14229
Abstract
Artificial intelligence (AI) programs are applied to methods such as diagnostic procedures, treatment protocol development, patient monitoring, drug development, personalized medicine in healthcare, and outbreak predictions in global health, as in the case of the current COVID-19 pandemic. Machine learning (ML) is a field of AI that allows computers to learn and improve without being explicitly programmed. ML algorithms can also analyze the large amounts of data, known as big data, held in electronic health records for disease prevention and diagnosis. Wearable medical devices are used to continuously monitor an individual’s health status and store the data in the cloud. In the context of a newly published study, this review discusses the potential benefits of sophisticated data analytics and machine learning. We conducted a literature search in popular databases and search engines such as Web of Science, Scopus, MEDLINE/PubMed, and Google Scholar. This paper describes the concepts underlying ML, big data, and blockchain technology and their importance in medicine, healthcare, public health surveillance, and case estimation in the COVID-19 pandemic and other epidemics. The review also examines the possible consequences and difficulties for medical practitioners and health technologists in designing futuristic models to improve the quality and well-being of human lives. Full article
(This article belongs to the Special Issue Big Data and Cognitive Computing: 5th Anniversary Feature Papers)

16 pages, 3495 KiB  
Article
A Simple Free-Text-like Method for Extracting Semi-Structured Data from Electronic Health Records: Exemplified in Prediction of In-Hospital Mortality
by Eyal Klang, Matthew A. Levin, Shelly Soffer, Alexis Zebrowski, Benjamin S. Glicksberg, Brendan G. Carr, Jolion Mcgreevy, David L. Reich and Robert Freeman
Big Data Cogn. Comput. 2021, 5(3), 40; https://doi.org/10.3390/bdcc5030040 - 29 Aug 2021
Cited by 4 | Viewed by 4640
Abstract
The Epic electronic health record (EHR) is a commonly used EHR in the United States. This EHR contains large semi-structured “flowsheet” fields. Flowsheet fields lack a well-defined data dictionary and are unique to each site. We evaluated a simple free-text-like method to extract these data. As a use case, we demonstrate this method in predicting mortality during emergency department (ED) triage. We retrieved demographic and clinical data for ED visits from the Epic EHR (1/2014–12/2018). Data included structured data, semi-structured flowsheet records, and free-text notes. The study outcome was in-hospital death within 48 h. Most of the data were coded using a free-text-like Bag-of-Words (BoW) approach. Two machine-learning models were trained: gradient boosting and logistic regression. Term frequency-inverse document frequency was employed in the logistic regression model (LR-tf-idf). An ensemble of LR-tf-idf and gradient boosting was evaluated. Models were trained on years 2014–2017 and tested on year 2018. Among 412,859 visits, the 48-h mortality rate was 0.2%. LR-tf-idf showed an AUC of 0.98 (95% CI: 0.98–0.99). Gradient boosting showed an AUC of 0.97 (95% CI: 0.96–0.99). An ensemble of both showed an AUC of 0.99 (95% CI: 0.98–0.99). In conclusion, a free-text-like approach can be useful for extracting knowledge from large amounts of complex semi-structured EHR data. Full article
(This article belongs to the Special Issue Data Science in Health Care)
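As an illustration of the free-text-like pipeline summarized above, the sketch below treats each visit's flowsheet entries as one bag-of-words document, trains LR-tf-idf and gradient boosting, and averages their predicted probabilities. The file path, column names, and hyperparameters are placeholders, not the authors' exact setup.

```python
# Sketch of the free-text-like approach: each visit's semi-structured
# flowsheet entries become one "document", vectorized with tf-idf for
# logistic regression and with plain counts for gradient boosting; the two
# probability estimates are averaged. Paths and column names are placeholders.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

visits = pd.read_csv("ed_visits.csv")                      # placeholder path
visits["doc"] = visits["flowsheet_tokens"].fillna("")      # one BoW string per visit
train = visits[visits["year"] <= 2017]
test = visits[visits["year"] == 2018]

# Logistic regression on tf-idf features (LR-tf-idf).
tfidf = TfidfVectorizer(min_df=5)
lr = LogisticRegression(max_iter=1000)
lr.fit(tfidf.fit_transform(train["doc"]), train["died_48h"])
p_lr = lr.predict_proba(tfidf.transform(test["doc"]))[:, 1]

# Gradient boosting on token counts (count features are an assumption here).
counts = CountVectorizer(min_df=5)
gb = GradientBoostingClassifier()
gb.fit(counts.fit_transform(train["doc"]), train["died_48h"])
p_gb = gb.predict_proba(counts.transform(test["doc"]))[:, 1]

# Simple ensemble: average the two probability estimates.
print("ensemble AUC:", roc_auc_score(test["died_48h"], (p_lr + p_gb) / 2))
```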

16 pages, 1130 KiB  
Article
A Novel Approach to Learning Models on EEG Data Using Graph Theory Features—A Comparative Study
by Bhargav Prakash, Gautam Kumar Baboo and Veeky Baths
Big Data Cogn. Comput. 2021, 5(3), 39; https://doi.org/10.3390/bdcc5030039 - 28 Aug 2021
Cited by 4 | Viewed by 6181
Abstract
Brain connectivity is studied as a functionally connected network using statistical methods such as measuring correlation or covariance. Signals from non-invasive neuroimaging techniques such as Electroencephalography (EEG) are converted to networks by transforming the signals into a Correlation Matrix and analyzing the resulting networks. Here, four learning models, namely, Logistic Regression, Random Forest, Support Vector Machine, and Recurrent Neural Networks (RNN), are implemented on two different types of correlation matrices: the Correlation Matrix (static connectivity) and the Time-resolved Correlation Matrix (dynamic connectivity), to classify them by either psychometric assessment or the effect of therapy. This setup differs from traditional learning techniques in that graph-theory-based features are incorporated into the learning models, which provides the novelty of this study. The EEG data used in this study are trial-based/event-related, drawn from five different experimental paradigms that can be broadly classified as working memory tasks and assessments of emotional states (depression, anxiety, and stress). The classifications based on RNN provided higher accuracy (74–88%) than the other three models (50–78%). Instead of using individual graph features, a Correlation Matrix provides an initial test of the data; compared with the Time-resolved Correlation Matrix, it offered 4–5% higher accuracy. Although the Time-resolved Correlation Matrix is better suited to dynamic studies, it yields lower accuracy than the static Correlation Matrix. Full article
(This article belongs to the Special Issue Knowledge Modelling and Learning through Cognitive Networks)
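A minimal sketch of the pipeline described above: build a channel-by-channel correlation matrix for one EEG trial, threshold it into a graph, extract a few graph-theory features with networkx, and feed them to a classifier. The threshold, the chosen features, and the toy data shapes are placeholders.

```python
# Sketch: one EEG trial (channels x samples) -> correlation matrix ->
# thresholded graph -> graph-theory features -> classifier. The threshold,
# feature choice, and toy data shapes are illustrative placeholders.
import networkx as nx
import numpy as np
from sklearn.linear_model import LogisticRegression


def graph_features(trial: np.ndarray, threshold: float = 0.1) -> np.ndarray:
    """trial has shape (n_channels, n_samples)."""
    corr = np.corrcoef(trial)                         # static connectivity
    adj = (np.abs(corr) > threshold).astype(int)
    np.fill_diagonal(adj, 0)
    g = nx.from_numpy_array(adj)
    degrees = np.array([d for _, d in g.degree()])
    return np.array([nx.density(g), nx.average_clustering(g),
                     nx.transitivity(g), degrees.mean()])


# Toy data: 40 trials of 32-channel EEG with binary labels.
rng = np.random.default_rng(0)
trials = rng.standard_normal((40, 32, 256))
labels = rng.integers(0, 2, size=40)

X = np.vstack([graph_features(t) for t in trials])
clf = LogisticRegression().fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```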

18 pages, 2001 KiB  
Article
The Optimization Strategies on Clarification of the Misconceptions of Big Data Processing in Dynamic and Opportunistic Environments
by Wei Li and Maolin Tang
Big Data Cogn. Comput. 2021, 5(3), 38; https://doi.org/10.3390/bdcc5030038 - 21 Aug 2021
Cited by 1 | Viewed by 3344
Abstract
This paper identifies four common misconceptions about the scalability of volunteer computing on big data problems. The misconceptions are then clarified by analyzing the relationship between scalability and the impact factors, including the problem size of big data, the heterogeneity and dynamics of volunteers, and the overlay structure. This paper proposes optimization strategies to find the optimal overlay for a given big data problem. It forms multiple overlays to optimize the performance of the individual steps of the MapReduce paradigm. The optimization aims to achieve maximum overall performance with a minimum number of volunteers, without overusing resources. This paper demonstrates that simulations over the relevant factors can quickly find the optimization points. It concludes that always welcoming more volunteers overuses the available resources, because additional volunteers do not always benefit the overall performance. Finding the optimal use of volunteers is possible for a given big data problem even under the dynamics and opportunism of volunteers. Full article

16 pages, 4296 KiB  
Article
Personalized Data Analysis Approach for Assessing Necessary Hospital Bed-Days Built on Condition Space and Hierarchical Predictor
by Nataliia Melnykova, Nataliya Shakhovska, Volodymyr Melnykov, Kateryna Melnykova and Khrystyna Lishchuk-Yakymovych
Big Data Cogn. Comput. 2021, 5(3), 37; https://doi.org/10.3390/bdcc5030037 - 16 Aug 2021
Cited by 1 | Viewed by 3756
Abstract
The paper describes the medical data personalization problem by determining the individual characteristics needed to predict the number of days a patient spends in a hospital. The mathematical problem of patient information analysis is formalized, which will help identify critical personal characteristics based on condition space analysis. The condition space is given in cube form as a reflection of the functional relationship of the general parameters to the studied object. The dataset consists of 51 instances, and ten parameters are processed using different clustering and regression models. Days in hospital is the target variable. A condition space cube is formed based on clustering analysis and feature selection. In this manner, a hierarchical predictor based on clustering and an ensemble of weak regressors is built. In terms of the Root Mean Squared Error metric, the developed hierarchical predictor is 1.47 times better than the best weak predictor (a perceptron with 12 units in a single hidden layer). Full article
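The hierarchical predictor is described only at a high level; one plausible structure is sketched below, with KMeans partitioning the condition space and a separate weak regressor fitted per cluster. The cluster count, regressor choice, and toy data are assumptions.

```python
# Sketch of a hierarchical predictor: KMeans defines regions of the condition
# space, and a separate weak regressor predicts bed-days within each region.
# Cluster count, regressor choice, and the toy data are placeholders.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeRegressor


class HierarchicalPredictor:
    def __init__(self, n_clusters: int = 3):
        self.clusterer = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
        self.models = {}

    def fit(self, X: np.ndarray, y: np.ndarray) -> "HierarchicalPredictor":
        groups = self.clusterer.fit_predict(X)
        for c in np.unique(groups):
            # A shallow tree acts as the "weak" regressor for this cluster.
            self.models[c] = DecisionTreeRegressor(max_depth=3).fit(X[groups == c], y[groups == c])
        return self

    def predict(self, X: np.ndarray) -> np.ndarray:
        groups = self.clusterer.predict(X)
        return np.array([self.models[c].predict(x.reshape(1, -1))[0]
                         for c, x in zip(groups, X)])


# Toy example: 51 patients, ten parameters, target = days in hospital.
rng = np.random.default_rng(1)
X = rng.standard_normal((51, 10))
y = rng.integers(1, 30, size=51).astype(float)
model = HierarchicalPredictor().fit(X, y)
print(model.predict(X[:5]))
```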

15 pages, 355 KiB  
Article
Comparing Swarm Intelligence Algorithms for Dimension Reduction in Machine Learning
by Gabriella Kicska and Attila Kiss
Big Data Cogn. Comput. 2021, 5(3), 36; https://doi.org/10.3390/bdcc5030036 - 13 Aug 2021
Cited by 17 | Viewed by 5135
Abstract
Nowadays, the high dimensionality of data causes a variety of problems in machine learning. It is necessary to reduce the number of features by selecting only the most relevant of them. Different approaches, called Feature Selection, are used for this task. In this paper, we propose a Feature Selection method that uses Swarm Intelligence techniques. Swarm Intelligence algorithms perform optimization by searching for optimal points in the search space. We show the usability of these techniques for solving Feature Selection and compare the performance of five major swarm algorithms: Particle Swarm Optimization, Artificial Bee Colony, Invasive Weed Optimization, Bat Algorithm, and Grey Wolf Optimizer. The accuracy of a decision tree classifier was used to evaluate the algorithms. It turned out that the dimensionality of the data can be roughly halved without a loss in accuracy. Moreover, accuracy increased when redundant features were discarded. Based on our experiments, GWO turned out to be the best: it has the highest ranking on the different datasets, and its average number of iterations to find the best solution is 30.8. ABC obtained the lowest ranking on high-dimensional datasets. Full article
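To make the wrapper setup concrete, the sketch below runs a compact Grey Wolf Optimizer for feature selection: wolf positions live in [0, 1]^d, a feature is kept when its coordinate exceeds 0.5, and the fitness of a mask is the cross-validated accuracy of a decision tree on the selected features. The dataset, population size, and iteration count are placeholders, not the settings used in the paper.

```python
# Sketch: a compact Grey Wolf Optimizer for feature selection, with a
# decision tree's cross-validated accuracy as the fitness of a feature mask.
# Dataset and hyperparameters are placeholders.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)
n_features = X.shape[1]


def fitness(position: np.ndarray) -> float:
    mask = position > 0.5                     # binarize the wolf position
    if not mask.any():
        return 0.0
    clf = DecisionTreeClassifier(random_state=0)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()


n_wolves, n_iter = 8, 20
wolves = rng.random((n_wolves, n_features))
scores = np.array([fitness(w) for w in wolves])

for t in range(n_iter):
    a = 2 - 2 * t / n_iter                                 # decreases from 2 to 0
    leaders = wolves[np.argsort(scores)[-3:][::-1]]        # alpha, beta, delta
    for i in range(n_wolves):
        steps = []
        for leader in leaders:
            r1, r2 = rng.random(n_features), rng.random(n_features)
            A, C = 2 * a * r1 - a, 2 * r2
            D = np.abs(C * leader - wolves[i])
            steps.append(leader - A * D)
        wolves[i] = np.clip(np.mean(steps, axis=0), 0, 1)  # move toward the leaders
        scores[i] = fitness(wolves[i])

best = wolves[np.argmax(scores)] > 0.5
print(f"selected {best.sum()}/{n_features} features, CV accuracy {scores.max():.3f}")
```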

13 pages, 538 KiB  
Article
Deep Neural Network and Boosting Based Hybrid Quality Ranking for e-Commerce Product Search
by Mourad Jbene, Smail Tigani, Rachid Saadane and Abdellah Chehri
Big Data Cogn. Comput. 2021, 5(3), 35; https://doi.org/10.3390/bdcc5030035 - 13 Aug 2021
Cited by 3 | Viewed by 4307
Abstract
In the age of information overload, customers are overwhelmed by the number of products available for sale. Search engines try to overcome this issue by filtering items relevant to the users’ queries. Traditional search engines rely on the exact match of terms in the query and product meta-data. Recently, deep learning-based approaches have attracted more attention by outperforming traditional methods in many circumstances. In this work, we leverage the power of embeddings to solve the challenging task of optimizing product search engines in e-commerce. This work proposes an e-commerce product search engine based on a similarity metric that works on top of query and product embeddings. Two pre-trained word embedding models were tested, the first representing a category of models that generate fixed embeddings and the second representing a newer category of models that generate context-aware embeddings. Furthermore, a re-ranking step was performed by incorporating a list of quality indicators that reflect the utility of the product to the customer as inputs to well-known ranking methods. To prove the reliability of the approach, the Amazon reviews dataset was used for experimentation. The results demonstrated the effectiveness of context-aware embeddings in retrieving relevant products and of the quality indicators in ranking high-quality products. Full article
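A minimal sketch of the retrieve-then-re-rank idea: cosine similarity between query and product embeddings selects candidates, and a quality indicator (here, a normalized average rating) re-ranks them. The embed() function is a stand-in for any fixed or context-aware encoder, and the blending weight is an arbitrary assumption.

```python
# Sketch: embedding-based retrieval followed by quality re-ranking. embed()
# is a placeholder for a real fixed or contextual encoder; the quality blend
# below is only one possible choice of re-ranking signal.
import numpy as np


def embed(texts):
    """Placeholder encoder: deterministic random unit vectors per text."""
    vecs = []
    for t in texts:
        rng = np.random.default_rng(abs(hash(t)) % (2**32))
        v = rng.standard_normal(64)
        vecs.append(v / np.linalg.norm(v))
    return np.array(vecs)


def search(query, products, ratings, k=5, alpha=0.8):
    """Retrieve by cosine similarity, then re-rank with a quality indicator."""
    q = embed([query])[0]
    P = embed(products)                        # (n_products, dim), unit-normalized
    sims = P @ q                               # cosine similarity
    top = np.argsort(sims)[::-1][:k]           # candidate set from the retriever
    quality = np.asarray(ratings)[top] / 5.0   # normalized rating in [0, 1]
    blended = alpha * sims[top] + (1 - alpha) * quality
    return [products[i] for i in top[np.argsort(blended)[::-1]]]


products = ["usb-c charging cable", "wireless mouse", "mechanical keyboard"]
print(search("usb cable", products, ratings=[4.5, 3.9, 4.8], k=3))
```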

23 pages, 459 KiB  
Article
Event Detection in Wikipedia Edit History Improved by Documents Web Based Automatic Assessment
by Marco Fisichella and Andrea Ceroni
Big Data Cogn. Comput. 2021, 5(3), 34; https://doi.org/10.3390/bdcc5030034 - 4 Aug 2021
Cited by 5 | Viewed by 4379
Abstract
A majority of current work in event extraction assumes the static nature of relationships in fixed expert knowledge bases. However, in collaborative environments such as Wikipedia, information and systems are extraordinarily dynamic over time. In this work, we introduce a new approach for extracting complex structures of events from Wikipedia. We advocate a new model that represents events by engaging more than one entity and that is generalizable to an arbitrary language. The evolution of an event is captured successfully, primarily by analyzing the user edit records in Wikipedia. Our work provides a basis for a novel class of evolution-aware, entity-based enrichment algorithms and can substantially increase the quality of entity accessibility and temporal retrieval for Wikipedia. We formalize this problem and conduct comprehensive experiments on a real dataset of 1.8 million Wikipedia articles in order to show the effectiveness of our proposed solution. Furthermore, we propose a new automatic event validation method relying on a supervised model to predict the presence of events in a non-annotated corpus. As the extra document source for event validation, we chose the Web due to its ease of accessibility and wide event coverage. Our results show that we achieve 70% precision, evaluated on a manually annotated corpus. Finally, we compare our strategy with the Current Event Portal of Wikipedia and find that our proposed WikipEvent, together with the use of a co-reference technique, can be used to provide new and additional information on events. Full article
(This article belongs to the Special Issue Multimedia Systems for Multimedia Big Data)
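The event-validation step can be read as plain supervised text classification over Web documents retrieved for each candidate event; the sketch below shows that reading with a tf-idf pipeline. The toy snippets, labels, and the retrieval step itself are placeholders.

```python
# Sketch of the event-validation idea: represent each candidate event by the
# text of Web documents retrieved for it, and train a supervised classifier
# to predict whether the candidate is a genuine event. The toy examples and
# the retrieval step are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each string stands for the concatenated snippets returned by a Web search
# for one candidate event; labels mark manually validated events.
candidate_docs = [
    "summit between two heads of state announced official visit",
    "page edit fixes a typo in the infobox no external coverage",
    "earthquake reported by several news agencies casualties confirmed",
    "formatting change reverted by bot",
]
labels = [1, 0, 1, 0]

validator = make_pipeline(TfidfVectorizer(), LogisticRegression())
validator.fit(candidate_docs, labels)
print(validator.predict(["election results announced by the national press"]))
```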

28 pages, 11316 KiB  
Article
Fast and Effective Retrieval for Large Multimedia Collections
by Stefan Wagenpfeil, Binh Vu, Paul Mc Kevitt and Matthias Hemmje
Big Data Cogn. Comput. 2021, 5(3), 33; https://doi.org/10.3390/bdcc5030033 - 22 Jul 2021
Cited by 7 | Viewed by 4730
Abstract
The indexing and retrieval of multimedia content is generally implemented by employing feature graphs. These graphs typically contain a significant number of nodes and edges to reflect the level of detail in feature detection. A higher level of detail increases the effectiveness of the results, but also leads to more complex graph structures. However, graph traversal-based algorithms for similarity are quite inefficient and computationally expensive, especially for large data structures. To deliver fast and effective retrieval especially for large multimedia collections and multimedia big data, an efficient similarity algorithm for large graphs in particular is desirable. Hence, in this paper, we define a graph projection into a 2D space (Graph Code) and the corresponding algorithms for indexing and retrieval. We show that calculations in this space can be performed more efficiently than graph traversals due to the simpler processing model and the high level of parallelization. As a consequence, we demonstrate experimentally that the effectiveness of retrieval also increases substantially, as the Graph Code facilitates more levels of detail in feature fusion. These levels of detail also support an increased trust prediction, particularly for fused social media content. In our mathematical model, we define a metric triple for the Graph Code, which also enhances the ranked result representations. Thus, Graph Codes provide a significant increase in efficiency and effectiveness, especially for multimedia indexing and retrieval, and can be applied to images, videos, text and social media information. Full article
(This article belongs to the Special Issue Multimedia Systems for Multimedia Big Data)
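The exact Graph Code encoding and its metric triple are defined in the paper itself; as a rough illustration of replacing graph traversal with matrix operations, the sketch below projects a small feature graph onto a fixed-vocabulary 2D matrix and compares two such matrices element-wise. The vocabulary, relationship codes, and similarity measure are assumptions, not the paper's definitions.

```python
# Rough illustration only: a feature graph over a shared vocabulary of feature
# terms is projected onto a 2D matrix (diagonal = node present, off-diagonal =
# relationship type code), and two codes are compared element-wise instead of
# traversing the graphs. The encoding here is NOT the paper's Graph Code.
import numpy as np

VOCAB = ["person", "beach", "sunset", "smiling", "ocean"]   # shared feature vocabulary
REL = {"contains": 1, "near": 2, "expresses": 3}            # relationship type codes


def graph_code(nodes, edges):
    """edges: list of (source_term, relation, target_term)."""
    idx = {term: i for i, term in enumerate(VOCAB)}
    m = np.zeros((len(VOCAB), len(VOCAB)), dtype=int)
    for n in nodes:
        m[idx[n], idx[n]] = 1
    for s, rel, t in edges:
        m[idx[s], idx[t]] = REL[rel]
    return m


def similarity(a, b):
    """Fraction of matrix cells on which two codes agree (toy metric)."""
    return float(np.mean(a == b))


gc1 = graph_code(["person", "beach", "sunset", "smiling"],
                 [("beach", "contains", "person"), ("person", "expresses", "smiling")])
gc2 = graph_code(["person", "beach", "ocean"],
                 [("beach", "contains", "person"), ("beach", "near", "ocean")])
print("similarity:", similarity(gc1, gc2))
```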

16 pages, 865 KiB  
Article
The Global Cyber Security Model: Counteracting Cyber Attacks through a Resilient Partnership Arrangement
by Peter R.J. Trim and Yang-Im Lee
Big Data Cogn. Comput. 2021, 5(3), 32; https://doi.org/10.3390/bdcc5030032 - 13 Jul 2021
Cited by 13 | Viewed by 6927
Abstract
In this paper, insights are provided into how senior managers can establish a global cyber security model that raises cyber security awareness among staff in a partnership arrangement and ensures that cyber attacks are anticipated and dealt with in real time. We deployed a qualitative research strategy that involved a group interview involving cyber security and intelligence experts. The coding approach was used to identify the themes in the data and, in addition, a number of categories and subcategories were identified. The mind map approach was utilized to identify the thought processes of senior managers in relation to ensuring that the cyber security management process is effective. The global cyber security model can be used by senior managers to establish a framework for dealing with a range of cyber security attacks, as well as to upgrade the cyber security skill and knowledge base of individuals. In order for a cyber security mentality to be established, senior managers need to ensure that staff are focused on organizational vulnerability and resilience, there is an open and transparent communication process in place, and staff are committed to sharing cyber security knowledge. By placing cyber security within the context of a partnership arrangement, senior managers can adopt a collectivist approach to cyber security and benefit from the knowledge of external experts. Full article
(This article belongs to the Special Issue Cybersecurity, Threat Analysis and the Management of Risk)

19 pages, 1727 KiB  
Article
Proposal for Customer Identification Service Model Based on Distributed Ledger Technology to Transfer Virtual Assets
by Keundug Park and Heung-Youl Youm
Big Data Cogn. Comput. 2021, 5(3), 31; https://doi.org/10.3390/bdcc5030031 - 13 Jul 2021
Cited by 5 | Viewed by 4696
Abstract
Recently, cross-border transfers using blockchain-based virtual assets (cryptocurrency) have been increasing. However, due to the anonymity of blockchain, there is a problem related to money laundering because the virtual asset service providers cannot identify the originators and the beneficiaries. In addition, the international anti-money-laundering organization (the Financial Action Task Force, FATF) has placed anti-money-laundering obligations on virtual asset service providers through anti-money-laundering guidance for virtual assets issued in June 2019. This paper proposes a customer identification service model based on distributed ledger technology (DLT) that enables virtual asset service providers to verify the identity of the originators and beneficiaries. Full article
(This article belongs to the Special Issue Cybersecurity, Threat Analysis and the Management of Risk)

20 pages, 1241 KiB  
Article
Big Data Research in Fighting COVID-19: Contributions and Techniques
by Dianadewi Riswantini, Ekasari Nugraheni, Andria Arisal, Purnomo Husnul Khotimah, Devi Munandar and Wiwin Suwarningsih
Big Data Cogn. Comput. 2021, 5(3), 30; https://doi.org/10.3390/bdcc5030030 - 12 Jul 2021
Cited by 14 | Viewed by 7963
Abstract
The COVID-19 pandemic has induced many problems in various sectors of human life. After more than one year of the pandemic, many studies have been conducted to discover various technological innovations and applications to combat the virus that has claimed many lives. The use of Big Data technology to mitigate the threats of the pandemic has been accelerated. Therefore, this survey aims to explore Big Data technology research in fighting the pandemic. Furthermore, the relevance of Big Data technology is analyzed, and technological contributions to five main areas are highlighted: healthcare, social life, government policy, business and management, and the environment. The analytical techniques of machine learning, deep learning, statistics, and mathematics used to solve issues regarding the pandemic are discussed. The data sources used in previous studies are also presented; they include official government data, institutional service data, IoT-generated data, online media, and open data. Therefore, this study presents the role of Big Data technologies in enhancing COVID-19 research and provides insights into the current state of knowledge within the domain, along with references for further development or for starting new studies. Full article
(This article belongs to the Special Issue Advanced Data Mining Techniques for IoT and Big Data)

17 pages, 1968 KiB  
Article
Exploration of Feature Representations for Predicting Learning and Retention Outcomes in a VR Training Scenario
by Alec G. Moore, Ryan P. McMahan and Nicholas Ruozzi
Big Data Cogn. Comput. 2021, 5(3), 29; https://doi.org/10.3390/bdcc5030029 - 12 Jul 2021
Cited by 3 | Viewed by 4488
Abstract
Training and education of real-world tasks in Virtual Reality (VR) has seen growing use in industry. The motion-tracking data that is intrinsic to immersive VR applications is rich and can be used to improve learning beyond standard training interfaces. In this paper, we present machine learning (ML) classifiers that predict outcomes from a VR training application. Our approach makes use of the data from the tracked head-mounted display (HMD) and handheld controllers during VR training to predict whether a user will exhibit high or low knowledge acquisition, knowledge retention, and performance retention. We evaluated six different sets of input features and found varying degrees of accuracy depending on the predicted outcome. By visualizing the tracking data, we determined that users with higher acquisition and retention outcomes made movements with more certainty and with greater velocities than users with lower outcomes. Our results demonstrate that it is feasible to develop VR training applications that dynamically adapt to a user by using commonly available tracking data to predict learning and retention outcomes. Full article
(This article belongs to the Special Issue Virtual Reality, Augmented Reality, and Human-Computer Interaction)
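To ground the feature-extraction step, the sketch below turns a tracked position stream (HMD or controller) into a few per-user velocity summary features and trains a classifier to separate high from low retention. The sampling rate, features, and toy data are placeholders rather than the paper's feature sets.

```python
# Sketch: raw position tracking (frames x 3 coordinates) from the HMD or a
# controller -> per-frame speeds -> summary features per user -> classifier
# for high/low retention. Data, sampling rate, and feature choice are
# placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier


def velocity_features(positions: np.ndarray, hz: float = 90.0) -> np.ndarray:
    """positions: (n_frames, 3) tracked coordinates for one session."""
    v = np.linalg.norm(np.diff(positions, axis=0), axis=1) * hz   # speed per frame
    return np.array([v.mean(), v.std(), v.max(), np.median(v)])


# Toy sessions: 30 users, 1000 frames each, with binary retention labels.
rng = np.random.default_rng(0)
sessions = rng.standard_normal((30, 1000, 3)).cumsum(axis=1) * 0.01
labels = rng.integers(0, 2, size=30)

X = np.vstack([velocity_features(s) for s in sessions])
clf = RandomForestClassifier(random_state=0).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```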

29 pages, 549 KiB  
Article
Big Data and the United Nations Sustainable Development Goals (UN SDGs) at a Glance
by Hossein Hassani, Xu Huang, Steve MacFeely and Mohammad Reza Entezarian
Big Data Cogn. Comput. 2021, 5(3), 28; https://doi.org/10.3390/bdcc5030028 - 28 Jun 2021
Cited by 58 | Viewed by 18619
Abstract
The launch of the United Nations (UN) 17 Sustainable Development Goals (SDGs) in 2015 was a historic event, uniting countries around the world around the shared agenda of sustainable development with a more balanced relationship between human beings and the planet. The SDGs affect or impact almost all aspects of life, as indeed does the technological revolution, empowered by Big Data and their related technologies. It is inevitable that these two significant domains and their integration will play central roles in achieving the 2030 Agenda. This research aims to provide a comprehensive overview of how these domains are currently interacting, by illustrating the impact of Big Data on sustainable development in the context of each of the 17 UN SDGs. Full article
(This article belongs to the Special Issue Big Data and UN Sustainable Development Goals (SDGs))