Computers, Volume 11, Issue 3 (March 2022) – 18 articles

Cover Story: Optimizing traffic signal control is a challenging problem, particularly when scaled to large networks. Several solutions exist for traffic signal control in small networks. However, adopting these solutions for large networks is often inefficient due to the complexity of interactions between intersections. An approach using empirical data and deep reinforcement learning can facilitate the development of intelligent solutions for large-network traffic signal control. This paper presents a scalable model that relies on smart infrastructure to facilitate local data sharing and uses graph attention networks as the neural network for deep reinforcement learning. The model is expected to significantly enhance large-network traffic signal performance while reducing the computational load.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Articles are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
15 pages, 887 KiB  
Article
Representation Learning for EEG-Based Biometrics Using Hilbert–Huang Transform
by Mikhail Svetlakov, Ilya Kovalev, Anton Konev, Evgeny Kostyuchenko and Artur Mitsel
Computers 2022, 11(3), 47; https://doi.org/10.3390/computers11030047 - 20 Mar 2022
Cited by 9 | Viewed by 3087
Abstract
A promising approach to overcome the various shortcomings of password systems is the use of biometric authentication, in particular the use of electroencephalogram (EEG) data. In this paper, we propose a subject-independent learning method for EEG-based biometrics using Hilbert spectrograms of the data. The proposed neural network architecture treats the spectrogram as a collection of one-dimensional series and applies one-dimensional dilated convolutions over them, and a multi-similarity loss was used as the loss function for subject-independent learning. The architecture was tested on the publicly available PhysioNet EEG Motor Movement/Imagery Dataset (PEEGMIMDB) with a 14.63% Equal Error Rate (EER) achieved. The proposed approach’s main advantages are subject independence and suitability for interpretation via created spectrograms and the integrated gradients method. Full article
(This article belongs to the Special Issue Explainable Artificial Intelligence for Biometrics 2021)
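The reported 14.63% EER can be illustrated with a generic computation (a minimal sketch, not the authors' evaluation code): the Equal Error Rate is read off at the decision threshold where the false accept rate over impostor scores and the false reject rate over genuine scores come closest.

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Sweep a decision threshold over the match scores and return the
    rate at the point where the false accept rate (FAR) and the false
    reject rate (FRR) are closest -- the EER reported for biometrics."""
    gap, eer = 2.0, 1.0
    for t in np.sort(np.concatenate([genuine, impostor])):
        far = np.mean(impostor >= t)   # impostors wrongly accepted
        frr = np.mean(genuine < t)     # genuine users wrongly rejected
        if abs(far - frr) < gap:
            gap, eer = abs(far - frr), (far + frr) / 2
    return float(eer)

# Perfectly separable similarity scores give an EER of zero.
print(equal_error_rate(np.array([0.9, 0.8, 0.7]),
                       np.array([0.1, 0.2, 0.3])))   # 0.0
```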
17 pages, 1963 KiB  
Article
Modeling and Numerical Validation for an Algorithm Based on Cellular Automata to Reduce Noise in Digital Images
by Karen Vanessa Angulo, Danilo Gustavo Gil and Helbert Eduardo Espitia
Computers 2022, 11(3), 46; https://doi.org/10.3390/computers11030046 - 20 Mar 2022
Cited by 2 | Viewed by 2464
Abstract
Given the grid features of digital images, a direct relation with cellular automata can be established through transition rules based on the information of the cells in the grid. This document presents the modeling of an algorithm based on cellular automata for digital image processing. Using an adaptation mechanism, the algorithm allows the elimination of impulsive noise in digital images. Additionally, a comparison of the cellular automata algorithm with median and mean filters is carried out, showing that the adaptive process obtains suitable results for eliminating salt-and-pepper noise. Finally, by means of examples, the results of the algorithm are shown graphically. Full article
(This article belongs to the Special Issue Computing, Electrical and Industrial Systems 2022)
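The idea can be illustrated with a minimal sketch (assumed transition rule, not the paper's algorithm): one synchronous cellular-automaton step in which a cell holding an extreme value, the signature of salt-and-pepper noise, adopts the median of its Moore neighbourhood while all other cells keep their state.

```python
import numpy as np

def ca_denoise_step(img):
    """One synchronous cellular-automaton update: a cell with a
    suspected salt (255) or pepper (0) value adopts the median of its
    3x3 Moore neighbourhood; all other cells are left unchanged."""
    padded = np.pad(img, 1, mode="edge")   # replicate borders for edge cells
    out = img.copy()
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            if img[i, j] in (0, 255):
                out[i, j] = np.median(padded[i:i + 3, j:j + 3])
    return out

noisy = np.full((5, 5), 100)
noisy[2, 2] = 255                      # inject one salt pixel
print(ca_denoise_step(noisy)[2, 2])    # 100
```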
38 pages, 2357 KiB  
Article
Mind Your Outcomes: The ΔQSD Paradigm for Quality-Centric Systems Development and Its Application to a Blockchain Case Study
by Seyed Hossein Haeri, Peter Thompson, Neil Davies, Peter Van Roy, Kevin Hammond and James Chapman
Computers 2022, 11(3), 45; https://doi.org/10.3390/computers11030045 - 17 Mar 2022
Cited by 4 | Viewed by 6074
Abstract
This paper directly addresses a long-standing issue that affects the development of many complex distributed software systems: how to establish quickly, cheaply, and reliably whether they can deliver their intended performance before expending significant time, effort, and money on detailed design and implementation. We describe ΔQSD, a novel metrics-based and quality-centric paradigm that uses formalised outcome diagrams to explore the performance consequences of design decisions, as a performance blueprint of the system. The distinctive feature of outcome diagrams is that they capture the essential observational properties of the system, independent of the details of system structure and behaviour. The ΔQSD paradigm derives bounds on performance expressed as probability distributions encompassing all possible executions of the system. The ΔQSD paradigm is both effective and generic: it allows values from various sources to be combined in a rigorous way so that approximate results can be obtained quickly and subsequently refined. ΔQSD has been successfully used by a small team in Predictable Network Solutions for consultancy on large-scale applications in a number of industries, including telecommunications, avionics, and space and defence, resulting in cumulative savings worth billions of US dollars. The paper outlines the ΔQSD paradigm, describes its formal underpinnings, and illustrates its use via a topical real-world example taken from the blockchain/cryptocurrency domain. ΔQSD has supported the development of an industry-leading proof-of-stake blockchain implementation that reliably and consistently delivers blocks of up to 80 kB every 20 s on average across a globally distributed network of collaborating block-producing nodes operating on the public internet. Full article
(This article belongs to the Special Issue Blockchain-Based Systems)
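The paradigm's composition of outcomes can be sketched numerically: for two steps performed in sequence, the delay distribution (∆Q) of "a then b" is the convolution of the two steps' distributions. The numbers below are hypothetical, not from the paper's case study.

```python
import numpy as np

# Delay distributions of two sequential steps, as probability masses
# over discrete time slots (hypothetical values).
step_a = np.array([0.5, 0.5])   # finishes after 0 or 1 slots, equally likely
step_b = np.array([0.0, 1.0])   # always takes exactly 1 slot

# Sequential composition convolves the delay distributions.
combined = np.convolve(step_a, step_b)
print(combined)        # [0.  0.5 0.5]
print(combined.sum())  # 1.0 -- still a probability distribution
```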
15 pages, 2479 KiB  
Article
Towards Accurate Skin Lesion Classification across All Skin Categories Using a PCNN Fusion-Based Data Augmentation Approach
by Esther Chabi Adjobo, Amadou Tidjani Sanda Mahama, Pierre Gouton and Joël Tossa
Computers 2022, 11(3), 44; https://doi.org/10.3390/computers11030044 - 16 Mar 2022
Cited by 7 | Viewed by 3018
Abstract
Deep learning models yield remarkable results in skin lesion analysis. However, these models require considerable amounts of data, while access to images with annotated skin lesions is often limited, and the classes are often imbalanced. Data augmentation is one way to alleviate the lack of labeled data and class imbalance. This paper proposes a new data augmentation method based on an image fusion technique to construct a large dataset covering all existing skin tones. The fusion method consists of a pulse-coupled neural network fusion strategy in a non-subsampled shearlet transform domain and comprises three steps: decomposition, fusion, and reconstruction. The dermoscopic dataset is obtained by combining the ISIC2019 and ISIC2020 Challenge datasets. A comparative study with current algorithms was performed to assess the effectiveness of the proposed one. The first experiment's results indicate that the proposed algorithm best preserves the lesion's dermoscopic structure and skin tone features. The second experiment, which consisted of training a convolutional neural network model with the augmented dataset, shows accuracy increases of 15.69% and 15.38% for the tanned and brown skin categories, respectively. The model's precision, recall, and F1-score were also increased. The obtained results indicate that the proposed augmentation method is suitable for dermoscopic images and can be used as a solution to the lack of dark skin images in datasets. Full article
(This article belongs to the Special Issue Advances of Machine and Deep Learning in the Health Domain)
25 pages, 435 KiB  
Article
Application of the Hurricane-Based Optimization Algorithm to the Phase-Balancing Problem in Three-Phase Asymmetric Networks
by Jose Luis Cruz-Reyes, Sergio Steven Salcedo-Marcelo and Oscar Danilo Montoya
Computers 2022, 11(3), 43; https://doi.org/10.3390/computers11030043 - 14 Mar 2022
Cited by 5 | Viewed by 2645
Abstract
This article addresses the problem of optimal phase-swapping in asymmetric distribution grids through the application of the hurricane-based optimization algorithm (HOA). The exact mixed-integer nonlinear programming (MINLP) model is solved by using a master–slave optimization procedure. The master stage is entrusted with defining the load connection at each node by using an integer codification that ensures that, per node, only one of the six possible load connections is assigned. In the slave stage, the load connection set provided by the master stage is applied with the backward/forward power flow method in its matrix form to determine the amount of grid power losses. The computational performance of the HOA was tested on three literature test feeders composed of 8, 25, and 37 nodes. Numerical results show the effectiveness of the proposed master–slave optimization approach when compared with the classical Chu and Beasley genetic algorithm (CBGA) and the discrete vortex search algorithm (DVSA). The reductions reached with HOA were 24.34%, 4.16%, and 19.25% for the 8-, 25-, and 37-bus systems; this confirms the literature reports for the first two test feeders and improves the best current solution for the IEEE 37-bus grid. All simulations were carried out in the MATLAB programming environment. Full article
(This article belongs to the Special Issue Computing, Electrical and Industrial Systems 2022)
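The integer codification can be sketched as follows (an illustrative reconstruction, not the authors' code): the six possible three-phase load connections at a node are exactly the permutations of the phases, so one gene in 0..5 per node fixes the connection.

```python
from itertools import permutations

# The six possible load connections at a node are the permutations of
# the three phases; the master stage assigns one index (0..5) per node.
CONNECTIONS = list(permutations("abc"))

def decode(code):
    """Map an integer gene in 0..5 to a phase assignment."""
    return CONNECTIONS[code]

print(len(CONNECTIONS))   # 6
print(decode(0))          # ('a', 'b', 'c')
```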
11 pages, 2924 KiB  
Article
A Secure Blockchain Framework for Storing Historical Text: A Case Study of the Holy Hadith
by Khaled M. Awad, Mustafa ElNainay, Mohammad Abdeen, Marwan Torki, Omar Saif and Emad Nabil
Computers 2022, 11(3), 42; https://doi.org/10.3390/computers11030042 - 14 Mar 2022
Cited by 1 | Viewed by 5369
Abstract
Historical texts are one of the main pillars for understanding current civilization and are used as references in different contexts. Hadiths are an example of historical texts that should be securely preserved. Due to the expansion of online resources, fabricating and altering Hadiths has become easy. Therefore, it has become more challenging to authenticate the Hadith content available online, and much harder to keep the authenticated results secure and unmanipulated. In this research, we use the capabilities of distributed blockchain technology to securely archive Hadiths and their level of authenticity in a blockchain. We selected a customized permissioned blockchain model in which the main entities approving the level of authenticity of a Hadith are well-established, specialized institutions in the main Islamic countries, each of which can apply its own Hadith validation model. The proposed solution guarantees integrity using the crowd wisdom represented by the selected nodes in the blockchain, which use voting algorithms to decide on the insertion of any new Hadith into the database. This technique secures data integrity at any given time: if any organization's credentials are compromised and used to update the data maliciously, 50% + 1 approval from the whole network's nodes is still required. In case of malicious or misguided information while reaching consensus, the system self-heals using practical Byzantine Fault Tolerance (pBFT). We evaluated the proposed framework's read/write performance and found it adequate for the operational requirements. Full article
(This article belongs to the Section Blockchain Infrastructures and Enabled Applications)
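The 50% + 1 approval rule can be sketched as a simple majority check (illustrative only; the deployed system layers pBFT consensus on top of this):

```python
def insertion_approved(votes_for, total_nodes):
    """A new record is committed only when it gathers 50% + 1 approvals
    from the whole network, so a single compromised organisation cannot
    alter the ledger on its own."""
    return votes_for >= total_nodes // 2 + 1

print(insertion_approved(5, 9))   # True  (needs at least 5 of 9)
print(insertion_approved(4, 9))   # False
```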
19 pages, 1356 KiB  
Article
Deep Q-Learning Based Reinforcement Learning Approach for Network Intrusion Detection
by Hooman Alavizadeh, Hootan Alavizadeh and Julian Jang-Jaccard
Computers 2022, 11(3), 41; https://doi.org/10.3390/computers11030041 - 11 Mar 2022
Cited by 98 | Viewed by 10222
Abstract
The rise of a new generation of cyber threats demands more sophisticated and intelligent cyber defense solutions equipped with autonomous agents capable of learning to make decisions without the knowledge of human experts. Several reinforcement learning methods (e.g., Markov-based) for automated network intrusion tasks have been proposed in recent years. In this paper, we introduce a new generation of network intrusion detection method, which combines Q-learning-based reinforcement learning with a deep feed-forward neural network for network intrusion detection. Our proposed Deep Q-Learning (DQL) model provides an ongoing auto-learning capability in a network environment: it can detect different types of network intrusions using an automated trial-and-error approach and continuously enhance its detection capabilities. We provide details on fine-tuning the different hyperparameters involved in the DQL model for more effective self-learning. Our extensive experimental results based on the NSL-KDD dataset confirm that a low discount factor, set to 0.001 over 250 episodes of training, yields the best performance. Our experimental results also show that the proposed DQL is highly effective in detecting different intrusion classes and outperforms other similar machine learning approaches. Full article
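The Q-learning rule that the DQL agent approximates with a neural network can be sketched in tabular form; the states, actions, and rewards below are toy placeholders rather than the NSL-KDD setup, and gamma = 0.001 mirrors the low discount factor the paper found to work best.

```python
import numpy as np

def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.001):
    """One tabular Q-learning step: move Q(s, a) toward the temporal-
    difference target r + gamma * max_a' Q(s', a')."""
    td_target = reward + gamma * np.max(q[next_state])
    q[state, action] += alpha * (td_target - q[state, action])
    return q

q = np.zeros((2, 2))   # 2 toy states ("benign"/"attack"), 2 actions
q = q_update(q, state=0, action=1, reward=1.0, next_state=1)
print(q[0, 1])         # 0.1 -- alpha * reward, since Q started at zero
```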
14 pages, 671 KiB  
Article
Football Match Line-Up Prediction Based on Physiological Variables: A Machine Learning Approach
by Alberto Cortez, António Trigo and Nuno Loureiro
Computers 2022, 11(3), 40; https://doi.org/10.3390/computers11030040 - 11 Mar 2022
Cited by 5 | Viewed by 5137
Abstract
One of the great challenges for football coaches is to choose the line-up that offers the best guarantee of success, even though there are several dimensions along which to analyse the problem, such as the characteristics of the opposing team. The objective of this study is to identify, based on players' physiological variables collected using Global Positioning Systems (GPS), which players are the most suitable to be part of the starting team/line-up. The work was developed in two stages: first, the most important variables were chosen using the Recursive Feature Elimination algorithm, and then logistic regression was applied to these chosen variables. The logistic regression resulted in an index, called the line-up preparedness index, for the following player positions: Fullbacks, Central Midfielders, and Wingers. For the other player positions, the model's results were not satisfactory. Full article
(This article belongs to the Special Issue Selected Papers from ICCSA 2021)
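The second stage can be sketched as a logistic function over the selected physiological variables (the weights and features below are hypothetical, not the fitted coefficients):

```python
import math

def lineup_index(features, weights, bias):
    """Logistic-regression "line-up preparedness index": a value in
    (0, 1) from a weighted sum of a player's GPS-derived physiological
    variables (hypothetical weights, not the paper's fitted model)."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# e.g. (distance covered, high-speed running, sprints), standardised
print(round(lineup_index([0.0, 0.0, 0.0], [0.8, 0.5, 0.3], 0.0), 2))  # 0.5
```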
16 pages, 7568 KiB  
Article
A Fast Text-to-Image Encryption-Decryption Algorithm for Secure Network Communication
by Noor Sattar Noor, Dalal Abdulmohsin Hammood, Ali Al-Naji and Javaan Chahl
Computers 2022, 11(3), 39; https://doi.org/10.3390/computers11030039 - 9 Mar 2022
Cited by 21 | Viewed by 9662
Abstract
Data security is the science of protecting data in information technology, including authentication, data encryption, data decryption, data recovery, and user protection. To protect data from unauthorized disclosure and modification, a secure algorithm should be used. Many techniques have been proposed to encrypt text to an image. Most past studies used RGB layers to encrypt text to an image. In this paper, a Text-to-Image Encryption-Decryption (TTIED) algorithm based on Cyan, Magenta, Yellow, Key/Black (CMYK) mode is proposed to improve security, capacity, and processing time. The results show that the capacity increased from one to four times compared to RGB mode. Security was also improved due to a decrease in the probability of an adversary discovering keys. The processing time ranged between 0.001 ms (668 characters) and 31 s (25 million characters), depending on the length of the text. The compression rate for the encrypted file was decreased compared to WinRAR. In this study, Arabic and English texts were encrypted and decrypted. Full article
(This article belongs to the Section ICT Infrastructures for Cybersecurity)
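The capacity argument can be illustrated with a toy text-to-CMYK packing (a sketch of the channel arithmetic only: the actual TTIED algorithm adds key-driven encryption on top, and its reported one-to-four-times gain depends on its RGB baseline):

```python
def text_to_cmyk_pixels(text):
    """Pack each group of four text bytes into one CMYK pixel.
    A CMYK pixel has four channels where RGB has three, which is the
    source of the per-pixel capacity gain this sketch demonstrates."""
    data = text.encode("utf-8")
    data += b"\x00" * (-len(data) % 4)          # pad to a multiple of 4
    return [tuple(data[i:i + 4]) for i in range(0, len(data), 4)]

def cmyk_pixels_to_text(pixels):
    """Inverse operation: flatten the channels and strip the padding."""
    raw = bytes(b for px in pixels for b in px)
    return raw.rstrip(b"\x00").decode("utf-8")

pixels = text_to_cmyk_pixels("secret")
print(len(pixels))                   # 2 pixels carry 6 characters
print(cmyk_pixels_to_text(pixels))   # secret
```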
11 pages, 1706 KiB  
Article
Scalable Traffic Signal Controls Using Fog-Cloud Based Multiagent Reinforcement Learning
by Paul (Young Joun) Ha, Sikai Chen, Runjia Du and Samuel Labi
Computers 2022, 11(3), 38; https://doi.org/10.3390/computers11030038 - 8 Mar 2022
Cited by 6 | Viewed by 2832
Abstract
Optimizing traffic signal control (TSC) at intersections continues to pose a challenging problem, particularly for large-scale traffic networks. It has been shown in past research that it is feasible to optimize the operations of individual TSC systems or a small collection of such systems. However, it has been computationally difficult to scale these solution approaches to large networks, partly due to the curse of dimensionality that is encountered as the number of intersections increases. Fortunately, recent studies have recognized the potential of exploiting advancements in deep and reinforcement learning to address this problem, and some preliminary successes have been achieved in this regard. However, facilitating such intelligent solution approaches may require large infrastructure investments, such as roadside units (RSUs) and drones, to ensure that connectivity is available across all intersections in a large network. This represents an investment that may be burdensome for the road agency. As such, this study builds on recent work to present a scalable TSC model that may reduce the amount of enabling infrastructure that is required. This is achieved by using graph attention networks (GATs) as the neural network for deep reinforcement learning. GATs help to maintain the graph topology of the traffic network while disregarding any irrelevant information. A case study is carried out to demonstrate the effectiveness of the proposed model, and the results show much promise. The overall research outcome suggests that, by decomposing large networks using fog nodes, the proposed fog-based graphic RL (FG-RL) model can easily be scaled to larger traffic networks. Full article
(This article belongs to the Special Issue Machine Learning for Traffic Modeling and Prediction)
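The neighbour-restricted attention that lets a GAT respect the network topology can be sketched as a masked softmax over a node's connected intersections only (hypothetical scores and graph, not the paper's model):

```python
import numpy as np

def attention_weights(scores, neighbours):
    """Softmax over a node's neighbours only: the masking step that
    lets a graph attention network keep the intersection topology
    while ignoring unconnected parts of the traffic network."""
    s = scores[neighbours]
    e = np.exp(s - np.max(s))   # shift for numerical stability
    return e / e.sum()

# Raw compatibility scores of intersection 0 with intersections 0..3;
# only 0, 1 and 3 are physically connected to it (hypothetical graph),
# so the high score of intersection 2 is simply never seen.
scores = np.array([2.0, 2.0, 9.0, 2.0])
w = attention_weights(scores, neighbours=[0, 1, 3])
print(np.round(w, 3))   # [0.333 0.333 0.333]
```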
17 pages, 950 KiB  
Article
NDN Content Store and Caching Policies: Performance Evaluation
by Elídio Tomás da Silva, Joaquim Melo Henriques de Macedo and António Luís Duarte Costa
Computers 2022, 11(3), 37; https://doi.org/10.3390/computers11030037 - 4 Mar 2022
Cited by 13 | Viewed by 4005
Abstract
Among the various factors contributing to the performance of named data networking (NDN), the organization of caching is a key factor and has benefited from intense study by the networking research community. These studies have aimed at (1) finding the best strategy to adopt for content caching; (2) specifying the best location and number of content stores (CS) in the network; and (3) defining the best cache replacement policy. Assessing and comparing the performance of the proposed solutions is as essential as the development of the proposals themselves. The present work evaluates and compares the behavior of four caching policies (random, least recently used (LRU), least frequently used (LFU), and first in first out (FIFO)) applied to NDN. Several network scenarios are used for simulation (2 topologies, varying the percentage of content-store nodes (5–100), 1 and 10 producers, 32 and 41 consumers). Five metrics are considered for the performance evaluation: cache hit ratio (CHR), network traffic, retrieval delay, interest re-transmissions, and the number of upstream hops. Content requests follow the Zipf–Mandelbrot distribution (with skewness factor α=1.1 and α=0.75). LFU presents better performance in all considered metrics, except on the NDN testbed with 41 consumers, 1 producer, and a content request rate of 100 packets/s, where, for content store levels from 50% to 100%, LRU presents notably higher performance. Although the network behavior is similar for both skewness factors, when α=0.75 the CHR is significantly reduced, as expected. Full article
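The replacement policies compared above can be sketched with a small trace-replay simulator that computes the cache hit ratio (an illustration of LRU versus FIFO eviction, not the paper's simulation setup; LFU and random are omitted for brevity):

```python
from collections import OrderedDict

def cache_hit_ratio(requests, capacity, policy="LRU"):
    """Replay a content-request trace against a small content store
    and return the cache hit ratio (CHR) under LRU or FIFO eviction."""
    store, hits = OrderedDict(), 0
    for name in requests:
        if name in store:
            hits += 1
            if policy == "LRU":              # refresh recency on a hit
                store.move_to_end(name)
        else:
            if len(store) >= capacity:
                store.popitem(last=False)    # evict the oldest entry
            store[name] = True
    return hits / len(requests)

trace = ["a", "b", "a", "a"]
print(cache_hit_ratio(trace, capacity=2))   # 0.5
```

On traces with re-use beyond the FIFO horizon, LRU's recency refresh keeps popular content resident longer, which is why the two policies diverge.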
4 pages, 176 KiB  
Editorial
Game-Based Learning, Gamification in Education and Serious Games
by Carlos Vaz de Carvalho and Antonio Coelho
Computers 2022, 11(3), 36; https://doi.org/10.3390/computers11030036 - 4 Mar 2022
Cited by 27 | Viewed by 8032
Abstract
Video games have become one of the predominant forms of entertainment in our society, but they have also impacted many of its other social and cultural aspects [...] Full article
(This article belongs to the Special Issue Game-Based Learning, Gamification in Education and Serious Games)
18 pages, 1154 KiB  
Article
Rating the Dominance of Concepts in Semantic Taxonomies
by Gerasimos Razis, Ioannis Anagnostopoulos and Hong Zhou
Computers 2022, 11(3), 35; https://doi.org/10.3390/computers11030035 - 2 Mar 2022
Cited by 1 | Viewed by 2516
Abstract
The descriptive concepts of "semantic" taxonomies are assigned to content items in the publishing domain to support a plethora of operations, mostly regarding the organization and discoverability of content, as well as recommendation tasks. However, not all publishers rely on such structures, and many employ their own proprietary taxonomies, so content is either difficult for end users to retrieve or is stored in publisher-specific, fragmented "data silos". To address these issues, the modular and scalable "Dominance Metric" methodology is proposed for rating the dominance and importance of concepts in semantic taxonomies. The proposed metric is applied both to the vast multidisciplinary Microsoft Academic Graph Fields of Study taxonomy and to the MeSH controlled vocabulary in order to produce enhanced and refined versions of each. Moreover, we describe the process of cleansing the taxonomy resulting from Microsoft's structure by deduplicating concepts and refining the hierarchical relations to increase its representation quality. Our evaluation procedure provided valuable insights by showcasing that high volume, namely the number of publications a concept is assigned to, does not necessarily imply high influence; the latter is also affected by the structural and topological properties of the individual entities. Full article
15 pages, 2508 KiB  
Article
Multimodal Lip-Reading for Tracheostomy Patients in the Greek Language
by Yorghos Voutos, Georgios Drakopoulos, Georgios Chrysovitsiotis, Zoi Zachou, Dimitris Kikidis, Efthymios Kyrodimos and Themis Exarchos
Computers 2022, 11(3), 34; https://doi.org/10.3390/computers11030034 - 28 Feb 2022
Cited by 2 | Viewed by 2810
Abstract
Voice loss constitutes a crucial disorder that is highly associated with social isolation. The use of multimodal information sources, such as audiovisual information, is crucial since it can lead to the development of straightforward personalized word prediction models which can reproduce the patient's original voice. In this work, we designed a multimodal approach based on audiovisual information from patients, recorded before loss of voice, to develop a system for automated lip-reading in the Greek language. Data pre-processing methods, such as lip segmentation and frame-level sampling techniques, were used to enhance the quality of the imaging data. Audio information was incorporated into the model to automatically annotate sets of frames as words. Recurrent neural networks were trained on four different video recordings to develop a robust word prediction model. The model was able to correctly identify test words in different time frames with 95% accuracy. To our knowledge, this is the first word prediction model trained to recognize words from video recordings in the Greek language. Full article
19 pages, 1531 KiB  
Article
Distributed Attack Deployment Capability for Modern Automated Penetration Testing
by Jack Hance, Jordan Milbrath, Noah Ross and Jeremy Straub
Computers 2022, 11(3), 33; https://doi.org/10.3390/computers11030033 - 23 Feb 2022
Cited by 5 | Viewed by 5212
Abstract
Cybersecurity is an ever-changing landscape. The threats of the future are hard to predict and even harder to prepare for. This paper presents work designed to prepare for the cybersecurity landscape of tomorrow by creating a key support capability for an autonomous cybersecurity testing system. This system is designed to test and prepare critical infrastructure for what the future of cyberattacks looks like. It proposes a new type of attack framework that provides precise and granular attack control and higher perception within a set of infected infrastructure. The proposed attack framework is intelligent, supports the fetching and execution of arbitrary attacks, and has a small memory and network footprint. This framework facilitates autonomous rapid penetration testing as well as the evaluation of where detection systems and procedures are underdeveloped and require further improvement in preparation for rapid autonomous cyber-attacks. Full article
(This article belongs to the Special Issue Feature Paper in Computers)
19 pages, 2588 KiB  
Review
Literature Review on MOOCs on Sensory (Olfactory) Learning
by Pierpaolo Limone, Sandra Pati, Giusi Antonia Toto, Raffaele Di Fuccio, Antonietta Baiano and Giuseppe Lopriore
Computers 2022, 11(3), 32; https://doi.org/10.3390/computers11030032 - 23 Feb 2022
Cited by 2 | Viewed by 4124
Abstract
Massive Open Online Courses (MOOCs) have been described as a "next development of networked learning", and they have the potential to mediate sensory learning. To understand this phenomenon, the present systematic review examines the research techniques, subjects, and trends of MOOC research on sensory learning, in order to provide a thorough understanding of MOOC phenomena relevant to sensory (olfactory) learning. It evaluates 65 empirical studies published between 2008 and 2021 (four on multisensory learning and 61 empirical MOOC studies), retrieved by searching the PubMed, Scopus, Web of Science, and Google Scholar databases. The results indicate that most studies were based on quantitative research methods, followed by mixed research methods and qualitative research approaches; most of the studies were surveys, followed by platform databases and interviews; almost half of the studies were conducted using at least two methods for data collection (survey and interviews); and most were replicated. The most highlighted subjects included student retention, learning experience, social learning, and engagement. Implications and future studies are considered in order to reach a more evolved understanding of the acquisition of knowledge through the senses. Full article
(This article belongs to the Special Issue Present and Future of E-Learning Technologies)
16 pages, 7825 KiB  
Article
Attention Mechanism Guided Deep Regression Model for Acne Severity Grading
by Saeed Alzahrani, Baidaa Al-Bander and Waleed Al-Nuaimy
Computers 2022, 11(3), 31; https://doi.org/10.3390/computers11030031 - 23 Feb 2022
Cited by 10 | Viewed by 4356
Abstract
Acne vulgaris is the most common form of acne, primarily affecting adolescents and characterised by an eruption of inflammatory and/or non-inflammatory skin lesions. Accurate evaluation and severity grading of acne play a significant role in precise treatment for patients. Manual acne examination is typically conducted by dermatologists through visual inspection of the patient’s skin and counting of the acne lesions. However, this task is time-consuming and demands considerable effort from dermatologists. This paper presents an automated acne counting and severity grading method based on facial images. To this end, we develop a multi-scale dilated fully convolutional regressor for density map generation, integrated with an attention mechanism. The proposed fully convolutional regressor module adapts UNet with dilated convolution filters to systematically aggregate multi-scale contextual information for density map generation. We incorporate an attention mechanism, represented by prior knowledge of bounding boxes generated by Faster R-CNN, into the regressor model. This attention mechanism guides the regressor model on where to look for acne lesions by locating the most salient features related to the lesions under study, thereby improving its robustness to diverse facial acne lesion distributions in both sparse and dense regions. Finally, integrating over the generated density map yields the count of acne lesions within an image, and the count in turn indicates the level of acne severity. The obtained results demonstrate improved performance compared with state-of-the-art methods in terms of regression and classification metrics. The developed computer-based diagnosis tool would greatly support automated grading of acne lesion severity, significantly reducing the manual assessment and evaluation workload. Full article
(This article belongs to the Special Issue Advances of Machine and Deep Learning in the Health Domain)
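The counting step described in the abstract, integrating over the generated density map to obtain the lesion count, can be sketched as follows. This is a minimal illustration on a synthetic map; the function name and toy data are ours, not from the paper, which pairs the regressor with a Faster R-CNN attention mechanism not reproduced here.

```python
import numpy as np

def count_from_density_map(density_map: np.ndarray) -> int:
    """Estimate the object (lesion) count by integrating a density map.

    In density-map regression, the network is trained so that the map
    sums to the number of annotated objects; the predicted count is
    therefore the sum over all pixels, rounded to the nearest integer.
    """
    return int(round(float(density_map.sum())))

# Toy example: a 64x64 map with two Gaussian blobs, each normalised
# to unit mass, standing in for two predicted lesions.
density = np.zeros((64, 64))
y, x = np.mgrid[0:64, 0:64]
for cy, cx in [(16, 16), (40, 40)]:
    blob = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * 3.0 ** 2))
    density += blob / blob.sum()

print(count_from_density_map(density))  # → 2
```

Because each annotated object contributes unit mass to the ground-truth map, the integral is a natural count estimator even when lesions overlap or vary in size.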
18 pages, 2277 KiB  
Article
Learning from Peer Mistakes: Collaborative UML-Based ITS with Peer Feedback Evaluation
by Sehrish Abrejo, Hameedullah Kazi, Mutee U. Rahman, Ahsanullah Baloch and Amber Baig
Computers 2022, 11(3), 30; https://doi.org/10.3390/computers11030030 - 22 Feb 2022
Viewed by 2736
Abstract
Collaborative Intelligent Tutoring Systems (ITSs) use peer tutor assessment to give students feedback while they solve problems. Through this feedback, students reflect on their thinking and try to improve it when they encounter similar questions. The accuracy of the feedback given by peers is important because it helps students improve their learning skills. If the student acting as a peer tutor is unclear about the topic, they will probably provide incorrect feedback. The few attempts reported in the literature provide only limited support for improving the accuracy and relevance of peer feedback. This paper presents a collaborative ITS for teaching the Unified Modeling Language (UML), designed so that it can detect erroneous feedback before it is delivered to the student. The evaluations conducted in this study indicate that receiving and sending incorrect feedback have a negative impact on students’ learning skills. Furthermore, the results show that the experimental group with peer feedback evaluation achieved significant learning gains compared with the control group. Full article
(This article belongs to the Special Issue Present and Future of E-Learning Technologies)