Computational Intelligence, Soft Computing and Communication Networks for Applied Science II

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (31 May 2022) | Viewed by 24601

Special Issue Editor

Guest Editor
Disaster Preparedness and Emergency Management, University of Hawaii, 2540 Dole Street, Honolulu, HI 96822, USA
Interests: epidemiology and prevention of congenital anomalies; psychosis and affective psychosis; cancer epidemiology and prevention; molecular and human genome epidemiology; evidence synthesis related to public health and health services research

Special Issue Information

Dear Colleagues,

There have been a number of research advances in the fields of intelligent systems and communication networks in recent years. Accordingly, this Special Issue extends a previous body of work on Computational Intelligence, Soft Computing and Communication Networks for Applied Science.

Artificial and computational intelligence continues to transform all aspects of society, including the use of green computing for sustainability, the modeling of intelligent healthcare systems, the design of smart transportation networks, and the modeling of autonomous, intelligent, mobile vehicles. Many other fields in the applied sciences are similarly burgeoning. Computational intelligence and soft computing approaches possess a number of critical strengths: they can process large amounts of real-time and historical data acquired through environmental interactions; they continually learn from the consequences of action–result combinations; and tools from several branches of soft systems science can be combined for synergistic effect. All aspects of communication systems and networks and computational intelligence will be considered in this Special Issue.

Artificial intelligence and soft computing paradigms often leverage nature-inspired computational methodologies, including artificial neural networks (ANNs), fuzzy sets, and evolutionary algorithms (EAs) such as genetic algorithms (GAs), as well as their hybridizations, such as neuro-fuzzy computing and neo-fuzzy systems. These systems have produced valuable, timely, robust, high-quality, and human-competitive results that have contributed to artificial intelligence breakthroughs ranging from deep learning to genetic programming. The most promising tools and paradigms for computational intelligence and soft computing will be emphasized, including neural networks, swarm intelligence, expert systems, evolutionary computing, fuzzy systems, and artificial immune systems.
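As a concrete illustration of the evolutionary side of this toolbox, the following minimal sketch shows a real-valued genetic algorithm with tournament selection, arithmetic crossover, and Gaussian mutation. The fitness function, population size, and operator settings are illustrative choices, not drawn from any particular paper in this issue.

```python
import random

def genetic_maximize(fitness, bounds, pop_size=30, generations=60,
                     mut_sigma=0.3, seed=42):
    """Minimal real-valued genetic algorithm: tournament selection,
    arithmetic crossover, and Gaussian mutation."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        nxt = []
        for _ in range(pop_size):
            # Tournament selection: the fitter of two random individuals.
            p1 = max(rng.sample(pop, 2), key=fitness)
            p2 = max(rng.sample(pop, 2), key=fitness)
            child = 0.5 * (p1 + p2)            # arithmetic crossover
            child += rng.gauss(0, mut_sigma)   # Gaussian mutation
            nxt.append(min(hi, max(lo, child)))  # clip to bounds
        pop = nxt
        best = max(pop + [best], key=fitness)  # keep the best ever seen
    return best

# Maximize a toy fitness function with a known optimum at x = 3.
best = genetic_maximize(lambda x: -(x - 3.0) ** 2, bounds=(-10.0, 10.0))
```

The same skeleton generalizes to vector-valued individuals by applying crossover and mutation per coordinate.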

Prof. Dr. Jason K. Levy
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Soft, mobile-cloud based computing for social networks
  • Data mining and Big data analytics for applied science and engineering
  • Fuzzy system theory in health and environmental applications
  • Socio-environmental data analytical approaches using computational methods
  • Deep learning and machine learning algorithms for industrial applications
  • Intelligent techniques for smart surveillance and security in public health systems
  • Crowd computing-assisted access control and digital rights management
  • Evolutionary algorithms for data analysis and recommendations
  • Crowd intelligence and computing paradigms
  • Computer vision, image processing, and pattern recognition technologies for healthcare
  • Parallel and distributed computing for smart healthcare services
  • Autonomous systems and industrial process optimization
  • Extreme and intelligent manufacturing
  • Wireless and optical communications and networking
  • Cloud computing and networks
  • Networked control systems and information security
  • Speech/image/video processing and communications
  • Green computing and the Internet of Things

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Related Special Issue

Published Papers (9 papers)


Research

14 pages, 2422 KiB  
Article
Lightweight Transformer Network for Ship HRRP Target Recognition
by Zhibin Yue, Jianbin Lu and Lu Wan
Appl. Sci. 2022, 12(19), 9728; https://doi.org/10.3390/app12199728 - 27 Sep 2022
Cited by 1 | Viewed by 1798
Abstract
The traditional High-Resolution Range Profile (HRRP) target recognition method has difficulty automatically extracting deep target features and has low recognition accuracy when training samples are scarce. To solve these problems, a ship recognition method is proposed based on a lightweight Transformer model. The model enhances the representation of key features by embedding Recurrent Neural Networks (RNNs) into the Transformer's encoder. Group Linear Transformations (GLTs) are introduced into the Transformer to reduce the number of parameters in the model, and stable features are extracted through linear inter-group dimensional transformations. An adaptive gradient clipping algorithm is combined with the Stochastic Gradient Descent (SGD) optimizer to allow the gradient to change dynamically during training and to improve the training speed and generalization ability of the model. Experimental results on a simulated dataset show that multi-layer model stacking can effectively extract deep features of targets and improve recognition accuracy. At the same time, the lightweight Transformer model maintains good recognition performance with few parameters and limited training samples. Full article
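The abstract pairs adaptive gradient clipping with SGD but does not spell out the clipping rule. One common formulation, sketched below in plain Python, rescales each gradient so its norm stays below a fixed ratio of the parameter norm, so the clipping threshold adapts as the weights evolve during training. The function names and the `clip_ratio` value are assumptions for illustration, not the authors' implementation.

```python
def adaptive_clip(grad, param, clip_ratio=0.01, eps=1e-3):
    """Rescale the gradient so its L2 norm never exceeds
    clip_ratio times the parameter's L2 norm."""
    g_norm = sum(g * g for g in grad) ** 0.5
    p_norm = max(sum(p * p for p in param) ** 0.5, eps)
    max_norm = clip_ratio * p_norm
    if g_norm > max_norm:
        scale = max_norm / g_norm
        return [g * scale for g in grad]
    return grad

def sgd_step(param, grad, lr=0.1):
    """One SGD update with adaptive clipping applied first."""
    grad = adaptive_clip(grad, param)
    return [p - lr * g for p, g in zip(param, grad)]
```

Because the threshold is tied to the parameter norm, small early-training weights get tight clipping while larger, settled weights allow larger updates.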

21 pages, 1995 KiB  
Article
Development of a Framework to Aid the Transition from Reactive to Proactive Maintenance Approaches to Enable Energy Reduction
by Michael Ahern, Dominic T. J. O’Sullivan and Ken Bruton
Appl. Sci. 2022, 12(13), 6704; https://doi.org/10.3390/app12136704 - 1 Jul 2022
Cited by 4 | Viewed by 3024
Abstract
The disparity between public datasets and real industrial datasets is limiting the practical application of advanced data analysis. Therefore, industry is stuck in a reactive mode regarding their maintenance strategy and cannot transition to cost-effective and energy-efficient proactive maintenance approaches. In this paper, an integration-type adaptation of the CRISP-DM data mining process model is proposed to combine domain expertise with data science techniques to address the pervasive data issues in industrial datasets. The development of the Industrial Data Analysis Improvement Cycle (IDAIC) framework led to the novel repurposing of knowledge-based fault detection and diagnosis (FDD) techniques for data quality assessment. Through interdisciplinary collaboration, the proposed framework facilitates a transition from reactive to proactive problem solving by firstly resolving known faults and data issues using domain expertise, and secondly exploring unknown or novel faults using data analysis. Full article

8 pages, 2011 KiB  
Article
A QR Code-Based Approach to Differentiating the Display of Augmented Reality Content
by Pei-Yu Lin, Wen-Chuan Wu and Jen-Ho Yang
Appl. Sci. 2021, 11(24), 11801; https://doi.org/10.3390/app112411801 - 12 Dec 2021
Cited by 4 | Viewed by 4471
Abstract
The augmented reality (AR) system requires markers to recognize and locate virtual objects on the screens of mobile devices. However, both markers and objects must be registered via the online platform in advance. In addition, an AR marker can only pair with a fixed set of virtual objects, limiting the flexibility and immediacy of changing and updating these data. This paper incorporates the quick response barcode (QR code) into the AR system to address these issues. We propose an algorithm with two vital goals: (1) generating differentiated virtual objects for different target users by using only one QR code as the marker, and (2) concealing different private authentication data in QR modules by exploiting the error correction capability. We then demonstrate the proposed approach via a simulation of two practical scenarios: electronic catalogs for business applications, and differentiated instructional materials for digital learning. This paper contributes to AR and QR code research and practices. Full article
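The paper's second goal relies on the fact that a QR code's Reed–Solomon error correction tolerates a bounded number of altered modules. The toy below illustrates that principle with a much simpler repetition code: flipping one of r copies embeds a hidden bit, while majority voting still recovers the public data, so the hidden channel lives entirely inside the error-correction budget. This is a didactic stand-in, not the authors' algorithm or an actual QR encoder.

```python
def encode(bits, r=3):
    """Repetition code: each data bit is stored r times."""
    return [b for bit in bits for b in [bit] * r]

def embed_hidden(codeword, hidden_bits, r=3):
    """Embed one hidden bit per data bit by flipping a single copy."""
    out = list(codeword)
    for i, h in enumerate(hidden_bits):
        if h:
            out[i * r] ^= 1  # flip one of the r copies
    return out

def decode_public(codeword, r=3):
    """Majority vote per group recovers the original data bits."""
    return [1 if sum(codeword[i * r:(i + 1) * r]) * 2 > r else 0
            for i in range(len(codeword) // r)]

def decode_hidden(codeword, r=3):
    """A hidden 1 shows up as any copy disagreeing with the majority."""
    public = decode_public(codeword, r)
    return [0 if codeword[i * r:(i + 1) * r] == [public[i]] * r else 1
            for i in range(len(public))]

data, hidden = [1, 0, 1, 1], [0, 1, 1, 0]
cw = embed_hidden(encode(data), hidden)
```

Real QR codes use Reed–Solomon block codes rather than repetition, but the budget-sharing idea is the same: as long as the intentional flips stay within the correctable error count, ordinary readers decode the public content unchanged.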

8 pages, 1316 KiB  
Communication
Towards Continuous Deployment for Blockchain
by Tomasz Górski
Appl. Sci. 2021, 11(24), 11745; https://doi.org/10.3390/app112411745 - 10 Dec 2021
Cited by 20 | Viewed by 3005
Abstract
Ensuring a production-ready state of the application under development is an immanent feature of the continuous delivery approach. In a blockchain network, nodes communicate, storing data in a decentralized manner. Each node executes the same business application but operates in a distinct execution environment. The literature lacks research focusing on continuous practices for blockchain and distributed ledger technology, in particular work that supports both the design and deployment disciplines of software development. Artifacts from the considered disciplines have been placed in the 1 + 5 architectural views model. The approach aims to ensure the continuous deployment of containerized blockchain distributed applications. The solution has been divided into two independent components, delivery and deployment, which interact through Git distributed version control. Dedicated GitHub repositories store the business application and the deployment configurations for nodes. The delivery component ensures that the deployment package contains the current version of the business application together with the node-specific, up-to-date deployment configuration files. The deployment component is responsible for providing running distributed applications in containers for all blockchain nodes. The approach uses the Jenkins and Kubernetes frameworks. For verification, preliminary tests have been conducted on the Electricity Consumption and Supply Management blockchain-based system for prosumers of renewable energy. Full article

14 pages, 1265 KiB  
Article
An Efficient Analytical Approach to Visualize Text-Based Event Logs for Semiconductor Equipment
by Gunwoo Lee and Jongpil Jeong
Appl. Sci. 2021, 11(13), 5944; https://doi.org/10.3390/app11135944 - 26 Jun 2021
Cited by 1 | Viewed by 1947
Abstract
Semiconductor equipment consists of a complex system in which numerous components are organically connected and controlled by many controllers. The EventLog records all the information available during system processes. Because the EventLog records system runtime information so that developers and engineers can understand system behavior and identify possible problems, it is essential for troubleshooting and maintenance. However, because the EventLog is text-based, complex to view, and stores a large quantity of information, the file size is very large. For long processes, the log comprises several files, and engineers must look through many of them, which makes it difficult to find the cause of a problem and therefore lengthens the analysis. In addition, if the EventLog grows large, it cannot be retained for a prolonged period because it consumes a large amount of hard disk space on the CTC computer. In this paper, we propose a method to reduce the size of existing text-based log files. Our proposed method saves and visualizes text-based EventLogs in a database, making problems easier to approach than with the existing text-based analysis. We confirm the feasibility of this approach and propose a method that makes it easier for engineers to analyze log files. Full article
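The move from grep-ing text logs to querying a database can be sketched with Python's built-in sqlite3 module. The log format, field names, and sample lines below are invented for illustration and do not reflect the actual EventLog schema.

```python
import re
import sqlite3

# Hypothetical text-based event-log lines (timestamp, level, unit, message).
LOG_LINES = [
    "2021-05-01 09:00:01 INFO  LoadLock  wafer transfer start",
    "2021-05-01 09:00:04 ERROR Chamber1  pressure out of range",
    "2021-05-01 09:00:09 INFO  Chamber1  recipe step 3 complete",
]

PATTERN = re.compile(r"(\S+ \S+) (\w+)\s+(\S+)\s+(.*)")

def load_log(lines, conn):
    """Parse text log lines into a queryable SQLite table."""
    conn.execute("CREATE TABLE IF NOT EXISTS events"
                 " (ts TEXT, level TEXT, unit TEXT, message TEXT)")
    rows = [PATTERN.match(line).groups() for line in lines]
    conn.executemany("INSERT INTO events VALUES (?, ?, ?, ?)", rows)
    return conn

conn = load_log(LOG_LINES, sqlite3.connect(":memory:"))
errors = conn.execute(
    "SELECT unit, message FROM events WHERE level = 'ERROR'").fetchall()
```

Once the events are rows instead of text, filtering by controller, time window, or severity becomes a one-line query, and the raw text files no longer need to be kept on disk in full.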

14 pages, 514 KiB  
Article
Concerto: Dynamic Processor Scaling for Distributed Data Systems with Replication
by Jinsu Lee and Eunji Lee
Appl. Sci. 2021, 11(12), 5731; https://doi.org/10.3390/app11125731 - 21 Jun 2021
Cited by 3 | Viewed by 2255
Abstract
A surge of interest in data-intensive computing has led to a drastic increase in the demand for data centers. Given this growing popularity, data centers are becoming a primary contributor to the increased consumption of energy worldwide. To mitigate this problem, this paper revisits DVFS (Dynamic Voltage Frequency Scaling), a well-known technique to reduce the energy usage of processors, from the viewpoint of distributed systems. Distributed data systems typically adopt a replication facility to provide high availability and short latency. In this type of architecture, the replicas are maintained in an asynchronous manner, while the master synchronously operates via user requests. Based on this relaxed constraint on replicas, we present a novel DVFS technique called Concerto, which intentionally scales down the frequency of processors operating for the replicas. This mechanism can achieve considerable energy savings without an increase in the user-perceived latency. We implemented Concerto on Redis 6.0.1, a commercial-level distributed key-value store, demonstrating that all associated performance issues were resolved. To prevent a delay in read queries assigned to the replicas, we offload the independent part of the read operation to the fast-running thread. We also empirically demonstrate that the decreased performance of the replica does not cause an increase in the replication lag because the inherent load imbalance between the master and replica hides the increased latency of the replica. Performance evaluations with micro and real-world benchmarks show that Redis saves 32% on average and up to 51% of energy with Concerto under various workloads, with minor performance losses in the replicas. Despite numerous studies of energy saving in data centers, to the best of our knowledge, Concerto is the first approach that considers clock-speed scaling at the aggregate level, exploiting heterogeneous performance constraints across data nodes. Full article
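A back-of-the-envelope model shows why scaling down replica frequency saves energy: with dynamic power roughly proportional to f·V² and voltage scaled with frequency, energy per unit of work grows roughly with f². The frequencies and replica count below are invented for illustration and are unrelated to the paper's measured 32% savings.

```python
def dynamic_energy(freq_ghz, work_cycles):
    """Toy CMOS model: dynamic power ~ f * V^2; with V scaled
    proportionally to f, power ~ f^3 and, since the work takes
    (cycles / f) seconds, energy per unit of work ~ f^2."""
    power = freq_ghz ** 3
    time = work_cycles / freq_ghz
    return power * time

def cluster_energy(master_f, replica_f, replicas, work):
    """Total energy for one master plus N asynchronous replicas,
    each executing the same amount of work."""
    return (dynamic_energy(master_f, work)
            + replicas * dynamic_energy(replica_f, work))

base = cluster_energy(3.0, 3.0, replicas=2, work=1e9)    # no scaling
scaled = cluster_energy(3.0, 2.0, replicas=2, work=1e9)  # replicas slowed
saving = 1 - scaled / base
```

Because only the replicas are slowed and the master keeps full speed, the user-facing latency path is untouched while a sizeable fraction of total energy drops out, which is the intuition behind the Concerto design.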

12 pages, 17057 KiB  
Article
Machine Learning Model for Lymph Node Metastasis Prediction in Breast Cancer Using Random Forest Algorithm and Mitochondrial Metabolism Hub Genes
by Byung-Chul Kim, Jingyu Kim, Ilhan Lim, Dong Ho Kim, Sang Moo Lim and Sang-Keun Woo
Appl. Sci. 2021, 11(7), 2897; https://doi.org/10.3390/app11072897 - 24 Mar 2021
Cited by 7 | Viewed by 2882
Abstract
Breast cancer metastasis can have a fatal outcome, and the prediction of metastasis is critical for establishing effective treatment strategies. RNA sequencing (RNA-seq) is a good tool for identifying genes that promote and support metastasis development. Hub gene analysis is a bioinformatics method that can effectively analyze RNA sequencing results and can be used to specify the set of genes most relevant to the cellular functions involved in metastasis. Herein, a new machine learning model based on RNA-seq data was developed, using the random forest algorithm and hub genes, to estimate the accuracy of breast cancer metastasis prediction. Single-cell breast cancer samples (56 metastatic and 38 non-metastatic) were obtained from the Gene Expression Omnibus database, and the Weighted Gene Correlation Network Analysis package was used to select gene modules and hub genes (with functions in mitochondrial metabolism). A machine learning prediction model using the hub gene set was devised and its accuracy evaluated. A prediction model comprising the 54-functional-gene modules and the hub gene set (NDUFA9, NDUFB5, and NDUFB3) showed accuracies of 0.769 ± 0.02, 0.782 ± 0.012, and 0.945 ± 0.016, respectively. The test accuracy of the hub gene set was over 93%, and that of the prediction model with random forest and hub genes was over 91%. A breast cancer metastasis dataset from The Cancer Genome Atlas was used for external validation, showing an accuracy of over 91%. The hub gene assay can be used to predict breast cancer metastasis by machine learning. Full article
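The pipeline above combines WGCNA-selected hub genes with a random forest classifier. As a self-contained illustration of the classifier half only, here is a toy random forest of depth-1 trees (decision stumps), each trained on a bootstrap sample with one randomly chosen feature, applied to synthetic two-"gene" expression data. None of this reproduces the authors' model, data, or gene sets.

```python
import random

def train_stump(X, y, feat):
    """Fit a depth-1 tree: best threshold split on a single feature."""
    best = None
    for t in sorted({row[feat] for row in X}):
        left = [yi for row, yi in zip(X, y) if row[feat] <= t]
        right = [yi for row, yi in zip(X, y) if row[feat] > t]
        if not left or not right:
            continue
        lp = round(sum(left) / len(left))    # majority class, left side
        rp = round(sum(right) / len(right))  # majority class, right side
        err = sum(yi != lp for yi in left) + sum(yi != rp for yi in right)
        if best is None or err < best[0]:
            best = (err, t, lp, rp)
    if best is None:  # constant feature: fall back to the majority class
        maj = round(sum(y) / len(y))
        return lambda row: maj
    _, t, lp, rp = best
    return lambda row: lp if row[feat] <= t else rp

def random_forest(X, y, n_trees=25, seed=7):
    """Bootstrap the samples and give each stump one random feature;
    predict by majority vote over the ensemble."""
    rng = random.Random(seed)
    n, d = len(X), len(X[0])
    stumps = []
    for _ in range(n_trees):
        idx = [rng.randrange(n) for _ in range(n)]
        Xb, yb = [X[i] for i in idx], [y[i] for i in idx]
        stumps.append(train_stump(Xb, yb, rng.randrange(d)))
    return lambda row: round(sum(s(row) for s in stumps) / n_trees)

# Synthetic expression values for two hypothetical "genes" per sample.
X = [[0.1, 0.2], [0.2, 0.3], [0.3, 0.1],
     [0.7, 0.8], [0.8, 0.9], [0.9, 0.7]]
y = [0, 0, 0, 1, 1, 1]  # toy labels: 0 = non-metastatic, 1 = metastatic
predict = random_forest(X, y)
```

Production pipelines use full decision trees and per-split feature subsampling (e.g., scikit-learn's RandomForestClassifier); the stump version keeps the bagging-plus-voting idea visible in a few lines.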

22 pages, 843 KiB  
Article
Anticipatory Troubleshooting
by Netanel Hasidi and Meir Kalech
Appl. Sci. 2021, 11(3), 995; https://doi.org/10.3390/app11030995 - 22 Jan 2021
Cited by 2 | Viewed by 1578
Abstract
Troubleshooting is the process of diagnosing and repairing a system that is behaving abnormally. It involves performing various diagnostic and repair actions. Performing these actions may incur costs, and traditional troubleshooting algorithms aim to minimize the costs incurred until the system is fixed. Prognosis deals with predicting future failures. We propose to incorporate prognosis and diagnosis techniques to solve troubleshooting problems. This integration enables (1) better fault isolation and (2) more intelligent decision making with respect to the repair actions to employ to minimize troubleshooting costs over time. In particular, we consider an anticipatory troubleshooting challenge in which we aim to minimize the costs incurred to fix the system over time, while reasoning about both current and future failures. Anticipatory troubleshooting raises two main dilemmas: the fix–replace dilemma and the replace-healthy dilemma. The fix–replace dilemma is the question of how to repair a faulty component: fixing it or replacing it with a new one. The replace-healthy dilemma is the question of whether a healthy component should be replaced with a new one in order to prevent it from failing in the future. We propose to solve these dilemmas by modeling them as a Markov decision problem and reasoning about future failures using techniques from the survival analysis literature. The resulting algorithm was evaluated experimentally, showing that the proposed anticipatory troubleshooting algorithms yield lower overall costs compared to troubleshooting algorithms that do not reason about future faults. Full article
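The fix–replace dilemma can be caricatured with a one-component expected-cost calculation: fixing is cheap but leaves the component failure-prone, while replacing costs more but resets the failure rate. All costs and rates below are invented for illustration; the paper itself models the problem as a Markov decision process with survival-analysis estimates of future failures.

```python
def expected_cost(action, horizon, p_fail):
    """Expected total cost of repairing one faulty component now and
    paying for its expected future failures over `horizon` steps."""
    costs = {"fix": 10.0, "replace": 40.0}
    # Fixing keeps the old part (same per-step failure probability);
    # replacing installs a new part with a much lower one.
    rates = {"fix": p_fail, "replace": p_fail / 4.0}
    repair, rate = costs[action], rates[action]
    # Pay the repair now, plus an expected cheap fix per future failure.
    return repair + horizon * rate * costs["fix"]

# Short horizon: fixing wins.  Long horizon: replacing wins.
short_fix = expected_cost("fix", horizon=2, p_fail=0.5)
short_rep = expected_cost("replace", horizon=2, p_fail=0.5)
long_fix = expected_cost("fix", horizon=20, p_fail=0.5)
long_rep = expected_cost("replace", horizon=20, p_fail=0.5)
```

The crossover between the two regimes is exactly what an anticipatory policy reasons about; the replace-healthy dilemma adds the further option of paying the replacement cost before any fault has occurred.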

15 pages, 3384 KiB  
Article
DASH Live Broadcast Traffic Model: A Time-Bound Delay Model for IP-Based Digital Terrestrial Broadcasting Systems
by Hyungyoon Seo and Goo Kim
Appl. Sci. 2021, 11(1), 247; https://doi.org/10.3390/app11010247 - 29 Dec 2020
Viewed by 1885
Abstract
This paper proposes a live broadcast traffic model for an internet protocol (IP)-based terrestrial digital broadcasting system transmitting dynamic adaptive streaming over hypertext transfer protocol (DASH) media. IP-based terrestrial digital broadcasting systems such as Advanced Television Systems Committee (ATSC) 3.0 transmit media content (e.g., full high definition and ultra-high definition) in units of DASH segment files. Although DASH segment files have the same quality and playback time, the size of each segment file can vary according to the media composition. New technologies have increased the transmission capacity of terrestrial broadcasting systems; however, that capacity is still limited and fixed compared to wired broadcasting networks. Therefore, problems arise with broadcasting resource efficiency and transmission delay when transmitting variable-size segment files over a terrestrial digital broadcasting network. In this paper, the resource efficiency and transmission delay incurred when transmitting actual DASH segment files are simulated through the live broadcast traffic model, and the maximum delay time that a viewer accessing the terrestrial broadcast can experience is presented. Full article
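The core problem, variable-size segments pushed through a fixed-capacity channel once per segment period, can be sketched with a few lines of queueing arithmetic. The capacity, period, and segment sizes below are invented for illustration, not taken from the paper's simulations.

```python
def transmission_delays(segment_bits, capacity_bps, segment_period_s=1.0):
    """One segment of variable size arrives each period; the channel
    drains at a fixed rate. Returns, per segment, how far its
    transmission finishes behind its arrival time (queueing included)."""
    delays, backlog_finish = [], 0.0
    for i, size in enumerate(segment_bits):
        arrival = i * segment_period_s
        start = max(arrival, backlog_finish)  # wait for earlier segments
        backlog_finish = start + size / capacity_bps
        delays.append(backlog_finish - arrival)
    return delays

# Hypothetical 10 Mbps channel, 1 s segments of varying size (bits).
delays = transmission_delays([8e6, 14e6, 6e6, 12e6], capacity_bps=10e6)
```

An oversized segment (14 Mb over a 10 Mbps link) not only arrives late itself but pushes back the segments behind it, which is why the model's output of interest is the maximum delay a viewer can experience rather than the average.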
