Algorithms, Volume 13, Issue 3 (March 2020) – 25 articles

Cover Story: Dynamic Bayesian networks capture time-based causal relationships within their model nodes. When representing complex engineering systems, which comprise tightly integrated hardware, software, and human components, these connections support inferences about system-level diagnostics and prognostics that would otherwise be difficult to identify. New information received from sensors and other monitoring tools is used as evidence to update previous system health assessments. These models can be used to prepare system operators for rare-event accident scenarios, where an up-to-date understanding of a system’s health is critical for maintaining its current and future functionality.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
16 pages, 932 KiB  
Article
Bi-Objective Dynamic Multiprocessor Open Shop Scheduling: An Exact Algorithm
by Tamer F. Abdelmaguid
Algorithms 2020, 13(3), 74; https://doi.org/10.3390/a13030074 - 24 Mar 2020
Cited by 8 | Viewed by 3787
Abstract
An important element in the integration of the fourth industrial revolution is the development of efficient algorithms for dynamic scheduling problems. In dynamic scheduling, jobs can be admitted during the execution of a given schedule, which necessitates appropriately planned rescheduling decisions to maintain a high level of performance. In this paper, a dynamic case of the multiprocessor open shop scheduling problem is addressed. This problem appears in different contexts, particularly those involving diagnostic operations in the maintenance and health care industries. Two objectives are considered simultaneously: the minimization of the makespan and the minimization of the mean weighted flow time. The former objective aims to sustain efficient utilization of the available resources, while the latter helps maintain a high level of customer satisfaction. An exact algorithm is presented for generating optimal Pareto front solutions. Although the studied problem is NP-hard for both objectives, the presented algorithm can solve small instances, as demonstrated through computational experiments on a testbed of 30 randomly generated instances. The algorithm can also generate approximate Pareto front solutions when the computational time needed to find proven optimal solutions for the generated sub-problems is excessive. Furthermore, the computational results are used to investigate the characteristics of the optimal Pareto front of the studied problem, from which some insights for future metaheuristic developments are drawn. Full article
(This article belongs to the Section Combinatorial Optimization, Graph, and Network Algorithms)
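A minimal sketch of the bi-objective notion at the heart of this paper: given candidate schedules scored by (makespan, mean weighted flow time), keep only the non-dominated ones. This is plain illustrative Python with made-up values, not the paper's exact algorithm.

```python
# Filter candidate schedules down to their Pareto (non-dominated) set for
# two minimization objectives: makespan and mean weighted flow time.
def pareto_front(points):
    front = []
    for p in points:
        # p is dominated if another point is at least as good in both objectives
        if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points):
            front.append(p)
    return sorted(set(front))

schedules = [(10, 7.5), (12, 6.0), (10, 6.5), (15, 5.5), (11, 8.0)]  # toy scores
print(pareto_front(schedules))   # [(10, 6.5), (12, 6.0), (15, 5.5)]
```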

20 pages, 1750 KiB  
Article
Classical and Deep Learning Paradigms for Detection and Validation of Key Genes of Risky Outcomes of HCV
by Nagwan M. Abdel Samee
Algorithms 2020, 13(3), 73; https://doi.org/10.3390/a13030073 - 24 Mar 2020
Cited by 11 | Viewed by 4333
Abstract
Hepatitis C virus (HCV) is one of the most dangerous viruses worldwide. It is the foremost cause of hepatic cirrhosis and hepatocellular carcinoma (HCC). Detecting new key genes that play a role in the growth of HCC in HCV patients using machine learning techniques paves the way for producing accurate antivirals. This work has two phases: detecting the up/downregulated genes using classical univariate and multivariate feature selection methods, and validating the retrieved list of genes using in silico classifiers. However, classification algorithms in the medical domain frequently suffer from a deficiency of training cases. Therefore, a deep neural network approach is proposed here to validate the significance of the retrieved genes in distinguishing HCV-infected samples from uninfected ones. The validation model is based on the artificial generation of new examples from the retrieved genes’ expressions using sparse autoencoders. Subsequently, the generated gene-expression data are used to train conventional classifiers. Our results in the first phase yielded a better retrieval of significant genes using Principal Component Analysis (PCA), a multivariate approach: the list of genes retrieved using PCA contained a higher number of HCC biomarkers than the lists retrieved from the univariate methods. In the second phase, the classification accuracy reveals the relevance of the extracted key genes in classifying the HCV-infected and uninfected samples. Full article
(This article belongs to the Special Issue Algorithms in Bioinformatics)
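The augmentation idea described above can be sketched in a few lines of PyTorch: a sparse autoencoder (reconstruction loss plus an L1 penalty on the code) is fitted to gene-expression vectors, and synthetic training examples are decoded from jittered latent codes. The layer sizes, penalty weight, and random stand-in data are assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

n_genes, n_latent = 200, 16   # illustrative dimensions
encoder = nn.Sequential(nn.Linear(n_genes, 64), nn.ReLU(), nn.Linear(64, n_latent))
decoder = nn.Sequential(nn.Linear(n_latent, 64), nn.ReLU(), nn.Linear(64, n_genes))
optim = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

x = torch.randn(32, n_genes)   # stand-in for real expression profiles
for _ in range(200):
    z = encoder(x)
    # reconstruction loss plus an L1 penalty on the code enforces sparsity
    loss = nn.functional.mse_loss(decoder(z), x) + 1e-3 * z.abs().mean()
    optim.zero_grad()
    loss.backward()
    optim.step()

with torch.no_grad():          # decode perturbed codes into synthetic samples
    z = encoder(x)
    synthetic = decoder(z + 0.1 * torch.randn_like(z))
print(synthetic.shape)         # torch.Size([32, 200])
```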

65 pages, 1890 KiB  
Review
Energy Efficient Routing in Wireless Sensor Networks: A Comprehensive Survey
by Christos Nakas, Dionisis Kandris and Georgios Visvardis
Algorithms 2020, 13(3), 72; https://doi.org/10.3390/a13030072 - 24 Mar 2020
Cited by 113 | Viewed by 16179
Abstract
Wireless Sensor Networks (WSNs) are among the fastest-emerging technologies, thanks to their great capabilities and their ever-growing range of applications. However, the lifetime of WSNs is severely restricted by the limited energy capacity of their sensor nodes, which is why energy conservation is considered the most important research concern for WSNs. Radio communication is the most energy-consuming function in a WSN, so energy-efficient routing is necessary to save energy and thus prolong the lifetime of WSNs. For this reason, numerous protocols for energy-efficient routing in WSNs have been proposed. This article offers an analytical and up-to-date survey of protocols of this kind. The classic and modern protocols presented are categorized depending on (i) how the network is structured, (ii) how data are exchanged, (iii) whether or not location information is used, and (iv) whether or not Quality of Service (QoS) or multiple paths are supported. In each category, protocols are both described and compared in terms of specific performance metrics, and their advantages and disadvantages are discussed. Finally, the study findings are discussed, concluding remarks are drawn, and open research issues are indicated. Full article

24 pages, 552 KiB  
Article
Two-Step Classification with SVD Preprocessing of Distributed Massive Datasets in Apache Spark
by Athanasios Alexopoulos, Georgios Drakopoulos, Andreas Kanavos, Phivos Mylonas and Gerasimos Vonitsanos
Algorithms 2020, 13(3), 71; https://doi.org/10.3390/a13030071 - 24 Mar 2020
Cited by 14 | Viewed by 4808
Abstract
At the dawn of the 10V, or big data, era, a considerable number of sources, such as smartphones, IoT devices, social media, smart city sensors, and health care systems, constitute but a small portion of the data lakes feeding the entire big data ecosystem. This 10V data growth poses two primary challenges, namely storing and processing. Concerning the latter, new frameworks have been developed, including distributed platforms such as the Hadoop ecosystem. Classification is a major machine learning task typically executed on distributed platforms, and as a consequence many algorithmic techniques have been developed tailored for these platforms. This article relies extensively, in two ways, on classifiers implemented in MLlib, the main machine learning library for the Hadoop ecosystem. First, a vast number of classifiers are applied to two datasets, namely Higgs and PAMAP. Second, a two-step classification is performed ab ovo on the same datasets: the singular value decomposition of the data matrix first determines a set of transformed attributes, which in turn drive the classifiers of MLlib. The twofold purpose of the proposed architecture is to reduce complexity while maintaining a similar, if not better, level of accuracy, recall, and F1. The intuition behind this approach stems from the engineering principle of breaking down complex problems into simpler, more manageable tasks. The experiments, based on the same Spark cluster, indicate that the proposed architecture outperforms the individual classifiers with respect to both complexity and the abovementioned metrics. Full article
(This article belongs to the Special Issue Mining Humanistic Data 2019)
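A hedged PySpark sketch of the two-step pipeline: a truncated SVD of the feature matrix yields transformed attributes, on which an MLlib classifier is then trained. The file path, k = 10, the label-in-first-column layout, and the choice of logistic regression are assumptions for illustration.

```python
import numpy as np
from pyspark import SparkContext
from pyspark.mllib.linalg.distributed import RowMatrix
from pyspark.mllib.regression import LabeledPoint
from pyspark.mllib.classification import LogisticRegressionWithLBFGS

sc = SparkContext(appName="svd-two-step")
rows = sc.textFile("hdfs:///data/higgs.csv") \
         .map(lambda line: [float(v) for v in line.split(",")])

# Step 1: truncated SVD of the feature matrix (label column stripped off).
svd = RowMatrix(rows.map(lambda r: r[1:])).computeSVD(k=10, computeU=False)
V = np.array(svd.V.toArray())   # d x k right singular vectors, kept local

# Step 2: project every row onto V and train on the transformed attributes.
training = rows.map(lambda r: LabeledPoint(r[0], np.array(r[1:]).dot(V)))
model = LogisticRegressionWithLBFGS.train(training)
```

Keeping V as a local NumPy array, rather than referencing the SVD wrapper inside the map, keeps the closure serializable and preserves label/feature alignment without a join.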

24 pages, 963 KiB  
Article
Ensemble Learning of Hybrid Acoustic Features for Speech Emotion Recognition
by Kudakwashe Zvarevashe and Oludayo Olugbara
Algorithms 2020, 13(3), 70; https://doi.org/10.3390/a13030070 - 22 Mar 2020
Cited by 57 | Viewed by 7448
Abstract
Automatic recognition of emotion is important for facilitating seamless interactivity between a human being and an intelligent robot towards the full realization of a smart society. Signal processing and machine learning methods are widely applied to recognize human emotions based on features extracted from facial images, video files, or speech signals. However, these features have not been able to recognize the fear emotion with the same level of precision as other emotions. The authors propose the agglutination of prosodic and spectral features from a group of carefully selected features to realize hybrid acoustic features for improving the task of emotion recognition. Experiments were performed to test the effectiveness of the proposed features, which were extracted from speech files of two public databases and used to train five popular ensemble learning algorithms. Results show that random decision forest ensemble learning of the proposed hybrid acoustic features is highly effective for speech emotion recognition. Full article
(This article belongs to the Special Issue Ensemble Algorithms and Their Applications)
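The agglutination of feature families can be illustrated with a short Python sketch: prosodic descriptors (pitch, energy) are concatenated with spectral ones (MFCCs) and fed to a random-forest ensemble. The file names, labels, and feature choices are placeholders, not the paper's selected set.

```python
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def hybrid_features(path):
    y, sr = librosa.load(path, sr=16000)
    f0 = librosa.yin(y, fmin=50, fmax=400, sr=sr)        # prosodic: pitch track
    rms = librosa.feature.rms(y=y)[0]                    # prosodic: energy
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # spectral
    return np.hstack([f0.mean(), f0.std(), rms.mean(), rms.std(),
                      mfcc.mean(axis=1), mfcc.std(axis=1)])

paths, labels = ["angry_01.wav", "fear_01.wav"], [0, 1]  # placeholder corpus
X = np.vstack([hybrid_features(p) for p in paths])
clf = RandomForestClassifier(n_estimators=200).fit(X, labels)
```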

16 pages, 11276 KiB  
Article
Oil Spill Monitoring of Shipborne Radar Image Features Using SVM and Local Adaptive Threshold
by Jin Xu, Haixia Wang, Can Cui, Baigang Zhao and Bo Li
Algorithms 2020, 13(3), 69; https://doi.org/10.3390/a13030069 - 21 Mar 2020
Cited by 30 | Viewed by 4485
Abstract
In the case of marine accidents, monitoring marine oil spills can provide an important basis for identifying liabilities and assessing the damage. Shipborne radar can ensure large-scale, real-time, all-weather monitoring with high resolution, and therefore has the potential for broad application in oil spill monitoring. Considering the original gray-scale image from the shipborne radar acquired in the Dalian 7.16 oil spill accident, a complete oil spill detection method is proposed. Firstly, the co-frequency interferences and speckles in the original image are eliminated by preprocessing. Secondly, the wave information is classified using a support vector machine (SVM), and the effective wave monitoring area is generated according to the gray distribution matrix. Finally, oil spills are detected by a local adaptive threshold and displayed on an electronic chart based on a geographic information system (GIS). The results show that the SVM can extract the effective wave information from the original shipborne radar image, and that the local adaptive threshold method has strong applicability for oil film segmentation. This method can provide a technical basis for real-time cleaning and liability determination in oil spill accidents. Full article
(This article belongs to the Special Issue Bio-Inspired Algorithms for Image Processing)
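A compressed sketch of the two detection stages, assuming OpenCV and scikit-learn; the patch features, window size, and training labels are illustrative stand-ins for the paper's calibrated pipeline.

```python
import cv2
import numpy as np
from sklearn import svm

img = cv2.imread("radar_frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder frame

# Stage 1: classify patches as wave / non-wave from gray-level statistics.
patches = [img[r:r + 32, c:c + 32]
           for r in range(0, 256, 32) for c in range(0, 256, 32)]
X = np.array([[p.mean(), p.std()] for p in patches])
y = np.zeros(len(X)); y[:10] = 1                           # placeholder labels
wave_clf = svm.SVC(kernel="rbf").fit(X, y)

# Stage 2: local adaptive threshold; oil films are darker than wave clutter,
# hence the inverse binary threshold (block size 51, offset 5).
mask = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                             cv2.THRESH_BINARY_INV, 51, 5)
```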

15 pages, 353 KiB  
Article
Adding Edges for Maximizing Weighted Reachability
by Federico Corò, Gianlorenzo D'Angelo and Cristina M. Pinotti
Algorithms 2020, 13(3), 68; https://doi.org/10.3390/a13030068 - 18 Mar 2020
Cited by 4 | Viewed by 3689
Abstract
In this paper, we consider the problem of improving the reachability of a graph from a graph augmentation perspective, in which a set of edges of limited size is added to the graph to increase the overall number of reachable nodes. We call this new problem the Maximum Connectivity Improvement (MCI) problem. We first show that, for the purpose of solving MCI, we can focus on Directed Acyclic Graphs (DAGs) only. We show that approximating the MCI problem on a DAG to within any constant factor greater than 1 − 1/e is NP-hard, even if we restrict to graphs with a single source or a single sink, and the problem remains NP-complete if we further restrict to unitary weights. We then present a dynamic programming algorithm for the MCI problem on trees with a single source that produces optimal solutions in polynomial time. Finally, we propose two polynomial-time greedy algorithms that guarantee a (1 − 1/e)-approximation ratio on DAGs with a single source, a single sink, or two sources. Full article
(This article belongs to the Special Issue Approximation Algorithms for NP-Hard Problems)
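The greedy (1 − 1/e) idea can be conveyed by a brute-force sketch on a toy DAG using networkx: repeatedly add the edge whose insertion most increases how many nodes can reach the sink, skipping edges that would create a cycle. The paper's algorithms are considerably more refined than this scan.

```python
import networkx as nx

def reach_value(G, sink):
    return len(nx.ancestors(G, sink)) + 1      # nodes reaching the sink, plus itself

def greedy_mci(G, sink, budget):
    for _ in range(budget):
        base, best, best_gain = reach_value(G, sink), None, 0
        for u in list(G.nodes):
            for v in list(G.nodes):
                if u == v or G.has_edge(u, v):
                    continue
                G.add_edge(u, v)               # try the edge, keep DAGs only
                gain = (reach_value(G, sink) - base
                        if nx.is_directed_acyclic_graph(G) else -1)
                G.remove_edge(u, v)
                if gain > best_gain:
                    best, best_gain = (u, v), gain
        if best is None:
            break
        G.add_edge(*best)

dag = nx.DiGraph([(1, 2), (3, 4)])   # toy DAG; node 2 is the sink
greedy_mci(dag, sink=2, budget=1)
print(sorted(dag.edges))             # one added edge lets nodes 3 and 4 reach the sink
```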

33 pages, 977 KiB  
Article
Optimizing Convolutional Neural Network Hyperparameters by Enhanced Swarm Intelligence Metaheuristics
by Nebojsa Bacanin, Timea Bezdan, Eva Tuba, Ivana Strumberger and Milan Tuba
Algorithms 2020, 13(3), 67; https://doi.org/10.3390/a13030067 - 17 Mar 2020
Cited by 97 | Viewed by 12459
Abstract
Computer vision is one of the frontier technologies of computer science. It is used to build artificial systems that extract valuable information from images, and it has a broad range of applications in areas such as agriculture, business, and healthcare. Convolutional neural networks represent the key algorithms in computer vision and have attained notable advances in many real-world problems in recent years. The accuracy of a network for a particular task profoundly relies on the hyperparameter configuration, and obtaining the right set of hyperparameters is a time-consuming process that requires expertise. To address this concern, we propose an automatic method for hyperparameter optimization and structure design by implementing enhanced metaheuristic algorithms. The aim of this paper is twofold. First, we propose enhanced versions of the tree growth and firefly algorithms that improve upon the original implementations. Second, we adopt the proposed enhanced algorithms for hyperparameter optimization: the modified metaheuristics are first evaluated on standard unconstrained benchmark functions and compared to the original algorithms, and the improved algorithms are then employed for the network design. The experiments are carried out on the famous image classification benchmark, the MNIST dataset, and a comparative analysis with other outstanding approaches tested on the same problem is conducted. The experimental results show that both proposed improved methods achieve higher performance than the other existing techniques in terms of classification accuracy and the use of computational resources. Full article
(This article belongs to the Special Issue Swarm Intelligence Applications and Algorithms)
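The core move of the firefly algorithm, one of the two enhanced metaheuristics, is compact enough to sketch: each firefly drifts toward brighter (better-scoring) ones with distance-decaying attractiveness plus a small random walk. The bounds, coefficients, and placeholder losses below are illustrative; the paper enhances this basic scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
low, high = np.array([1e-4, 16.0]), np.array([1e-1, 256.0])  # e.g. [lr, #filters]

def firefly_step(X, loss, beta0=1.0, gamma=1.0, alpha=0.1):
    for i in range(len(X)):
        for j in range(len(X)):
            if loss[j] < loss[i]:                     # j is brighter (lower loss)
                r2 = np.sum((X[i] - X[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)    # attractiveness decays with distance
                X[i] += beta * (X[j] - X[i]) + alpha * (rng.random(X.shape[1]) - 0.5)
                X[i] = np.clip(X[i], low, high)
    return X

X = rng.uniform(low, high, size=(5, 2))               # 5 fireflies, 2 hyperparameters
losses = np.array([0.3, 0.1, 0.5, 0.2, 0.4])          # placeholder validation losses
X = firefly_step(X, losses)
```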

17 pages, 2260 KiB  
Article
Observability of Uncertain Nonlinear Systems Using Interval Analysis
by Thomas Paradowski, Sabine Lerch, Michelle Damaszek, Robert Dehnert and Bernd Tibken
Algorithms 2020, 13(3), 66; https://doi.org/10.3390/a13030066 - 16 Mar 2020
Cited by 10 | Viewed by 4591
Abstract
In the field of control engineering, the observability of uncertain nonlinear systems is often neglected and not examined, due to the complex analytical calculations required for its verification. The aim of this work is therefore to provide an algorithm that numerically analyzes the observability of nonlinear systems described by finite-dimensional, continuous-time sets of ordinary differential equations. The algorithm is based on definitions of distinguishability and local observability using a rank check, from which conditions are deduced. The only requirements are the uncertain model equations of the system. Furthermore, the methodology verifies the observability of nonlinear systems on a given state space. In case the state space is not fully observable, the algorithm provides the observable set of states. In addition, the results obtained by the algorithm allow insight into why the remaining states cannot be distinguished. Full article
(This article belongs to the Special Issue Algorithms for Reliable Estimation, Identification and Control)
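The rank condition that the paper evaluates over interval boxes can be previewed symbolically: stack the output with its Lie derivatives along the dynamics and check whether the Jacobian of that map has full rank at a state of interest. The pendulum-like system below is purely illustrative.

```python
import sympy as sp

x1, x2 = sp.symbols("x1 x2")
x = sp.Matrix([x1, x2])
f = sp.Matrix([x2, -sp.sin(x1)])       # example dynamics
h = sp.Matrix([sp.sin(x1)])            # example output map

Lfh = h.jacobian(x) * f                # first Lie derivative of h along f
obs_map = sp.Matrix.vstack(h, Lfh)     # observability map [h; L_f h]
rank = obs_map.jacobian(x).subs({x1: 0.2, x2: 0.0}).rank()
print(rank == 2)                       # full rank -> locally observable there
```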

21 pages, 3620 KiB  
Article
Nature-Inspired Optimization Algorithms for the 3D Reconstruction of Porous Media
by George A. Papakostas, John W. Nolan and Athanasios C. Mitropoulos
Algorithms 2020, 13(3), 65; https://doi.org/10.3390/a13030065 - 16 Mar 2020
Cited by 4 | Viewed by 4266
Abstract
One of the most challenging problems still open in materials science is the 3D reconstruction of porous media using information from a single 2D thin image of the original material. Such a reconstruction is only feasible subject to some important assumptions about the statistical properties of the material. In this study, the aforementioned problem is investigated as an explicitly formulated optimization problem, with the phase of each porous material point decided such that the resulting 3D material model shows the same statistical properties as its corresponding 2D version. Based on this formulation, several traditional (genetic algorithms—GAs, particle swarm optimization—PSO, differential evolution—DE) as well as recently proposed (firefly algorithm—FA, artificial bee colony—ABC, gravitational search algorithm—GSA) nature-inspired optimization algorithms were applied, here for the first time, to solve the 3D reconstruction problem. These algorithms utilize a newly proposed data representation scheme that decreases the number of unknowns searched by the optimization process. The advantages of addressing the 3D reconstruction of porous media through a parallel heuristic optimization algorithm are clearly defined, and experiments demonstrate that the GA algorithm outperforms the PSO, DE, FA, ABC, and GSA algorithms in almost all cases, by 5%–84% in porosity accuracy and 3%–15% in auto-correlation function accuracy. Moreover, this study reveals that statistical functions of a higher order need to be incorporated into the reconstruction procedure to increase the reconstruction accuracy. Full article

20 pages, 828 KiB  
Article
A Dynamic Bayesian Network Structure for Joint Diagnostics and Prognostics of Complex Engineering Systems
by Austin D. Lewis and Katrina M. Groth
Algorithms 2020, 13(3), 64; https://doi.org/10.3390/a13030064 - 12 Mar 2020
Cited by 20 | Viewed by 6144
Abstract
Dynamic Bayesian networks (DBNs) represent complex time-dependent causal relationships through the use of conditional probabilities and directed acyclic graph models. DBNs enable the forward and backward inference of system states, diagnosing current system health, and forecasting future system prognosis within the same modeling framework. As a result, there has been growing interest in using DBNs for reliability engineering problems and applications in risk assessment. However, there are open questions about how they can be used to support diagnostics and prognostic health monitoring of a complex engineering system (CES), e.g., power plants, processing facilities and maritime vessels. These systems’ tightly integrated human, hardware, and software components and dynamic operational environments have previously been difficult to model. As part of the growing literature advancing the understanding of how DBNs can be used to improve the risk assessments and health monitoring of CESs, this paper shows the prognostic and diagnostic inference capabilities that are possible to encapsulate within a single DBN model. Using simulated accident sequence data from a model sodium fast nuclear reactor as a case study, a DBN is designed, quantified, and verified based on evidence associated with a transient overpower. The results indicate that a joint prognostic and diagnostic model that is responsive to new system evidence can be generated from operating data to represent CES health. Such a model can therefore serve as another training tool for CES operators to better prepare for accident scenarios. Full article
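The single filtering step behind the DBN's ability to diagnose with new evidence is worth making concrete: a belief over discrete health states is pushed through a transition model and then reweighted by the likelihood of a new sensor reading. The matrices below are invented for illustration, not the reactor model's values.

```python
import numpy as np

states = ["healthy", "degraded", "failed"]
T = np.array([[0.95, 0.04, 0.01],            # P(next state | current state)
              [0.00, 0.90, 0.10],
              [0.00, 0.00, 1.00]])
likelihood = np.array([0.20, 0.70, 0.10])    # P(sensor reading | state)

belief = np.array([0.90, 0.09, 0.01])        # prior health assessment
belief = belief @ T                          # predict one time slice ahead
belief = belief * likelihood                 # weigh by the new evidence
belief /= belief.sum()                       # renormalize (Bayes rule)
print(dict(zip(states, belief.round(3))))    # evidence shifts mass to "degraded"
```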

14 pages, 1103 KiB  
Article
On a Hybridization of Deep Learning and Rough Set Based Granular Computing
by Krzysztof Ropiak and Piotr Artiemjew
Algorithms 2020, 13(3), 63; https://doi.org/10.3390/a13030063 - 11 Mar 2020
Cited by 9 | Viewed by 4315
Abstract
The set of heuristics constituting the methods of deep learning has proved very efficient in complex problems of artificial intelligence such as pattern recognition and speech recognition, solving them with better accuracy than previously applied methods. Our aim in this work has been to integrate the concept of the rough set into the repository of tools applied in deep learning, in the form of rough mereological granular computing. In our previous research we presented the high efficiency of our decision system approximation techniques (creating granular reflections of systems), which, with a large reduction in the size of the training systems, maintained the internal knowledge of the original data. The current research has led us to the question of whether granular reflections of decision systems can be effectively learned by neural networks and whether deep learning will be able to extract the knowledge from the approximated decision systems. Our results show that granulated datasets perform well when mined by deep learning tools. We performed exemplary experiments using data from the UCI repository, with the PyTorch and TensorFlow libraries used to build the neural networks and run the classification process. It turns out that the deep learning methods work effectively on the reduced training sets. Approximating decision systems before neural network learning can thus be an important step towards making learning possible in reasonable time. Full article
(This article belongs to the Special Issue Algorithms for Pattern Recognition)

18 pages, 918 KiB  
Review
A Review of Lithium-Ion Battery Fault Diagnostic Algorithms: Current Progress and Future Challenges
by Manh-Kien Tran and Michael Fowler
Algorithms 2020, 13(3), 62; https://doi.org/10.3390/a13030062 - 8 Mar 2020
Cited by 192 | Viewed by 20154
Abstract
The usage of lithium-ion (Li-ion) batteries has increased significantly in recent years due to their long lifespan, high energy density, high power density, and environmental benefits. However, various internal and external faults can occur during battery operation, leading to performance issues and potentially serious consequences, such as thermal runaway, fires, or explosions. Fault diagnosis is hence an important function of the battery management system (BMS) and is responsible for detecting faults early and providing control actions to minimize fault effects, ensuring the safe and reliable operation of the battery system. This paper provides a comprehensive review of various fault diagnostic algorithms, including model-based and non-model-based methods. The advantages and disadvantages of the reviewed algorithms, as well as some future challenges for Li-ion battery fault diagnosis, are also discussed. Full article
(This article belongs to the Special Issue Algorithms for Fault Detection and Diagnosis)

25 pages, 1744 KiB  
Article
GeoAI: A Model-Agnostic Meta-Ensemble Zero-Shot Learning Method for Hyperspectral Image Analysis and Classification
by Konstantinos Demertzis and Lazaros Iliadis
Algorithms 2020, 13(3), 61; https://doi.org/10.3390/a13030061 - 7 Mar 2020
Cited by 22 | Viewed by 6212
Abstract
Deep learning architectures are the most effective methods for analyzing and classifying Ultra-Spectral Images (USI). However, effective training of a Deep Learning (DL) gradient classifier aiming to achieve high classification accuracy is extremely costly and time-consuming, requiring huge datasets with hundreds or thousands of labeled specimens from expert scientists. This research exploits the MAML++ algorithm to introduce the Model-Agnostic Meta-Ensemble Zero-shot Learning (MAME-ZsL) approach. MAME-ZsL overcomes the above difficulties and can be used as a powerful model to perform Hyperspectral Image Analysis (HIA). It is a novel optimization-based Meta-Ensemble Learning architecture following a Zero-shot Learning (ZsL) prototype; to the best of our knowledge, it is introduced to the literature for the first time. It facilitates the learning of specialized techniques for the extraction of user-mediated representations in complex Deep Learning architectures, and it leverages first- and second-order derivatives as pre-training methods. It enhances the learning of features that do not cause exploding or vanishing gradients, thus avoiding potential overfitting. Moreover, it significantly reduces computational cost and training time, and it offers improved training stability, high generalization performance, and remarkable classification accuracy. Full article
(This article belongs to the Special Issue Ensemble Algorithms and Their Applications)

17 pages, 902 KiB  
Article
MDAN-UNet: Multi-Scale and Dual Attention Enhanced Nested U-Net Architecture for Segmentation of Optical Coherence Tomography Images
by Wen Liu, Yankui Sun and Qingge Ji
Algorithms 2020, 13(3), 60; https://doi.org/10.3390/a13030060 - 4 Mar 2020
Cited by 51 | Viewed by 9220
Abstract
Optical coherence tomography (OCT) is an optical high-resolution imaging technique for ophthalmic diagnosis. In this paper, we take advantage of multi-scale input, multi-scale side output, and a dual attention mechanism to present an enhanced nested U-Net architecture (MDAN-UNet), a new powerful fully convolutional network for automatic end-to-end segmentation of OCT images. We have evaluated two versions of MDAN-UNet (MDAN-UNet-16 and MDAN-UNet-32) on two publicly available benchmark datasets, the Duke Diabetic Macular Edema (DME) dataset and the RETOUCH dataset, in comparison with other state-of-the-art segmentation methods. Our experiments demonstrate that MDAN-UNet-32 achieved the best performance, followed by MDAN-UNet-16 with fewer parameters, for multi-layer segmentation and multi-fluid segmentation, respectively. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)

16 pages, 4067 KiB  
Article
A Geolocation Analytics-Driven Ontology for Short-Term Leases: Inferring Current Sharing Economy Trends
by Georgios Alexandridis, Yorghos Voutos, Phivos Mylonas and George Caridakis
Algorithms 2020, 13(3), 59; https://doi.org/10.3390/a13030059 - 4 Mar 2020
Cited by 6 | Viewed by 6105
Abstract
Short-term property rentals are perhaps one of the most common traits of the present-day sharing economy. Moreover, they are acknowledged as a major driving force behind changes in urban landscapes, ranging from established metropolises to developing townships, as well as a facilitator of geographical mobility. A geolocation ontology is a high-level inference tool, typically represented as a labeled graph, for discovering latent patterns from a plethora of unstructured and multimodal data. In this work, a two-step methodological framework is proposed in which the results of various geolocation analyses, important in their own right, such as ghost hotel discovery, form intermediate building blocks of an enriched knowledge graph. The outlined methodology is validated on data crawled from the Airbnb website, more specifically on keywords extracted from comments made by users of the said platform. A solid case study based on this type of data for Athens, Greece, is addressed in detail, examining the different degrees of expansion and prevalence of the phenomenon among the city’s various neighborhoods. Full article
(This article belongs to the Special Issue Mining Humanistic Data 2019)

17 pages, 2821 KiB  
Article
Kalman Filter-Based Online Identification of the Electric Power Characteristic of Solid Oxide Fuel Cells Aiming at Maximum Power Point Tracking
by Andreas Rauh, Wiebke Frenkel and Julia Kersten
Algorithms 2020, 13(3), 58; https://doi.org/10.3390/a13030058 - 2 Mar 2020
Cited by 12 | Viewed by 3688
Abstract
High-temperature fuel cells are among the devices currently investigated for integration into distributed power supply grids, which aim at the simultaneous production of thermal energy and electricity. To maximize the efficiency of fuel cell systems, it is reasonable to track the point of maximum electric power production and to operate the system in close vicinity to this point. However, variations of gas mass flows, especially of the concentration of hydrogen contained in the anode gas, as well as variations of the internal temperature distribution in the fuel cell stack module, mean that the maximum power point changes depending on the aforementioned phenomena. Therefore, this paper first proposes a real-time capable stochastic filter approach for the local identification of the electric power characteristic of the fuel cell. Second, based on this estimate, a maximum power point tracking procedure is derived. It is based on an iteration procedure that accounts for the estimation accuracy of the stochastic filter and adjusts the fuel cell’s electric current so that optimal operating points are guaranteed. Numerical simulations, based on real data measured at a test rig available at the Chair of Mechatronics at the University of Rostock, Germany, conclude this paper. Full article
(This article belongs to the Special Issue Algorithms for Reliable Estimation, Identification and Control)
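A compact sketch of the two ingredients: a Kalman filter tracks the coefficients of a local quadratic power characteristic P(I) ≈ aI² + bI + c from noisy measurements, and the setpoint is nudged toward the estimated optimum I* = −b/(2a). The true curve, noise levels, excitation, and gains are invented for illustration, not taken from the paper.

```python
import numpy as np

theta = np.zeros(3)                     # estimated [a, b, c]
P = np.eye(3) * 10.0                    # coefficient covariance
Q, R = np.eye(3) * 1e-6, 0.05 ** 2      # process / measurement noise

def kf_update(theta, P, I, p_meas):
    H = np.array([I ** 2, I, 1.0])      # regressor of the quadratic model
    P = P + Q                           # random-walk prediction of coefficients
    K = P @ H / (H @ P @ H + R)
    theta = theta + K * (p_meas - H @ theta)
    P = (np.eye(3) - np.outer(K, H)) @ P
    return theta, P

rng = np.random.default_rng(1)
I = 5.0
for k in range(500):
    I_probe = I + 0.5 * np.sin(0.4 * k)            # excitation for identifiability
    p_meas = -0.4 * I_probe ** 2 + 8.0 * I_probe + rng.normal(0.0, 0.05)
    theta, P = kf_update(theta, P, I_probe, p_meas)
    a, b, _ = theta
    if a < -1e-3:                                  # step only once curvature is sensible
        I += 0.2 * np.clip(-b / (2 * a) - I, -1.0, 1.0)
print(round(I, 2))                                 # drifts toward the true optimum I = 10
```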

11 pages, 2282 KiB  
Article
Time Series Clustering Model based on DTW for Classifying Car Parks
by Taoying Li, Xu Wu and Junhe Zhang
Algorithms 2020, 13(3), 57; https://doi.org/10.3390/a13030057 - 2 Mar 2020
Cited by 14 | Viewed by 5229
Abstract
An increasing number of automobiles has led to a serious shortage of parking spaces and a serious imbalance between parking supply and demand. The best way to solve these problems is to achieve reasonable planning and classified management of car parks, guide intelligent parking, and then promote its marketization and industrialization. We therefore aim to adopt a clustering method to classify car parks. Owing to the time series characteristics of car park data, a time series clustering framework, including preprocessing, distance measurement, clustering, and evaluation, is first developed for classifying car parks. Then, in view of the randomness of existing clustering models, a new time series clustering model based on dynamic time warping (DTW) is proposed, which includes distance radius calculation, neighbor-area density computation, k centers initialization, and clustering. Finally, some UCR datasets and data from 27 car parks are employed to evaluate the performance of the models, and the results show that the proposed model performs markedly better than clustering models based on Euclidean distance (ED) and traditional clustering models based on DTW. Full article
(This article belongs to the Special Issue Supervised and Unsupervised Classification Algorithms)
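For reference, the DTW distance underlying the proposed model is the classic dynamic program below; the paper builds its radius computation, density estimation, and center initialization on top of such a measure. The occupancy curves are toy data.

```python
import numpy as np

def dtw(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # best warping path ending at cell (i, j)
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

occupancy_a = [0.2, 0.5, 0.9, 0.7, 0.3]   # toy daily occupancy curves
occupancy_b = [0.1, 0.2, 0.6, 0.9, 0.6]
print(dtw(occupancy_a, occupancy_b))
```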

20 pages, 2932 KiB  
Article
Misalignment Fault Prediction of Wind Turbines Based on Combined Forecasting Model
by Yancai Xiao and Zhe Hua
Algorithms 2020, 13(3), 56; https://doi.org/10.3390/a13030056 - 1 Mar 2020
Cited by 8 | Viewed by 4417
Abstract
Due to the harsh working environment of wind turbines, various types of faults are prone to occur during long-term operation. Misalignment faults between the gearbox and the generator are one of the latent common faults of doubly-fed wind turbines. Compared with faults in components like gears and bearings, relatively little research has addressed the prediction of misalignment faults for wind turbines, and accurately predicting their developing trend has always been difficult. In this paper, a combined forecasting model is proposed for misalignment fault prediction of wind turbines based on vibration and current signals. In the modelling, the improved Multivariate Grey Model (IMGM) is used to predict the deterministic trend, the Least Squares Support Vector Machine (LSSVM) optimized by a quantum genetic algorithm (QGA) is adopted to predict the stochastic trend of the fault index separately, and another QGA-optimized LSSVM is used as a non-linear combiner. Multiple items of time-domain, frequency-domain, and time-frequency-domain information from the wind turbine’s vibration or current signals are extracted as the input vectors of the combined forecasting model, and the kurtosis index is regarded as the output. The simulation results show that the proposed combined model has higher prediction accuracy than the single forecasting models. Full article
(This article belongs to the Special Issue Algorithms for Fault Detection and Diagnosis)

11 pages, 485 KiB  
Article
Model of Multi-branch Trees for Efficient Resource Allocation
by Natsumi Oyamaguchi, Hiroyuki Tajima and Isamu Okada
Algorithms 2020, 13(3), 55; https://doi.org/10.3390/a13030055 - 1 Mar 2020
Cited by 2 | Viewed by 3775
Abstract
Although exploring the principles of resource allocation is still important in many fields, little is known about appropriate methods for optimal resource allocation thus far. This is because we should consider many issues including opposing interests between many types of stakeholders. Here, we develop a new allocation method to resolve budget conflicts. To do so, we consider two points—minimizing assessment costs and satisfying allocational efficiency. In our method, an evaluator’s assessment is restricted to one’s own projects in one’s own department, and both an executive’s and mid-level executives’ assessments are also restricted to each representative project in each branch or department they manage. At the same time, we develop a calculation method to integrate such assessments by using a multi-branch tree structure, where a set of leaf nodes represents projects and a set of non-leaf nodes represents either directors or executives. Our method is incentive-compatible because no director has any incentive to make fallacious assessments. Full article

21 pages, 347 KiB  
Article
Multidimensional Group Recommendations in the Health Domain
by Maria Stratigi, Haridimos Kondylakis and Kostas Stefanidis
Algorithms 2020, 13(3), 54; https://doi.org/10.3390/a13030054 - 28 Feb 2020
Cited by 13 | Viewed by 4335
Abstract
Providing useful resources to patients is essential to achieving the vision of participatory medicine. However, identifying pertinent content for a group of patients is even more difficult than identifying information for just one. Nevertheless, studies suggest that the group dynamics-based principles of behavior change have a positive effect on patients’ welfare. Along these lines, in this paper, we present a multidimensional recommendation model in the health domain using collaborative filtering. We propose a novel semantic similarity function between users that goes beyond patient medical problems, considering additional dimensions such as the education level, health literacy, and psycho-emotional status of the patients. Exploiting those dimensions, we are interested in providing recommendations that are both highly relevant and fair to groups of patients. Consequently, we introduce the notion of fairness and present a new aggregation method that accumulates preference scores. We show experimentally that our approach can produce better recommendations of useful information documents for small groups of patients. Full article
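The relevance/fairness tension can be illustrated with a toy aggregation that blends the group mean with the worst-off member's score (a least-misery term); the paper's actual aggregation method, which accumulates preference scores, differs in detail.

```python
# Toy group scoring: trade off average relevance against fairness to the
# worst-off patient. fairness_weight = 0 recovers plain averaging.
def group_score(scores, fairness_weight=0.5):
    mean = sum(scores) / len(scores)
    least_misery = min(scores)
    return (1 - fairness_weight) * mean + fairness_weight * least_misery

docs = {"doc_a": [0.9, 0.8, 0.1], "doc_b": [0.6, 0.6, 0.6]}  # per-patient scores
ranked = sorted(docs, key=lambda d: group_score(docs[d]), reverse=True)
print(ranked)   # doc_b first: same mean, but fair to all three patients
```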

22 pages, 903 KiB  
Article
Uncertainty Propagation through a Point Model for Steady-State Two-Phase Pipe Flow
by Andreas Strand, Ivar Eskerud Smith, Tor Erling Unander, Ingelin Steinsland and Leif Rune Hellevik
Algorithms 2020, 13(3), 53; https://doi.org/10.3390/a13030053 - 28 Feb 2020
Cited by 4 | Viewed by 4108
Abstract
Uncertainty propagation is used to quantify the uncertainty in model predictions in the presence of uncertain input variables. In this study, we analyze a steady-state point-model for two-phase gas-liquid flow. We present prediction intervals for holdup and pressure drop that are obtained from knowledge of the measurement error in the variables provided to the model. The analysis also uncovers which variables the predictions are most sensitive to. Sensitivity indices and prediction intervals are calculated by two different methods, Monte Carlo and polynomial chaos. The methods give similar prediction intervals, and they agree that the predictions are most sensitive to the pipe diameter and the liquid viscosity. However, the Monte Carlo simulations require fewer model evaluations and less computational time. The model predictions are also compared to experiments while accounting for uncertainty, and the holdup predictions are accurate, but there is bias in the pressure drop estimates. Full article
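The Monte Carlo side of the analysis reduces to a familiar recipe, sketched below with a stand-in model: draw the uncertain inputs from their measurement-error distributions, evaluate the model per draw, and read prediction intervals off the output percentiles. The distributions and the model are placeholders, not the paper's two-phase point model.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
diameter = rng.normal(0.10, 0.002, n)     # pipe diameter [m] with measurement error
viscosity = rng.normal(1.2e-3, 5e-5, n)   # liquid viscosity [Pa s]

def pressure_drop(d, mu):                 # placeholder relation for illustration
    return 3.0e-2 * mu / d ** 4

dp = pressure_drop(diameter, viscosity)
lo, hi = np.percentile(dp, [2.5, 97.5])
print(f"95% prediction interval: [{lo:.4g}, {hi:.4g}]")
```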

13 pages, 493 KiB  
Article
Predictive Path Following and Collision Avoidance of Autonomous Connected Vehicles
by Mohamed Abdelaal and Steffen Schön
Algorithms 2020, 13(3), 52; https://doi.org/10.3390/a13030052 - 28 Feb 2020
Cited by 9 | Viewed by 4531
Abstract
This paper considers nonlinear model predictive control for simultaneous path-following and collision avoidance of connected autonomous vehicles. For each agent, a nonlinear bicycle model is used to predict a sequence of states, which are then optimized with respect to a sequence of control inputs. The objective of the optimal control problem is to follow the planned path, which is represented by a Bézier curve. To achieve collision avoidance among the networked vehicles, a geometric shape must be selected to represent the vehicle geometry. In this paper, an elliptic disk is selected, as it represents the geometry of the vehicle better than the traditional circular disk. A separation condition between each pair of elliptic disks is formulated as a set of time-varying state constraints for the optimization problem. Driving corridors are assumed to also be Bézier curves, which could be obtained from digital maps, and are reformulated to suit the controller algorithm. The algorithm is validated in MATLAB simulation with the aid of the ACADO toolkit. Full article
(This article belongs to the Special Issue Algorithms for Reliable Estimation, Identification and Control)

20 pages, 390 KiB  
Article
Approximation and Uncertainty Quantification of Systems with Arbitrary Parameter Distributions Using Weighted Leja Interpolation
by Dimitrios Loukrezis and Herbert De Gersem
Algorithms 2020, 13(3), 51; https://doi.org/10.3390/a13030051 - 26 Feb 2020
Cited by 2 | Viewed by 3858
Abstract
Approximation and uncertainty quantification methods based on Lagrange interpolation are typically abandoned in cases where the probability distributions of one or more system parameters are not normal, uniform, or closely related distributions, due to the computational issues that arise when one wishes to define interpolation nodes for general distributions. This paper examines the use of the recently introduced weighted Leja nodes for that purpose. Weighted Leja interpolation rules are presented, along with a dimension-adaptive sparse interpolation algorithm, to be employed in the case of high-dimensional input uncertainty. The performance and reliability of the suggested approach are verified by four numerical experiments in which the respective models feature extreme-value and truncated normal parameter distributions. Furthermore, the suggested approach is compared with a well-established polynomial chaos method and found to be either comparable or superior in terms of approximation and statistics-estimation accuracy. Full article
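One common variant of the weighted Leja construction is greedy and easy to state: each new node maximizes the weight times the product of distances to the nodes already chosen (computed in log form to avoid underflow). The Gaussian weight and candidate grid below are illustrative; the paper's rules cover more general distributions.

```python
import numpy as np

def weighted_leja(n_nodes, grid, log_w):
    nodes = [grid[np.argmax(log_w(grid))]]        # start at the weight's mode
    for _ in range(n_nodes - 1):
        objective = log_w(grid) + sum(
            np.log(np.abs(grid - x) + 1e-300) for x in nodes)
        nodes.append(grid[np.argmax(objective)])
    return np.sort(np.array(nodes))

grid = np.linspace(-5.0, 5.0, 2001)
log_gauss = lambda x: -0.5 * x ** 2               # log of an unnormalized N(0,1) pdf
print(weighted_leja(7, grid, log_gauss))          # nodes cluster where the weight is large
```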

17 pages, 1075 KiB  
Article
Fractional Sliding Mode Nonlinear Procedure for Robust Control of an Eutrophying Microalgae Photobioreactor
by Abraham Efraim Rodríguez-Mata, Ricardo Luna, Jose Ricardo Pérez-Correa, Alejandro Gonzalez-Huitrón, Rafael Castro-Linares and Manuel A. Duarte-Mermoud
Algorithms 2020, 13(3), 50; https://doi.org/10.3390/a13030050 - 26 Feb 2020
Cited by 9 | Viewed by 3835
Abstract
This paper proposes a fractional-order sliding mode controller (FOSMC) for the robust control of a nonlinear process subjected to unknown parametric disturbances. The controller aims to ensure the optimal growth, in photobioreactors, of native microalgae involved in the eutrophication of the Sinaloa rivers in Mexico. The controller design is based on the Caputo fractional-order derivative and on the convergence properties of a sliding surface. For nonlinear systems, the proposed FOSMC guarantees convergence to the sliding surface even in the presence of model disturbances. The proposed controller is compared to an Internal Model Control (IMC) scheme through numerical simulations. Full article
(This article belongs to the Special Issue Theory and Applications of Fractional Order Systems and Signals II)
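Fractional-order terms like the one in a FOSMC sliding surface are commonly implemented in discrete time via the Grünwald–Letnikov approximation; the sketch below evaluates an order-α derivative of a tracking-error history and checks it against the analytic value. The order, step size, and signal are illustrative, not the paper's design.

```python
import numpy as np

def gl_weights(alpha, n):
    # binomial weights: w_0 = 1, w_k = w_{k-1} * (1 - (alpha + 1) / k)
    w = np.ones(n)
    for k in range(1, n):
        w[k] = w[k - 1] * (1 - (alpha + 1) / k)
    return w

def gl_derivative(signal, alpha, dt):
    # D^alpha x(t_n) ~ dt**(-alpha) * sum_k w_k * x(t_{n-k})
    w = gl_weights(alpha, len(signal))
    return sum(w[k] * signal[-1 - k] for k in range(len(signal))) / dt ** alpha

t = np.arange(0.0, 2.0, 0.001)
e = t ** 2                                  # toy tracking-error history
print(gl_derivative(e, alpha=0.5, dt=0.001))
# analytic value: Gamma(3)/Gamma(2.5) * t**1.5 at t = 2, about 4.25
```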
