Advance in Machine Learning

A special issue of Processes (ISSN 2227-9717). This special issue belongs to the section "Advanced Digital and Other Processes".

Deadline for manuscript submissions: closed (15 October 2021) | Viewed by 28523

Special Issue Editors


Guest Editor
Department of Physics, Faculty of Sciences, International Hellenic University, Ag. Loukas Campus, 65404 Kavala, Greece
Interests: model-agnostic meta-learning; multi-task learning; real-time analytics; scalable and composable privacy-preserving data mining; automated assessment and response systems; AI anomaly detection; AI malware analysis; AI IDS/IPS; AI forensics; AI in blockchain

Guest Editor
Lab of Mathematics and Informatics (ISCE), Department of Civil Engineering, Democritus University of Thrace, 67100 Xanthi, Greece
Interests: computational intelligence; artificial neural networks; fuzzy logic; machine learning

Guest Editor
Department of Computer Science and Telecommunications, School of Sciences, University of Thessaly, Volos, Greece
Interests: parallel and distributed systems; distributed machine learning; performance optimization; IoT/IIoT; real-time big data analytics; cloud computing

Guest Editor
Research and Technology Coordinator and Head of Unit Technology and Innovation, European Defence Agency (EDA)
Interests: real-time architectures; machine learning; sensor networks; edge computing; ontologies; semantic web; user modeling; emergency management; ambient intelligence

Special Issue Information

Dear Colleagues,

Machine learning is filling the gap between theory and practice and is changing virtually every aspect of modern life. Today, advances in machine learning algorithms solve real-world problems that, until recently, only expert humans could tackle.

In this Special Issue, we seek research and case studies that demonstrate the application of machine learning in support of applied scientific research, in any area of science and technology. Topics of interest include, but are not limited to, the following:

  • New machine learning algorithms
  • New optimization techniques
  • Distributed machine learning systems and architectures
  • New applications of real-time/big data analytics
  • Intelligent applications
  • Quantum machine learning
  • Data and code integration
  • Visualization of modern systems and networks
  • High-throughput data analysis
  • Comparison and alignment methods

Dr. Konstantinos Demertzis
Prof. Dr. Lazaros Iliadis
Dr. Nikos Tziritas
Dr. Panayotis Kikiras
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Processes is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Deep Learning
  • Spiking Neural Computation
  • Big Data Architectures
  • Data Lakes
  • Quantum Machine Learning
  • Stream Learning
  • Meta-Learning
  • Ambient Intelligence
  • Real-Time Analytics
  • Distributed Systems

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (9 papers)


Editorial


3 pages, 159 KiB  
Editorial
Special Issue “Advance in Machine Learning”
by Konstantinos Demertzis, Lazaros Iliadis, Nikos Tziritas and Panayotis Kikiras
Processes 2023, 11(4), 1043; https://doi.org/10.3390/pr11041043 - 30 Mar 2023
Viewed by 1428
Abstract
Machine learning has increasingly become the bridge between theoretical knowledge and practical applications, transforming countless aspects of modern life [...] Full article
(This article belongs to the Special Issue Advance in Machine Learning)

Research


16 pages, 4135 KiB  
Article
Optimal Design of Computational Fluid Dynamics: Numerical Calculation and Simulation Analysis of Windage Power Losses in the Aviation
by Yuzhong Zhang, Linlin Li and Ziqiang Zhao
Processes 2021, 9(11), 1999; https://doi.org/10.3390/pr9111999 - 9 Nov 2021
Cited by 6 | Viewed by 1620
Abstract
Based on the theory of computational fluid dynamics (CFD), and with the help of Fluent software and the powerful parallel computing capability of a super cloud computer, a single-phase-flow transient simulation of the windage power loss of a meshing spiral bevel gear pair (SBGP) was performed. The two-equation SST k-ω turbulence model based on the eddy-viscosity assumption was adopted; it improves on the standard k-ε model by combining it with the Wilcox k-ω model. The SST k-ω model inherits the advantages of the Wilcox k-ω model in the near-wall region and of the k-ε model in the free shear layer, and can more accurately describe the resistance and separation effect of the gear tooth surface on the airflow. The simulation analyzed the airflow characteristics around the SBGP and the mechanism by which a windshield reduces the windage loss of the gear, and it studied the influence of the windshield clearance and opening size on the windage power loss. The orthogonal experimental analysis method was then adopted for numerical simulation analysis. The windage torque was studied under different clearance values between the windshield and the gear tooth surface, as well as at the large end and the small end, and variance analysis was performed on the numerical simulation data. The results showed that when the windshield clearance was 1 mm and the engagement opening was 30°, the windage torque was smallest and the reduction in windage power loss was greatest. From the changes in the pressure, velocity, and turbulent-kinetic-energy contour plots of the flow field in the reducer across the multi-group simulation tests, a locally optimal windshield configuration was obtained. This provides a method for further research on the multi-objective optimization of the windshield and the windage loss of the gear pair under oil–gas two-phase flow, as well as a reference for the practical engineering application of windshields. Full article
(This article belongs to the Special Issue Advance in Machine Learning)
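The orthogonal-experiment step above lends itself to a short illustration. Below is a minimal sketch of a two-factor variance analysis over a design table, assuming made-up windage-torque responses; the paper's actual factor levels and CFD-derived values are not reproduced here.

```python
# A minimal sketch of orthogonal-experiment variance analysis, assuming
# hypothetical windage-torque readings; the real responses come from CFD runs.
import numpy as np

clearances = [1.0, 2.0, 3.0]   # windshield clearance (mm), illustrative levels
openings = [30.0, 60.0, 90.0]  # engagement opening (deg), illustrative levels

# Hypothetical windage torque responses (N*m), one per factor combination;
# a full 3x3 grid stands in for the orthogonal table.
torque = np.array([
    [0.42, 0.47, 0.53],   # clearance = 1 mm
    [0.48, 0.52, 0.58],   # clearance = 2 mm
    [0.55, 0.60, 0.66],   # clearance = 3 mm
])

grand_mean = torque.mean()
# Sum of squares attributable to each factor (3 observations per level).
ss_clearance = 3 * ((torque.mean(axis=1) - grand_mean) ** 2).sum()
ss_opening = 3 * ((torque.mean(axis=0) - grand_mean) ** 2).sum()
ss_residual = ((torque - grand_mean) ** 2).sum() - ss_clearance - ss_opening

print(f"SS clearance: {ss_clearance:.4f} (df=2)")
print(f"SS opening:   {ss_opening:.4f} (df=2)")
print(f"SS residual:  {ss_residual:.4f} (df=4)")

# The cheapest cell is the local optimum; with these toy numbers it is
# clearance = 1 mm, opening = 30 deg, matching the trend the abstract reports.
i, j = np.unravel_index(torque.argmin(), torque.shape)
print(f"Best: clearance={clearances[i]} mm, opening={openings[j]} deg")
```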

19 pages, 26583 KiB  
Article
Efficient Video-based Vehicle Queue Length Estimation using Computer Vision and Deep Learning for an Urban Traffic Scenario
by Muhammad Umair, Muhammad Umar Farooq, Rana Hammad Raza, Qian Chen and Baher Abdulhai
Processes 2021, 9(10), 1786; https://doi.org/10.3390/pr9101786 - 8 Oct 2021
Cited by 14 | Viewed by 5356
Abstract
In the Intelligent Transportation System (ITS) realm, queue length estimation is an essential yet challenging task. Queue lengths are important for determining traffic density in traffic lanes so that possible congestion in any lane can be minimized. Smart roadside sensors such as loop detectors, radars, and pneumatic road tubes are promising for such tasks, though they have very high installation and maintenance costs. Large-scale deployment of surveillance cameras has shown great potential for collecting vehicular data in a flexible and cost-effective way. Similarly, vision-based sensors can be used independently or, if required, can augment the functionality of other roadside sensors to effectively process queue length at prescribed traffic lanes. In this research, a CNN-based approach for estimating vehicle queue length in an urban traffic scenario using low-resolution traffic videos is proposed. The queue length is estimated from the count of total vehicles waiting at a signal. The proposed approach calculates queue length without any onsite camera calibration information. Average vehicle length is approximated as 5 m; this caters for vehicles at the far end of the traffic lane that appear smaller in the camera view. Stopped vehicles are identified using Deep SORT-based object tracking. Due to robust and accurate CNN-based detection and tracking, queue length estimation using only the cameras has been very effective, largely eliminating the need for fusion with any roadside or in-vehicle sensors. A detailed comparative analysis of vehicle detection models including YOLOv3, YOLOv4, YOLOv5, SSD, ResNet101, and InceptionV3 was performed. Based on this analysis, YOLOv4 was selected as the baseline model for queue length estimation. Using the pre-trained 80-class YOLOv4 model, overall accuracies of 73% and 88% were achieved for vehicle count and vehicle count-based queue length estimation, respectively. After fine-tuning the model and narrowing the output to vehicle classes only, average accuracies of 83% and 93% were achieved, respectively. This shows the efficiency and robustness of the proposed approach. Full article
(This article belongs to the Special Issue Advance in Machine Learning)
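The count-based estimation described above is easy to sketch. The following is a minimal illustration, assuming detector-plus-tracker output (bounding boxes grouped by track ID, e.g., from YOLOv4 with Deep SORT) is already available; the stop threshold and the toy tracks are invented, while the 5 m average vehicle length follows the abstract.

```python
# A minimal sketch of vehicle-count-based queue length estimation from
# assumed tracker output; no detector or tracker is actually run here.
from collections import defaultdict

AVG_VEHICLE_LENGTH_M = 5.0  # average vehicle length assumed in the abstract
STOP_THRESHOLD_PX = 3.0     # max centroid drift (pixels) to count as stopped

def centroid(box):
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def estimate_queue_length(track_histories, window=10):
    """Count tracks whose centroid barely moved over the last `window`
    frames and convert the count into metres of queue."""
    stopped = 0
    for boxes in track_histories.values():
        recent = boxes[-window:]
        if len(recent) < window:
            continue  # too short a history to judge
        (x0, y0), (x1, y1) = centroid(recent[0]), centroid(recent[-1])
        if ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 <= STOP_THRESHOLD_PX:
            stopped += 1
    return stopped, stopped * AVG_VEHICLE_LENGTH_M

# Toy example: two stationary tracks and one still-moving track.
tracks = defaultdict(list)
for f in range(10):
    tracks[1].append((100, 200, 140, 230))                   # waiting at light
    tracks[2].append((100, 240, 140, 270))                   # queued behind it
    tracks[3].append((100 + 8 * f, 300, 140 + 8 * f, 330))   # still moving

count, metres = estimate_queue_length(tracks)
print(f"{count} stopped vehicles -> queue of about {metres} m")
```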

11 pages, 991 KiB  
Article
Smooth Stitching Method for the Texture Seams of Remote Sensing Images Based on Gradient Structure Information
by Danjun Deng
Processes 2021, 9(10), 1689; https://doi.org/10.3390/pr9101689 - 22 Sep 2021
Cited by 4 | Viewed by 1825
Abstract
Traditional smooth stitching methods for the texture seams of remote sensing images are affected by gradient structure information, leading to poor stitching results. Therefore, a smooth stitching method for the texture seams of remote sensing images based on gradient structure information is proposed in this research. By matching the feature points of remote sensing images and introducing a block link constraint and a shape distortion constraint, a modified stitching image is obtained. Remote sensing image fusion then yields the smooth stitching image of the texture seams, and the local overlapping area of the texture is optimized. The main direction of the texture seams is determined by calculating their gradient structure information in the horizontal and vertical directions. After selecting an initial point, the optimal stitching line is extracted using the minimum mean value of the cumulative error of the smooth stitching line. Using boundary correlation constraints, matching the feature points of the texture seams and selecting the best matching pairs, a smooth stitching algorithm for the texture seams of remote sensing images is designed, which realizes their smooth stitching. Experimental results show that the designed method performs well in stitching accuracy and efficiency. Specifically, the methods of Liu et al. and Zhang et al., the benchmark studies in the literature, are introduced for comparison, and stitching experiments are carried out. Evaluated on accuracy and time, the proposed method achieves better results by almost 25%. Full article
(This article belongs to the Special Issue Advance in Machine Learning)
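The optimal-stitching-line step is the most algorithmic part of the pipeline, so a sketch may help. Below is a generic dynamic-programming seam finder that minimizes cumulative cost over a gradient-magnitude map; it is a stand-in for the paper's minimum-mean cumulative-error criterion, not its exact method, and the toy cost map is invented.

```python
# A minimal seam-finding sketch: pick the top-to-bottom stitching line with
# the smallest cumulative gradient energy (8-connected moves).
import numpy as np

def gradient_magnitude(gray):
    """Combine horizontal and vertical gradient structure into one cost map."""
    gy, gx = np.gradient(gray.astype(float))
    return np.abs(gx) + np.abs(gy)

def best_seam(cost):
    h, w = cost.shape
    acc = cost.copy()
    for r in range(1, h):  # accumulate the cheapest path into each cell
        left = np.r_[np.inf, acc[r - 1, :-1]]
        right = np.r_[acc[r - 1, 1:], np.inf]
        acc[r] += np.minimum(np.minimum(left, acc[r - 1]), right)
    seam = [int(acc[-1].argmin())]   # backtrack from the cheapest bottom cell
    for r in range(h - 2, -1, -1):
        c = seam[-1]
        lo, hi = max(c - 1, 0), min(c + 2, w)
        seam.append(lo + int(acc[r, lo:hi].argmin()))
    return seam[::-1]  # seam column index for each row

# Toy cost map standing in for gradient energy of an overlap region:
# uniform cost with a cheap corridor at column 4.
cost = np.ones((6, 8))
cost[:, 4] = 0.1
print(best_seam(cost))  # -> [4, 4, 4, 4, 4, 4]
# For real imagery: best_seam(gradient_magnitude(overlap_gray))
```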

14 pages, 1776 KiB  
Article
Deep-Sequence–Aware Candidate Generation for e-Learning System
by Aziz Ilyosov, Alpamis Kutlimuratov and Taeg-Keun Whangbo
Processes 2021, 9(8), 1454; https://doi.org/10.3390/pr9081454 - 20 Aug 2021
Cited by 11 | Viewed by 2194
Abstract
Recently proposed recommendation systems based on embedding vector technology allow a wide range of information, such as user-side and item-side information, to be used to predict user preferences. Because they lack the ability to use the sequential information of user history, most recommendation system algorithms fail to predict the user’s preferences accurately. Therefore, in this study, we developed a novel recommendation system that takes advantage of sequence and heterogeneous information in the candidate-generation process. The principle underlying the proposed recommendation model is that a new sequence-based embedding layer in the model captures the sequence pattern of the user's history. The proposed deep-learning model may improve prediction accuracy by using user data, item data, and the sequential information of the user’s profile. Experiments were conducted on datasets from a Korean e-learning platform, and the empirical results confirmed the capability of the proposed approach and its superiority over models that do not use the sequences of heterogeneous user and item information in the candidate-generation process. Full article
(This article belongs to the Special Issue Advance in Machine Learning)
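The candidate-generation idea above, embeddings plus a sequence encoder over the user's history, can be sketched in a few lines of PyTorch. The GRU encoder, the layer sizes, and the dot-product scorer below are illustrative assumptions, not the paper's exact model.

```python
# A minimal sketch of sequence-aware candidate generation: fuse a user
# embedding with a GRU summary of the interacted-item sequence, then score
# every item in the catalogue as a candidate.
import torch
import torch.nn as nn

class SeqCandidateGenerator(nn.Module):
    def __init__(self, n_users, n_items, dim=32):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim, padding_idx=0)
        self.seq_encoder = nn.GRU(dim, dim, batch_first=True)

    def forward(self, user_ids, history):
        # history: (batch, seq_len) item ids from the user's profile, 0 = pad
        _, h = self.seq_encoder(self.item_emb(history))  # h: (1, batch, dim)
        query = self.user_emb(user_ids) + h.squeeze(0)   # user + sequence info
        return query @ self.item_emb.weight.T            # (batch, n_items)

model = SeqCandidateGenerator(n_users=1000, n_items=5000)
user = torch.tensor([7])
history = torch.tensor([[11, 42, 42, 301, 0, 0]])  # padded interaction sequence
scores = model(user, history)
print(scores.topk(10).indices)  # ten highest-scoring candidate items
```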

14 pages, 5279 KiB  
Article
Designed a Passive Grinding Test Machine to Simulate Passive Grinding Process
by Peng-Zhan Liu, Wen-Jun Zou, Jin Peng, Xu-Dong Song and Fu-Ren Xiao
Processes 2021, 9(8), 1317; https://doi.org/10.3390/pr9081317 - 29 Jul 2021
Cited by 7 | Viewed by 2579
Abstract
Passive grinding is a high-speed rail grinding maintenance strategy that is completely different from the conventional active rail grinding system. In contrast to active grinding, in passive grinding there is no power to drive the grinding wheel to rotate actively; the process is realized only by the cooperation of grinding pressure, relative motion, and deflection angle. Grinding tests for passive grinding can help to improve passive grinding process specifications and support the development of passive grinding wheels. However, most known grinding methods are active, and passive grinding machines and processes are rarely studied. Therefore, a passive grinding test machine was designed in this study to simulate passive grinding. This paper gives a detailed description and explanation of the structure and function of the passive grinding tester. Moreover, the characteristics of the grinding process and the parameter settings of the testing machine are discussed based on the passive grinding principle. The designed test machine provides experimental equipment support for investigating passive grinding behavior and the grinding process. Full article
(This article belongs to the Special Issue Advance in Machine Learning)

35 pages, 80627 KiB  
Article
Pandemic Analytics by Advanced Machine Learning for Improved Decision Making of COVID-19 Crisis
by Konstantinos Demertzis, Dimitrios Taketzis, Dimitrios Tsiotas, Lykourgos Magafas, Lazaros Iliadis and Panayotis Kikiras
Processes 2021, 9(8), 1267; https://doi.org/10.3390/pr9081267 - 22 Jul 2021
Cited by 10 | Viewed by 3851
Abstract
With the advent of the first pandemic wave of Severe Acute Respiratory Syndrome Coronavirus-2 (SARS-CoV-2), the question arises as to whether the spread of the virus will be controlled by the application of preventive measures or will follow a different course, regardless of the pattern of spread already recorded. These conditions, caused by an unprecedented pandemic, have highlighted the importance of reliable data from official sources, their complete recording and analysis, and the accurate investigation of epidemiological indicators in almost real time. There is an ongoing research demand for reliable and effective modeling of the disease, but also for substantiated views that support optimal decisions in the design of preventive or repressive measures by those responsible for implementing policy to protect public health. The main objective of the study is to present an innovative data-analysis system for COVID-19 disease progression in Greece and its neighboring countries through real-time statistics on epidemiological indicators. This system utilizes visualized data produced by an automated information system developed during the study, which is based on the analysis of large pandemic-related datasets and makes extensive use of advanced machine learning methods. Finally, the aim is to support, with up-to-date technological means, optimal decisions in almost real time, as well as medium-term forecasts of disease progression, thus assisting the competent bodies in taking appropriate measures for the effective management of available health resources. Full article
(This article belongs to the Special Issue Advance in Machine Learning)
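As one concrete example of the kind of near-real-time indicator such a system reports, the sketch below computes a 7-day moving average and a week-over-week growth factor from a daily new-case series. The smoothing window and the growth definition are common epidemiological choices, not the paper's exact pipeline, and the case counts are made up.

```python
# A minimal sketch of two routine epidemiological indicators over a
# hypothetical daily new-case series.
import numpy as np

daily_cases = np.array([120, 135, 150, 180, 210, 260, 300,
                        340, 390, 450, 500, 540, 560, 570,
                        565, 550, 530, 520, 505, 495, 490], dtype=float)

def rolling_mean(x, window=7):
    """7-day moving average, a standard smoother for reporting artifacts."""
    return np.convolve(x, np.ones(window) / window, mode="valid")

smoothed = rolling_mean(daily_cases)
# Week-over-week growth factor of the smoothed series: values above 1
# indicate an accelerating outbreak, values below 1 a receding one.
growth = smoothed[7:] / smoothed[:-7]

print("smoothed:", np.round(smoothed, 1))
print("growth factor:", np.round(growth, 2))
```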

16 pages, 3434 KiB  
Article
Lifetime Prediction Using a Tribology-Aware, Deep Learning-Based Digital Twin of Ball Bearing-Like Tribosystems in Oil and Gas
by Prathamesh S. Desai, Victoria Granja and C. Fred Higgs III
Processes 2021, 9(6), 922; https://doi.org/10.3390/pr9060922 - 24 May 2021
Cited by 32 | Viewed by 5536
Abstract
The recent decline in crude oil prices due to global competition and COVID-19-related demand issues has highlighted the need for the efficient operation of oil and gas plants. One such avenue is accurate prediction of the remaining useful life (RUL) of components used in oil and gas plants. A tribosystem comprises the surfaces in relative motion and the lubricant between them. Lubricant oils play a significant role in keeping tribosystems such as bearings and gears working smoothly over the lifetime of the plant. The lubricant oil needs replenishment from time to time to avoid component breakdown due to the increased presence of wear debris and friction between the sliding surfaces of bearings and gears. Traditionally, this oil change is carried out at pre-determined times. This paper explored the possibility of employing machine learning to predict early failure behavior in sensor-instrumented tribosystems. Specifically, deep learning and tribological data obtained from sensors deployed on the components can provide more accurate predictions of the RUL of the tribosystem. This automated maintenance can improve the overall efficiency of the component. The present study aimed to develop a deep learning-based digital twin for accurately predicting the RUL of a tribosystem comprising a ball bearing-like test apparatus (a four-ball tester) and lubricant oil. A commercial lubricant used in offshore oil and gas components was tested for its extreme-pressure performance, and its welding load was measured using the four-ball tester. Three accelerated deterioration tests were carried out on the four-ball tester at a load below the welding load. Based on the wear scar measurements obtained from the experimental tests, the RUL data were used to train a multivariate convolutional neural network (CNN). The training accuracy of the model was above 99%, and the testing accuracy was above 95%. This work involved model-free learning prediction of the remaining useful lifetime of ball bearing-type contacts as a function of key sensor input data (i.e., load, friction, temperature). The model can be deployed on in-field tribological machine elements to trigger automated maintenance without explicitly measuring the wear phenomenon. Full article
(This article belongs to the Special Issue Advance in Machine Learning)
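The multivariate CNN at the core of the digital twin can be sketched as follows, assuming fixed-length windows of (load, friction, temperature) readings as input and an RUL value as the regression target. The layer sizes, window length, and random data are illustrative, not the paper's architecture or experiments.

```python
# A minimal sketch of a multivariate 1D CNN for RUL regression over
# sensor windows; trained here on random stand-in data.
import torch
import torch.nn as nn

class RULRegressor(nn.Module):
    def __init__(self, n_channels=3, window=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # collapse the time axis
        )
        self.head = nn.Linear(32, 1)   # predicted remaining useful life

    def forward(self, x):
        # x: (batch, channels, window) -- load, friction, temperature
        return self.head(self.features(x).squeeze(-1))

model = RULRegressor()
windows = torch.randn(8, 3, 64)      # 8 hypothetical sensor windows
rul = model(windows)                 # (8, 1) RUL estimates
loss = nn.functional.mse_loss(rul, torch.rand(8, 1))  # stand-in targets
loss.backward()                      # one supervised training step
print(rul.shape)
```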

13 pages, 1074 KiB  
Article
A Study on Standardization of Security Evaluation Information for Chemical Processes Based on Deep Learning
by Lanfei Peng, Dong Gao and Yujie Bai
Processes 2021, 9(5), 832; https://doi.org/10.3390/pr9050832 - 10 May 2021
Cited by 8 | Viewed by 2445
Abstract
Hazard and operability analysis (HAZOP) is one of the most commonly used hazard analysis methods in the petrochemical industry. The large amount of unstructured data in HAZOP reports has generated an information explosion, which has led to a pressing need for technologies that can simplify the use of this information. To address the problem that these massive data are difficult to reuse and share, in this study we propose a new deep learning framework for the named entity recognition (NER) task on Chinese HAZOP documents, targeting their characteristic features such as polysemy, multi-entity nesting, and long-distance text. Specifically, the preprocessed data are fed into an Embeddings from Language Models (ELMo) module and a double convolutional neural network (DCNN) to extract rich character features, while a bidirectional long short-term memory (BiLSTM) network extracts long-distance semantic information. Finally, the results are decoded by a conditional random field (CRF) and output. Experiments were carried out on the HAZOP report of a coal seam indirect liquefaction project. The experimental results showed that the proposed model reached an optimal accuracy of 90.83%, a recall of 92.46%, and a highest F-value of 91.76%, a significant improvement over other models. Full article
(This article belongs to the Special Issue Advance in Machine Learning)
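The tagging pipeline can be sketched around its BiLSTM backbone. In the minimal PyTorch sketch below, a plain embedding layer stands in for the ELMo/DCNN character features and a per-token argmax stands in for the CRF decoder; both simplifications are for brevity and are not the paper's design.

```python
# A minimal BiLSTM tagger sketch: token embeddings -> BiLSTM -> per-token
# emission scores over BIO tags (CRF decoding omitted for brevity).
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size=5000, n_tags=9, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)
        self.emissions = nn.Linear(2 * dim, n_tags)  # per-token tag scores

    def forward(self, tokens):
        out, _ = self.lstm(self.emb(tokens))  # (batch, seq, 2*dim)
        return self.emissions(out)            # (batch, seq, n_tags)

tagger = BiLSTMTagger()
sentence = torch.randint(1, 5000, (1, 12))  # 12 toy token ids
scores = tagger(sentence)                   # emission scores for 9 tags
print(scores.argmax(dim=-1))                # greedy decode (CRF omitted)
```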
