Advances in Algorithm Optimization and Computational Intelligence

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: 15 April 2025 | Viewed by 6461

Special Issue Editors

Guest Editor
School of Engineering and Computing, University of Central Lancashire (UCLan), Preston PR1 2HE, UK
Interests: artificial intelligence; computer vision; digital healthcare; image processing; computational thinking; assisted living

Guest Editor
Department of Computer Science, Solent University, Southampton SO14 0YN, UK
Interests: affective computing; investigating multimodal data; hybrid DNNs; applications of AI; data science; computer vision; time-series and financial market analysis

Guest Editor
Graduate Institute of Intelligent Robotics, Hwa Hsia University of Technology, New Taipei City 235, Taiwan
Interests: artificial intelligence; machine learning; image processing; biometrics; pattern recognition

Special Issue Information

Dear Colleagues,

This Special Issue of Electronics, “Advances in Algorithm Optimization and Computational Intelligence,” contributes to a dynamic domain of computer science. Our aim is to provide a venue for academics and industry professionals to disseminate their research outcomes and methodologies in algorithm optimization and computational intelligence.

This Special Issue’s objective is to spotlight avant-garde research and methodologies that augment the efficacy, robustness, and versatility of algorithms. This aligns seamlessly with the journal’s overarching mission of fostering state-of-the-art research in computer science and its intersecting disciplines. The focus of this Issue is the exploration and development of novel algorithmic strategies, the application of machine learning techniques for optimization, and the advancement of artificial intelligence paradigms for complex problem solving.

Potential article themes for this Special Issue include, but are not limited to, machine learning algorithms, evolutionary computation, swarm intelligence, artificial neural networks, fuzzy systems, and decision support systems. These themes reflect the current trends and future directions in the realm of computational intelligence and algorithm optimization. This Issue encourages submissions that offer novel insights, propose new methodologies, or apply existing techniques in innovative ways to solve complex problems. This is an excellent opportunity for scholars to contribute to and shape discourse in this crucial research area.

In this Special Issue, original research articles and reviews are welcome. Research areas may include (but are not limited to) the following:

  1. Machine learning algorithms
  2. Evolutionary computation
  3. Swarm intelligence
  4. Artificial neural networks
  5. Fuzzy systems
  6. Decision support systems
  7. Optimization algorithms
  8. Deep learning
  9. Natural language processing
  10. Computer vision

Dr. Amin Amini
Dr. Bacha Rehman
Prof. Dr. Chih-Lung Lin
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • image processing
  • computer vision
  • algorithm optimization
  • computational intelligence

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad-scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (7 papers)


Research

25 pages, 597 KiB  
Article
Phase-Angle-Encoded Snake Optimization Algorithm for K-Means Clustering
by Dan Xue, Sen-Yuan Pang, Ning Liu, Shang-Kun Liu and Wei-Min Zheng
Electronics 2024, 13(21), 4215; https://doi.org/10.3390/electronics13214215 - 27 Oct 2024
Viewed by 463
Abstract
The rapid development of metaheuristic algorithms proves their advantages in optimization. Data clustering, as an optimization problem, faces challenges in achieving high accuracy. The K-means algorithm is traditional but has low clustering accuracy. In this paper, the phase-angle-encoded snake optimization algorithm (θ-SO), based on a mapping strategy, is proposed for data clustering. The disadvantages of traditional snake optimization include slow convergence speed and poor optimization accuracy. The improved θ-SO uses phase angles for boundary setting and enables efficient adjustments in the phase angle vector to accelerate convergence, while employing a Gaussian distribution strategy to enhance optimization accuracy. The optimization performance of θ-SO is evaluated on the CEC2013 benchmark functions and compared with other metaheuristic algorithms. Additionally, its clustering optimization capabilities are tested on the Iris, Wine, Seeds, and CMC datasets, using the classification error rate and sum of intra-cluster distances. Experimental results show θ-SO surpasses other algorithms on over 2/3 of CEC2013 test functions, hitting a 90% high-performance mark across all clustering optimization tasks. The method proposed in this paper effectively addresses the issues of data clustering difficulty and low clustering accuracy. Full article
(This article belongs to the Special Issue Advances in Algorithm Optimization and Computational Intelligence)
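
To make the encoding idea in the abstract concrete, here is a minimal Python sketch of phase-angle-encoded clustering: candidate centroids are stored as phase angles in [-π/2, π/2] and mapped back into the data range before the usual sum of intra-cluster distances is evaluated. The decode mapping, the simple random-perturbation search loop, and all parameter values are our own illustrative assumptions, not the authors' θ-SO implementation.

```python
# Minimal sketch (not the authors' θ-SO code): centroids are encoded as phase angles,
# decoded into the data range, and scored by the sum of intra-cluster distances.
# A plain random perturbation stands in for the snake-optimizer update rules.
import numpy as np

def decode(theta, lb, ub):
    """Map phase angles in [-pi/2, pi/2] to feature-space centroids in [lb, ub]."""
    return lb + (ub - lb) * (np.sin(theta) + 1.0) / 2.0

def intra_cluster_distance(X, centroids):
    """Sum of distances from each point to its nearest centroid (clustering fitness)."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.min(axis=1).sum()

def phase_angle_clustering(X, k, iters=200, step=0.05, seed=0):
    rng = np.random.default_rng(seed)
    lb, ub = X.min(axis=0), X.max(axis=0)
    theta = rng.uniform(-np.pi / 2, np.pi / 2, size=(k, X.shape[1]))
    best_theta = theta.copy()
    best_fit = intra_cluster_distance(X, decode(theta, lb, ub))
    for _ in range(iters):
        cand = np.clip(best_theta + step * rng.standard_normal(best_theta.shape),
                       -np.pi / 2, np.pi / 2)          # perturb in angle space
        fit = intra_cluster_distance(X, decode(cand, lb, ub))
        if fit < best_fit:
            best_theta, best_fit = cand, fit
    return decode(best_theta, lb, ub), best_fit

if __name__ == "__main__":
    X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5.0])
    centroids, fitness = phase_angle_clustering(X, k=2)
    print(centroids, fitness)
```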

11 pages, 2205 KiB  
Article
A Novel Approach for Solving the N-Queen Problem Using a Non-Sequential Conflict Resolution Algorithm
by Omid Moghimi and Amin Amini
Electronics 2024, 13(20), 4065; https://doi.org/10.3390/electronics13204065 - 16 Oct 2024
Viewed by 678
Abstract
The N-Queens problem is a fundamental challenge in combinatorial optimization, commonly used as a benchmark for assessing the efficiency of algorithms. Traditional algorithms, such as Backtracking with Forward Checking (BFC), constraint satisfaction problem (CSP) techniques, Lookahead algorithms, and heuristic-based methods, often face challenges with exponential time complexity, making them less practical for large-scale instances. This paper introduces a novel algorithm, non-sequential conflict resolution (NSCR), which improves performance over traditional algorithms through dynamic conflict resolution. The NSCR algorithm iteratively resolves conflicts among queens by adjusting their positions, aiming to optimize both time complexity and memory usage. While NSCR also operates within exponential time bounds, it demonstrates improved scalability and efficiency compared to traditional methods. A significant strength of the NSCR algorithm lies in its space complexity, which is O(n), and a time complexity that, while typically lower than that of traditional methods, can reach O(n³) in the worst-case scenario. This linear space complexity is highly advantageous, particularly when dealing with large problem sizes, as it ensures efficient use of memory resources. Comparative analysis with the aforementioned algorithms shows that NSCR offers superior resource management, using up to 60% less memory and reducing runtime by approximately 50%, making it an efficient option for large-scale instances of the N-Queens problem. The algorithm’s performance, evaluated on problem sizes ranging from 8 to 1000 queens, highlights its ability to manage computational resources effectively, despite the inherent challenges of exponential time complexity. Full article
(This article belongs to the Special Issue Advances in Algorithm Optimization and Computational Intelligence)
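
The abstract does not spell out the NSCR update rules, so the following is only a generic iterative conflict-resolution sketch for N-Queens in the same spirit: one row index per column (O(n) space), with a conflicting queen repeatedly moved to the row that minimizes its conflicts. It is a standard min-conflicts repair, not the published NSCR algorithm.

```python
# Illustrative sketch only: a generic min-conflicts repair for N-Queens using O(n) space
# (one row index per column). The NSCR algorithm's specific non-sequential rules are
# not reproduced here.
import random

def conflicts(rows, col):
    """Number of queens attacking the queen in `col` (same row or diagonal)."""
    return sum(1 for c in range(len(rows)) if c != col and
               (rows[c] == rows[col] or abs(rows[c] - rows[col]) == abs(c - col)))

def solve_n_queens(n, max_steps=100_000, seed=0):
    random.seed(seed)
    rows = [random.randrange(n) for _ in range(n)]   # one queen per column
    for _ in range(max_steps):
        conflicted = [c for c in range(n) if conflicts(rows, c) > 0]
        if not conflicted:
            return rows                              # valid placement found
        col = random.choice(conflicted)              # pick a conflicting queen
        # move it to the row that minimizes its conflicts
        rows[col] = min(range(n),
                        key=lambda r: conflicts(rows[:col] + [r] + rows[col + 1:], col))
    return None                                      # no solution within the step budget

if __name__ == "__main__":
    print(solve_n_queens(50))
```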

19 pages, 1236 KiB  
Article
Multi-Task Diffusion Learning for Time Series Classification
by Shaoqiu Zheng, Zhen Liu, Long Tian, Ling Ye, Shixin Zheng, Peng Peng and Wei Chu
Electronics 2024, 13(20), 4015; https://doi.org/10.3390/electronics13204015 - 12 Oct 2024
Viewed by 635
Abstract
Current deep learning models for time series often face challenges with generalizability in scenarios characterized by limited samples or inadequately labeled data. By tapping into the robust generative capabilities of diffusion models, which have shown success in computer vision and natural language processing, we see potential for improving the adaptability of deep learning models. However, the specific application of diffusion models in generating samples for time series classification tasks remains underexplored. To bridge this gap, we introduce the MDGPS model, which incorporates multi-task diffusion learning and gradient-free patch search (MDGPS). Our methodology aims to bolster the generalizability of time series classification models confronted with restricted labeled samples. The multi-task diffusion learning module integrates frequency-domain classification with random masked patches diffusion learning, leveraging frequency-domain feature representations and patch observation distributions to improve the discriminative properties of generated samples. Furthermore, a gradient-free patch search module, utilizing the particle swarm optimization algorithm, refines time series for specific samples through a pre-trained multi-task diffusion model. This process aims to reduce classification errors caused by random patch masking. The experimental results on four time series datasets show that the proposed MDGPS model consistently surpasses other methods, achieving the highest classification accuracy and F1-score across all datasets: 95.81%, 87.64%, 82.31%, and 100% in accuracy; and 95.21%, 82.32%, 78.57%, and 100% in F1-Score for Epilepsy, FD-B, Gesture, and EMG, respectively. In addition, evaluations in a reinforcement learning scenario confirm MDGPS’s superior performance. Ablation and visualization experiments further validate the effectiveness of its individual components. Full article
(This article belongs to the Special Issue Advances in Algorithm Optimization and Computational Intelligence)
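
As a small illustration of two ingredients named in the abstract, random patch masking and frequency-domain features, the NumPy sketch below masks fixed-length patches of a univariate series and computes a truncated magnitude spectrum. The patch length, mask ratio, and number of frequency bins are hypothetical choices; the MDGPS diffusion model and the gradient-free patch-search module are not reproduced.

```python
# Sketch of two ingredients mentioned in the abstract -- random patch masking of a time
# series and frequency-domain feature extraction -- not the MDGPS model itself.
import numpy as np

def random_patch_mask(x, patch_len=16, mask_ratio=0.3, seed=0):
    """Zero out a random subset of fixed-length patches; return masked series and mask."""
    rng = np.random.default_rng(seed)
    n_patches = len(x) // patch_len
    n_masked = int(mask_ratio * n_patches)
    masked_idx = rng.choice(n_patches, size=n_masked, replace=False)
    mask = np.ones_like(x, dtype=bool)
    for p in masked_idx:
        mask[p * patch_len:(p + 1) * patch_len] = False
    return np.where(mask, x, 0.0), mask

def frequency_features(x, n_bins=32):
    """Magnitude spectrum of the series, truncated/padded to a fixed number of bins."""
    spec = np.abs(np.fft.rfft(x))
    out = np.zeros(n_bins)
    out[:min(n_bins, len(spec))] = spec[:n_bins]
    return out

if __name__ == "__main__":
    x = np.sin(np.linspace(0, 20 * np.pi, 256)) + 0.1 * np.random.randn(256)
    x_masked, mask = random_patch_mask(x)
    print(frequency_features(x_masked).shape, mask.mean())
```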

22 pages, 7227 KiB  
Article
Robust Reversible Watermarking Scheme in Video Compression Domain Based on Multi-Layer Embedding
by Yifei Meng, Ke Niu, Yingnan Zhang, Yucheng Liang and Fangmeng Hu
Electronics 2024, 13(18), 3734; https://doi.org/10.3390/electronics13183734 - 20 Sep 2024
Viewed by 751
Abstract
Most of the existing research on video watermarking schemes focuses on improving the robustness of watermarking. However, in application scenarios such as judicial forensics and telemedicine, the distortion caused by watermark embedding on the original video is unacceptable. To solve this problem, this paper proposes a robust reversible watermarking (RRW) scheme based on multi-layer embedding in the video compression domain. Firstly, the watermarking data are divided into several sub-secrets by using Shamir’s (t, n)-threshold secret sharing. After that, the chroma sub-block with more complex texture information is filtered out in the I-frame of each group of pictures (GOP), and the sub-secret is embedded in that frame by modifying the discrete cosine transform (DCT) coefficients within the sub-block. Finally, the auxiliary information required to recover the coefficients is embedded into the motion vector of the P-frame of each GOP by a reversible steganography algorithm. In the absence of an attack, the receiver can recover the DCT coefficients by extracting the auxiliary information in the vectors, ultimately recovering the video correctly. The watermarking scheme demonstrates strong robustness even when it suffers from malicious attacks such as recompression attacks and requantization attacks. The experimental results demonstrate that the watermarking scheme proposed in this paper exhibits reversibility and high visual quality. Moreover, the scheme surpasses other comparable methods in the robustness tests. Full article
(This article belongs to the Special Issue Advances in Algorithm Optimization and Computational Intelligence)
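
The first step the abstract describes, splitting the watermark into sub-secrets with Shamir's (t, n)-threshold scheme, can be sketched in a few lines of Python over a prime field. The prime, share points, and reconstruction routine below are generic textbook choices, and the DCT-coefficient embedding and motion-vector steganography stages are not shown.

```python
# Minimal sketch of Shamir's (t, n)-threshold secret sharing, the first step described
# in the abstract; the embedding stages of the RRW scheme are not reproduced.
import random

PRIME = 2_147_483_647  # a Mersenne prime large enough for small integer secrets

def split_secret(secret, t, n, seed=None):
    """Create n shares of `secret`; any t of them suffice to reconstruct it."""
    rng = random.Random(seed)
    coeffs = [secret] + [rng.randrange(PRIME) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def recover_secret(shares):
    """Lagrange interpolation at x = 0 over GF(PRIME)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

if __name__ == "__main__":
    shares = split_secret(123456, t=3, n=5, seed=1)
    print(recover_secret(shares[:3]))   # -> 123456
```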

15 pages, 10699 KiB  
Article
Frequency-Auxiliary One-Shot Domain Adaptation of Generative Adversarial Networks
by Kan Cheng, Haidong Liu, Jiayu Liu, Bo Xu and Xinyue Liu
Electronics 2024, 13(13), 2643; https://doi.org/10.3390/electronics13132643 - 5 Jul 2024
Viewed by 797
Abstract
Generative domain adaptation in a one-shot scenario involves transferring a pretrained generator from one domain to another using only a single reference image. To address the issue of extremely scarce data, existing methods resort to complex parameter constraints and leverage additional semantic knowledge from CLIP models to mitigate it. However, these methods still suffer from overfitting and underfitting issues due to the lack of prior knowledge about the domain adaptation task. In this paper, we first introduce the perspective of the frequency domain into the generative domain adaptation task to support the model in understanding the adaptation goals in a one-shot scenario and propose a method called frequency-auxiliary GAN (FAGAN). The FAGAN contains two core modules: a low-frequency fusion module (LFF-Module) and a high-frequency guide module (HFG-Module). Specifically, the LFF-Module aims to inherit the domain-sharing information of the source model by fusing the low-frequency features of the source model. In addition, the HFG-Module is designed to select the domain-specific information of the reference image and guide the model to fit it by utilizing high-frequency guidance. These two modules are dedicated to alleviating overfitting and underfitting issues, thereby enhancing the diversity and fidelity of generated images. Extensive experimental results showed that our method leads to better quantitative and qualitative results than the existing methods under a wide range of task settings. Full article
(This article belongs to the Special Issue Advances in Algorithm Optimization and Computational Intelligence)
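
To illustrate the frequency-domain perspective the abstract introduces, the sketch below separates an image into low- and high-frequency components with a centred FFT mask, the kind of split that the LFF-Module and HFG-Module operate on. The cutoff radius and the plain FFT masking are illustrative assumptions; this is not the FAGAN modules themselves.

```python
# Generic low-/high-frequency split of an image via a centred FFT mask, illustrating
# the frequency-domain view in the abstract (not the FAGAN LFF/HFG modules).
import numpy as np

def split_frequencies(img, cutoff=0.1):
    """Return (low_freq, high_freq) parts of a 2-D image using a circular frequency mask."""
    h, w = img.shape
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    low_mask = radius <= cutoff * min(h, w)          # keep only frequencies near the centre
    low = np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * low_mask)))
    high = img - low                                 # residual carries the fine detail
    return low, high

if __name__ == "__main__":
    img = np.random.rand(64, 64)
    low, high = split_frequencies(img)
    print(np.allclose(low + high, img))              # True by construction
```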

14 pages, 398 KiB  
Article
FCL: Pedestrian Re-Identification Algorithm Based on Feature Fusion Contrastive Learning
by Yuangang Li, Yuhan Zhang, Yunlong Gao, Bo Xu and Xinyue Liu
Electronics 2024, 13(12), 2368; https://doi.org/10.3390/electronics13122368 - 17 Jun 2024
Cited by 1 | Viewed by 870
Abstract
Pedestrian re-identification leverages computer vision technology to achieve cross-camera matching of pedestrians; the field has recently seen significant progress and has numerous practical applications. However, current algorithms face the following challenges: (1) most of the methods are supervised, heavily relying on specific datasets, and lacking robust generalization capabilities; (2) it is hard to extract features because the elongated and narrow shape of pedestrian images introduces uneven feature distributions; (3) there is a substantial imbalance between positive and negative samples. To address these challenges, we introduce a novel unsupervised pedestrian re-identification algorithm called Feature Fusion Contrastive Learning (FCL) to extract more effective features. Specifically, we employ circular pooling to merge network features across different levels for pedestrian re-identification to improve robust generalization capability. Furthermore, we propose a feature fusion pooling method, which facilitates a more efficient distribution of feature representations across pedestrian images. Finally, we introduce FocalLoss to compute the clustering-level loss, mitigating the imbalance between positive and negative samples. Through extensive experiments conducted on three prominent datasets, our proposed method demonstrates promising performance, with FCL’s mAP improving by an average of 3.8% over the baseline results. Full article
(This article belongs to the Special Issue Advances in Algorithm Optimization and Computational Intelligence)
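
Since the abstract leans on FocalLoss to counter the positive/negative imbalance, here is the standard binary focal loss (Lin et al.'s formulation) in NumPy as a reference sketch; the gamma and alpha values are the usual defaults, and FCL's clustering-level version of the loss is not reproduced.

```python
# Standard binary focal loss, FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t),
# shown as a generic reference; not FCL's clustering-level loss.
import numpy as np

def focal_loss(probs, targets, gamma=2.0, alpha=0.25, eps=1e-8):
    """`probs`: predicted probabilities of the positive class; `targets`: 0/1 labels."""
    probs = np.clip(probs, eps, 1.0 - eps)
    p_t = np.where(targets == 1, probs, 1.0 - probs)          # probability of the true class
    alpha_t = np.where(targets == 1, alpha, 1.0 - alpha)      # class-balancing weight
    return float(np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)))

if __name__ == "__main__":
    p = np.array([0.9, 0.2, 0.7, 0.05])
    y = np.array([1, 0, 1, 0])
    print(focal_loss(p, y))
```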

15 pages, 1985 KiB  
Article
An Improvement of Adam Based on a Cyclic Exponential Decay Learning Rate and Gradient Norm Constraints
by Yichuan Shao, Jiapeng Yang, Wen Zhou, Haijing Sun, Lei Xing, Qian Zhao and Le Zhang
Electronics 2024, 13(9), 1778; https://doi.org/10.3390/electronics13091778 - 4 May 2024
Cited by 1 | Viewed by 1331
Abstract
To address a series of limitations of the Adam algorithm, such as hyperparameter sensitivity and unstable convergence, this paper proposes an improved optimization algorithm, the Cycle-Norm-Adam (CN-Adam) algorithm. The algorithm integrates the ideas of a cyclic exponential decay learning rate (CEDLR) and gradient norm constraints, accelerating the convergence speed of Adam and improving its generalization performance by dynamically adjusting the learning rate. In order to verify the effectiveness of the CN-Adam algorithm, we conducted extensive experimental studies. The CN-Adam algorithm achieved significant performance improvements on both standard datasets. The experimental results show that the CN-Adam algorithm achieved 98.54% accuracy on the MNIST dataset and 72.10% on the CIFAR10 dataset. Due to the complexity and specificity of medical images, the algorithm was also tested on a medical dataset and achieved an accuracy of 78.80%, which was better than the other algorithms. The experimental results show that the CN-Adam optimization algorithm provides an effective optimization strategy for improving model performance and promoting medical research. Full article
(This article belongs to the Special Issue Advances in Algorithm Optimization and Computational Intelligence)
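
A rough NumPy sketch of the two mechanisms the abstract names, a cyclic exponential-decay learning rate and a gradient-norm constraint, wrapped around a plain Adam update is given below; the cycle length, decay factor, and clipping threshold are our own assumptions rather than the published CN-Adam settings.

```python
# Hedged sketch: a cyclic exponential-decay learning rate and a gradient-norm constraint
# bolted onto a plain NumPy Adam step. Constants are assumptions, not the authors' code.
import numpy as np

def cyclic_exp_lr(step, base_lr=1e-2, cycle=100, decay=0.999):
    """Exponentially decayed learning rate that restarts at the start of every cycle."""
    return base_lr * (decay ** (step % cycle))

def clip_by_norm(grad, max_norm=1.0):
    """Scale the gradient down if its L2 norm exceeds max_norm."""
    norm = np.linalg.norm(grad)
    return grad * (max_norm / norm) if norm > max_norm else grad

def adam_cn(grad_fn, x0, steps=1000, beta1=0.9, beta2=0.999, eps=1e-8):
    x = x0.astype(float)
    m = np.zeros_like(x)
    v = np.zeros_like(x)
    for t in range(1, steps + 1):
        g = clip_by_norm(grad_fn(x))                           # gradient-norm constraint
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat, v_hat = m / (1 - beta1 ** t), v / (1 - beta2 ** t)
        x -= cyclic_exp_lr(t) * m_hat / (np.sqrt(v_hat) + eps) # cyclically decayed step
    return x

if __name__ == "__main__":
    grad = lambda x: 2.0 * (x - 3.0)                           # gradient of (x - 3)^2
    print(adam_cn(grad, np.array([0.0])))                      # approaches 3.0
```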
