Artificial Intelligence and Applications—Responsible AI

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: 10 January 2025 | Viewed by 6727

Special Issue Editors


Guest Editor
Faculty of Science and Technology, Charles Darwin University, Sydney, NSW 2000, Australia
Interests: artificial intelligence; computational intelligence; explainable/responsible/ethical AI; evolutionary optimization; intelligent systems; cyber-physical systems

Guest Editor
Data Science Institute, University of Technology Sydney, Sydney, NSW 2007, Australia
Interests: AI for social good; AI fairness; AI explainability; smart agriculture; visual analytics; behavior analytics; human-computer interaction

Special Issue Information

Dear Colleagues,

Artificial Intelligence (AI) and its applications across different industrial sectors are transforming the world. It is important to apply AI decision-making systems to different industries with a strong emphasis on the ethical and explainable use of AI. Technological advancements are leading us toward AI decision-making systems capable of making informed, responsible and ethical decisions within their designated industries. Given the speed at which AI is developing, it is critical to consider the ethical implications of these systems. Another important consideration is the professional development of such systems and the application of sound principles when selecting and implementing intelligent algorithms. This Special Issue highlights applications of AI to real-life industry problems, ranging from predictions that provide business solutions to cyber-physical systems, while maintaining and emphasizing the explainability of the resulting decisions. It addresses applications of AI and smart algorithms, such as neural networks and a variety of classification and clustering methods, to real-world problems, and also delves into the explainable and ethical aspects of these AI solutions.

This Special Issue aims to collect the latest research on AI applications, AI explainability, machine learning and deep learning, including classification, clustering and neural network methods such as support vector machines, graph neural networks, convolutional neural networks, AdaBoost and k-NN, as well as explainability techniques such as SHAP. Specific topics include, but are not limited to, the following:

  • Industry applications of AI;
  • AI in business;
  • AI for management;
  • Responsible AI;
  • Explainable AI;
  • Ethical AI;
  • Cyber-physical systems and explainability;
  • Trusted AI;
  • AI in healthcare;
  • AI for good;
  • Transparency in AI.

Dr. Niusha Shafiabady
Dr. Jianlong Zhou
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, you can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • explainability
  • responsible AI
  • AI applications
  • ethical AI
  • classification
  • prediction
  • clustering
  • AI for industry

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (5 papers)


Research

18 pages, 552 KiB  
Article
An Enhanced K-Means Clustering Algorithm for Phishing Attack Detections
by Abdallah Al-Sabbagh, Khalil Hamze, Samiya Khan and Mahmoud Elkhodr
Electronics 2024, 13(18), 3677; https://doi.org/10.3390/electronics13183677 - 16 Sep 2024
Viewed by 1412
Abstract
Phishing attacks continue to pose a significant threat to cybersecurity, employing increasingly sophisticated techniques to deceive victims into revealing sensitive information or downloading malware. This paper presents a comprehensive study on the application of Machine Learning (ML) techniques for identifying phishing websites, with a focus on enhancing detection accuracy and efficiency. We propose an approach that integrates the CfsSubsetEval attribute evaluator with the K-Means Clustering algorithm to improve phishing detection capabilities. Our method was evaluated using datasets of varying sizes (2000, 7000, and 10,000 samples) from a publicly available repository. Simulation results demonstrate that our approach achieves an accuracy of 89.2% on the 2000-sample dataset, outperforming the traditional kernel K-Means algorithm, which achieved an accuracy of 51.5%. Further analysis using precision, recall, and F1-score metrics corroborates the effectiveness of our method. We also discuss the scalability and real-world applicability of our approach, addressing limitations and proposing future research directions. This study contributes to the ongoing efforts to develop robust, efficient, and adaptable phishing detection systems in the face of evolving cyber threats.
(This article belongs to the Special Issue Artificial Intelligence and Applications—Responsible AI)
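As a rough illustration of the pipeline this abstract describes (feature selection, K-Means clustering, then scoring clusters against known labels), the sketch below uses invented toy data and plain NumPy. It is not the authors' implementation; in particular, simply keeping the informative columns stands in here for the CfsSubsetEval attribute evaluator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented toy stand-in for a phishing dataset: two well-separated classes in
# the first two features, plus three uninformative noise features.
n = 200
y = rng.integers(0, 2, n)
X = np.hstack([
    y[:, None] * 4.0 + rng.normal(size=(n, 2)),  # informative features
    rng.normal(size=(n, 3)),                     # noise features
])

# "Feature selection": keep the informative subset (stand-in for CfsSubsetEval).
X_sel = X[:, :2]

def kmeans(X, iters=50):
    """Plain Lloyd's algorithm with k = 2 and a deterministic init."""
    centers = X[[X[:, 0].argmin(), X[:, 0].argmax()]].copy()
    for _ in range(iters):
        labels = ((X[:, None] - centers) ** 2).sum(-1).argmin(1)
        for j in range(len(centers)):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return labels

labels = kmeans(X_sel)
# Map each cluster to the majority class it contains before scoring accuracy.
pred = np.where(labels == 0,
                np.bincount(y[labels == 0]).argmax(),
                np.bincount(y[labels == 1]).argmax())
acc = (pred == y).mean()
print(f"clustering accuracy: {acc:.2f}")
```

The majority-vote mapping in the last step reflects how an unsupervised clustering is typically scored against labeled phishing/legitimate data.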

24 pages, 1720 KiB  
Article
Investigating and Mitigating the Performance–Fairness Tradeoff via Protected-Category Sampling
by Gideon Popoola and John Sheppard
Electronics 2024, 13(15), 3024; https://doi.org/10.3390/electronics13153024 - 31 Jul 2024
Viewed by 827
Abstract
Machine learning algorithms have become common in everyday decision making, and decision-assistance systems are ubiquitous in our everyday lives. Hence, research on the prevention and mitigation of potential bias and unfairness of the predictions made by these algorithms has been increasing in recent years. Most research on fairness and bias mitigation in machine learning often treats each protected variable separately, but in reality, it is possible for one person to belong to multiple protected categories. Hence, in this work, combining a set of protected variables and generating new columns that separate these protected variables into many subcategories was examined. These new subcategories tend to be extremely imbalanced, so bias mitigation was approached as an imbalanced classification problem. Specifically, four new custom sampling methods were developed and investigated to sample these new subcategories. These new sampling methods are referred to as protected-category oversampling, protected-category proportional sampling, protected-category Synthetic Minority Oversampling Technique (PC-SMOTE), and protected-category Adaptive Synthetic Sampling (PC-ADASYN). These sampling methods modify the existing sampling method by focusing their sampling on the new subcategories rather than the class label. The impact of these sampling strategies was then evaluated based on classical performance and fairness in classification settings. Classification performance was measured using accuracy and F1 based on training univariate decision trees, and fairness was measured using equalized odd differences and statistical parity. To evaluate the impact of fairness versus performance, these measures were evaluated against decision tree depth. The results show that the proposed methods were able to determine optimal points, whereby fairness was increased without decreasing performance, thus mitigating any potential performance–fairness tradeoff.
(This article belongs to the Special Issue Artificial Intelligence and Applications—Responsible AI)
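The core idea in this abstract, combining protected variables into intersectional subcategories and then resampling each subcategory up to the size of the largest, can be sketched as follows. The dataset and attribute names are invented for illustration, and this is a simplified stand-in for the paper's protected-category oversampling, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
# Two invented binary protected attributes with imbalanced marginals.
sex = rng.choice([0, 1], size=n, p=[0.8, 0.2])
age = rng.choice([0, 1], size=n, p=[0.7, 0.3])
X = rng.normal(size=(n, 5))  # invented feature matrix

# Combine the protected variables into one intersectional subcategory column.
subcat = sex * 2 + age  # values 0..3, one per (sex, age) combination

# Protected-category oversampling: resample every subcategory (with
# replacement) up to the size of the largest one, so that no intersectional
# group is underrepresented in the training data.
counts = np.bincount(subcat, minlength=4)
target = counts.max()
idx = np.concatenate([
    rng.choice(np.flatnonzero(subcat == c), size=target, replace=True)
    for c in range(4)
])
X_bal, subcat_bal = X[idx], subcat[idx]
print(np.bincount(subcat_bal, minlength=4))
```

Note that the sampling key is the subcategory column rather than the class label, which is the distinction the abstract draws from standard imbalanced-learning methods.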

20 pages, 6310 KiB  
Article
Collaborative Decision Making with Responsible AI: Establishing Trust and Load Models for Probabilistic Transparency
by Xinyue Wang, Yaxin Li and Chengqi Xue
Electronics 2024, 13(15), 3004; https://doi.org/10.3390/electronics13153004 - 30 Jul 2024
Viewed by 878
Abstract
In responsible AI development, the construction of AI systems with well-designed transparency and the capability to achieve transparency-adaptive adjustments necessitates a clear and quantified understanding of user states during the interaction process. Among these, trust and load are two important states of the user’s internal psychology, albeit often challenging to directly ascertain. Thus, this study employs transparency experiments involving multiple probabilistic indicators to capture users’ compliance and reaction times during the interactive collaboration process of receiving real-time feedback. Subsequently, estimations of trust and load states are established, leading to the further development of a state transition matrix. Through the establishment of a trust–workload model, probabilistic estimations of user states under varying levels of transparency are obtained, quantitatively delineating the evolution of states and transparency within interaction sequences. This research lays the groundwork for subsequent endeavors in optimal strategy formulation and the development of transparency dynamically adaptive adjustment strategies based on the trust–workload state model constraints.
(This article belongs to the Special Issue Artificial Intelligence and Applications—Responsible AI)
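A state transition matrix of the kind this abstract develops can be estimated from an observed sequence of discretized user states by counting transitions and row-normalizing. The toy sequence below is invented for illustration and does not reproduce the paper's trust–workload model.

```python
import numpy as np

# Invented toy sequence of discretized user trust states observed over an
# interaction sequence (0 = low trust, 1 = high trust).
states = [0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1]

n_states = 2
T = np.zeros((n_states, n_states))
for a, b in zip(states[:-1], states[1:]):
    T[a, b] += 1                          # count observed transitions a -> b
T = T / T.sum(axis=1, keepdims=True)      # row-normalize to probabilities
print(T)
```

Each row of `T` is then a probability distribution over next states, which is what allows the paper's kind of probabilistic reasoning about how user states evolve under different transparency levels.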

14 pages, 2377 KiB  
Article
Efficient Adversarial Attack Based on Moment Estimation and Lookahead Gradient
by Dian Hong, Deng Chen, Yanduo Zhang, Huabing Zhou, Liang Xie, Jianping Ju and Jianyin Tang
Electronics 2024, 13(13), 2464; https://doi.org/10.3390/electronics13132464 - 24 Jun 2024
Viewed by 807
Abstract
Adversarial example generation is a technique that involves perturbing inputs with imperceptible noise to induce misclassifications in neural networks, serving as a means to assess the robustness of such models. Among the adversarial attack algorithms, momentum iterative fast gradient sign method (MI-FGSM) and its variants constitute a class of highly effective offensive strategies, achieving near-perfect attack success rates in white-box settings. However, these methods’ use of sign activation functions severely compromises gradient information, which leads to low success rates in black-box attacks and results in large adversarial perturbations. In this paper, we introduce a novel adversarial attack algorithm, NA-FGTM. Our method employs the Tanh activation function instead of the sign, which can accurately preserve gradient information. In addition, it utilizes the Adam optimization algorithm as well as Nesterov acceleration, which is able to stabilize gradient update directions and expedite gradient convergence. Above all, the transferability of adversarial examples can be enhanced. Through integration with data augmentation techniques such as DIM, TIM, and SIM, NA-FGTM can further improve the efficacy of black-box attacks. Extensive experiments on the ImageNet dataset demonstrate that our method outperforms the state-of-the-art approaches in terms of black-box attack success rate and generates adversarial examples with smaller perturbations. Full article
(This article belongs to the Special Issue Artificial Intelligence and Applications—Responsible AI)
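A much-simplified sketch of the update rule this abstract describes (a tanh activation in place of sign, a momentum accumulator, and a Nesterov-style lookahead gradient) is shown below on a toy linear model with a closed-form gradient. It omits Adam's second-moment term and the data-augmentation variants, and it is not the authors' NA-FGTM implementation.

```python
import numpy as np

# Toy linear "model": for loss(x) = w . x, the input gradient is simply w,
# so the update rule can be shown without a deep-learning framework.
w = np.array([1.0, -2.0, 0.5])

def loss_grad(x):
    return w  # constant gradient for a linear loss

x = np.zeros(3)            # adversarial perturbation being built up
m = np.zeros(3)            # momentum accumulator, as in MI-FGSM
eps, alpha, mu = 0.3, 0.05, 1.0

for _ in range(10):
    g = loss_grad(x + alpha * mu * m)           # Nesterov-style lookahead point
    m = mu * m + g / (np.abs(g).sum() + 1e-12)  # L1-normalized momentum update
    x = x + alpha * np.tanh(m)                  # tanh in place of sign
    x = np.clip(x, -eps, eps)                   # stay within the budget
print(x)
```

Unlike `np.sign`, `np.tanh` keeps the relative magnitudes of the gradient components, which is the gradient-information argument the abstract makes for improved transferability.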

16 pages, 318 KiB  
Article
DPShield: Optimizing Differential Privacy for High-Utility Data Analysis in Sensitive Domains
by Pratik Thantharate, Shyam Bhojwani and Anurag Thantharate
Electronics 2024, 13(12), 2333; https://doi.org/10.3390/electronics13122333 - 14 Jun 2024
Cited by 1 | Viewed by 823
Abstract
The proliferation of cloud computing has amplified the need for robust privacy-preserving technologies, particularly when dealing with sensitive financial and human resources (HR) data. However, traditional differential privacy methods often struggle to balance rigorous privacy protections with maintaining data utility. This study introduces DPShield, an optimized adaptive framework that enhances the trade-off between privacy guarantees and data utility in cloud environments. DPShield leverages advanced differential privacy techniques, including dynamic noise-injection mechanisms tailored to data sensitivity, cumulative privacy loss tracking, and domain-specific optimizations. Through comprehensive evaluations on synthetic financial and real-world HR datasets, DPShield demonstrated a remarkable 21.7% improvement in aggregate query accuracy over existing differential privacy approaches. Moreover, it maintained machine learning model accuracy within 5% of non-private benchmarks, ensuring high utility for predictive analytics. These achievements signify a major advancement in differential privacy, offering a scalable solution that harmonizes robust privacy assurances with practical data analysis needs. DPShield’s domain adaptability and seamless integration with cloud architectures underscore its potential as a versatile privacy-enhancing tool. This work bridges the gap between theoretical privacy guarantees and practical implementation demands, paving the way for more secure, ethical, and insightful data usage in cloud computing environments.
(This article belongs to the Special Issue Artificial Intelligence and Applications—Responsible AI)
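Two of the ingredients this abstract mentions, sensitivity-scaled Laplace noise and cumulative privacy-loss tracking, can be sketched in a few lines. The class and the numbers below are invented for illustration and do not reflect DPShield's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

class PrivacyAccountant:
    """Tracks cumulative privacy loss across queries (basic composition)."""

    def __init__(self, budget):
        self.budget, self.spent = budget, 0.0

    def laplace_query(self, true_value, sensitivity, epsilon):
        # Refuse queries that would exceed the overall privacy budget.
        if self.spent + epsilon > self.budget:
            raise RuntimeError("privacy budget exhausted")
        self.spent += epsilon
        # Laplace mechanism: noise scale grows with sensitivity / epsilon.
        return true_value + rng.laplace(scale=sensitivity / epsilon)

acct = PrivacyAccountant(budget=1.0)
records = np.array([52_000, 61_000, 48_000, 75_000])  # invented HR salaries
# Noisy count: one person changes a count by at most 1, so sensitivity = 1.
noisy_count = acct.laplace_query(len(records), sensitivity=1, epsilon=0.5)
print(round(noisy_count), acct.spent)
```

Tailoring the noise scale to each query's sensitivity, while refusing queries once the budget is exhausted, is the basic mechanism behind the adaptive trade-off the abstract describes.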
