Article

Optimization of Artificial Intelligence Algorithm Selection: PIPRECIA-S Model and Multi-Criteria Analysis

1 Faculty of Mathematics and Computer Sciences, Alfa BK University, 11000 Belgrade, Serbia
2 Information Technology School—ITS, 11000 Belgrade, Serbia
3 Faculty of Information Technologies, Alfa BK University, 11000 Belgrade, Serbia
4 Department of Information Technology, University of Criminal Investigation and Police Studies, 11000 Belgrade, Serbia
* Authors to whom correspondence should be addressed.
Electronics 2025, 14(3), 562; https://doi.org/10.3390/electronics14030562
Submission received: 18 November 2024 / Revised: 12 January 2025 / Accepted: 28 January 2025 / Published: 30 January 2025
(This article belongs to the Special Issue Feature Papers in "Computer Science & Engineering", 2nd Edition)

Abstract
In the age of digitization and the ever-present use of artificial intelligence (AI), it is essential to develop methodologies that enable the systematic evaluation and ranking of different AI algorithms. This paper investigates the application of the PIPRECIA-S model as a methodological framework for the multi-criteria ranking of AI algorithms. By analyzing relevant criteria such as efficiency, flexibility, ease of implementation, stability, and scalability, the paper provides a comprehensive overview of existing algorithms and identifies their strengths and weaknesses. The research results show that the PIPRECIA-S model enables a structured and objective assessment, which facilitates decision-making in selecting the most suitable algorithms for specific applications. This approach not only advances the understanding of AI algorithms but also contributes to the development of strategies for their implementation in various industries.

1. Introduction

In the modern age, artificial intelligence (AI) has become a cornerstone of numerous fields, including medicine, industry, finance, cybersecurity, and education [1,2,3,4]. The motivation for this research lies in the need for a systematic and objective ranking of AI algorithms to identify those most suitable for specific applications. The goal of this study is to enable researchers and practitioners to select the algorithm that best fits tasks such as classification, prediction, or data processing, based on well-defined criteria. This research was also motivated by our long-term work on applying AI to boilers with automatic firing. However, the variety of available algorithms presents a challenge for selecting the most suitable one for a specific task, as each algorithm offers distinct strengths and limitations depending on the context of application and the nature of the data [5,6,7].
In this paper, the focus is on the analysis of several key AI algorithms, including linear regression, k-nearest neighbors (KNN), support vector machine (SVM), neural networks, Random Forest, extreme gradient boosting (XGBoost), convolutional neural networks (CNN), and recurrent neural networks (RNN) [8,9,10]. The novelty of this study lies in the application of the PIPRECIA-S method, which is contrasted with previous studies using methods such as AHP [11], TOPSIS [12], and PROMETHEE [13], demonstrating its advantages in reducing computational complexity while maintaining reliability. Each of these algorithms has its own specific advantages and disadvantages, as well as different applications, depending on the nature of the problem and the available data [14]. For example, some algorithms, like SVM and neural networks, are well-suited for complex, high-dimensional data, while simpler models like linear regression and KNN are often preferred for less complex tasks, where interpretability and speed are key considerations. Choosing an appropriate AI algorithm often depends on a number of factors, such as accuracy, processing speed, robustness, interpretability, and resource efficiency [15,16,17,18,19].
Within the analysis of artificial intelligence algorithms, the PIPRECIA-S method is used to define key criteria that enable a systematic evaluation of different algorithms. While it shares some similarities with traditional multi-criteria decision-making methods like AHP [11], PIPRECIA-S stands out with its simplified procedure [20]. Unlike AHP, which requires extensive pairwise comparisons and consistency checks, PIPRECIA-S streamlines the process by reducing the number of comparisons and eliminating the need for consistency validation. This methodology is particularly useful in scenarios where there is a need to balance multiple factors, providing clear recommendations for selecting algorithms in different applications. The idea to use PIPRECIA-S arose not only from a thorough review of the literature but also from its experimental application in optimizing the parameters of OZON boilers. This work, carried out since 2019, motivated a more comprehensive approach to the selection of the models used and prompted us to share with the scientific public the experience that proved crucial for research on small- and medium-power boilers [21,22,23].
Other popular methods, such as TOPSIS, which simplifies decision-making by focusing on the distance from ideal and anti-ideal solutions [12], and PROMETHEE, which offers flexibility through partial and complete rankings, excel in their respective areas [13], while FAHP is particularly effective in managing uncertainty through fuzzy logic [24]. Compared to these approaches, our study highlights the PIPRECIA-S method’s simplicity and efficiency in determining criteria weights. For example, AHP requires extensive pairwise comparisons, while PIPRECIA-S significantly reduces computational overhead, as evidenced by our results aligning closely with those of AHP and TOPSIS in similar studies [25]. However, these methods can be computationally demanding, especially when dealing with a large number of criteria. In contrast, PIPRECIA-S provides a more efficient, structured framework for evaluating criteria like efficiency, flexibility, ease of implementation, stability, and scalability, reducing complexity while still ensuring reliable decision-making.
In the remainder of the paper, a comprehensive framework for the evaluation and ranking of these eight popular AI algorithms will be presented based on these key criteria. This framework allows for an objective comparison of algorithms, making it a valuable tool for researchers and practitioners in the field of artificial intelligence [25]. The aim of the study is to provide a structured and reliable method for selecting the most suitable AI algorithms based on specific criteria, helping decision-makers optimize their choice of algorithms for different tasks. The novelty of this study lies in the tailored selection of criteria and their application to the ranking of AI algorithms for different domains. The paper focuses on the specific needs of applications and demonstrates the capabilities of this well-known method in a context where it has not been previously utilized, showcasing its potential in this area.

2. Current State of the Art

Artificial intelligence (AI) is an overarching field concerned with the development of systems capable of performing tasks that typically require human intelligence, such as language comprehension, pattern recognition, and decision making [26,27,28,29]. AI techniques have been extensively researched in various domains, with key applications in areas such as prediction, classification, and optimization. These applications span a range of industries, including healthcare, finance, and autonomous systems, where AI has been instrumental in driving innovation and improving efficiency [30].
A significant body of research has focused on the performance evaluation of different AI algorithms, as well as their applicability in various contexts. For example, support vector machines (SVMs) are widely recognized for their effectiveness in high-dimensional spaces, particularly in binary classification tasks, yet they struggle with noisy or unstructured data [31]. Conversely, convolutional neural networks (CNNs), originally developed for image recognition tasks, have demonstrated superior performance in handling unstructured data like images and speech, making them a preferred choice in these fields [32,33]. However, CNNs require significant computational resources and large datasets, which may not be feasible in all scenarios.
Neural networks have proven highly capable in tasks involving complex, nonlinear relationships, and their success in fields such as natural language processing (NLP) and image recognition is well documented [34]. Nevertheless, they are prone to overfitting and require careful tuning of hyperparameters, which adds complexity to their implementation [35,36]. To address these challenges, techniques like dropout and batch normalization have been introduced to improve generalization and reduce training time [37]. Still, the trade-off between computational cost and performance remains an ongoing challenge.
On the other hand, Random Forest and extreme gradient boosting (XGBoost) are frequently employed in tabular data scenarios and structured datasets. Random Forest, with its ensemble of decision trees, excels at reducing variance and handling missing data, while XGBoost optimizes gradient boosting to enhance both speed and performance in large datasets [38]. However, while these models are robust and scalable, they lack transparency, making interpretability difficult, which is crucial in fields like healthcare and finance [39].
In addition to technical performance, there is growing attention on the practical challenges associated with implementing AI algorithms, including optimization and scalability issues, as well as the risk of overfitting [40,41]. Techniques such as regularization, cross-validation, and data balancing have been proposed to mitigate these issues, improving the ability of models to generalize to unseen data [42]. For instance, dropout has become a widely used technique in neural networks to reduce overfitting, while cross-validation is applied across various models to ensure robust performance [43,44].
Research has also increasingly focused on the ethical and social implications of AI. Bias in AI algorithms, particularly in applications like criminal justice and employment, can perpetuate existing inequalities if not carefully addressed [45,46]. Transparency in decision-making is another critical area of concern, as black-box models like neural networks make it difficult to understand how decisions are made [47]. This has led to a growing demand for AI systems that are not only effective but also fair, transparent, and accountable [48].
In summary, current research underscores that there is no single, universal AI algorithm that is optimal for all applications. Instead, the choice of algorithm must be made based on the specific requirements of the task, the characteristics of the data, and considerations regarding computational cost, scalability, and ethical implications [49]. In this study, we aim to build on this foundation by applying the PIPRECIA-S method to systematically evaluate and rank AI algorithms, providing a structured approach to both technical and ethical challenges in AI deployment.

3. Methodology and Materials

3.1. Simplified Method for Assessing the Relative Importance of Criteria (PIPRECIA-S)

The PIPRECIA-S method was chosen to streamline the process of determining the criteria weight coefficients, owing to its simplicity and practicality in group decision-making. Unlike traditional methods such as AHP, PIPRECIA-S reduces the number of comparisons and eliminates the need for consistency checks, which is crucial in scenarios with limited resources. In this method, the importance of each criterion is compared directly with that of a designated reference criterion, simplifying the evaluation process. However, unlike the extended PIPRECIA method (PIPRECIA-E) [50] and the AHP method [51], PIPRECIA-S does not incorporate a consistency check, which represents a significant limitation. The procedure for calculating the weight coefficients of the criteria using the PIPRECIA-S method involves five distinct steps, shown in Figure 1 and described below [52].
Step 1. Selection of the evaluation criteria $C_j$. This step involves defining the criteria $C_j$, $j = 1, \dots, n$, where $n$ is the number of criteria taken into account when solving the problem. The criteria can be determined using the literature and/or expert opinions.
Step 2. Determining the relative importance of the criteria $s_j$. First, the reference criterion $C_1$ is established as the basis for comparison. Then, starting with the second criterion, each criterion $C_j$ is assigned a relative importance $s_j$ according to Equation (1); that is, every criterion $C_j$ is compared with the reference criterion $C_1$.
$$
s_j \begin{cases} > 1, & C_j > C_1 \\ = 1, & C_j = C_1 \\ < 1, & C_j < C_1 \end{cases} \tag{1}
$$
If the criterion $C_j$ is more important than the criterion $C_1$, it is assigned a value $s_j$ greater than 1; if it is less important, it is assigned a value less than 1. If the criteria $C_1$ and $C_j$ are equally important, both have an importance value of 1. The value $s_j$ belongs to the interval [0.6, 1.4]. The value $s_1$ is always 1 and represents the assessment of the importance of the reference criterion $C_1$.
Step 3. The value of the coefficient $k_j$ is calculated based on Equation (2).
$$
k_j = \begin{cases} 1, & j = 1 \\ 2 - s_j, & j > 1 \end{cases} \tag{2}
$$
Step 4. The value of the coefficient $q_j$ is calculated based on Equation (3).
$$
q_j = \begin{cases} 1, & j = 1 \\ \dfrac{q_{j-1}}{k_j}, & j > 1 \end{cases} \tag{3}
$$
Step 5. The relative weight $w_j$ of each criterion is calculated based on Equation (4), where $0 \le w_j \le 1$ and $\sum_{k=1}^{n} w_k = 1$.
$$
w_j = \frac{q_j}{\sum_{k=1}^{n} q_k} \tag{4}
$$
After this step, the process of determining the weight values of the criteria is completed.
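The five steps above can be sketched in a few lines of code. The helper below is an illustrative implementation of Equations (2)–(4); the scores passed to it are hypothetical and not taken from the study:

```python
def piprecia_s_weights(s):
    """PIPRECIA-S: turn relative-importance scores s_j into weights w_j.

    s[0] corresponds to the reference criterion C1 and must equal 1;
    the remaining values lie in the interval [0.6, 1.4].
    """
    # Step 3: k_1 = 1, and k_j = 2 - s_j for j > 1  (Equation (2))
    k = [1.0] + [2.0 - sj for sj in s[1:]]
    # Step 4: q_1 = 1, and q_j = q_{j-1} / k_j for j > 1  (Equation (3))
    q = [1.0]
    for kj in k[1:]:
        q.append(q[-1] / kj)
    # Step 5: normalize so the weights sum to 1  (Equation (4))
    total = sum(q)
    return [qj / total for qj in q]

# Hypothetical scores for five criteria, with C1 as the reference
weights = piprecia_s_weights([1.0, 0.9, 0.8, 1.1, 1.0])
print([round(w, 4) for w in weights])
```

Note that a criterion judged more important than the reference ($s_j > 1$) yields $k_j < 1$ and therefore a larger $q_j$, which is what pushes its final weight up.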

3.2. Evaluation of Criteria

Within the analysis of artificial intelligence algorithms, the PIPRECIA-S method is used to define the key criteria that enable a systematic evaluation of different algorithms. This approach, similar to the AHP method, allows researchers and practitioners to express their opinions and rank the importance of each criterion [53]. Using literature and expert assessments, the main criteria used to evaluate different types of algorithms have been identified.
The following five criteria are key to evaluating algorithms:
  • Efficiency—This criterion evaluates how quickly and accurately the algorithm can solve a given problem. This includes the analysis of execution time, as well as precision in achieving set goals [54].
  • Flexibility—This criterion focuses on the algorithm’s ability to adapt to different tasks and conditions, which is especially important in dynamic environments [55].
  • Ease of implementation—This criterion assesses how easy it is to implement and integrate the algorithm into existing systems, taking into account the availability of libraries and resources [56].
  • Stability—This criterion refers to the consistency of the algorithm’s performance in different scenarios and on different datasets. Algorithms that show stable results are usually preferred [57].
  • Scalability—This criterion evaluates how the algorithm behaves when faced with an increase in the volume of data. Algorithms that retain efficiency with larger datasets are essential for practical application [58].
Using the PIPRECIA-S model, these five criteria enable a comprehensive assessment of the performance of artificial intelligence algorithms, helping researchers identify which algorithms offer the greatest benefits and how best to optimize the development and deployment of resources [59,60,61].

3.3. Ranking Scale

For each of the mentioned criteria, a ranking scale was used, which enabled an objective and consistent evaluation of the performance of the algorithms. The proposed scale is shown in Table 1.
The PIPRECIA-S method uses a specific ranking scale to determine the relative importance of criteria. A simplified scale was introduced to make the evaluation process more straightforward and accessible to experts from various domains, and any nuances in the assessments were further harmonized through an iterative process. Values less than 1.00 indicated reduced importance compared to the reference criterion, while values greater than 1.00 indicated increased importance. This adapted scale allows for easier use during evaluation by experts who are not familiar with the PIPRECIA-S method.

3.4. Setting Priorities in Criteria

Prioritizing criteria such as efficiency, flexibility, ease of implementation, stability, and scalability is the key to evaluating AI algorithms. This study involved 10 experts with experience in artificial intelligence, data analytics, and management. Their evaluations were used to iteratively adjust the weight coefficients of the criteria. This process is essential for decision makers in organizations, as it allows them to identify which algorithms represent the greatest value and where the development and implementation resources should be directed.
The goal is to enable organizations to systematically determine the weight coefficients for each criterion through a simple process of comparing the importance of the criteria. This facilitates decision-making on priorities in the development and application of algorithms, whereby the technical aspects are aligned with the business goals.
Table 2 shows a possible example of the ranking criteria by importance in the algorithm evaluation process. In this example, the criterion of efficiency was chosen as the reference criterion, against which all other criteria were compared. The criteria were ranked by experts in the fields of artificial intelligence, data analytics, and management, taking into account the unique challenges in each of those fields. Each expert independently evaluated the criteria according to their importance for the decision-making process. Following the initial evaluation, the results served as an input for the PIPRECIA-S method, and the criteria weightings were adjusted through an iterative process involving structured discussions and recalculations. During this process, experts reviewed the aggregated results and proposed adjustments to their initial evaluations, enabling a gradual refinement of the weightings. This iterative process continued until a consensus was reached among all participants, ensuring a balanced and representative set of weightings for the criteria.
This process involved the following steps:
  • Initial evaluation of criteria: Each expert assigned scores to the importance of the criteria using the PIPRECIA-S model scale.
  • Calculation of average values: The average score of each criterion was used as the initial weight.
  • Discussion among experts: Experts reviewed the results and proposed potential adjustments based on the aggregated data.
  • Iteration: The process was repeated until consensus was achieved on the criteria weights.
  • Final weight determination: The consensus-based weights were used for ranking the algorithms.
In the initial phase, each expert evaluated the criteria using a scale from 1 to 5, where 1 represented the lowest priority and 5 the highest. These rankings were then combined, and the average scores were utilized as the initial weights in the PIPRECIA-S method. In the following step, participants had the chance to modify their assessments based on the aggregated results, enabling them to harmonize different viewpoints. The final weights of the criteria, produced through this process, reflected a consensus rooted in a multidisciplinary approach to both research and decision-making.
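As an illustration of the first two steps of this process, the sketch below averages hypothetical expert scores (the study's actual scores are not reproduced here) and shows one possible way to map the 1–5 averages onto the [0.6, 1.4] scale that PIPRECIA-S expects, using efficiency as the reference criterion:

```python
# Hypothetical scores (1 = lowest priority, 5 = highest), one entry per
# expert; these are illustrative placeholders, not the study's data.
expert_scores = {
    "Efficiency":             [5, 5, 4, 5, 4, 5, 5, 4, 5, 5],
    "Flexibility":            [4, 3, 4, 4, 3, 4, 3, 4, 4, 3],
    "Ease of implementation": [3, 4, 3, 3, 4, 3, 3, 4, 3, 3],
    "Stability":              [5, 4, 5, 4, 5, 4, 5, 4, 4, 5],
    "Scalability":            [4, 4, 3, 4, 3, 4, 4, 3, 4, 4],
}

# Step: average score per criterion, used as the initial weight input
averages = {c: sum(v) / len(v) for c, v in expert_scores.items()}

# One possible mapping onto the PIPRECIA-S s_j scale: express each
# criterion relative to the reference (Efficiency) and clip the ratio
# to the method's [0.6, 1.4] interval.
ref = averages["Efficiency"]
s = {c: min(1.4, max(0.6, avg / ref)) for c, avg in averages.items()}
print(s)
```

The iterative rounds of discussion would then amount to re-running this aggregation on revised scores until the resulting weights stabilize.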
Although the priorities outlined in the table were determined by particular needs of the study, it is crucial to emphasize that the ranking of these criteria can vary depending on unique requirements, context, and objectives of each specific case. Therefore, rankings should be tailored to align with the investors’ needs and the particular details of the project.
This ranking will allow organizations to clearly identify which algorithms should be developed or implemented as a priority, optimizing the allocation of resources and strategy.

4. Analysis of Results

The results in Table 3 show the relative importance of different AI algorithms in terms of efficiency, providing insight into their performance and applications. The results were compared with similar studies that used other MCDM methods, such as AHP and TOPSIS. It was found that PIPRECIA-S provided comparable results while significantly reducing the complexity of the process. Linear regression received a score of 3 (Medium), which aligns it with simpler tasks, where the relationship between input and output data is linear, offering reliable performance in these cases but limited applicability in complex analyses.
On the other hand, KNN scored a 4 (High) for efficiency on smaller datasets with distinct class differences, though its performance decreased with larger datasets, which could limit its scalability. SVM stood out with a score of 5 (Very High) due to its high accuracy in classification tasks, particularly with high-dimensional data, making it one of the most reliable algorithms in this area.
Neural networks also received a rating of 5 (Very High) for their ability to learn complex patterns from data and achieve high accuracy in various applications, especially when sufficient data are available. Random Forest achieved a score of 4 (High), showing solid accuracy in numerous scenarios, though it may be prone to overfitting in some cases, affecting overall effectiveness.
XGBoost stood out with a score of 5 (Very High) for its optimized prediction performance and processing speed, which allows for high accuracy, even with large datasets. CNN also scored a 5 (Very High), being highly effective for tasks such as image recognition, where it achieved peak accuracy. Finally, RNN received a score of 4 (High) for its accuracy in analyzing sequential data like text and speech, though it occasionally faced challenges in retaining long-term relationships in data.
These results suggest that algorithms such as SVM, XGBoost, and neural networks are particularly suited for tasks requiring high precision, while algorithms like linear regression offer solid results in simpler contexts. Choosing the right algorithm requires careful consideration of the task’s specific needs and the available data.
Table 4 presents the relative importance of different AI algorithms according to the flexibility criterion, providing insight into their adaptability in various scenarios. Linear regression received a score of 5 (Very High), indicating its adaptability due to its simple structure and minimal computational demands, which allow it to handle larger datasets with ease in less complex tasks.
KNN scored a 3 (Medium) for flexibility. Although it performed well on smaller datasets, the need to compute distances between all data points can reduce its efficiency with larger datasets, limiting its adaptability in more extensive data scenarios. Similarly, SVM received a score of 3 (Medium); while it can provide precise results, training is often computationally intensive, especially on large or complex datasets, restricting its flexibility.
Neural networks were rated a 2 (Low) for flexibility, as they require significant computational resources and extensive training time, especially with deep architectures. This makes them less adaptable in environments with limited resources or where rapid adjustments are necessary.
Random Forest scored a 4 (High) due to its parallel processing capabilities, allowing for reasonably fast processing and adaptation in various data contexts. However, it may still lag behind simpler algorithms in terms of adaptability to rapidly changing data or task requirements.
XGBoost also achieved a score of 4 (High) for flexibility, as its optimized boosting techniques enhance processing efficiency and allow it to adapt effectively to various data structures, compared to traditional machine learning algorithms.
CNN and RNN both received a score of 2 (Low) for flexibility. CNNs excel in specific tasks like image processing but require significant training time, which limits their adaptability in broader applications. RNNs are similarly constrained due to their sequential processing structure, which demands considerable time and resources to handle long sequences, reducing their overall flexibility in diverse tasks.
These results highlight that linear regression is best suited for tasks requiring high adaptability with minimal computational demand, while Random Forest and XGBoost offer solid flexibility for a range of applications. More complex models, such as neural networks, CNN, and RNN, are effective for specialized tasks but are less flexible in rapidly changing or resource-constrained environments.
Table 5 summarizes the relative importance of various AI algorithms based on their ease of implementation; here, the verbal labels describe implementation complexity, so a score of 5 (Very Low) denotes the easiest implementation. Linear regression achieved the highest score of 5 (Very Low), highlighting its status as one of the simplest algorithms, requiring minimal setup and parameter tuning.
KNN followed with a score of 4 (Low), as its conceptual simplicity can be hindered by the need to choose the optimal number of neighbors (k) and its reliance on distance metrics and data scaling, which can complicate the implementation.
SVM scored a 3 (Medium) due to its potent capabilities; however, its implementation is often more demanding, necessitating careful tuning of hyperparameters like regularization and kernel functions for optimal performance.
Neural networks were rated a 2 (High) due to their intricate implementation demands, which involve numerous layers, hyperparameters, and substantial training datasets. Mastery of network architecture design is essential to achieve satisfactory outcomes.
Random Forest was rated a 4 (Low), indicating relative ease of implementation, although its complexity can increase with the number of trees involved; it remains less demanding than many other methods.
XGBoost received a score of 3 (Medium) because, while it is highly optimized, its implementation can be intricate, requiring adjustments to many hyperparameters and a deeper understanding of the algorithm to maximize effectiveness.
Lastly, both CNNs and RNNs were rated a 2 (High) due to their considerable complexity. CNNs necessitate specific architectural setups, including convolutions and pooling layers, rendering them among the more complicated algorithms to implement. In contrast, RNNs require sophisticated architectures to manage sequential data effectively, further complicating their implementation compared to traditional feedforward networks.
Table 6 presents the relative importance of various artificial intelligence algorithms based on their stability criteria, enabling significant conclusions regarding their performance across diverse contexts.
Linear regression was rated a 3 (Medium), reflecting its simplicity and effectiveness for linear datasets. However, its accuracy diminishes when faced with more complex, nonlinear problems.
KNN also received a rating of 3 (Medium) due to its dependence on the number of neighbors and data quality. While it can perform well in certain scenarios, it is prone to overfitting when applied to noisy datasets.
SVM achieved a score of 4 (High), attributed to its strong performance in binary classification tasks, particularly when clear class separations are present and appropriate kernel functions are employed.
Neural networks were rated a 5 (Very High) because deep neural architectures excel in complex tasks such as image processing, text recognition, and pattern identification, albeit requiring substantial computational resources.
Random Forest scored a 4 (High), benefiting from its ensemble of decision trees, which contributes to its accuracy and robustness across varied data types while minimizing the risk of overfitting.
XGBoost stood out with a score of 5 (Very High), recognized for its exceptional accuracy in both classification and regression tasks, utilizing advanced methodologies such as boosting and resource optimization.
CNNs also received a score of 5 (Very High) due to their effectiveness in image and video processing applications, achieving high accuracy in recognizing intricate patterns.
Finally, RNNs were rated a 4 (High) for their proficiency in analyzing sequential data, including time series and natural language processing; however, they encounter challenges such as gradient explosion that can affect their stability.
Table 7 outlines the relative importance of various artificial intelligence algorithms based on their scalability, offering valuable insights into their performance across different contexts.
Linear regression received a rating of 5 (Very Low) for processing time, indicating exceptional efficiency, particularly with linear datasets, and minimal computational resource requirements.
KNN was rated a 4 (Low), as its processing time is contingent on dataset size and the number of neighbors utilized. The need to calculate distances between data points can impede performance with larger datasets.
SVM earned a score of 3 (Medium) because, although efficient for small to medium datasets, its processing time can escalate significantly with larger datasets, especially when utilizing complex kernel functions.
Neural networks were assigned a rating of 2 (High) due to the substantial processing time required for training and optimization, particularly in deep networks with multiple layers. This demand varies based on model complexity and data size.
Random Forest received a score of 4 (Low), reflecting its time efficiency relative to more complex methods like neural networks; however, increasing the number of trees may lead to longer processing times.
XGBoost was rated a 3 (Medium) due to its accurate results; nonetheless, the boosting technique necessitates multiple learning iterations, which can extend processing time, depending on model complexity and hyperparameter settings.
CNNs received a rating of 2 (High), similar to other deep networks, as they require considerable processing time for training and optimization, particularly when dealing with large image or video datasets.
RNNs also received a rating of 2 (High), since training these networks can be time-consuming, especially for longer sequences, due to their recurrent structure and the necessity for multiple data passes.
Overall, these tables enabled conclusions regarding the performance of the algorithms. The PIPRECIA-S model facilitated the weighting of grades according to the relative importance of criteria, allowing for further analysis to determine which algorithm offered the optimal balance among accuracy, resource efficiency, scalability, and interpretability.
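As a sketch of this weighting step, the per-criterion grades reported above (Tables 3–7, where a higher grade is better on every criterion) can be combined into a single weighted score per algorithm. The weights used here are illustrative placeholders, not the study's final coefficients:

```python
# Grades 1-5 per algorithm, in the order: efficiency, flexibility,
# ease of implementation, stability, scalability (from Tables 3-7)
scores = {
    "Linear regression": [3, 5, 5, 3, 5],
    "KNN":               [4, 3, 4, 3, 4],
    "SVM":               [5, 3, 3, 4, 3],
    "Neural networks":   [5, 2, 2, 5, 2],
    "Random Forest":     [4, 4, 4, 4, 4],
    "XGBoost":           [5, 4, 3, 5, 3],
    "CNN":               [5, 2, 2, 5, 2],
    "RNN":               [4, 2, 2, 4, 2],
}

weights = [0.25, 0.20, 0.15, 0.25, 0.15]  # illustrative; sums to 1

# Weighted sum per algorithm, sorted from best to worst
ranking = sorted(
    ((sum(w * g for w, g in zip(weights, scores[name])), name)
     for name in scores),
    reverse=True,
)
for score, name in ranking:
    print(f"{name}: {score:.2f}")
```

With these placeholder weights, XGBoost comes out on top, followed by Random Forest and linear regression; shifting the weights toward flexibility or ease of implementation would favor the simpler models.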

5. Discussion

Using the PIPRECIA-S evaluation method, the researchers identified Random Forest and XGBoost (Figure 2) as the most significant approaches for data analysis across key criteria, including efficiency, flexibility, ease of implementation, stability, and scalability. These findings are consistent with previous studies using AHP [26] and TOPSIS [27], which also identified Random Forest and XGBoost as leading algorithms under similar evaluation criteria. However, this study contributed further by streamlining the evaluation process through the PIPRECIA-S method, providing a more accessible approach for researchers. Both algorithms showed high adaptability and stable performance across diverse tasks, making them versatile choices in various applications.
Random Forest, with its ensemble learning structure, demonstrated high efficiency and stability, positioning it as an excellent option for complex data analysis. It excelled in scalability, handling large datasets effectively without sacrificing accuracy, which makes it particularly valuable where data volume is a priority. XGBoost complemented this by delivering optimal performance through enhanced processing techniques and speed, providing quick, accurate results even in high-dimensional data environments.
To evaluate the robustness of the rankings, a sensitivity analysis was conducted by varying the weight of each criterion (efficiency, flexibility, ease of implementation, stability, and scalability) by ±5% and ±10%. The results, presented in Figure 3, Figure 4, Figure 5, Figure 6 and Figure 7, show that the rankings of the top-performing algorithms, Random Forest and XGBoost, remained stable across most variations, underscoring the reliability of the PIPRECIA-S methodology in multi-criteria decision-making.
However, the analysis also revealed that some algorithms, such as neural networks and CNN, are more sensitive to changes in certain criteria weights, reflecting their dependence on specific evaluation factors. This highlights the importance of carefully considering criteria weights based on application-specific requirements to ensure accurate algorithm selection.
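The perturbation scheme just described can be sketched as follows: one criterion weight at a time is scaled by ±5% or ±10%, the weight vector is renormalized to sum to 1, and the ranking is recomputed. The data are the Table 2 weights and the 1–5 ratings of Tables 3–7; treating the raw ratings as scores is an illustrative assumption, so the printed rankings are a sketch of the procedure rather than a reproduction of Figures 3–7.

```python
# Sketch of the sensitivity analysis: perturb each criterion weight by
# +/-5% and +/-10%, renormalize, and recompute the ranking.
# Weights are from Table 2; ratings (1-5) are from Tables 3-7.
CRITERIA = ["efficiency", "flexibility", "ease", "stability", "scalability"]
WEIGHTS = [0.41, 0.28, 0.18, 0.09, 0.04]
RATINGS = {  # algorithm -> ratings in CRITERIA order
    "Linear Regression": [3, 5, 5, 3, 5],
    "KNN":               [4, 3, 4, 3, 4],
    "SVM":               [5, 3, 4, 4, 3],
    "Neural Networks":   [5, 2, 2, 5, 2],
    "Random Forest":     [4, 4, 4, 4, 4],
    "XGBoost":           [5, 4, 3, 5, 3],
    "CNN":               [5, 2, 2, 5, 2],
    "RNN":               [4, 2, 3, 5, 2],
}

def rank(weights):
    """Algorithms ordered by weighted score, best first."""
    scores = {a: sum(w * r for w, r in zip(weights, rs))
              for a, rs in RATINGS.items()}
    return sorted(scores, key=scores.get, reverse=True)

base_top = rank(WEIGHTS)[0]
for i, crit in enumerate(CRITERIA):
    for delta in (-0.10, -0.05, 0.05, 0.10):
        w = WEIGHTS[:]
        w[i] *= 1 + delta            # perturb one criterion weight
        total = sum(w)
        w = [x / total for x in w]   # renormalize to sum to 1
        top = rank(w)[0]
        print(f"{crit:12s} {delta:+.0%}  top: {top}"
              + ("" if top == base_top else "  (changed)"))
```

Renormalizing after each perturbation keeps the weights comparable across scenarios; a ranking that never changes under these perturbations is robust in the sense used in the text above.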
XGBoost’s balance of efficiency and flexibility also makes it suitable for real-time processing tasks.
Conversely, neural networks show exceptional capability in modeling nonlinear relationships and analyzing complex, high-dimensional data. While their high efficiency and stability make them ideal for tasks requiring advanced pattern recognition, their inherent complexity often complicates interpretability, limiting accessibility for users who require transparent decision-making processes. CNN and RNN, as subsets of neural networks, offer specialized applications in image and sequence analysis, respectively, though they demand significant computational resources and extended processing time.
In the context of ease of implementation, Random Forest and XGBoost have a clear advantage. Random Forest is straightforward to implement and interpret, making it accessible to a broader audience without extensive resources. On the other hand, neural networks, particularly CNN and RNN, are complex to configure, requiring specialized knowledge and significant computational power. While this complexity can limit their implementation scope, it is justified in scenarios where sophisticated data modeling is paramount.
Based on these findings, it is recommended that researchers and practitioners prioritize Random Forest and XGBoost for tasks where high accuracy, scalability, and processing speed are essential. Neural networks should be considered for applications that involve complex relationships and data modeling, particularly when resources for implementation and processing are available.
This study highlights the effectiveness of decision theory, particularly the PIPRECIA-S method, in systematically evaluating AI algorithms against relevant criteria. Evaluating AI models was a practical requirement of our group’s research on applying AI to boilers with automatic firing and to other optimization systems. We therefore offer this study as a basis on which researchers can readily compare AI models. The findings underscore the importance of aligning algorithm selection with specific task requirements and available resources, achieving a balance that maximizes performance across diverse application scenarios in artificial intelligence.

6. Conclusions

Evaluation and ranking of artificial intelligence algorithms are crucial for developing effective data analysis solutions and addressing complex problems. This study examined various algorithms, including Random Forest, XGBoost, and neural networks, based on criteria such as efficiency, flexibility, ease of implementation, stability, and scalability.
The results indicate that Random Forest and XGBoost are highly effective choices for tasks that prioritize stability, scalability, and quick processing, making them suitable for large datasets and real-time applications. Neural networks, on the other hand, excel in handling complex, nonlinear relationships, offering advanced modeling capabilities at the expense of greater resource requirements and interpretability challenges.
This study does not aim to determine a universally superior algorithm but rather introduces a model that enables systematic evaluation and ranking using the PIPRECIA-S method. By comparing results with methodologies like AHP and TOPSIS, this research demonstrates the PIPRECIA-S method’s ability to achieve comparable outcomes with reduced computational complexity, providing a valuable alternative for decision-makers. The scientific contribution of this study lies in adapting and applying the PIPRECIA-S method to the domain of AI algorithms. Its practical contribution is providing users with a more efficient approach for selecting algorithms for various applications, optimizing resource allocation, and enhancing the success of implementations. This approach offers flexibility in decision-making, allowing researchers and practitioners to adapt the analysis to meet specific needs and constraints.
For decision-makers, the findings provide essential guidelines for selecting algorithms that optimize performance according to predefined criteria. Prioritizing Random Forest and XGBoost for efficient, scalable solutions while leveraging neural networks for specialized data modeling tasks can significantly enhance analytical outcomes and improve organizational decision-making. The successful application of these recommendations will depend on the ability of organizations to carefully evaluate and align algorithm choices with their specific real-world demands and operational challenges.

Author Contributions

Conceptualization, S.P. and D.V.; methodology, A.B. and D.V.; software, A.S.; validation, S.P. and V.D.; formal analysis, V.N.; investigation, D.D.; writing—original draft preparation, S.P. and A.B.; writing—review and editing, D.V. and V.N.; visualization, D.D. and A.S.; supervision, V.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data supporting the reported results in this study are contained within the article itself.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Lee, D.; Yoon, S.N. Application of artificial intelligence-based technologies in the healthcare industry: Opportunities and challenges. Int. J. Environ. Res. Public Health 2021, 18, 271. [Google Scholar] [CrossRef] [PubMed]
  2. Zhang, Z.; Al Hamadi, H.; Damiani, E.; Yeun, C.Y.; Taher, F. Explainable artificial intelligence applications in cyber security: State-of-the-art in research. IEEE Access 2022, 10, 93104–93139. [Google Scholar] [CrossRef]
  3. Chakraborty, U. Artificial Intelligence for All: Transforming Every Aspect of Our Life, 1st ed.; BPB Publications: Noida, India, 2020. [Google Scholar]
  4. Al Ka’bi, A. Proposed artificial intelligence algorithm and deep learning techniques for development of higher education. Int. J. Intell. Netw. 2023, 4, 68–73. [Google Scholar] [CrossRef]
  5. Dwivedi, Y.K.; Hughes, L.; Ismagilova, E.; Aarts, G.; Coombs, C.; Crick, T.; Williams, M.D. Artificial Intelligence (AI): Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. Int. J. Inf. Manag. 2021, 57, 101994. [Google Scholar] [CrossRef]
  6. Zopounidis, C.; Doumpos, M. Multicriteria classification and sorting methods: A literature review. Eur. J. Oper. Res. 2002, 138, 229–246. [Google Scholar] [CrossRef]
  7. Liu, H.; Yu, L. Toward integrating feature selection algorithms for classification and clustering. IEEE Trans. Knowl. Data Eng. 2005, 17, 491–502. [Google Scholar] [CrossRef]
  8. Soni, K.M.; Gupta, A.; Jain, T. Supervised machine learning approaches for breast cancer classification and a high performance recurrent neural network. In Proceedings of the Third International Conference on Inventive Research in Computing Applications (ICIRCA), Coimbatore, India, 2–4 September 2021. [Google Scholar] [CrossRef]
  9. Elyan, E.; Vuttipittayamongkol, P.; Johnston, P.; Martin, K.; McPherson, K.; Moreno-Garcia, C.; Jayne, C.; Sarker, M.; Mostafa, K. Computer vision and machine learning for medical image analysis: Recent advances, challenges, and way forward. Artif. Intell. Surg. 2022, 2, 24–45. [Google Scholar] [CrossRef]
  10. Nabipour, M.; Nayyeri, P.; Jabani, H.; Band, S.; Mosavi, A. Predicting stock market trends using machine learning and deep learning algorithms via continuous and binary data; a comparative analysis. IEEE Access 2020, 8, 150199–150212. [Google Scholar] [CrossRef]
  11. Maksimović, S.; Dimić, V. Multi-criteria analysis of ICT implementation in investment projects: Case Study of construction companies in the Republic of Serbia. MB Univ. Int. Rev. MBUIR 2023, 1, 57–67. [Google Scholar]
  12. Jato-Espino, D.; Castillo-Lopez, E.; Rodriguez-Hernandez, J.; Canteras-Jordana, J. A review of application of multi-criteria decision making methods in construction. Autom. Constr. 2014, 45, 151–162. [Google Scholar] [CrossRef]
  13. Yalcin, A.S.; Huseyin, S.K.; Dursun, D. The use of multi-criteria decision-making methods in business analytics: A comprehensive literature review. Technol. Forecast. Soc. Change 2022, 174, 121193. [Google Scholar] [CrossRef]
  14. Rahman, S.; Mehedi, H.; Ajay, K.S. Prediction of brain stroke using machine learning algorithms and deep neural network techniques. Eur. J. Electr. Eng. Comput. Sci. 2023, 7, 23–30. [Google Scholar] [CrossRef]
  15. Hamon, R.; Junklewitz, H.; Sanchez, I. Robustness and Explainability of Artificial Intelligence; Publications Office of the European Union: Luxembourg, 2020; Volume 207. [Google Scholar] [CrossRef]
  16. Linardatos, P.; Papastefanopoulos, V.; Kotsiantis, S. Explainable AI: A review of machine learning interpretability methods. Entropy 2020, 23, 18. [Google Scholar] [CrossRef] [PubMed]
  17. Tanveer, H.; Adam, M.; Khan, M.; Ali, M. Analyzing the Performance and Efficiency of Machine Learning Algorithms, such as Deep Learning, Decision Trees, or Support Vector Machines, on Various Datasets and Applications. Asian Bull. Big Data Manag. 2023, 3, 126–136. [Google Scholar] [CrossRef]
  18. Dang, L.M.; Wang, H.; Li, Y.; Nguyen, T. Explainable artificial intelligence: A comprehensive review. Artif. Intell. Rev. 2022, 55, 3503–3568. Available online: https://link.springer.com/article/10.1007%2Fs10462-021-10088-y (accessed on 15 September 2024).
  19. Diogo, C.V.; Pereira, E.M.; Cardoso, J.S. Machine learning interpretability: A survey on methods and metrics. Electronics 2019, 8, 832. [Google Scholar] [CrossRef]
  20. Stanujkić, M.; Popović, G.; Karabašević, D.; Šarčević, M.; Stanujkić, D.; Novaković, S. Approach to the personnel selection in a group decision-making environment based on the use of the MULTIMOORA and PIPRECIA-S methods. BizInfo (Blace) J. Econ. Manag. Inform. 2024, 15, 19–26. [Google Scholar] [CrossRef]
  21. Popović, S.; Djukić, D.; Djukić Popović, S.; Gligorijević, M. Neural networks in pellet combustion control—an overview of the group’s research work in 2022/2023. In Proceedings of the 9th Virtual International Conference on Science, Technology and Management in Energy, Belgrade, Serbia, 23–24 November 2023; ISBN 978-86-82602-03-3. [Google Scholar]
  22. Popović, S.; Djukić, D.; Djukić Popović, S.; Kopanja, L. Preliminary Research on the Application of Neural Networks to the Combustion Control of Boilers with Automatic Firing. In Proceedings of the 8th Virtual International Conference on Science Technology and Management in Energy, Belgrade, Serbia, 26–28 January 2023; ISBN 978-86-82602-01-9. [Google Scholar]
  23. Popović, S.; Djukic Popovic, S.; Djukić, D.; Gligorijević, M. Genetic algorithms and machine learning as the basis of all implemented solutions in smart cities. In Proceedings of the International Scientific Conference–ALFATECH–Smart Cities and modern technologies, Belgrade, Serbia, 15 April 2024; ISBN 978-86-6461-074-2. [Google Scholar]
  24. Milošević, M.; Milošević, D.; Dimić, V. Application of Fuzzy AHP Approach for Designing Model of Smart City Development. In Proceedings of the International Scientific Conference–ALFATECH–Smart Cities and modern technologies, Belgrade, Serbia, 15 April 2024; ISBN 978-86-6461-074-2. [Google Scholar]
  25. Setiawansyah, M.K.; Sanrimi, S.; Ahmad, A.A. MCDM Using Multi-Attribute Utility Theory and PIPRECIA in Customer Loan Eligibility Recommendations. J. Inform. Electr. Electron. Eng. 2023, 3, 212–220. [Google Scholar] [CrossRef]
  26. Samoili, S.; Cobo, M.L.; Gómez, E.; De Prato, G.; Martínez-Plumed, F.; Delipetrev, B. AI Watch. Defining Artificial Intelligence. Towards an Operational Definition and Taxonomy of Artificial Intelligence; Publications Office of the European Union: Luxembourg, 2021; ISBN 978-92-76-42648-6. [Google Scholar] [CrossRef]
  27. Raghuwanshi, B.S.; Sanyam, S. Classifying imbalanced data using BalanceCascade-based kernelized extreme learning machine. Pattern Anal. Appl. 2020, 23, 1157–1182. Available online: https://link.springer.com/article/10.1007/s10044-019-00844-w (accessed on 10 September 2024). [CrossRef]
  28. Bringsjord, S.; Govindarajulu, N.S. Artificial intelligence. In The Stanford Encyclopedia of Philosophy, Fall 2024 ed.; Zalta, E.N., Nodelman, U., Eds.; Stanford University: Stanford, CA, USA, 2024; Available online: https://plato.stanford.edu/archives/fall2024/entries/artificial-intelligence/ (accessed on 20 August 2024).
  29. Verma, M. Artificial intelligence role in modern science: Aims, merits, risks and its applications. Int. J. Trend Sci. Res. Dev. (IJTSRD) 2023, 7, 335–342. [Google Scholar]
  30. Macpherson, T.; Churchland, A.; Sejnowski, T.; DiCarlo, J.; Kamitani, Y.; Takahashi, H.; Hikida, T. Natural and Artificial Intelligence: A brief introduction to the interplay between AI and neuroscience research. Neural Netw. 2021, 144, 603–613. [Google Scholar] [CrossRef] [PubMed]
  31. Fan, J.; Zheng, J.; Wu, L.; Zhang, F. Estimation of daily maize transpiration using support vector machines, extreme gradient boosting, artificial and deep neural networks models. Agric. Water Manag. 2021, 245, 106547. [Google Scholar] [CrossRef]
  32. Justin, K.; Lipo, W.; Jai, R.; Tchoyoson, L. Deep learning applications in medical image analysis. IEEE Access 2018, 6, 9375–9389. [Google Scholar]
  33. Abiodun, O.I.; Jantan, A.; Omolara, A.E.; Dada, K.V.; Umar, A.B.; Linus, O.U.; Arshad, H.; Kazaure, A.A.; Gana, U.; Kiru, M.U. Comprehensive review of artificial neural network applications to pattern recognition. IEEE Access 2019, 7, 158820–158846. [Google Scholar] [CrossRef]
  34. Yang, L. Artificial intelligence: A survey on evolution, models, applications and future trends. J. Manag. Anal. 2019, 6, 1–29. [Google Scholar]
  35. Abdolrasol, V.N.M.; Hussain, S.M.S.; Ustun, T.S.; Sarker, M.R.; Hannan, M.A.; Mohamed, R.; Ali, J.A.; Mekhilef, S.; Milad, A. Artificial Neural Networks Based Optimization Techniques: A Review. Electronics 2021, 10, 2689. [Google Scholar] [CrossRef]
  36. Bejani, M.M.; Ghatee, M. A systematic review on overfitting control in shallow and deep neural networks. Artif. Intell. Rev. 2021, 54, 6391–6438. [Google Scholar] [CrossRef]
  37. Tien, J.M. Internet of things, real-time decision making, and artificial intelligence. Ann. Data Sci. 2017, 4, 149–178. [Google Scholar] [CrossRef]
  38. Wen, H.-T.; Lu, J.-H.; Phuc, M.-X. Applying Artificial Intelligence to Predict the Composition of Syngas Using Rice Husks: A Comparison of Artificial Neural Networks and Gradient Boosting Regression. Energies 2021, 14, 2932. [Google Scholar] [CrossRef]
  39. Soltani, A.; Battikh, T.; Jabri, I.; Lakhoua, N. A new expert system based on fuzzy logic and image processing algorithms for early glaucoma diagnosis. Biomed. Signal Process. Control 2018, 40, 366–377. [Google Scholar] [CrossRef]
  40. Hawkins, D.M. The problem of overfitting. J. Chem. Inf. Comput. Sci. 2004, 44, 1–12. [Google Scholar] [CrossRef] [PubMed]
  41. Baier, L.; Jöhren, F.; Seebacher, S. Challenges in the Deployment and Operation of Machine Learning in Practice. In Proceedings of the ECIS 2019—27th European Conference on Information Systems, Stockholm, Sweden, 8–14 June 2019. [Google Scholar]
  42. Wu, S. A traffic motion object extraction algorithm. Int. J. Bifurc. Chaos 2015, 25, 1540039. [Google Scholar] [CrossRef]
  43. Jabbar, H.; Rafiqul, Z.K. Methods to avoid over-fitting and under-fitting in supervised machine learning (comparative study). In Computer Science, Communication and Instrumentation Devices; Research Publishing: Singapore, 2015; Volume 70.10.3850, pp. 163–172. [Google Scholar] [CrossRef]
  44. Ying, X. An overview of overfitting and its solutions. J. Phys. Conf. Ser. 2019, 1168, 022022. [Google Scholar] [CrossRef]
  45. Bodo, B.; Helberger, N.; Iron, K.; Zuiderveen, B.F.; Moller, J.; Van de Velde, B.; Bol, N.; Van Es, B.; de Vreese, C. Tackling the algorithmic control crisis-the technical, legal, and ethical challenges of research into algorithmic agents. Yale J. Law Technol. 2017, 19, 133–180. [Google Scholar]
  46. Whittlestone, J.; Nyrup, R.; Alexandrova, A.; Dihal, K.; Cave, S. Ethical and Societal Implications of Algorithms, Data, and Artificial Intelligence: A Roadmap for Research; Report Number: 978-1-9160211-0-5; Nuffield Foundation: London, UK, 2019. [Google Scholar]
  47. Zhang, J. Computer image processing and neural network technology for thermal energy diagnosis of boiler plants. Therm. Sci. 2020, 24, 3221–3228. [Google Scholar] [CrossRef]
  48. Chen, J.; Li, Q.; Liu, K.; Li, X.; Lu, B.; Li, G. Correction of moisture interference in laser-induced breakdown spectroscopy detection of coal by combining neural networks and random spectral attenuation. J. Anal. At. Spectrom. 2022, 37, 1658–1664. [Google Scholar] [CrossRef]
  49. Fernández-Alemán, J.; Lopez-gonzalez, L.; González-Sequeros, O.; López-Jiménez, J.J.; Carrillo de Gea, J.M.; Toval, A. An empirical study of neural network-based audience response technology in a human anatomy course for pharmacy students. J. Med. Syst. 2016, 40, 85. [Google Scholar] [CrossRef]
  50. Stanujkic, D.; Zavadskas, E.K.; Karabasevic, D.; Smarandache, F.; Turskis, Z. The use of the PIvot Pairwise RElative Criteria Importance Assessment method for determining the weights of criteria. Rom. J. Econ. Forecast. 2016, 20, 116–133. [Google Scholar]
  51. Saaty, R.W. The analytic hierarchy process—What it is and how it is used. Math. Model. 1987, 9, 161–176. [Google Scholar] [CrossRef]
  52. Stanujkic, D.; Karabasevic, D.; Popovic, G.; Sava, C. Simplified Pivot Pairwise Relative Criteria Importance Assessment (Piprecia-S) Method. Rom. J. Econ. Forecast. 2021, 24, 141–154. [Google Scholar]
  53. de Fine Licht, K.; de Fine Licht, J. Artificial intelligence, transparency, and public decision-making: Why explanations are key when trying to produce perceived legitimacy. AI Soc. 2020, 35, 917–926. [Google Scholar] [CrossRef]
  54. Halim, A.H.; Ismail, I.; Das, S. Performance assessment of the metaheuristic optimization algorithms: An exhaustive review. Artif. Intell. Rev. 2021, 54, 2323–2409. [Google Scholar] [CrossRef]
  55. Duan, T.; Wang, W.; Wang, T.; Chen, X.; Li, X. Dynamic tasks scheduling model of UAV cluster based on flexible network architecture. IEEE Access 2020, 8, 115448–115460. [Google Scholar] [CrossRef]
  56. Naveed, Q.N.; Qureshi, M.R.N.; Tairan, N.; Mohammad, A.; Shaikh, A.; Alsayed, A.O.; Shah, A.; Alotaibi, F.M. Evaluating critical success factors in implementing E-learning system using multi-criteria decision-making. PLoS ONE 2020, 15, e0231465. [Google Scholar] [CrossRef] [PubMed]
  57. Khaire, U.M.; Dhanalakshmi, R. Stability of feature selection algorithm: A review. J. King Saud Univ. Comput. Inf. Sci. 2022, 34, 1060–1073. [Google Scholar] [CrossRef]
  58. Ramezani, S.; Cummins, L.; Killen, B.; Carley, R.; Amirlatifi, A.; Rahimi, S.; Seale, M.; Bian, L. Scalability, explainability and performance of data-driven algorithms in predicting the remaining useful life: A comprehensive review. IEEE Access 2023, 11, 41741–41769. [Google Scholar] [CrossRef]
  59. Vollmer, S.; Mateen, B.A.; Bohner, G.; Király, F.J.; Ghani, R.; Jonsson, P.; Cumbers, S.; Jonas, A.; McAllister, K.S.L.; Myles, P.; et al. Machine learning and artificial intelligence research for patient benefit: 20 critical questions on transparency, replicability, ethics, and effectiveness. BMJ 2020, 368, l6927. [Google Scholar] [CrossRef]
  60. Sintaro, S.; Setiawansyah, S. Kombinasi Multi-Objective Optimization on the basis of Ratio Analysis (MOORA) dan PIPRECIA dalam Seleksi Penerimaan Barista. J. Ilm. Inform. Dan Ilmu Komput. (JIMA-ILKOM) 2024, 3, 13–23. [Google Scholar] [CrossRef]
  61. Setiawansyah, S.; Sintaro, S.; Saputra, V.H.; Aldino, A.A. Combination of Grey Relational Analysis (GRA) and Simplified Pivot Pairwise Relative Criteria Importance Assessment (PIPRECIA-S) in Determining the Best Staff. Bull. Inform. Data Sci. 2024, 2, 57–66. [Google Scholar] [CrossRef]
Figure 1. Step-by-step process for applying the PIPRECIA-S method.
Figure 2. Performance of AI algorithms across criteria.
Figure 3. Sensitivity analysis for efficiency.
Figure 4. Sensitivity analysis for flexibility.
Figure 5. Sensitivity analysis for ease of implementation.
Figure 6. Sensitivity analysis for stability.
Figure 7. Sensitivity analysis for scalability.
Table 1. Ranking scale.

Description of Criteria | Significance of Criteria | PIPRECIA-S Scale
The criterion is much less important than the reference | 1 | 0.60
The criterion is somewhat less important than the reference | 2 | 0.80
The criterion has the same importance as the reference | 3 | 1.00
The criterion is slightly more important than the reference | 4 | 1.20
The criterion is much more important than the reference | 5 | 1.40
Source: author’s research.
Table 2. Relative importance of criteria for selecting artificial intelligence algorithms.

Evaluation Criteria | Value of Criterion | Final Weight
Efficiency | 5 (1.40) | 0.41
Flexibility | 4 (1.20) | 0.28
Ease of implementation | 3 (1.00) | 0.18
Stability | 2 (0.80) | 0.09
Scalability | 1 (0.60) | 0.04
Source: author’s research.
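The weights in Table 2 follow from the PIPRECIA family's pairwise-comparison recursion [50,52]: with criteria listed from most to least important, each criterion j (for j ≥ 2) receives a scale value s_j from Table 1, and the weights are computed as k_j = 2 − s_j, q_1 = 1, q_j = q_{j−1} / k_j, w_j = q_j / Σq. The sketch below shows these mechanics; the s-values in the example run are illustrative and are not the exact assessments behind Table 2.

```python
# Sketch of the PIPRECIA weight recursion on which PIPRECIA-S builds:
#   k_j = 2 - s_j,  q_1 = 1,  q_j = q_{j-1} / k_j,  w_j = q_j / sum(q).
# The s-values passed in below are illustrative, not the authors' inputs.
def piprecia_s_weights(s_values):
    """s_values[j] is the Table 1 scale value for criterion j; s_values[0] is unused."""
    q = [1.0]                      # most important criterion anchors the recursion
    for s in s_values[1:]:
        q.append(q[-1] / (2 - s))  # each later criterion is scaled off the previous one
    total = sum(q)
    return [x / total for x in q]  # normalize so the weights sum to 1

# Illustrative run: five criteria, each judged somewhat less important
# than the one before it (s = 0.80 on the Table 1 scale).
w = piprecia_s_weights([None, 0.80, 0.80, 0.80, 0.80])
print([round(x, 3) for x in w])
```

With every s below 1.00 the divisors exceed 1, so the q-values and hence the weights decrease monotonically, matching the ordering of Table 2.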
Table 3. Research results based on efficiency.

Criterion | Linear Regression | KNN | SVM | Neural Networks | Random Forest | XGBoost | CNN | RNN
Rating | 3 (1.00) | 4 (1.20) | 5 (1.40) | 5 (1.40) | 4 (1.20) | 5 (1.40) | 5 (1.40) | 4 (1.20)
Source: author’s research.
Table 4. Research results based on flexibility.

Criterion | Linear Regression | KNN | SVM | Neural Networks | Random Forest | XGBoost | CNN | RNN
Rating | 5 (1.40) | 3 (1.00) | 3 (1.00) | 2 (0.80) | 4 (1.20) | 4 (1.20) | 2 (0.80) | 2 (0.80)
Source: author’s research.
Table 5. Research results based on ease of implementation.

Criterion | Linear Regression | KNN | SVM | Neural Networks | Random Forest | XGBoost | CNN | RNN
Rating | 5 (1.40) | 4 (1.20) | 4 (1.20) | 2 (0.80) | 4 (1.20) | 3 (1.00) | 2 (0.80) | 3 (1.00)
Source: author’s research.
Table 6. Research results based on stability.

Criterion | Linear Regression | KNN | SVM | Neural Networks | Random Forest | XGBoost | CNN | RNN
Rating | 3 (1.00) | 3 (1.00) | 4 (1.20) | 5 (1.40) | 4 (1.20) | 5 (1.40) | 5 (1.40) | 5 (1.40)
Source: author’s research.
Table 7. Research results based on scalability.

Criterion | Linear Regression | KNN | SVM | Neural Networks | Random Forest | XGBoost | CNN | RNN
Rating | 5 (1.40) | 4 (1.20) | 3 (1.00) | 2 (0.80) | 4 (1.20) | 3 (1.00) | 2 (0.80) | 2 (0.80)
Source: author’s research.
