1. Introduction
In recent years, the fusion of reconfigurable manufacturing systems (RMSs) with sustainable manufacturing (SM) has sparked growing global research interest, leading to the emergence of sustainable reconfigurable manufacturing systems (SRMSs). As the manufacturing landscape evolves, SRMSs stand at the forefront of innovation, blending the adaptive capabilities of RMSs with the principles of SM to create a forward-thinking production paradigm. RMSs, introduced in the late 1990s as the next-generation manufacturing system, involve the development of a production system at the frontier of flexible manufacturing systems and dedicated lines in response to sudden changes in the market and/or regulatory requirements [1,2,3]. SM, viewed as a practice of circularity in manufacturing under the circular economy concept [4], entails developing more sustainable products—those that are energy-efficient, eco-friendly, and socially responsible—using sustainable processes and systems, i.e., those that produce minimal adverse environmental effects, conserve energy and natural resources, are harmless to people, and are viable for profit [5,6,7]. The SRMS concept brings together the adaptability of RMSs and the sustainability considerations of SM. RMSs are defined by six core characteristics—modularity, integrability, diagnosability, customization, convertibility, and scalability—enabling rapid reconfiguration of manufacturing systems to accommodate varying tasks [8]. By embedding these characteristics within an SM framework, SRMSs support the development of manufacturing systems that respond to the dual demands of sustainability and reconfigurability [9,10]. By definition, SRMSs are designed to quickly adapt to changes in product types and volumes through the modular and flexible arrangement of resources, while simultaneously ensuring that operations are conducted in an environmentally responsible and socially beneficial manner.
Artificial intelligence (AI) techniques, categorized by four key approaches—human thinking, human acting, rational thinking, and rational acting—are utilized to enable systems (machines and equipment) to acquire knowledge from information and data collected from their external environment [11,12]. These techniques allow systems to apply cognitive capabilities that support humans in performing complex tasks [13,14,15,16]. AI algorithms are generally inspired by the functioning of human cognitive systems and natural organisms, processing information through mechanisms such as learning, adaptation, reproduction, and survival [17,18,19]. Numerous prior studies have highlighted the significant potential of AI techniques to enhance intelligent decision-making processes and to develop proactive and predictive capabilities within supply chain and production systems [16,20]; however, the adoption of AI techniques for enhancing system resiliency and intelligent decision-making appears to have leveled off across many companies [21]. As evidenced by the literature, research integrating AI techniques with intelligent decision-making to build resilient systems is still in its early stages, with most studies adopting case-study approaches to investigate specific problems [12,22]. To this end, the existing research landscape on SRMSs and their integration with AI reveals a significant gap. This gap highlights a crucial research opportunity and stresses the need for pioneering studies to explore how AI techniques can contribute to this understudied context. Motivated by this recognized gap, this study aims to present a deliberation on the subject matter, with a particular focus on assessing AI techniques for SRMSs.
To achieve this, I developed an AI-enabled methodological approach using fuzzy logic programming in Python as the computational foundation. Belhadi et al. [12] provided valuable insight into the potential of AI techniques such as fuzzy logic programming, big data analytics, machine learning, and agent-based systems in enhancing the resiliency of supply chain systems. They highlighted the importance of these techniques but also acknowledged a gap in research, particularly in applying fuzzy logic programming. Thus, this study posits that developing fuzzy logic programming solutions in Python for decision-making problems in SRMSs could be a promising avenue for research and practical application. Fuzzy logic provides a framework for handling uncertainty and imprecision, which are common in sustainability- and resiliency-related decision-making processes. Python’s simplicity and flexibility make it well suited for prototyping, experimenting with different algorithms, and integrating such concepts into working applications. However, the literature lacks such AI-driven approaches, and only a few scholars have consistently advocated for enhancing decision-making processes with these AI techniques [12,23]. The contribution of this research also entails uniquely presenting an AI-powered decision-making application using large language models (LLMs) in the field of natural language processing (NLP), which has become a dominant branch of artificial intelligence [24,25]. Marking a first in measurement/decision science, this research leverages LLMs to conduct assessments, introducing an innovative approach that incorporates unbiased expert judgment even in the context of limited knowledge and expert availability.
To expound upon my research’s contribution, this paper is organized as follows: Section 2 provides insights into the core domains relevant to the research aim; Section 3 presents the AI-enabled methodological approach; Section 4 clarifies the approach developed in this study through an AI-powered decision-making application; Section 5 delves into a thorough discussion of the findings and implications; and finally, Section 6 outlines the conclusions and recommendations.
3. AI-Enabled Methodological Approach
Rooted in the foundational work of Bellman and Zadeh [93], fuzzy logic has evolved into an AI-driven approach for addressing multicriteria decision-making problems [12,83,84]. In 1970, Bellman and Zadeh [93] introduced a framework in which a decision criterion is represented as a fuzzy subset within a set of decision alternatives, denoted as $X$. In this framework, the membership function of an alternative $x$ in a given criterion $Cr$, expressed as $\mu_{Cr}(x)$, measures the degree to which $x$ satisfies the criterion $Cr$. When dealing with multiple criteria labeled $Cr_j$, where $j$ ranges from 1 to $q$, these authors proposed a method for constructing an aggregate decision function $D$, such that $D = Cr_1 \cap Cr_2 \cap \dots \cap Cr_q$. This aggregate function, which is also a fuzzy subset of $X$, is defined by $\mu_D(x) = \min_{j=1,\dots,q} \mu_{Cr_j}(x)$, reflecting the extent to which $x$ simultaneously satisfies all of the criteria. Since the development of this foundational model, subsequent research has explored the use of alternative operators to combine satisfaction levels across various criteria. These studies have investigated different methods for aggregating criteria, moving beyond the original approach, to improve decision-making processes in complex multicriteria environments. The integration of these alternative operators provides more flexibility and precision in determining how well alternatives meet the combined requirements of multiple criteria.
Building on the concept of the ‘technique for order performance by similarity to ideal solution’, initially formulated by Hwang and Yoon [94], this study developed an AI-driven decision-making model using Python to facilitate decision-making in complex scenarios. These scenarios often involve intricate analysis and selection across multiple characteristics, criteria, and stakeholders, all within a fuzzy environment. To address the uncertainty present in decision data and group decision-making processes, linguistic or artificial variables were employed to assess both the weights of the criteria and the ratings of each alternative against each criterion. Triangular fuzzy numbers (TFNs) were primarily utilized as artificial variables for preference assessment due to their ease of use and simplicity in calculation, which aids decision-makers in fuzzy environments. A TFN is defined by a triplet (A, B, C), where A represents the smallest possible value, B is the most likely or probable value, and C denotes the largest possible value. This structure enables decision-makers to account for uncertainty and variability, encapsulating a range of values that reflect the imprecision inherent in assessments or measurements within a fuzzy system.
Definition 1. Let $Y = (A, B, C)$ and $Z = (A_1, B_1, C_1)$ be two triangular fuzzy numbers. Then, the basic operations of TFNs are defined as follows:
$Y \oplus Z = (A + A_1,\; B + B_1,\; C + C_1)$, (1)
$Y \ominus Z = (A - C_1,\; B - B_1,\; C - A_1)$, (2)
$Y \otimes Z \approx (A A_1,\; B B_1,\; C C_1)$, (3)
$kY = (kA,\; kB,\; kC)$ for a crisp scalar $k > 0$. (4)
The distance between fuzzy numbers $Y$ and $Z$ is computed using the vertex method:
$d(Y, Z) = \sqrt{\tfrac{1}{3}\left[(A - A_1)^2 + (B - B_1)^2 + (C - C_1)^2\right]}$. (5)
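As a minimal illustration of Definition 1, the snippet below implements TFN addition, scalar multiplication, and the vertex distance of Equation (5) in Python; the function names and example values are illustrative rather than part of the FuTOPy implementation.

import numpy as np

# A TFN (A, B, C) is stored as a length-3 array:
# smallest possible, most likely, and largest possible value.
def tfn_add(y, z):
    """Fuzzy addition of two TFNs (Equation (1)): component-wise sum."""
    return np.asarray(y, dtype=float) + np.asarray(z, dtype=float)

def tfn_scale(k, y):
    """Scalar multiplication of a TFN by a crisp k > 0 (Equation (4))."""
    return k * np.asarray(y, dtype=float)

def tfn_distance(y, z):
    """Vertex-method distance between two TFNs (Equation (5))."""
    y, z = np.asarray(y, dtype=float), np.asarray(z, dtype=float)
    return float(np.sqrt(np.mean((y - z) ** 2)))

# Example: distance between a "high" and a "medium" rating on a 0-10 scale.
high, medium = (7, 9, 10), (4, 5, 6)
print(tfn_distance(high, medium))  # ~3.697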
In the case of a group consisting of $P$ decision-makers, each decision-maker $p$ (for $p = 1, 2, 3, \dots, P$) provides a fuzzy rating in the form of a positive triangular fuzzy number $\tilde{R}^{(p)} = (A^{(p)}, B^{(p)}, C^{(p)})$, whose membership function represents the degree to which a given value $x$ belongs to the fuzzy set. The aggregated fuzzy rating $\tilde{R} = (A, B, C)$ is determined by applying a chosen aggregation operator, which combines the fuzzy ratings provided by the group of decision-makers. This aggregation process integrates the varying assessments of each decision-maker into a unified representation. The aggregation operators used in this study are outlined as follows:
Arithmetic Mean of TFNs—this method is commonly used to aggregate fuzzy ratings by averaging the parameters that define the fuzzy numbers across multiple inputs or data points, assuming that all inputs are of equal weight (importance). It is computed as follows:
$\tilde{R} = \left(\tfrac{1}{P}\sum_{p=1}^{P} A^{(p)},\; \tfrac{1}{P}\sum_{p=1}^{P} B^{(p)},\; \tfrac{1}{P}\sum_{p=1}^{P} C^{(p)}\right)$. (6)
Weighted Arithmetic Mean of TFNs—this method extends the basic arithmetic mean by incorporating weights that represent the relative importance or reliability of each input. It is particularly beneficial in situations where certain decision-makers or criteria are considered more influential or significant than others, allowing for a sounder aggregation by assigning different weights to the inputs. The following formula is applied accordingly:
$\tilde{R} = \left(\dfrac{\sum_{p=1}^{P} w_p A^{(p)}}{\sum_{p=1}^{P} w_p},\; \dfrac{\sum_{p=1}^{P} w_p B^{(p)}}{\sum_{p=1}^{P} w_p},\; \dfrac{\sum_{p=1}^{P} w_p C^{(p)}}{\sum_{p=1}^{P} w_p}\right)$, (7)
where $w_p$ is the weight assigned to the $p$-th decision-maker’s rating and $\sum_{p=1}^{P} w_p$ is the total weight. This method ensures that the aggregated fuzzy rating reflects the varying levels of significance or trust placed in the inputs, thereby offering a more accurate and context-sensitive representation of the group’s overall assessment.
Min–Max–Mean Method—this method calculates the minimum, mean, and maximum values of the parameters defining the fuzzy numbers across a set of inputs:
$\tilde{R} = \left(\min_{p} A^{(p)},\; \tfrac{1}{P}\sum_{p=1}^{P} B^{(p)},\; \max_{p} C^{(p)}\right)$. (8)
This method is designed to capture a broad range of perspectives, from the most conservative to the most optimistic evaluations. By considering these three distinct points—minimum, mean, and maximum—the method provides a more comprehensive view of potential outcomes, reflecting the full spectrum of uncertainty in decision-making. The approach ensures that decision-makers account for the lowest possible, most likely, and highest possible scenarios, offering a balanced representation of the varying degrees of confidence in the input data, as sketched in the code below.
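To make the three operators concrete, here is a minimal NumPy sketch consistent with Equations (6)–(8) as reconstructed above; the function names and example ratings are illustrative only.

import numpy as np

def arithmetic_mean(tfns):
    """Equation (6): equal-weight average of TFNs, component-wise."""
    return np.mean(np.asarray(tfns, dtype=float), axis=0)

def weighted_mean(tfns, weights):
    """Equation (7): weighted average of TFNs, normalized by total weight."""
    t = np.asarray(tfns, dtype=float)
    w = np.asarray(weights, dtype=float)
    return (w[:, None] * t).sum(axis=0) / w.sum()

def min_max_mean(tfns):
    """Equation (8): (min of A values, mean of B values, max of C values)."""
    t = np.asarray(tfns, dtype=float)
    return np.array([t[:, 0].min(), t[:, 1].mean(), t[:, 2].max()])

# Three decision-makers rate one alternative on one criterion:
ratings = [(5, 7, 9), (3, 5, 7), (7, 9, 10)]
print(arithmetic_mean(ratings))                 # ~ [5.0 7.0 8.67]
print(weighted_mean(ratings, [0.5, 0.3, 0.2]))  # -> [4.8 6.8 8.6]
print(min_max_mean(ratings))                    # -> [3. 7. 10.]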
Each of these aggregation methods serves a distinct purpose and may be applied in specific decision-making contexts. Given these considerations, the following algorithm summarizes the main steps used in the proposed approach.
# Step 1: Define criteria (characteristics) and their types, i.e., benefit and cost criteria.
# Step 2: Design TFNs corresponding to the importance of the criteria and the AI techniques’ performance.
# Step 3: Determine criteria weights and performance ratings using TFNs assigned by the AI models; Equation (6), (7), or (8) can be used for aggregation.
# Step 4: Normalize fuzzy decision matrix:
$\tilde{R} = [\tilde{r}_{ij}]_{m \times n}$,
where $m$ and $n$ represent the number of alternatives and criteria, respectively, and $\tilde{r}_{ij}$, which represents the normalized fuzzy rating of alternative $i$ for criterion $j$ (with aggregated rating $\tilde{x}_{ij} = (a_{ij}, b_{ij}, c_{ij})$), is calculated as follows:
$\tilde{r}_{ij} = \left(\dfrac{a_{ij}}{c_j^{*}},\; \dfrac{b_{ij}}{c_j^{*}},\; \dfrac{c_{ij}}{c_j^{*}}\right)$ with $c_j^{*} = \max_i c_{ij}$ for $j \in J$, and $\tilde{r}_{ij} = \left(\dfrac{a_j^{-}}{c_{ij}},\; \dfrac{a_j^{-}}{b_{ij}},\; \dfrac{a_j^{-}}{a_{ij}}\right)$ with $a_j^{-} = \min_i a_{ij}$ for $j \in J'$,
where $J$ and $J'$ are associated with benefit and cost criteria, respectively.
# Step 5: Formulate the weighted normalized fuzzy decision matrix for all AI techniques:
$\tilde{V} = [\tilde{v}_{ij}]_{m \times n}$,
where $\tilde{v}_{ij} = \tilde{r}_{ij} \otimes \tilde{w}_j$ and $\tilde{w}_j$ is the weight of the $j$th criterion.
# Step 6: Compute the fuzzy positive optimal outcome (FPO) and fuzzy negative optimal outcome (FNO) for all AI techniques:
$\mathrm{FPO} = (\tilde{v}_1^{*}, \tilde{v}_2^{*}, \dots, \tilde{v}_n^{*})$ and $\mathrm{FNO} = (\tilde{v}_1^{-}, \tilde{v}_2^{-}, \dots, \tilde{v}_n^{-})$, where $\tilde{v}_j^{*} = (1, 1, 1)$ and $\tilde{v}_j^{-} = (0, 0, 0)$ for $j = 1, 2, \dots, n$.
# Step 7: Compute the distances from FPO and FNO following Equation (5):
$d_i^{+} = \sum_{j=1}^{n} d(\tilde{v}_{ij}, \tilde{v}_j^{*})$ and $d_i^{-} = \sum_{j=1}^{n} d(\tilde{v}_{ij}, \tilde{v}_j^{-})$, for $i = 1, 2, \dots, m$.
# Step 8: Calculate the closeness coefficient (CC) and order the AI techniques based on $CC_i$ values:
$CC_i = \dfrac{d_i^{-}}{d_i^{+} + d_i^{-}}$, for $i = 1, 2, \dots, m$.
# Step 9: Sensitivity analysis (SA), regarded as the hermeneutics of mathematical modeling [95], systematically alters input parameters, such as weights, to assess their impact on the model’s outcomes. This approach helps confirm the robustness of the results [96], examining how changes in the criteria weights $w_j$ affect $CC_i$. For each experiment, a new set of $CC_i$ values for all AI techniques is programmatically calculated:
$CC_i^{(k)} = \dfrac{d_i^{-(k)}}{d_i^{+(k)} + d_i^{-(k)}}$, (18)
where $d_i^{+(k)}$ and $d_i^{-(k)}$ are the distances from the FPO and FNO, respectively, recalculated for the $k$-th set of weights. Thus, in each experiment, the weights assigned to the criteria $w_j$ are varied to observe how these changes impact $CC_i$, providing insights into the robustness of the outcomes.
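Before turning to validation, the following condensed Python sketch ties Steps 4–8 together under the conventions reconstructed above (benefit/cost normalization, FPO = (1,1,1) and FNO = (0,0,0) per criterion, and the vertex distance of Equation (5)). It is an illustrative reading of the algorithm, not the published FuTOPy code; the function name and array layout are assumptions.

import numpy as np

def futopy_rank(X, W, benefit):
    """Illustrative sketch of Steps 4-8 (fuzzy TOPSIS over TFNs).

    X       : (m, n, 3) array of aggregated TFN performance ratings.
    W       : (n, 3) array of aggregated TFN criteria weights.
    benefit : length-n sequence; True for benefit, False for cost criteria.
    """
    m, n, _ = X.shape
    R = np.empty_like(X, dtype=float)
    for j in range(n):                          # Step 4: normalization
        if benefit[j]:
            R[:, j] = X[:, j] / X[:, j, 2].max()        # divide by max C
        else:
            R[:, j] = X[:, j, 0].min() / X[:, j, ::-1]  # min A over (C,B,A)
    V = R * W[None, :, :]                       # Step 5: weighting
    fpo, fno = np.ones(3), np.zeros(3)          # Step 6: FPO/FNO per criterion

    def dist(v, ref):                           # Equation (5), vectorized
        return np.sqrt(np.mean((v - ref) ** 2, axis=-1))

    d_plus = dist(V, fpo).sum(axis=1)           # Step 7: distances
    d_minus = dist(V, fno).sum(axis=1)
    cc = d_minus / (d_plus + d_minus)           # Step 8: closeness coefficient
    return cc, np.argsort(-cc)                  # higher CC ranks first

# Toy example: 2 techniques, 2 benefit criteria.
X = np.array([[[5, 7, 9], [3, 5, 7]],
              [[7, 9, 10], [1, 3, 5]]], dtype=float)
W = np.array([[0.5, 0.7, 0.9], [0.3, 0.5, 0.7]])
cc, order = futopy_rank(X, W, benefit=[True, True])
print(cc, order)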
The validation of this proposed method, coined as the Fuzzy set Technique for Order Performance using Python (FuTOPy), is detailed in Appendix A, showcasing the algorithm’s practical applicability and providing scientific evidence of its validity. It includes a comprehensive analysis of a real-world scenario, offering empirical evidence that demonstrates the method’s effectiveness and reliability in practical settings. It introduces researchers to the essential aspects of how the method can be applied to tackle complex decision-making problems. The employment of varied data and operators in the case study highlights the adaptability and robustness of the methodological approach, emphasizing its capability to address diverse decision-making challenges effectively. Therefore, it can serve as a pedagogical tool, enhancing understanding and providing valuable insights, particularly for researchers who are new to the concept.
4. AI-Powered Decision-Making Application
Natural language processing (NLP) has become a fundamental branch within the broader field of artificial intelligence (AI), as discussed in Section 2. NLP combines computational linguistics—rule-based modeling of human language—with statistical and machine learning models. It is used in a myriad of applications; more advanced uses involve interactive conversational agents, such as chatbots and personal assistants, that can engage in human-like dialogs, make decisions, and offer recommendations based on contextual understanding [15,19]. According to Chowdhary [24], a growing volume of natural language text makes it difficult for humans to extract knowledge efficiently, a task which automated NLP aims to accomplish with accuracy and speed.
ChatGPT, an artificial intelligence-generated content model developed by OpenAI [18], has gained worldwide attention for its ability to manage complex language understanding and generation tasks in conversational form. This large language model (LLM) utilizes advanced technologies like deep learning, unsupervised learning, instruction fine-tuning, multi-task learning, in-context learning, and reinforcement learning, all of which are highly effective in processing sequential data and have been revolutionary in the field of NLP [18,25]. Built upon the original GPT (generative pre-trained transformer) model, which has evolved from GPT-1 in 2018 to GPT-4, an LLM capable of processing both image and text inputs and generating text outputs, ChatGPT demonstrates human-level performance across various professional and academic benchmarks [97].
Advanced ChatGPT models can process extensive prompts and maintain context over longer interactions, allowing for more coherent and contextually relevant responses, which is a critical feature for applications in complex domains such as SRMSs. Although it does not dynamically learn during interaction, ChatGPT can be fine-tuned on specific datasets to perform better in niche areas like sustainable manufacturing, providing insights based on the vast array of data it was trained on. With the ability to understand and generate multiple languages, ChatGPT can serve a range of geographical locations, making it a valuable tool for global operations. The model excels in generating informative, accurate, and engaging content, which is useful for reports, summaries, and analysis in decision-making processes. The study aimed to leverage the unique capabilities of its advanced models: ChatGPT-4, known for its robust performance in generating contextually relevant responses, making it suitable for analyzing complex system interactions and sustainability criteria; ChatGPT-4o mini, assumed to be a scaled-down version that is ideal for rapid, less computationally intensive queries, allowing for quick hypothesis testing or preliminary analysis; and ChatGPT-4o, a model representing a significant upscale in processing power and knowledge base, potentially providing deeper insights and more comprehensive analyses.
Incorporating multiple LLMs in a decision support system offers a promising avenue for enhancing decision-making processes by leveraging diverse computational perspectives and capabilities. Therefore, the initial step involved an assessment of each model across three critical dimensions: accuracy—the precision with which models respond to queries related to SRMSs; relevance—the degree to which each model’s training and fine-tuning align with the specific requirements of SRMSs; and consistency—the reliability with which each model provides dependable outputs across a range of inputs. These assessments can be derived from a combination of preliminary testing phases, where models’ outputs are benchmarked against known datasets. Based on the evaluation, differential weights are assigned: ChatGPT-4, with a weight of 0.5, is recognized for its extensive training database and proven effectiveness across a wide range of scenarios, indicating high reliability and accuracy. ChatGPT-4o, with a weight of 0.3, is presumed to incorporate newer algorithms that may offer fresh insights or enhanced computational methods, warranting a substantial but cautiously optimistic weighting. ChatGPT-4o mini, with a weight of 0.2, likely a scaled-down version, is designated for less complex or highly specific tasks within SRMSs, reflecting its focused utility and narrower scope of application.
Thus, due to the scarcity of available knowledge and experts in the field, this study utilized the advanced capabilities of these LLMs to explore a range of AI techniques (T1–T17, discussed in Table 2) for SRMSs, positioning these AI tools as essential resources for filling knowledge gaps and providing expert-level insights. The core characteristics (criteria) of these systems (C1–C6, discussed in Table 1) were examined in depth. The primary objective was to ascertain the contribution of various AI techniques to the core characteristics essential for sustainable manufacturing. Each LLM was queried on the weights and performance ratings of the criteria and AI techniques; e.g., applied to each criterion: “How would you weight the importance of modularity in sustainable manufacturing on a scale from ‘very low (VL)’ to ‘very high (VH)’, and why?”; applied to each AI technique performance rating: “How would you rate the performance of network-based algorithms for modularity in sustainable reconfigurable manufacturing systems on a scale from ‘very poor (VP)’ to ‘very good (VG)’, and why?”. Using this template ensures that all questions are aligned in their structure, making it straightforward for LLMs to understand what is being asked and providing a uniform basis on which to give their insights. This structured approach not only aids in collecting detailed feedback (data) on the specific contributions of each AI technique to RMS core characteristics but also helps to synthesize comprehensive insights that can be crucial for strategic decision-making within sustainable manufacturing contexts.
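To illustrate how such linguistic feedback can be turned into numbers, the sketch below maps one plausible linguistic scale to TFNs and aggregates the three models’ answers with the differential weights from above (0.5, 0.3, 0.2) via the weighted arithmetic mean of Equation (7); the TFN values themselves are hypothetical, as the study’s actual mapping is given in Figure 2.

import numpy as np

# Hypothetical mapping of linguistic weight terms to TFNs on [0, 1];
# the study's actual mapping is defined in Figure 2.
TFNcw = {"VL": (0.0, 0.0, 0.25), "L": (0.0, 0.25, 0.5),
         "M": (0.25, 0.5, 0.75), "H": (0.5, 0.75, 1.0),
         "VH": (0.75, 1.0, 1.0)}

# Differential weights assigned to the three LLMs (Section 4).
AI_w = {"ChatGPT-4": 0.5, "ChatGPT-4o": 0.3, "ChatGPT-4o mini": 0.2}

def aggregate_llm_feedback(terms, scale=TFNcw, model_w=AI_w):
    """Weighted arithmetic mean (Equation (7)) of one term per LLM."""
    w = np.array(list(model_w.values()))
    t = np.array([scale[term] for term in terms], dtype=float)
    return (w[:, None] * t).sum(axis=0) / w.sum()

# e.g., the three models judge the importance of modularity (C1):
print(aggregate_llm_feedback(["VH", "H", "VH"]))  # -> [0.675 0.925 1.0]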
To this end, the proposed method (FuTOPy) is applied to solve such decision-making problems following the steps defined in Section 3. This approach is particularly well suited for situations in which decisions are complex and involve uncertainty or vagueness, allowing for a more intelligent analysis compared to traditional crisp decision-making models. As shown in Figure 1, the script was executed in a Python 3.12.1 environment using Visual Studio Code (VS Code). This version of Python provides advanced features and improved performance, ensuring efficient handling of the complex calculations involved in fuzzy logic processing. VS Code is a highly popular, free, open-source integrated development environment (IDE) developed by Microsoft. It is widely recognized for its versatility, user-friendly interface, and robust support for a wide range of programming languages, including Python [98]. It integrates effortlessly with Python 3.12.1, allowing us to write, execute, and debug the script to solve multicriteria decision-making problems in fuzzy environments.
As revealed in Figure 1, several key libraries play a crucial role in facilitating the intelligent fuzzy set decision support models. NumPy, a fundamental package for scientific computing in Python, is instrumental for handling numerical operations, especially arrays and matrix manipulations [99,100], which are essential for processing and calculating triangular fuzzy numbers (TFNs). Matplotlib, a versatile plotting library, is utilized to visualize these TFNs, offering a clear graphical representation that aids in understanding the shades of fuzzy logic evaluations. It enables the creation of intuitive plots [101] that illustrate the range and distribution of linguistic assessments converted into fuzzy numerical values. Lastly, Tabulate, a library for generating structured tables, is crucial for presenting the aggregated data, normalized matrices, and final decision-making results in an easily interpretable format. Together, these libraries form the backbone of the script, providing the necessary tools for numerical computation, data visualization, and result presentation, thereby enhancing the efficiency and clarity of the decision-making process.
Reviewing the inputs (Figure 1) reveals that six criteria (C1–C6) were defined for evaluation, with each criterion categorized as either a “cost” or a “benefit” in # Step 1. This classification is crucial, as it impacts how the criteria are normalized and weighted during the analytical process. Figure 2 illustrates the TFNs designed in # Step 2. The linguistic terms (ranging from “Very Low (VL)” to “Very High (VH)”) are translated into TFNs. This mapping allows LLMs to express their feedback in an effective manner, acknowledging the inherent uncertainty and subjectivity in assessing the importance of criteria. Similarly to the weights, the performance ratings are expressed in linguistic terms (ranging from “Very Poor (VP)” to “Very Good (VG)”) and converted into TFNs. In # Step 3 (Figure 1), AI_TFNcw represents the aggregated feedback of the three LLMs (ChatGPT-4, ChatGPT-4o, and ChatGPT-4o mini) on the importance of each criterion. Each criterion’s weight (cw) is expressed in linguistic terms, later converted into TFNs using the TFNcw. Next, these TFNs are aggregated to form a consensus or average representation of each model’s feedback based on the weights given to the LLMs (AIw1, AIw2, AIw3) using the weighted arithmetic mean in Equation (7). A similar process is conducted for the AI technique ratings across all criteria.
Table 3 and Table 4 show the criteria weights and the performance ratings of the 17 AI techniques given by the LLMs, respectively.
# Step 3: Criteria weights and performance ratings using TFNs assigned by the LLMs were determined and aggregated following Equation (7).
Table 5 and Table 6, which are screenshots of outcomes displayed in the terminal (see Figure 1), illustrate the aggregated weights of the criteria and the aggregated performance ratings for AI techniques based on criteria, respectively.
# Step 4: The normalized fuzzy decision matrix is presented in Table 7.
# Step 5: The weighted normalized fuzzy decision matrix is shown in Table 8.
# Step 6: The fuzzy positive optimal outcome (FPO) and fuzzy negative optimal outcome (FNO) for all AI techniques are displayed in Table 9.
# Step 7: The distances from FPO and FNO are provided in Table 10.
# Step 8: The closeness coefficient (CC), presented in Table 11, is finally calculated to rank the performance of the AI techniques. A higher CC value is desirable, as it signifies that the AI technique is closer to achieving the optimal outcome based on the assessed criteria. As shown in Table 11, the AI techniques ranked from most to least preferable based on the closeness coefficient are T5, T10, T12 and T15 (tied), T14, T17, T7, T16, T6, T13, T11, T4, T1, T8, T9, T2, and T3.
# Step 9: This study conducted a sensitivity analysis (SA) by varying the major criteria weights to evaluate the performance of AI techniques for SRMSs. In this regard, 62 experiments were conducted, each representing a different condition, to evaluate the various combinations of the six criteria/characteristics. For each experiment, a new set of CC values for all AI techniques was programmatically calculated using Equation (18) through the AI-enabled methodology.
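As an illustrative sketch of how such an experimental design could be generated programmatically, consider the snippet below. Note that enumerating all non-empty subsets of six criteria yields 63 combinations, whereas the study reports 62 experiments, so the exact design may differ slightly; the re-weighting rule and the helpers referenced in the comments are assumptions.

from itertools import combinations
import numpy as np

criteria = ["C1", "C2", "C3", "C4", "C5", "C6"]

# One experiment per subset of emphasized criteria.
experiments = [s for r in range(1, len(criteria) + 1)
               for s in combinations(criteria, r)]
print(len(experiments))  # 63 non-empty subsets of 6 criteria

for k, subset in enumerate(experiments, start=1):
    # Hypothetical re-weighting rule: the selected criteria share the
    # total weight equally and the remaining criteria are set to zero.
    w_k = np.array([1.0 / len(subset) if c in subset else 0.0
                    for c in criteria])
    # CC values would then be recomputed with Equation (18), e.g.:
    # cc_k, _ = futopy_rank(X, tfn_weights_from(w_k), benefit)
    # (futopy_rank is the sketch from Section 3; tfn_weights_from is a
    #  hypothetical helper mapping crisp weights to TFN weights.)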
Figure 3 shows the SA of the current study, revealing the performance of AI techniques across 62 experiments. Consequently, AI techniques “T5” and “T10” consistently exhibit higher closeness coefficients across most experiments, indicating their robust applicability and effectiveness in enhancing multiple core characteristics of SRMSs.
5. Discussion on Findings and Implications
This study provides valuable insights into essential AI techniques for SRMSs, aiding policymakers and decision-makers in understanding and adopting these technologies. It proposes an AI-enabled methodology, outlined in Section 3, that effectively addresses the uncertainties in decision-making processes. More significantly, it showcases the use of AI in decision-making through the application of large language models (Section 4), i.e., ChatGPT-4, ChatGPT-4o, and ChatGPT-4o mini, which have proven to be powerful tools in the realm of artificial intelligence. Marking a first in decision science, this research leverages these large language models to provide expert-like assessments, introducing an innovative approach that incorporates unbiased expert judgment despite the limited availability of knowledge and specialists in the field. This aligns with the observations of Choi et al. [23] and Belhadi et al. [12], who noted a gap in the literature for such intelligent approaches, with only a few scholars consistently advocating for the enhancement of decision-making processes through these AI techniques.
Table 1 outlines the six core characteristics (criteria), serving as a foundation for the thorough investigation. According to Koren et al. [10], to manufacture sustainable products through sustainable processes, production systems must have capabilities that enhance economic, environmental, and societal sustainability—these criteria not only facilitate rapid system responsiveness at a low cost but also play an important role in promoting overall system sustainability. As noted by Huang et al. [9], the characteristics impacting emission metrics include modularity, customization, and convertibility, which are crucial for controlling hazardous gases and total GHG generation. Modularity and customization are key to managing waste generation and recovery, including liquid, solid, and hazardous waste. Customization and convertibility contribute significantly to reducing water consumption and increasing reuse/recycling. Several characteristics, particularly customization, convertibility, and scalability, are linked to improving energy usage and efficiency, including the reduction in idle energy losses. Nearly all characteristics impact operational metrics like the lead time, productivity, labor utilization, and percentage of on-time delivery. Labor costs, material costs, transportation, and maintenance costs are heavily influenced by customization, convertibility, and scalability. Diagnosability also plays a role, mainly in minimizing equipment-related and maintenance costs. This indicates how these characteristics play a critical role in enhancing sustainability on various fronts—environmental (emissions, waste, water/energy efficiency) and economic (operational performance, manufacturing costs). Customization, modularity, and convertibility appear to be the most influential characteristics affecting multiple sustainability metrics.
To this end, this study developed an AI-enabled methodological approach to appraise the performance of AI techniques based on these characteristics (criteria) using Python programming that integrates fuzzy logic to effectively navigate uncertainties inherent in the investigation. The choice of Python as the computational backbone ensures access to an extensive ecosystem of libraries and tools that facilitate sophisticated data manipulation, optimization, and analysis, thus enhancing the model’s computational capabilities. The findings revealed that machine learning and big data analytics (T5) as well as fuzzy logic and programming (T10) stand out as the most promising AI techniques for SRMSs. The AI techniques ranked from most to least preferable based on the closeness coefficient were T5, T10, T12, and T15 (tied), T14, T17, T7, T16, T6, T13, T11, T4, T1, T8, T9, T2, and T3. This demonstrates that human acting and rational thinking techniques are the most important categories for SRMSs, with T5 and T10 standing out as the top performers. The application also confirmed that using fuzzy logic programming in Python as the computational foundation significantly enhances precision, efficiency, and execution time, offering critical insights that enable more timely and informed decision-making in the field.
The incorporation of sensitivity analysis further enabled a thorough evaluation of how input variations impact decision-making outcomes. This examination is instrumental in understanding the robustness of decisions against uncertainties, offering stakeholders a deeper insight into the implications of their decisions. As shown in Figure 3, T5 is consistently one of the top-performing techniques across all experiments, e.g., in E#1 (C1), it has a CC of 0.6952, and in E#9 (C1, C4), it scores 0.695. This suggests that T5 is highly suitable for addressing the dynamic requirements of SRMSs. T10 is another high-ranking technique, performing particularly well in contexts involving uncertainty. In E#5 (C5), it has a CC of 0.6515, while in E#12 (C2, C3), it scores 0.6631. This indicates that fuzzy logic is effective when dealing with variability and imprecision in SRMSs. T12 and T15 both perform well across several experiments, e.g., in E#12 (C2, C3), T12 has a CC of 0.6426, and T15 also performs consistently, with values like 0.6523 in E#24 (C1, C2, C5). These techniques are critical for optimization and control in uncertain and complex SRMS environments. T14 and T17 show varying performance but are still among the more effective AI techniques for SRMSs, e.g., in E#16 (C3, C4), T14 has a CC of 0.6327, and T17 achieves 0.6424 in the same experiment. Both techniques offer adaptability and automation, making them useful in specific contexts, such as simulation and automated decision-making. T2 and T3 consistently rank lower across most experiments, e.g., T2 has a CC of 0.4333 in E#1 (C1) and T3 scores 0.2654 in the same experiment. This suggests that these techniques are less suitable for SRMSs, which may be due to their limitations in handling dynamic, real-time manufacturing environments.
Going through the criteria analysis indicates that C1 has high alignment with techniques like T5 (0.6952 in E#1) and T10 (0.6807 in E#1). Modularity requires flexibility [2,8,38], and AI techniques that support data-driven decision-making and adaptability seem to be the most effective. Regarding C2, T5 and T10 perform well under this characteristic, as seen in E#2, with CCs of 0.6767 and 0.6694, respectively. Integrability requires effective communication between different systems [2,8,38], making AI techniques that enhance system integration highly relevant. Techniques like T5 and T10 also perform well under C3, e.g., in E#3, T5 scores 0.6639 and T10 achieves 0.6602. Diagnosability benefits from AI techniques that can handle complex diagnostics and provide predictive capabilities. T5 and T10 continue to rank highly under C4, with T5 scoring 0.6787 and T10 scoring 0.6713 in E#4. Convertibility requires adaptability [2,8,9], for which machine learning and fuzzy logic provide effective support. In E#5, T5 and T10 maintain strong performance under C5, with CCs of 0.6576 and 0.6515, respectively. Customization in SRMSs benefits from AI techniques that can handle variability and offer tailored solutions for specific product families. Techniques like T5, T10, and T15 perform well in experiments focusing on C6, e.g., in E#6, T5 scores 0.6816 and T10 scores 0.6689. Scalability demands AI techniques that can manage increasing or decreasing production capacities while maintaining system efficiency. In general, the analyses demonstrated that T5 and T10 are the most effective AI techniques for SRMSs, providing robust solutions across different characteristics such as modularity, integrability, and scalability. Techniques like T12 and T15 also rank highly, offering strong support for optimization and control in dynamic and uncertain environments. Lower-ranking techniques, such as T2 and T3, were found to be less suitable for SRMSs. The purpose of considering all combinations of criteria in this analysis has important practical implications for understanding the effects and interactions among criteria under different scenarios, especially in the context of SRMSs. By running experiments with various combinations of the criteria, ranging from single-criterion experiments to experiments involving all six criteria together, the analysis becomes comprehensive and highly informative. This approach yields insights into how different criteria interact with each other and how these interactions impact decision-making. By analyzing the full spectrum of criteria combinations, such analyses ensure that no critical interaction is overlooked, offering data that highlight both macro- and micro-level impacts of decisions.
6. Conclusions and Recommendations
Despite substantial research efforts advancing the fields of artificial intelligence (AI) and sustainable reconfigurable manufacturing systems (SRMSs), a notable gap remains in the current landscape: no comprehensive study has been conducted to explore and evaluate AI techniques for SRMSs. This gap highlights critical research opportunities; as such, this study aimed to present a deliberation on the subject matter, with a particular focus on assessing AI techniques for SRMSs.
To achieve this aim, an AI-enabled methodological approach was developed to appraise the performance of techniques using Python programming, which integrates fuzzy logic to effectively navigate uncertainties inherent in the assessment. More significantly, this study demonstrated the use of AI in assessment and decision-making through the application of large language models, i.e., ChatGPT-4, ChatGPT-4o, and ChatGPT-4o mini, which have proven to be powerful tools in the context of artificial intelligence. Thus, this research represents a breakthrough in decision science by utilizing large language models to deliver expert-level assessments, offering an innovative approach that brings unbiased expert judgment to fields where knowledge and specialist availability are limited. This approach aligns with earlier studies that identified a significant gap in the literature regarding intelligent decision-making methods, with only a handful of scholars consistently promoting the use of AI techniques to improve these processes. Additionally, the integration of sensitivity analysis allowed for a comprehensive evaluation of how variations in input affect decision-making outcomes. Consequently, the findings revealed that machine learning and big data analytics, as well as fuzzy logic and programming, stand out as the most promising AI techniques for SRMSs. The application further demonstrated that employing fuzzy logic programming in Python as the computational backbone significantly improves precision, efficiency, and execution speed, providing key insights that facilitate more timely and informed decision-making. As a result, this study not only fills a crucial gap in the literature but also presents an intelligent approach to support complex decision-making processes. This is especially beneficial in situations requiring careful analysis and selection among multiple characteristics, criteria, and stakeholders in uncertain environments.
Future research could explore the application of the proposed approach across various industries and domains to assess its versatility and effectiveness in different contexts; the method’s scalability and its ability to handle increasingly complex decision-making scenarios, including those with multiple stakeholders and uncertainties; the usability of the method, focusing on how intuitive and accessible it can be for decision-makers with varying levels of expertise; and the performance of the proposed model via comparative analyses with existing decision-making frameworks to identify areas of improvement and potential synergies.