Article

Brace for Impact: Facing the AI Revolution and Geopolitical Shifts in a Future Societal Scenario for 2025–2040

Center for Strategic Corporate Foresight and Sustainability, SBS Swiss Business School, 8302 Kloten-Zurich, Switzerland
Societies 2024, 14(9), 180; https://doi.org/10.3390/soc14090180
Submission received: 13 June 2024 / Revised: 21 August 2024 / Accepted: 3 September 2024 / Published: 11 September 2024

Abstract

This study investigates the profound and multifaceted impacts of Artificial Intelligence (AI) and geopolitical developments on global dynamics by 2040. Utilising a Delphi process coupled with probabilistic modelling, the research constructs detailed scenarios that reveal the cascading effects of these emerging forces across economic, societal, and security domains. The findings underscore the transformative potential of AI, predicting significant shifts in employment patterns, regulatory challenges, and societal structures. Specifically, the study forecasts a high probability of AI-induced unemployment reaching 40–50%, alongside the rapid evolution of AI technologies outpacing existing governance frameworks, which could exacerbate economic inequalities and societal fragmentation. Simultaneously, the study examines the critical role of geopolitical developments, identifying increased nationalisation, the expansion of conflicts such as the Russia–Ukraine war, and the strategic manoeuvres of major powers like China and Israel as key factors that will shape the future global landscape. The research highlights a worrying lack of preparedness among governments and societies, with only a 10% probability that they will be equipped to manage the complex risks posed by these developments. This low level of readiness is further complicated by the short-term orientation prevalent in Western businesses, which prioritise immediate returns over long-term strategic planning, thereby undermining the capacity to respond effectively to these global challenges. The study calls for urgent, forward-looking policies and international cooperation to address the risks and opportunities associated with AI and geopolitical shifts. It emphasises the need for proactive governance, cross-sector collaboration, and robust regulatory frameworks to ensure that the benefits of technological and geopolitical advancements are harnessed without compromising global stability or societal well-being.
As the world stands on the brink of unprecedented change, the findings of this study provide a crucial roadmap for navigating the uncertainties of the future.

1. Introduction

The rapid advancements in Artificial Intelligence (AI) and the intensifying geopolitical tensions are two of the most significant forces reshaping the global landscape in the 21st century. AI, with its transformative potential, is driving unprecedented changes across various sectors, from economic growth and employment to international relations and security. As AI technologies evolve, they not only promise to revolutionise industries and enhance efficiency but also pose substantial risks, including job displacement, privacy concerns, and ethical dilemmas. The implications of AI extend beyond the economic realm, influencing geopolitical strategies, power dynamics, and the nature of global competition. AI has emerged as a pivotal element in the Fourth Industrial Revolution, fundamentally altering the way societies function. Its capacity to automate complex tasks, enhance decision-making processes, and create new economic opportunities positions it as a critical driver of future growth [1,2]. However, this technological revolution also presents significant challenges, particularly in terms of employment. The potential for AI to displace a substantial portion of the global workforce raises urgent questions about the future of work and the adequacy of current educational and retraining systems [3,4]. Moreover, the ethical and regulatory challenges posed by AI, including issues of bias, transparency, and accountability, necessitate a proactive approach from both policymakers and industry leaders [5]. The evolution of global dynamics, particularly influenced by technology, has been significant over the past few decades [6].
Simultaneously, the geopolitical landscape is undergoing profound transformations. The re-emergence of great-power competition, particularly between the United States and China, is reshaping global dynamics in significant ways. This competition spans military, technological, and economic spheres, leading to a strategic realignment in various regions worldwide [7]. The rivalry between these superpowers has disrupted global trade and investment patterns, and is exacerbated by rising protectionism and trade wars, which have significantly impacted the global economy [8]. The imposition of tariffs and trade barriers, primarily between the U.S. and China, has resulted in considerable economic losses and heightened uncertainty in global markets. The rise of other powers, such as Russia, has introduced additional complexities into the global geopolitical environment. The assertive foreign policies and strategic alliances pursued by these nations challenge the existing hegemony of Western powers, leading to a more multipolar world order [9,10,11]. The ongoing conflict between Ukraine and Russia exemplifies the disruptive potential of geopolitical tensions. This conflict has not only resulted in significant humanitarian crises, with over 30,000 civilian casualties and millions of displaced individuals [12], but it has also strained international relations and divided global powers. The United States, despite facing its own economic challenges, has provided substantial aid to Ukraine, reflecting the broader strategic interests at play [13,14,15].
The intersection of AI and geopolitical developments creates a complex and dynamic global environment that demands adaptive strategies and robust policy responses. As AI becomes increasingly intertwined with geopolitical strategies, its role in enhancing or destabilising global security cannot be overstated. Nations are not only competing in traditional military and economic arenas but are also vying for supremacy in AI, recognising its potential to confer significant strategic advantages [16]. The dual impacts of AI and geopolitical shifts necessitate a comprehensive approach to global governance, one that considers the multifaceted challenges and opportunities presented by these trends.
This study aims to explore the intertwined impacts of AI and geopolitical developments on the global landscape from 2025 to 2040. By focusing on these two critical trends, the research seeks to provide a comprehensive scenario analysis that highlights the potential societal, economic, and political transformations driven by AI and the shifting geopolitical order. The study’s findings underscore the urgent need for governments and institutions to develop strategies that address the dual challenges of technological disruption and geopolitical instability, ensuring a resilient and sustainable future. As AI continues to evolve, it is poised to redefine the future of human societies in ways we are only beginning to comprehend [17].

2. Materials and Methods

2.1. Theoretical Framework

This study utilises the Complex Adaptive Systems (CAS) framework, a robust analytical tool particularly well-suited for examining the dynamic and non-linear interactions between Artificial Intelligence (AI) and geopolitical developments. The CAS framework, originating from interdisciplinary research at the Santa Fe Institute, is instrumental in understanding how small changes in one area, such as AI innovation, can lead to significant and often unpredictable outcomes in another, such as global political stability [18,19].
The CAS framework is essential for this study because AI and geopolitical dynamics are inherently complex and adaptive, involving multiple feedback loops and dependencies. For instance, advancements in AI can influence geopolitical power balances, which in turn can affect the pace and direction of AI development. Historical economic shifts provide important insights into how such transformative forces can reshape entire industries [20]. By capturing these interdependencies, the CAS framework provides a more holistic view of how these forces shape the future and enables the construction of future scenarios that are more robust and realistic [21]. In employing the CAS framework, this study recognises that AI and geopolitical factors do not operate in isolation: they are interdependent elements within a complex global system in which feedback loops, historical contexts, and interconnected variables drive change. Acknowledging the open and adaptive nature of these systems allows for a more nuanced analysis of potential future scenarios. This approach is essential for exploring the multifaceted challenges and opportunities that arise from the interaction between AI advancements and geopolitical shifts, making it possible to develop scenarios that capture the intricacies of global developments between 2025 and 2040 [21].
The CAS framework is visually represented in Figure 1, which illustrates the dynamic interactions and interdependencies between AI technologies and geopolitical developments. This figure serves as a foundational tool for understanding how various factors, such as technological innovation, economic shifts, and geopolitical strategies, interact within a global system.

2.2. Research Design

The research design of this study is structured around a mixed-method approach, integrating qualitative and quantitative methodologies to comprehensively explore how AI and geopolitical developments will shape the global landscape from 2025 to 2040. The design is methodologically rigorous, combining the Delphi method, multiple-scenario analysis, and probabilistic modelling to answer the central research question:
RQ: How will key technological advancements, particularly AI, along with geopolitical developments, shape the global landscape from 2025 to 2040?

2.2.1. Delphi Method

The Delphi method is employed to gather expert opinions and achieve a consensus on significant issues, uncertainties, and potential developments at the intersection of AI and geopolitical dynamics. The process involved multiple rounds of surveys with a carefully selected panel of experts from leading institutions known for their contributions to AI and geopolitical research [22,23].
The Delphi method was chosen for its ability to manage complex, uncertain topics by relying on the collective intelligence of experts. Given the rapidly evolving nature of AI and the unpredictable nature of geopolitical developments, traditional forecasting methods may fall short. The Delphi method, however, provides a structured way to harness diverse expert opinions, refine these through feedback loops, and build a robust consensus on likely future scenarios. This is particularly important for the exploration of topics where data is scarce or where trends are still emerging, as is the case with the impact of AI in conjunction with geopolitical developments.
  • Step 1: Identification of Issues and Objectives
    The initial step in the Delphi process involved identifying the key issues and setting clear objectives for the development of future scenarios. This task was undertaken by the research team, who reviewed the existing literature and conducted preliminary analyses to pinpoint the most significant trends and uncertainties within the fields of AI and geopolitical developments. These identified issues were intended to guide the entire Delphi process. The identified issues and objectives were then presented to the experts in subsequent rounds for verification and further refinement.
  • Step 2: Selection of Experts
    A panel of 30 senior specialists was selected from leading global organisations known for their expertise in AI and geopolitical affairs. The organisations included Microsoft, Google, IBM, OpenAI, Amazon, the International Political Science Association, the American Academy of Political and Social Science, the London School of Economics and Political Science, the University of Cambridge, the Center for Strategic and International Studies, the World Bank, the United Nations, and the World Economic Forum. These institutions were chosen based on their prominent role in shaping AI development and global policy, ensuring that the panel comprised individuals with the necessary depth of knowledge and strategic insight into both technological and geopolitical domains. The selection process focused on identifying experts who had made significant contributions to their fields, thereby ensuring a high level of expertise and credibility in the Delphi process.
  • Step 3: Round 1—Open-Ended Questions
    In the first round of the Delphi process, experts were asked a series of open-ended questions designed to elicit a broad range of ideas and perspectives on the potential futures influenced by AI and geopolitical shifts. This round was crucial for gathering diverse insights into the possible outcomes and uncertainties associated with these developments. The responses from this round were extensive, with over 1947 sub-factors identified and considered during this stage. These sub-factors encompassed various aspects of AI integration, geopolitical strategies, economic impacts, and social changes, providing a comprehensive foundation for the subsequent refinement process.
  • Step 4: Round 2—Refinement
    The second round of the Delphi process involved refining the broad range of ideas and sub-factors identified in the first round. The research team synthesised the responses to identify common themes and patterns, which were then developed into more focused questions and potential scenarios. Experts were asked to review these refined scenarios, providing their evaluations and insights on the most probable and impactful developments. This round aimed to achieve a higher level of consensus among the experts by narrowing down the focus to the most critical issues and uncertainties. The refined scenarios and themes were mapped.
  • Step 5: Round 3—Final Consensus
    The final round of the Delphi process aimed to solidify the consensus among the experts. In this stage, the scenarios that had emerged as most significant were presented back to the experts for final validation. Experts reviewed these scenarios, focusing on their plausibility and potential impact. This round allowed the experts to adjust or confirm their views based on the collective insights gained throughout the Delphi process. The results from this round highlight the agreed-upon probabilities and impacts of the identified scenarios, providing a robust foundation for the subsequent scenario analysis.
The Delphi method’s iterative nature and its focus on achieving expert consensus ensured that the scenarios developed in this study are both comprehensive and reflective of the most credible insights from the leading minds in AI and geopolitical strategy.
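Consensus in a Delphi exercise of this kind is commonly tracked by the shrinking spread of expert ratings between rounds. The sketch below illustrates the idea using the interquartile range (IQR); the ratings shown are hypothetical, not the study's data:

```python
import statistics

def interquartile_range(ratings):
    """Spread of the middle 50% of ratings; a smaller IQR means stronger consensus."""
    q1, _, q3 = statistics.quantiles(ratings, n=4)
    return q3 - q1

# Hypothetical probability ratings (0-100) for one scenario across two Delphi rounds.
round_1 = [40, 55, 60, 70, 85, 90, 50, 65]
round_2 = [60, 62, 65, 68, 70, 72, 64, 66]

iqr_1 = interquartile_range(round_1)
iqr_2 = interquartile_range(round_2)
print(f"Round 1 IQR: {iqr_1:.1f}, Round 2 IQR: {iqr_2:.1f}")
```

A falling IQR across rounds (here from 30.0 to 7.0) is one simple signal that the panel is converging towards consensus.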

2.2.2. Multiple-Scenario Analysis

After achieving consensus through the Delphi method, the study employed a multiple-scenario analysis to explore how the identified uncertainties and trends might evolve in different combinations, resulting in several distinct and plausible future scenarios. This step was essential for understanding the range of possible outcomes that could emerge from the complex interaction between AI and geopolitical dynamics.
Scenario Framework Development
The multiple-scenario analysis was guided by a structured framework developed from the key uncertainties and driving forces identified in the Delphi process. The framework allowed for a systematic exploration of how these factors might interact in various ways to shape different future trajectories. The key uncertainties, such as the rate of AI adoption, the robustness of international regulatory frameworks, and the stability of geopolitical alliances, were positioned on axes to create a scenario matrix. This matrix served as the foundation for the development of detailed scenario narratives.
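Conceptually, such a matrix can be reproduced by crossing the poles of each uncertainty axis. The axis labels below are illustrative assumptions based on the uncertainties named above, not the study's actual matrix:

```python
from itertools import product

# Hypothetical axes; the study's actual axes were derived from the Delphi rounds.
axes = {
    "AI adoption": ["rapid", "gradual"],
    "Regulatory frameworks": ["robust", "fragmented"],
    "Geopolitical alliances": ["stable", "volatile"],
}

# Each combination of axis poles defines one cell of the scenario matrix.
scenarios = [dict(zip(axes, combo)) for combo in product(*axes.values())]
for i, s in enumerate(scenarios, 1):
    print(f"Scenario {i}: {s}")
```

Three binary axes yield eight cells; in practice, only the most distinct and plausible cells are developed into full narratives.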
Narrative Construction
For each scenario in the matrix, a detailed narrative was constructed to illustrate how the world might look under different combinations of AI and geopolitical developments. These narratives were not merely speculative but were grounded in the expert insights gathered during the Delphi process. Each narrative included a comprehensive exploration of the implications for global stability, economic growth, technological innovation, and international relations. The narratives were designed to be as vivid and plausible as possible, providing a rich context for understanding the potential impacts of AI and geopolitical shifts on various sectors.
Integration of Quantitative and Qualitative Insights
The scenarios were enriched by integrating both qualitative insights from expert opinions and quantitative data where available. This dual approach ensured that the scenarios were not only conceptually robust but also grounded in empirical reality. For instance, AI adoption rates and economic impact projections were included in the scenarios, drawing on existing studies and the Delphi panel’s estimations.
Validation and Refinement of Scenarios
After the initial construction of the scenarios, they were presented to the expert panel for validation. This step involved a detailed review by the experts, who provided feedback on the plausibility, coherence, and completeness of the scenarios. The feedback was meticulously analysed and incorporated into the final versions of the scenarios, ensuring that they accurately reflected the collective expertise of the panel and were internally consistent.
Final Scenario Presentation
The finalised scenarios were summarised, including the dominant trends, critical uncertainties, and potential outcomes. These scenarios serve as a critical tool for policymakers, business leaders, and researchers, offering a structured way to anticipate and prepare for the diverse futures that AI and geopolitical developments might bring.

2.2.3. Probabilistic Modelling

To complement the qualitative insights provided by the multiple-scenario analysis, the study employed probabilistic modelling to quantify the uncertainties associated with each scenario. This approach was particularly valuable in assessing the likelihood of different outcomes and in providing a more rigorous foundation for decision-making.
Defining Key Variables
The first step in the probabilistic modelling process involved identifying and defining the key variables that would be subject to uncertainty. These variables included AI adoption rates, geopolitical stability indices, economic growth projections, and potential regulatory impacts. The definitions of these variables were informed by the expert panel’s input and the existing literature, ensuring that they were both relevant and accurate [24].
Developing Probability Distributions
Once the key variables were defined, probability distributions were developed to represent the range of possible values for each variable. These distributions were based on expert judgments, historical data, and scenario-specific considerations. The probability distributions showed the potential variability in outcomes across different scenarios. For example, the distribution for AI adoption rates was designed to capture both optimistic and pessimistic projections, reflecting the uncertainty inherent in predicting technological diffusion.
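As an illustration of such a distribution, a triangular distribution is a common way to encode an expert's pessimistic, most likely, and optimistic estimates for a single variable. The figures below are hypothetical, not the panel's elicited values:

```python
import random

# Hypothetical expert estimates for the 2040 AI adoption rate (share of firms):
# pessimistic (low), most likely (mode), and optimistic (high).
low, mode, high = 0.35, 0.60, 0.90

rng = random.Random(42)
samples = [rng.triangular(low, high, mode) for _ in range(100_000)]

mean = sum(samples) / len(samples)
# Theoretical mean of a triangular distribution is (low + mode + high) / 3.
print(f"Mean sampled adoption rate: {mean:.3f}")
```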
Monte Carlo Simulation
To quantify the impact of the identified uncertainties, a Monte Carlo simulation was conducted. This simulation involved running thousands of iterations, each representing a different possible future based on the defined probability distributions. The Monte Carlo method allowed the study to explore the full range of possible outcomes for each scenario, providing a probabilistic assessment of risks and opportunities [25].
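A minimal sketch of such a simulation is shown below; the variable ranges and the composite "disruption" score are invented for illustration and are not the study's actual model:

```python
import random

def simulate_once(rng):
    """One hypothetical future: draw each uncertain variable, then score the outcome."""
    ai_adoption = rng.triangular(0.35, 0.90, 0.60)  # share of firms using AI
    stability = rng.uniform(0.2, 0.9)               # geopolitical stability index
    gdp_growth = rng.gauss(0.02, 0.015)             # annual GDP growth rate
    # Illustrative composite: high adoption with low stability raises disruption risk.
    return ai_adoption * (1 - stability) - gdp_growth

rng = random.Random(7)
runs = [simulate_once(rng) for _ in range(50_000)]
p_high_disruption = sum(r > 0.4 for r in runs) / len(runs)
print(f"P(high disruption) over 50,000 iterations: {p_high_disruption:.2%}")
```

Each iteration draws one value per uncertain variable; aggregating thousands of iterations turns the scenario into a probability distribution over outcomes rather than a single point forecast.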
Interpreting the Results
The results from the probabilistic modelling provided valuable insights into the relative likelihood of different scenarios and their potential impacts. These quantitative findings were integrated with the qualitative scenario narratives, allowing the study to offer a more comprehensive and nuanced understanding of the future landscape shaped by AI and geopolitical developments. These insights are critical for stakeholders who need to make informed decisions in the face of uncertainty, offering a way to quantify risks and plan strategically for various possible futures.

2.3. Data Collection and Analysis

Data collection involved three rounds of discussions with the selected experts. The first round focused on identifying significant issues, uncertainties, and potential developments. The second and third rounds refined these insights and developed robust scenarios. The final data were used for probabilistic modelling, in which the likelihoods of different scenarios were quantified.

2.4. Ethical Considerations

The study ensures the privacy and confidentiality of the experts by not collecting personal identification information. Data are stored securely for one year, after which all responses are deleted. Ethical guidelines and approvals from relevant institutions are followed throughout the research process.

2.5. Integration of Findings

The findings from the Delphi process, scenario analysis, and probabilistic modelling were integrated into a cohesive framework which provides a comprehensive view of the potential futures shaped by AI and geopolitical developments. This framework serves as a valuable tool for policymakers, industry leaders, and researchers, helping them navigate the complexities of a rapidly changing global landscape.

3. Results and Discussion

The findings of this study provide a comprehensive analysis of the potential impacts of Artificial Intelligence (AI) and geopolitical developments on global dynamics by 2040. The results are presented in two major domains: AI’s impact and geopolitical developments. Each domain encompasses several critical factors that are projected to influence economic, societal, and security-based outcomes on a global scale. The scenarios developed through the Delphi process highlight the cascading effects of these factors, each carrying significant probabilities of occurrence. The analysis identifies key areas where governments and societies are particularly vulnerable, with a strikingly low readiness to address the challenges posed by these developments. The following sections delve into the detailed impact chains for AI and geopolitical developments, exploring the intricate interdependencies and potential outcomes that could shape the future global landscape.

3.1. Impact of Artificial Intelligence on Global Dynamics

The scenarios developed through the Delphi process, combined with probabilistic modelling, underscore the significant role that Artificial Intelligence (AI) is expected to play in shaping global dynamics by 2040. AI’s influence is multifaceted, affecting economic structures, societal norms, and security paradigms, each of which has a cascading impact on other critical factors.

3.1.1. Economic Transformations and Employment

The first and most significant impact of AI identified in the scenarios is the potential for massive unemployment. There is a 90% probability that AI-driven automation will cause unemployment rates to surge to 40–50%, affecting not only industries heavily dependent on routine tasks, such as manufacturing and logistics, but also sectors traditionally considered more secure, such as consultancy, finance, and broader service industries. Research by Frey and Osborne [26] initially highlighted the vulnerability of routine jobs to automation, but more recent studies have pointed to the increasing susceptibility of analytical jobs as well. AI’s ability to process and analyse large datasets with speed and accuracy far beyond human capability is expected to impact jobs in consultancy and finance significantly [27,28].
The consultancy sector, for example, is likely to see AI taking over many tasks traditionally performed by human consultants, such as data analysis, strategic planning, and even decision-making. McKinsey Global Institute [29] estimates that nearly a third of activities in 60% of all occupations could be automated, with AI making significant inroads into jobs that require analytical thinking and judgment. Similarly, in the finance sector, AI’s ability to conduct complex financial modelling, risk assessments, and even predictive analytics is expected to reduce the need for human financial analysts and advisors [30]. The impact on these sectors will not only result in job displacement but could also lead to a redefinition of the roles that remain, requiring workers to develop new skills that complement AI technologies [31].
This surge in unemployment, across both routine and analytical jobs, is expected to place immense pressure on governments. The scenarios suggest a 65% probability that the resulting internal pressures could lead to increased social unrest, which might escalate into external conflicts. The potential for AI-induced unemployment to cause such instability is supported by studies on the social consequences of technological disruption. Ref. [32] and, more recently, Acemoglu and Restrepo [33] have shown that economic deprivation, particularly when linked to rapid technological change, can fuel grievances that lead to conflict. Cramer [34] adds that the erosion of job security can exacerbate existing social divisions, leading to the rise of populism and nationalist movements, further destabilising the political landscape.
Compounding this issue, there is a 90% probability that governments will be unable to manage the socio-economic fallout from such high unemployment levels. The financial and sociological burdens of providing adequate social safety nets and retraining programs and maintaining public order could overwhelm governmental capacities. Studies by Reinhart and Rogoff [35] on economic crises highlight how even well-prepared governments can struggle to manage the financial demands of large-scale unemployment, particularly when it is compounded by simultaneous pressures on multiple fronts. This scenario suggests that many governments may face difficult trade-offs, in which the need to stabilise the economy might come at the expense of other critical areas, such as healthcare or education.
The internal challenges faced by governments due to unemployment are likely to have further cascading effects. The scenarios indicate a 75% probability that, due to these internal pressures, governments will deprioritise sustainability initiatives. This deprioritisation could result in significant setbacks for global efforts to combat climate change, as resources and attention are diverted to more immediate economic and social issues. The literature supports this outcome, with studies by Stern [36] and, more recently, Rockström et al. [37] warning that short-term crises often lead to the abandonment of long-term sustainability goals. The potential for AI to contribute to a more extractive and less sustainable global economy is a concern that has been raised by various scholars, who argue that without strong regulatory frameworks, AI could exacerbate existing environmental challenges [38].
In addition to these broader economic and environmental impacts, the scenarios also predict an 80% probability that the increased unemployment will lead to a rise in depression and other mental health issues as individuals struggle with feelings of uselessness and disconnection from the workforce. The psychological impact of long-term unemployment is well-documented, with studies by Paul and Moser [39] and Jahoda [40] linking job loss to higher rates of depression, anxiety, and suicide. The rise in mental health issues could further strain public health systems, which are already grappling with the challenges posed by an ageing population and the increased prevalence of chronic diseases.
Finally, the scenarios highlight a critical weakness in the current governmental readiness to handle such a profound societal shift. There is only a 10% probability that governments are prepared to manage the challenges posed by this level of unemployment. This lack of preparedness could exacerbate the socio-economic and political crises predicted in these scenarios. Studies have long warned about the inadequacies of existing social and economic policies in the face of rapid technological change [41,42]. The potential for governments to be caught off-guard by the speed and scale of AI-driven changes is a significant risk that requires urgent attention from policymakers and stakeholders.

3.1.2. Exponential Speed of Change Caused by AI

The exponential speed of AI-driven change is identified as a major impact factor with a 100% probability. This rapid evolution is expected to outpace government regulatory frameworks, leading to significant governance challenges. The inability of governments to adapt their regulations to keep up with technological advancements has a 100% probability of occurring and would result in regulatory gaps that could exacerbate economic inequalities and create new ethical dilemmas [4,43]. There is a 75% probability that the general population will struggle to keep up with this rapid pace of change, leading to widespread anxieties and psychological distress. The literature supports this, indicating that rapid technological change can lead to feelings of obsolescence and a loss of control, contributing to increased levels of anxiety and depression [44,45]. This psychological impact is further compounded by the 70% probability that societal structures will not be able to adapt quickly enough, leading to social fragmentation and a decline in community cohesion.
The stress and anxiety associated with the inability to keep pace with AI-driven changes can also undermine public trust in institutions, which may further weaken social stability. Studies by Susskind and Susskind [28] highlight how rapid technological advancements, if not managed properly, can erode trust in traditional institutions, leading to greater societal unrest.
Moreover, due to the overwhelming focus on managing the immediate effects of these rapid changes, there is a 75% probability that governments will deprioritise sustainability initiatives. This deprioritisation could lead to increased environmental degradation as short-term economic concerns take precedence over long-term sustainability goals [36]. The literature emphasises that such shifts in policy focus could significantly undermine global efforts to combat climate change, leading to a more extractive and less sustainable global economy [38].

3.1.3. Increased Security Risks Due to the Use of AI and Quantum Computing

AI and quantum computing are expected to revolutionise security, and there is a 100% probability that these technologies will increase security risks, particularly in the realm of cybersecurity. This increase in risk is driven by the 60% probability that global security systems will become more vulnerable, potentially leading to international conflicts. The dual-use nature of AI and quantum technologies means they can be employed for both defence and offence, creating a complex security landscape that is difficult to manage [46,47].
Additionally, there is a 40% probability that government countermeasures will fail to keep up with advancements in cybercrime, resulting in significant disruptions to the economy and individual lives. This aligns with predictions by Schneier [48], who warns that the rapid evolution of AI in cyber operations could outstrip traditional security measures, leaving critical infrastructure vulnerable to attacks.
These developments are also likely to contribute to a 60% probability of increased anxiety among the population, as individuals and businesses grapple with the constant threat of cyberattacks and the potential for significant personal and economic losses. The psychological toll of living under such constant threats can lead to a decline in societal well-being and further erode trust in digital technologies and the institutions that govern them [49].

3.1.4. Medical and Technical Fields Evolve Rapidly

AI is anticipated to drive significant advancements in medical and technical fields, with a 90% probability that these sectors will evolve rapidly by 2040. This evolution is expected to lead to improvements in medical standards worldwide, with a 60% probability that technology will result in more comfort and better healthcare outcomes. The literature supports this optimistic view, noting that AI-driven innovations in precision medicine, diagnostic tools, and personalised treatments have the potential to revolutionise healthcare [50,51]. However, the rapid pace of these advancements could also exacerbate inequalities in access to healthcare, particularly in developing countries. The 54% probability of this occurring is concerning, as it could lead to a widening gap between those who have access to cutting-edge medical technologies and those who do not [52]. Furthermore, the focus on technological solutions might detract from addressing the social determinants of health, which is crucial for achieving long-term health equity [53].

3.1.5. AI Can Help Develop Sustainability Solutions

AI holds significant potential to contribute to sustainability efforts, with a 60% probability that it will help develop solutions to environmental challenges. These solutions could include optimising energy usage, improving resource management, and aiding in the development of new, environmentally friendly materials. The literature highlights the potential for AI to drive innovations that contribute to a more sustainable future, particularly in the context of climate change [37,38]. There is also a 30% probability that initial successes in AI-driven sustainability could lead to complacency, reducing the urgency for broader systemic changes needed to address the root causes of environmental degradation. This scenario is supported by Stern [36], who cautions that technological solutions alone are insufficient to solve the complex challenges of sustainability and that they must be accompanied by significant policy and behavioural changes.

3.1.6. Loss of Trust in Humans

One of the more troubling scenarios involves the 65% probability that the increasing integration of AI into decision-making processes will lead to a loss of trust in human relationships and institutions. As AI systems become more prevalent, there is a risk that people will rely more on these technologies than on human judgment, potentially undermining interpersonal trust and exacerbating social fragmentation [54,55,56].
This loss of trust is likely to result, with 60% probability, in increased social isolation and a decline in social skills, as individuals become more dependent on AI-driven interactions. Turkle [44] and other scholars have noted that overreliance on digital technologies can lead to a reduction in meaningful face-to-face communication, weakening the social fabric and contributing to a sense of alienation. Figure 2 summarises the key impact chains and probabilities associated with the influence of AI on global dynamics by 2040.

3.2. Impact of Geopolitical Developments

Geopolitical developments are expected to play a critical role in shaping the global landscape by 2040. The scenarios highlight several key geopolitical factors, each carrying significant implications for global stability, economic systems, and societal well-being.

3.2.1. Increased Nationalisation

The scenarios indicate a 70% probability that geopolitical developments will lead to increased nationalisation and the rise of nationalistic movements. This shift towards nationalisation is likely to result in greater geopolitical fragmentation, as countries prioritise national interests over global cooperation. The literature supports this scenario, with Gidron and Hall [57] and Rodrik [58] identifying economic insecurity and cultural backlash as key drivers of nationalism in the 21st century.
As nationalistic movements gain momentum, there is a 60% probability that they will divert focus from global sustainability efforts. This shift could result in governments prioritising short-term economic gains over long-term environmental goals, exacerbating the challenges of addressing climate change [36,37].
Moreover, the rise of nationalistic movements is expected to lead, with 70% probability, to increased anxiety among the population. The resurgence of nationalism often brings with it a sense of uncertainty and fear, as individuals may feel threatened by perceived external and internal enemies. This anxiety can further erode social cohesion and trust in government institutions, potentially leading to increased social unrest [59,60].

3.2.2. Russia–Ukraine War and Russia’s Foreign Policy

The ongoing conflict between Russia and Ukraine, coupled with Russia’s broader foreign policy, presents a significant risk to global stability. The scenarios suggest a 70% probability that the Russia–Ukraine war will expand, potentially drawing in other countries and escalating into a broader regional conflict. This aligns with the literature on the dynamics of conflict escalation and the risks posed by proxy wars and regional power struggles [61,62]. The expansion of the conflict is expected to cause, with 70% probability, increased anxiety among populations both within and outside the region. The fear of war spreading and the potential for direct involvement of other nations could lead to heightened tension and uncertainty, contributing to widespread psychological stress [63]. This anxiety is not limited to the immediate vicinity of the conflict but can ripple across global markets, affecting everything from energy prices to international trade flows [64].
In addition, there is a 50% probability that Russian politics will lead to the isolation of Russia on the global stage. This isolation could destabilise global energy markets, disrupt security alliances, and strain international trade relations. The literature highlights the risks associated with such isolation, particularly in terms of economic sanctions and the potential for Russia to pursue more aggressive foreign policies as a result [64,65]. The 70% probability of increased anxieties due to this isolation reflects the global uncertainty that arises when a major power becomes estranged from the international community.

3.2.3. Israeli War and Foreign Policy

The scenarios indicate a 75% probability that Israel’s foreign policy and regional conflicts could lead to an increased risk of geopolitical conflicts or wars. The literature on Middle Eastern geopolitics underscores the potential for regional tensions to escalate, particularly in the context of unresolved conflicts, shifting alliances, and the involvement of major powers [4,66]. As these conflicts intensify, there is a 70% probability of increased anxiety among the populations within the region and globally. The volatility of the Middle East, coupled with the involvement of global superpowers, creates a complex and unstable environment that can lead to widespread fear and uncertainty [67]. This anxiety is often exacerbated by the unpredictable nature of the conflicts, in which even minor incidents can escalate into broader confrontations, affecting global security and economic stability.

3.2.4. Impact of the Chinese Economy and Geo-Policies

China’s economic policies and global influence are expected to play a pivotal role in shaping the future geopolitical landscape. The scenarios suggest a 70% probability that China’s economic rise will increase global economic dependencies, particularly in developing countries that rely heavily on Chinese investment and trade. This scenario aligns with the existing literature on China’s Belt and Road Initiative and its impact on global trade patterns [68,69]. However, these dependencies are also likely to lead, with 70% probability, to increased anxiety, especially among countries and regions that become heavily reliant on Chinese economic influence. The fear of economic coercion or dependency can lead to political instability and resistance against perceived Chinese dominance [70]. Furthermore, the scenarios indicate a 70% probability that China’s geopolitical strategies, particularly in the Asia-Pacific region, will increase the risk of geopolitical conflicts, which could lead to sanctions and have a negative economic impact on Western economies [71].
The combination of economic dependency and the risk of conflict contributes to a broader sense of global instability. As nations navigate their relationships with a rising China, the 70% probability of increased anxieties reflects the uncertainty surrounding China’s long-term strategic goals and the potential for these to clash with the interests of other major powers [72]. The literature on China’s foreign policy suggests that its assertiveness in territorial disputes and its efforts to expand its influence through economic means could lead to significant tensions, particularly with the United States and its allies [73,74].
These developments also carry a 70% probability of increasing anxieties globally. The potential for conflict involving a major power like China raises fears of a broader destabilisation of the international order, with significant implications for global security, trade, and economic stability [69,72]. The anxiety associated with China’s geopolitical manoeuvres reflects the broader uncertainty about the future of global governance and the potential for a shift away from the current international system towards a more multipolar or even bipolar world.

3.3. Readiness of Governments and Societies

A crucial finding from the study is the alarmingly low readiness of governments and societies to effectively manage and mitigate the identified challenges posed by AI and geopolitical developments. The scenarios indicate that there is only an overall 10–15% probability that current governmental structures and societal systems are prepared to tackle the complex and interconnected issues that are likely to arise by 2040.
This low level of readiness is particularly concerning, given the profound impacts predicted across various domains. The rapid and exponential speed of AI-driven change, combined with increased security risks due to AI and quantum computing, is expected to overwhelm existing regulatory frameworks. Governments are likely to struggle to keep pace with these advancements, leading to significant governance gaps that could exacerbate economic inequalities and social unrest. Similarly, the potential for AI to drive rapid advancements in medical and technical fields may outstrip the capacity of healthcare systems to adapt, particularly in regions with already limited resources. This lack of readiness is further exacerbated by the short-term orientation prevalent in Western businesses, as highlighted by Gerlich [75], which prioritises immediate returns over long-term strategic planning. Such an approach undermines the capacity to develop resilient responses to the complex, interconnected issues posed by these emerging global trends.
The geopolitical developments further compound these challenges. The rise of nationalistic movements and ongoing conflicts such as the Russia–Ukraine war and tensions surrounding China’s geopolitical strategies present significant risks to global stability. The scenarios suggest that governments are largely unprepared to manage the cascading effects of these developments, including the increased anxieties and social fragmentation that are likely to accompany them.
The literature highlights the need for proactive and coordinated efforts to address these challenges, emphasising the importance of strengthening institutional capacities and fostering resilience within societies [4,26]. However, the current prognosis suggests that without significant changes in policy and governance, governments and societies will be ill-equipped to navigate the complexities of the future, leading to potentially severe consequences for global stability and human well-being. Figure 3 summarises the key impact chains and probabilities associated with the influence of AI on global dynamics by 2040.

4. Conclusions

This study provides a critical examination of the potential impacts of Artificial Intelligence (AI) and geopolitical developments on global dynamics by 2040. The research employs a robust scenario analysis methodology, integrating a Delphi process with probabilistic modelling to forecast the cascading effects across economic, societal, and security domains. The findings reveal a complex interplay of factors that are poised to shape the future in profound ways, underscoring the urgency of strategic foresight and preparedness.
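The probabilistic scenario modelling described above can be illustrated with a minimal Monte Carlo sketch. This is a hypothetical structure with invented event names, not the study's actual model; only the probability values are taken from the findings reported in this paper, and conditioning internal conflict on mass unemployment is a simplifying assumption.

```python
import random

# Illustrative Monte Carlo sketch of cascading scenario events, in the
# spirit of the probabilistic modelling described in this study (hypothetical
# structure and event names; only the probabilities come from the findings).

def simulate_once(rng):
    """One random draw of the scenario: each event fires with its reported
    probability, conditional on its trigger having fired."""
    outcome = {}
    outcome["mass_unemployment"] = rng.random() < 0.90   # 40-50% unemployment
    outcome["internal_conflict"] = (                     # conditional on the above
        outcome["mass_unemployment"] and rng.random() < 0.65
    )
    outcome["regulatory_gap"] = rng.random() < 1.00      # certain per the study
    outcome["low_readiness"] = rng.random() < 0.90       # only ~10% prepared
    return outcome

def estimate(n=100_000, seed=42):
    """Estimate the long-run frequency of each event over n simulated runs."""
    rng = random.Random(seed)
    runs = [simulate_once(rng) for _ in range(n)]
    return {k: sum(r[k] for r in runs) / n for k in runs[0]}

if __name__ == "__main__":
    for event, freq in estimate().items():
        print(f"{event}: {freq:.3f}")
```

The estimated frequency of `internal_conflict` converges to roughly 0.90 × 0.65 ≈ 0.585, showing how the simulation recovers the joint probability of a chained outcome rather than the headline probability of either step alone.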
AI is identified as a transformative force with the capacity to alter the fabric of societies and economies significantly. The study highlights a 90% probability that AI-driven automation will lead to an unemployment surge of 40–50%, affecting not only routine and manual jobs but also, increasingly, analytical roles within sectors such as finance and consultancy. This surge in unemployment is expected to generate immense pressure on governments, which may struggle to manage the socio-economic fallout. The financial and sociological burdens of mass unemployment could potentially overwhelm governmental capacities, leading to a 65% probability of internal conflicts that might spill over into international tensions.
Moreover, the 100% probability that the exponential speed of AI-driven change will outpace existing regulatory frameworks poses significant governance challenges. As AI continues to evolve at an unprecedented rate, governments are likely to find themselves ill-equipped to regulate these technologies effectively, leading to potential market monopolies, unchecked corporate power, and deepening economic inequalities. This governance gap could also hinder innovation, as businesses may face legal uncertainties that slow down the deployment of AI technologies.
The impact of AI on societal structures is equally profound. The study identifies an 80% probability that widespread AI adoption will lead to increased psychological risks, including a surge in depression and anxiety, as individuals struggle to find purpose in an AI-dominated economy. This psychological impact is exacerbated by the 70% probability that societal structures will be unable to adapt quickly enough to the changes imposed by AI, resulting in increased social fragmentation and a decline in community cohesion.
On the environmental front, AI presents a paradox. While there is a 60% probability that AI can help develop sustainability solutions, particularly in optimising energy use and resource management, there is also a 30% probability that early successes in AI-driven sustainability could lead to complacency. This complacency may result in a failure to address the root causes of environmental degradation, ultimately undermining long-term sustainability goals.
The study also underscores the societal risks associated with increased reliance on AI in decision-making processes. The 65% probability that AI will contribute to a loss of trust in human relationships and institutions reflects a broader concern about the erosion of interpersonal trust. As AI becomes more integrated into everyday life, there is a 60% probability that social isolation will increase, leading to a decline in social skills and a weakening of the social fabric.
Geopolitical developments are expected to play an equally critical role in shaping the global landscape. The study identifies a 70% probability that increased nationalisation and the rise of nationalistic movements will lead to greater geopolitical fragmentation. This trend is likely to result in significant disruptions to global cooperation as countries prioritise national interests over collective global action. The resurgence of protectionism and trade barriers, particularly between major powers like the United States and China, could further destabilise global markets and exacerbate economic inequalities.
The ongoing Russia–Ukraine conflict is highlighted as a key geopolitical risk, with a 70% probability that the war will expand, drawing in other countries and escalating into a broader regional or even global conflict. The study warns of the potential for this conflict to disrupt global energy markets, strain international alliances, and lead to significant economic losses. The 70% probability of increased anxieties among global populations as a result of this conflict underscores the far-reaching psychological impact of geopolitical instability.
In the Middle East, Israel’s foreign policy and the potential for regional conflicts pose additional risks to global stability. The study identifies a 75% probability that Israeli actions could lead to increased tensions and the risk of broader conflict in the region. This is compounded by the 70% probability that China’s economic policies and geopolitical strategies will increase global dependencies on Chinese investment and trade, particularly in developing countries. These dependencies could lead to a more polarised world in which economic and political influence becomes increasingly concentrated in the hands of a few powerful states.
China’s rise is also expected to contribute to a 70% probability of increased geopolitical conflicts, particularly in the Asia-Pacific region, where territorial disputes and strategic rivalries could escalate into broader confrontations. The study highlights the potential for these conflicts to disrupt global trade routes, impact international security frameworks, and create new challenges for global governance.
A critical insight from this study is the alarmingly low level of readiness among governments and societies to address the challenges posed by AI and geopolitical developments. The 10% probability of adequate preparedness indicates a significant gap in current strategies, with most governments and institutions being ill-equipped to manage the complex and interconnected risks that lie ahead. This lack of readiness is particularly concerning given the cascading effects identified in the scenarios, in which small initial shocks could lead to large-scale disruptions across multiple domains.
The short-term orientation prevalent in Western businesses further exacerbates these challenges. The prioritisation of immediate returns over long-term strategic planning undermines the capacity of both the public and private sectors to develop resilient responses to the evolving global landscape. This short-termism is particularly detrimental in the context of AI and geopolitical shifts, where the rapid pace of change demands forward-looking and adaptive strategies.
The study underscores the need for a paradigm shift in governance, one in which long-term planning and cross-sector collaboration become central to policy-making. Governments must prioritise the building of resilience in their institutions, ensuring that they can adapt to the rapid technological changes and geopolitical shifts that are expected to characterise the coming decades. This includes investing in education and workforce retraining programs to mitigate the social impacts of AI-driven automation and developing robust regulatory frameworks that can keep pace with technological advancements.
Additionally, the study emphasises the need for increased international cooperation to address the geopolitical risks identified in the scenarios. As the world becomes increasingly interconnected, the ability to navigate complex geopolitical landscapes will require coordinated efforts among nations, with a focus on diplomacy, conflict resolution, and sustainable economic development.
This study provides a detailed and nuanced understanding of the potential futures shaped by AI and geopolitical developments. The findings highlight the critical importance of proactive governance, strategic foresight, and global cooperation in managing the risks and leveraging the opportunities presented by these emerging trends. As the global community faces unprecedented challenges, the imperative for robust, adaptive strategies has never been greater. Only through concerted and sustained efforts can we hope to achieve a stable and equitable future, ensuring that the benefits of technological and geopolitical advancements are realised without compromising global security or societal well-being.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of SBS Swiss Business School (protocol code EC23/FR11, 3 March 2023).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Supporting data can be requested from the author.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Petropoulos, G. The Impact of Artificial Intelligence on Employment. In Work in the Digital Age; Neufeind, M., O’Reilly, J., Ranft, F., Eds.; Rowman & Littlefield: London, UK, 2018; pp. 119–133. [Google Scholar]
  2. Georgieva, K. AI Will Transform the Global Economy. Let’s Make Sure It Benefits Humanity. Int. Monetary Fund Blog. 2024. Available online: https://www.imf.org/en/Blogs/Articles/2024/01/14/ai-will-transform-the-global-economy-lets-make-sure-it-benefits-humanity (accessed on 5 May 2024).
  3. McKinsey & Company. The Future of Work: The Next Era of Work in Europe; McKinsey Global Institute: New York, NY, USA, 2024; Available online: https://www.mckinsey.com (accessed on 29 May 2024).
  4. Brynjolfsson, E.; McAfee, A. The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies; W.W. Norton & Company: New York, NY, USA, 2014. [Google Scholar]
  5. Smuha, N.A. From a ‘Race to AI’ to a ‘Race to AI Regulation’: Regulatory Competition for Artificial Intelligence. Law Innov. Technol. 2021, 13, 57–84. [Google Scholar] [CrossRef]
  6. Baldwin, R. The Great Convergence: Information Technology and the New Globalization; Harvard University Press: Cambridge, MA, USA, 2016. [Google Scholar]
  7. Kaplan, R.D. The Return of Marco Polo’s World: War, Strategy, and American Interests in the Twenty-First Century; Random House: New York, NY, USA, 2016. [Google Scholar]
  8. Baldwin, R. The Great Trade Collapse: What Caused It and What It Means; VoxEU.org: London, UK, 2019. [Google Scholar]
  9. Ikenberry, G.J. Liberal Leviathan: The Origins, Crisis, and Transformation of the American World Order; Princeton University Press: Princeton, NJ, USA, 2011. [Google Scholar]
  10. Zakaria, F. The Post-American World; W.W. Norton & Company: New York, NY, USA, 2008. [Google Scholar]
  11. Rodrik, D. Populism and the Economics of Globalization. J. Int. Econ. 2018, 114, 40–50. [Google Scholar] [CrossRef]
  12. Center for Preventive Action (CFR). War in Ukraine. Center for Preventive Action. 2024. Available online: https://www.cfr.org/global-conflict-tracker/conflict/conflict-ukraine (accessed on 2 May 2024).
  13. Modell, J.; Haggerty, T. The Social Impact of War. Annu. Rev. Sociol. 1991, 17, 205–224. [Google Scholar] [CrossRef]
  14. Page, A. War, Public Debt and Richard Price’s Rational Dissenting Radicalism. Hist. Res. 2018, 91, 98–115. [Google Scholar] [CrossRef]
  15. Tsygankov, A.P. Russia’s Foreign Policy: Change and Continuity in National Identity; Rowman & Littlefield: Lanham, MD, USA, 2016. [Google Scholar]
  16. Gartzke, E.; Rohner, D. The Political Economy of Imperialism, Decolonization and Development. Br. J. Political Sci. 2011, 41, 525–556. [Google Scholar] [CrossRef]
  17. Harari, Y.N. Homo Deus: A Brief History of Tomorrow; HarperCollins: London, UK, 2017. [Google Scholar]
  18. Fidan, T.; Balcı, A. Managing Schools as Complex Adaptive Systems: A Strategic Perspective. Int. Electron. J. Elem. Educ. 2017, 10, 11–26. [Google Scholar] [CrossRef]
  19. May, C.K. Complex Adaptive Governance Systems: A Framework to Understand Institutions, Organizations, and People in Socio-Ecological Systems. Socio-Ecol. Pract. Res. 2022, 4, 39–54. [Google Scholar] [CrossRef] [PubMed]
  20. Bailey, D.; Cowling, K. Structural Change in the UK Economy: Manufacturing Industry, 1973–1983. Camb. J. Econ. 1986, 10, 279–298. [Google Scholar]
  21. Van der Leeuw, S. Social Sustainability, Past and Future: Undoing Unintended Consequences for the Earth’s Survival; Cambridge University Press: Cambridge, UK, 2019. [Google Scholar] [CrossRef]
  22. Galanis, P. The Delphi Method. Arch. Hell. Med. 2018, 35. [Google Scholar]
  23. Sankaran, S.; Ang, K.; Hase, S. Delphi Method. J. Syst. Think. 2023, 3, 1–15. [Google Scholar] [CrossRef]
  24. Ho, C.K.; Khalsa, S.S.; Kolb, G.J. Methods for Probabilistic Modeling of Concentrating Solar Power Plants. Sol. Energy 2011, 85, 669–675. [Google Scholar] [CrossRef]
  25. Kroese, D.P.; Brereton, T.; Taimre, T.; Botev, Z.I. Why the Monte Carlo Method Is So Important Today. Wiley Interdiscip. Rev. Comput. Stat. 2014, 6, 386–392. [Google Scholar] [CrossRef]
  26. Frey, C.B.; Osborne, M.A. The Future of Employment: How Susceptible Are Jobs to Computerization? Technol. Forecast. Soc. Chang. 2017, 114, 254–280. [Google Scholar] [CrossRef]
  27. Autor, D.H. Why Are There Still So Many Jobs? The History and Future of Workplace Automation. J. Econ. Perspect. 2015, 29, 3–30. [Google Scholar] [CrossRef]
  28. Susskind, R.; Susskind, D. The Future of the Professions: How Technology Will Transform the Work of Human Experts; Oxford University Press: Oxford, UK, 2015. [Google Scholar]
  29. McKinsey Global Institute. Jobs Lost, Jobs Gained: Workforce Transitions in a Time of Automation; McKinsey & Company: New York, NY, USA, 2017; Available online: https://www.mckinsey.com/featured-insights/future-of-work/the-future-of-work-in-europe (accessed on 29 May 2024).
  30. Davenport, T.H.; Kirby, J. Only Humans Need Apply: Winners and Losers in the Age of Smart Machines; Harper Business: New York, NY, USA, 2016. [Google Scholar]
  31. Bessen, J. AI and Jobs: The Role of Demand. Natl. Bur. Econ. Res. Work. Pap. Ser. 2019. [Google Scholar] [CrossRef]
  32. Gurr, T.R. Why Men Rebel; Princeton University Press: Princeton, NJ, USA, 1970. [Google Scholar]
  33. Acemoglu, D.; Restrepo, P. Artificial Intelligence, Automation, and Work. Natl. Bur. Econ. Res. Work. Pap. Ser. 2018. [Google Scholar] [CrossRef]
  34. Cramer, K.J. The Politics of Resentment: Rural Consciousness in Wisconsin and the Rise of Scott Walker; University of Chicago Press: Chicago, IL, USA, 2016. [Google Scholar]
  35. Reinhart, C.M.; Rogoff, K.S. This Time Is Different: Eight Centuries of Financial Folly; Princeton University Press: Princeton, NJ, USA, 2009. [Google Scholar]
  36. Stern, N. The Economics of Climate Change: The Stern Review; Cambridge University Press: Cambridge, UK, 2006. [Google Scholar]
  37. Rockström, J.; Steffen, W.; Noone, K.; Persson, Å.; Chapin, F.S., III; Lambin, E.; Lenton, T.M.; Scheffer, M.; Folke, C.; Foley, J.A.; et al. A Safe Operating Space for Humanity. Nature 2009, 461, 472–475. [Google Scholar] [CrossRef]
  38. Floridi, L.; Cowls, J.; Beltrametti, M.; Chatila, R.; Chazerand, P.; Dignum, V.; Luetge, C.; Madelin, R.; Pagallo, U.; Rossi, F.; et al. AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds Mach. 2018, 28, 689–707. [Google Scholar] [CrossRef]
  39. Paul, C.; Moser, T. What Are the Limits to Empirical Research? J. Res. Methodol. 2009, 12, 198–210. [Google Scholar]
  40. Jahoda, M. Employment and Unemployment: A Social-Psychological Analysis; Cambridge University Press: Cambridge, UK, 1982. [Google Scholar]
  41. Ford, M. Rise of the Robots: Technology and the Threat of a Jobless Future; Basic Books: New York, NY, USA, 2015. [Google Scholar]
  42. Bostrom, N.; Yudkowsky, E. The Ethics of Artificial Intelligence. In The Cambridge Handbook of Artificial Intelligence; Miller, F.R., van den Hoven, S.J., Eds.; Cambridge University Press: Cambridge, UK, 2014; pp. 316–334. [Google Scholar]
  43. Turkle, S. Alone Together: Why We Expect More from Technology and Less from Each Other; Basic Books: New York, NY, USA, 2011. [Google Scholar]
  44. Zuboff, S. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power; PublicAffairs: New York, NY, USA, 2019. [Google Scholar]
  45. Brundage, M.; Avin, S.; Belfield, H.; Krueger, G.; Wang, J.; Hadfield, G. The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. arXiv 2018. Available online: https://arxiv.org/abs/1802.07228 (accessed on 4 May 2024).
  46. Blanchard, A.; Thomas, C.; Taddeo, M. Ethical Governance of Artificial Intelligence for Defence: Normative Tradeoffs for Principle to Practice Guidance. AI Soc. 2024. [Google Scholar] [CrossRef]
  47. Schneier, B. Click Here to Kill Everybody: Security and Survival in a Hyper-Connected World; W.W. Norton & Company: New York, NY, USA, 2018. [Google Scholar]
  48. Singer, P.W.; Friedman, A. Cybersecurity and Cyberwar: What Everyone Needs to Know; Oxford University Press: Oxford, UK, 2014. [Google Scholar]
  49. Topol, E. Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again; Basic Books: New York, NY, USA, 2019. [Google Scholar]
  50. Topol, E.J. High-Performance Medicine: The Convergence of Human and Artificial Intelligence. Nat. Med. 2019, 25, 44–56. [Google Scholar] [CrossRef] [PubMed]
  51. Vayena, E.; Blasimme, A.; Cohen, I.G. Machine Learning in Medicine: Addressing Ethical Challenges. PLoS Med. 2018, 15, e1002689. [Google Scholar] [CrossRef]
  52. Marmot, M. The Status Syndrome: How Social Standing Affects Our Health and Longevity; Bloomsbury Publishing: London, UK, 2005. [Google Scholar]
  53. Tufekci, Z. Twitter and Tear Gas: The Power and Fragility of Networked Protest; Yale University Press: New Haven, CT, USA, 2017. [Google Scholar]
  54. O’Neil, C. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy; Crown Publishing Group: New York, NY, USA, 2016. [Google Scholar]
  55. Gerlich, M. Exploring Motivators for Trust in the Dichotomy of Human—AI Trust Dynamics. Soc. Sci. 2024, 13, 251. [Google Scholar] [CrossRef]
  56. Norris, P. The Politics of Resentment in the United States and Europe: Democratic Deficit and Populist Backlashes. J. Comp. Polit. 2016, 48, 103–128. [Google Scholar]
  57. Gidron, N.; Hall, P.A. The Politics of Social Status: Economic and Cultural Roots of the Populist Right. Br. J. Sociol. 2017, 68, S57–S84. [Google Scholar] [CrossRef]
  58. Inglehart, R.; Norris, P. Trump, Brexit, and the Rise of Populism: Economic Have-Nots and Cultural Backlash. Perspect. Polit. 2016, 15, 443–454. [Google Scholar] [CrossRef]
  59. Mutz, D.C. Status Threat, Not Economic Hardship, Explains the 2016 Presidential Vote. Proc. Natl. Acad. Sci. USA 2018, 115, E4330–E4339. [Google Scholar] [CrossRef]
  60. Allison, R. Russia, the West, and Military Intervention; Oxford University Press: Oxford, UK, 2018. [Google Scholar]
  61. Mearsheimer, J.J. The Tragedy of Great Power Politics; W.W. Norton & Company: New York, NY, USA, 2014. [Google Scholar]
  62. Levy, J.S. The Causes of War and the Conditions of Peace. Annu. Rev. Polit. Sci. 1998, 1, 139–165. [Google Scholar] [CrossRef]
  63. Treisman, D. Why Putin Took Crimea: The Gambler in the Kremlin. Foreign Aff. 2016, 95, 47–54. [Google Scholar]
  64. Galtung, J. Theories of Peace: A Synthetic Approach to Peace Thinking; International Peace Research Institute: Oslo, Norway, 1967. [Google Scholar]
  65. Lustick, I. Paradigm Lost: From Two-State Solution to One-State Reality; University of Pennsylvania Press: Philadelphia, PA, USA, 2019. [Google Scholar]
  66. Shlaim, A. The Iron Wall: Israel and the Arab World; W.W. Norton & Company: New York, NY, USA, 2014. [Google Scholar]
  67. Cohen, A. Israel and the Bomb; Columbia University Press: New York, NY, USA, 1998. [Google Scholar]
  68. Rolland, N. China’s Eurasian Century? Political and Strategic Implications of the Belt and Road Initiative; National Bureau of Asian Research: Seattle, WA, USA, 2019. [Google Scholar]
  69. Mearsheimer, J.J. The Gathering Storm: China’s Challenge to US Power in Asia. Chin. J. Int. Politics 2010, 3, 381–396. [Google Scholar] [CrossRef]
  70. Economy, E. The Third Revolution: Xi Jinping and the New Chinese State; Oxford University Press: Oxford, UK, 2018. [Google Scholar]
  71. Allison, G. Destined for War: Can America and China Escape Thucydides’s Trap?; Houghton Mifflin Harcourt: Boston, MA, USA, 2017. [Google Scholar]
  72. Ikenberry, G.J. The End of Liberal International Order? Int. Aff. 2018, 94, 7–23. [Google Scholar] [CrossRef]
  73. Kang, D.C. American Grand Strategy and East Asian Security in the Twenty-First Century; Cambridge University Press: Cambridge, UK, 2017. [Google Scholar]
  74. Scobell, A. China and Strategic Culture; Strategic Studies Institute: Carlisle, PA, USA, 2012. [Google Scholar]
  75. Gerlich, M. How Short-Term Orientation Dominates Western Businesses and the Challenges They Face—An Example Using Germany, the UK, and the USA. Adm. Sci. 2023, 13, 25. [Google Scholar] [CrossRef]
Figure 1. CAS framework [21].
Figure 2. Simplified graphic display of the impact chain for AI.
Figure 3. Simplified graphic display of the impact chain for geopolitical developments.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Gerlich, M. Brace for Impact: Facing the AI Revolution and Geopolitical Shifts in a Future Societal Scenario for 2025–2040. Societies 2024, 14, 180. https://doi.org/10.3390/soc14090180

