1. Introduction
Within its broad field of application, artificial intelligence (AI) is increasingly framed as a promising tool to enhance sustainable development. The European Commission sees AI as one of the digital technologies that are a “critical enabler for attaining the sustainability goals of the Green Deal”, i.a., by accelerating and maximizing “the impact of policies to deal with climate change and protect the environment” [1] (p. 9).
Recently, attention has been drawn to the environmental impact of AI itself under the umbrella term of “sustainable AI” [2] (see also [3]), stressing the need to critically assess especially the immense energy consumption of AI. However, sustainability is not tantamount to the reduction of environmental costs. By shifting the focus to intergenerational justice as one of the constitutive normative pillars of sustainability, this paper demonstrates and addresses the threat of a reductionist view of sustainable AI. It identifies the question of whether and, if so, to what extent AI can be sustainable as a major research question necessitating a theoretical underpinning. The ethical analysis contributes to the assessment of AI’s long-term impacts on sustainability by revealing major implications of intergenerational justice as the underlying normative component (see [4] (pp. 2, 4)).
Although “sustainability” is a frequently invoked standard that institutions and persons commit themselves to, the definition and use of the concept are often inconsistent [5]. While the concept’s applicability itself is contested [6], as are different interpretations of its content, there is at least a consensus on its core idea: sustainability presupposes intergenerational equity, implying the obligation to conserve “what matters for future generations” [7] (p. 54) (see also [8] (p. 60)). It is this shared perspective on obligations towards future persons that I will use as the starting point for my analysis.
That is to say, instead of defending a specific interpretation of sustainability, the goal of my analysis is to focus on intergenerational justice as one of its constitutive normative pillars. In so doing, the encompassing demands implied by the objective of creating sustainable AI become apparent: if sustainability is fundamentally about conserving “what matters for future generations” [7] (p. 54), this conservative effort will exceed a mere reduction of environmental costs such as those resulting from high energy consumption. This comprehensive approach to sustainable AI is also reflected in the European Commission’s description of the conditions that AI must satisfy in regard to sustainability: “AI technology must be in line with the human responsibility to ensure the basic preconditions for life on our planet, continued prospering for mankind and preservation of a good environment for future generations” [9] (p. 19).
By addressing the question of whether and, if so, to what extent the development and use of AI can be sustainable from the specific normative angle of intergenerational justice, the analysis contributes to closing two research gaps. Firstly, it exposes the reductionist understanding of sustainability in the context of sustainable AI, which has centred on the welcome call for emission reductions and carbon footprint assessments of AI [10], yet without reference to the further demands of sustainability. Such a merely implicit reference to intergenerational justice, in spite of its fundamental normative function, has also been a point of criticism [11,12] directed at the United Nations’ understanding of sustainability that underlies the formulation of its 17 Sustainable Development Goals (SDGs) [13]. Secondly, the integration of the concept of intergenerational justice provides an addendum to previous analyses of justice issues raised by AI. Although the principle of justice has frequently been applied to evaluate different uses of AI, these analyses have focused on issues of discrimination resulting from biased algorithms or on broader issues of distributive justice, e.g., arising from exclusive access to AI technologies because of diverging financial means (cf. e.g., [14] (p. 699)). Within the emerging application of AI to climate mitigation, additional issues of justice have been discussed, such as using AI to nudge people into climate-friendly behaviour or the question of who within the global community should bear the costs of using AI to enhance climate mitigation [3]. Yet, intergenerational justice brings into view “novel forms of ethical challenges” raised by the use of AI in the context of climate change mitigation and the broader field of environmental policies [15] (p. 13). While issues of intragenerational justice raised by AI have been addressed before, the intergenerational dimension has received little attention up to now [3] (p. 70) and, to my knowledge, has not yet been systematically analysed in the context of AI.
To address this gap, the analysis turns to a specific field of application of AI that can significantly impact future persons. Challenges of intergenerational justice arise especially where AI provides decision support for issues with long-term impacts, such as environmental protection or climate mitigation policies. Other areas in which policies can have significant impacts on future generations include, e.g., the funding strategies of pension schemes or public debt management [16] (p. 62). This paper focuses on the former field of application. Here, AI, with its specific feature of self-learning (machine learning, ML), is being employed as a tool for climate policy analysis “[…] evaluating the outcomes of past policies and assessing future policy alternatives […]. ML can provide data for policy analysis, help improve existing tools for assessing policy options, and provide new tools for evaluating the effects of policies” [17] (p. 52f). In addition, AI has been applied to other environmental issues such as monitoring the extent of deforestation or simulating the effects of climate change [15,17].
As a first step, this analysis provides a normative framework that helps to identify those applications of AI in the context of climate mitigation and environmental protection that raise issues of intergenerational justice, especially those that may have detrimental impacts on future generations. This contributes to a conceptually informed understanding of sustainability. In a second step, the analysis provides a list of assessment questions that constitutes a first guideline for the revision of AI techniques in this regard. Overall, the framework offers insights into how sustainable some uses of AI are, with a specific normative focus on issues of intergenerational justice.
Although I will mostly refer to ML applications, I use the broader term AI throughout the paper. The framework and assessment questions will also provide guidance for identifying those types of AI that raise the depicted issues of intergenerational justice.
3. Power-Asymmetry and Intertemporal Discounting
With AI’s strong potential in the evaluation of large sets of data, it is increasingly being used to improve policy responses to the complex phenomenon of climate change and its interdependent causes. Integrated assessment models (IAMs) play an important role in predicting and evaluating the interaction of socioeconomic and climate-related factors [17] (p. 53). The goal of IAMs is “to project alternative future climates with and without various types of climate change policies in place in order to give policymakers at all levels of government and industry an idea of the stakes involved in deciding whether or not to implement various policies” [22] (p. 116). Due to the complexity of the involved models, as well as the amount of data, AI and especially ML are being applied to the various sub-models which, together, form the IAMs [17] (p. 53). AI has thus been used to support policy-making in domains where a multitude of factors and stakeholders interact, such as policies on sustainable development [23] (pp. 22, 27) or agricultural public policy [24].
However, this support of policy-making with the help of AI is also confronted with some of the criticism brought forward against features of these policy models in general. One branch of models that form part of IAMs and have important implications regarding intergenerational justice comprises cost–benefit analyses of climate policies. These models assess how the costs and benefits of climate mitigation are distributed between different people (i.e., different generations) across a long period of time, surpassing the lifetimes of presently living persons. Within cost–benefit analysis, the weighing of costs and benefits between persons living at different times is usually addressed by the inclusion of a social discount rate. Setting the discount rate high means assigning a significantly smaller value to benefits that accrue in the distant future. This has important normative implications, which can be illustrated with regard to carbon emission reduction policies:
“[…] intertemporal equity is extremely important in determining the appropriate rate of implementation of policies designed to reduce carbon emissions […]. Low discount rates generally make rapid implementation of such policies much more urgent than high discount rates because damages are projected to grow steadily over time at a much more rapid rate than mitigation costs”
Against this background, the practice of discounting within cost–benefit analyses with large time horizons, such as those on climate mitigation policies, is faced with considerable objections. On the practical level, it may lead to an underestimation of potentially severe costs for future persons and underplay the urgency of action required in the present to reduce these costs. This is because climate mitigation policies impose costs, predominantly losses in consumption, that accrue to present persons, whereas the benefits consist in reduced risks of climate change that accrue primarily to future persons [25] (p. 401). Present persons thus face potentially higher burdens and are consequently tempted to set an elevated discount rate to reduce these burdens. On a more general level, whether and at which rate to discount is a matter of disputed normative assumptions. Different justifications for discounting the future have been discussed, for example, that it may be justified to give less weight to benefits for future persons because, under the assumption of steadily increasing wealth, they will overall be better off [26] (p. 48f). Whether there are legitimate reasons to discount benefits for future persons has been subject to an extensive discussion within philosophy and between philosophers and economists (see e.g., [21,27]). With respect to applying AI to this domain of policy evaluation, it suffices to state in a first step that the integration of a social discount rate in contexts with large time horizons needs to be accessible for normative evaluation. Among other considerations, strongly discounting benefits for future persons bears the risk of assigning excessively high costs to them. This would amount to a negative manifestation of the intergenerational power-asymmetry.
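How much normative weight this single parameter carries can be made concrete with a small numerical sketch. The following Python snippet, using purely illustrative figures chosen for exposition rather than values from any specific IAM, computes the present value that standard exponential discounting assigns to the same avoided future damage under different social discount rates:

```python
# Present value of an avoided climate damage accruing in the distant future,
# under standard exponential discounting: PV = B / (1 + r)^t.
# All figures are illustrative, not taken from any specific IAM.

def present_value(future_benefit: float, rate: float, years: int) -> float:
    return future_benefit / (1.0 + rate) ** years

benefit = 1_000_000_000_000  # an avoided damage of $1 trillion, 100 years from now

for rate in (0.01, 0.03, 0.05):
    pv = present_value(benefit, rate, 100)
    print(f"discount rate {rate:.0%}: present value = ${pv / 1e9:,.1f} billion")

# Output:
# discount rate 1%: present value = $369.7 billion
# discount rate 3%: present value = $52.0 billion
# discount rate 5%: present value = $7.6 billion
```

Moving the rate from 1% to 5% shrinks the present value of the very same future benefit by a factor of almost fifty, which is precisely why this parameter must remain visible and revisable in any AI-supported cost–benefit analysis.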
The issue of discounting is, however, not a normative issue genuinely raised by the application of AI. Instead, applying AI to this domain can only be justified if the already discussed limitations of these models are adequately considered. Yet a specific challenge genuine to some AI techniques is that of providing an explanation for generated decisions. As has been shown, the setting of a social discount rate can have important normative implications regarding future persons. To address these limitations, cost–benefit analyses conducted by AI need to be explainable and transparent regarding the setting of the discount rate, thus leaving open the possibility of later revising these settings. I will come back to the aspect of explainable AI below. With regard to the limitations of the integrated models, constructive insights for potential revision can be gained from general critical assessments of these models [22] (pp. 124, 128f) and from objections to the practice of discounting, e.g., in climate mitigation [25] (pp. 401, 405).
Regarding the use of AI to support assessments with large time frames, such as climate mitigation policies, another disputed aspect with important implications for intergenerational justice is the underlying calculation of costs. A focus on static costs has been shown to miss the long-term dynamics of climate change: slightly higher costs in the present may reduce mitigation costs in the near and distant future [28] (p. 54), thus generating an overall improved cost–benefit ratio. Hence, the calculation of costs represents another aspect that must be accessible for potential revision within assessments that are being conducted or supported by AI.
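The difference between the two accounting approaches can be sketched as follows; the figures and the assumed learning rate are hypothetical and serve only to illustrate the dynamic effect described above:

```python
# Static vs. dynamic cost accounting for an abatement technology.
# The learning rate and all cost figures are hypothetical illustrations.

periods = 6                  # e.g., five-year steps from 2025 to 2050
static_unit_cost = 100.0     # cost per tonne abated, assumed constant
early_action_premium = 1.05  # early action starts slightly more expensive
learning_rate = 0.15         # assumed cost decline per period due to deployment

static_total = static_unit_cost * periods

dynamic_total = 0.0
unit_cost = static_unit_cost * early_action_premium
for _ in range(periods):
    dynamic_total += unit_cost
    unit_cost *= (1.0 - learning_rate)  # early deployment drives costs down

print(f"static accounting:  {static_total:6.1f}")   # 600.0
print(f"dynamic accounting: {dynamic_total:6.1f}")  # ~436.0
```

Under these assumptions, the path that is slightly more expensive at the outset ends up roughly a quarter cheaper overall, a dynamic that a static cost calculation cannot represent.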
Finally, assessments of policies with long-term impacts will only be able to represent potentially detrimental consequences for future persons if their time frames are set in a way that includes those persons. This illustrates a third aspect that needs to be accessible for potential revision, not only within cost–benefit analyses conducted or supported by AI but within all types of policy assessments that may include AI. For example, policy-making regarding energy management relies, among other things, on electricity demand forecasting, which is increasingly being supported by AI. Within these forecasts, time horizons for long-term projections range from a couple of years to the next 50 years [29] (p. 15ff). Consequently, insights about the time frames, and thus implicitly about the representation of potential impacts on persons in the distant future, need to be made accessible within AI-based policy support assessments.
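One possible way to operationalise this requirement, sketched here under assumed parameter names, is to expose the normatively sensitive settings of an assessment as explicit, documented fields rather than burying them in model code:

```python
# Hypothetical sketch: normatively sensitive settings of an AI-based policy
# assessment exposed as explicit, documented, and revisable parameters.

from dataclasses import dataclass

@dataclass
class AssessmentConfig:
    horizon_years: int           # does the time frame reach future generations?
    social_discount_rate: float  # weighting of benefits accruing in the future
    rationale: str               # documented justification for the chosen values

config = AssessmentConfig(
    horizon_years=80,  # extends beyond the lifetimes of most presently living persons
    social_discount_rate=0.01,
    rationale="Low rate chosen to avoid underweighting harms to future persons.",
)

print(config)  # settings remain inspectable and open for later revision
```

Whether such a configuration layer suffices will depend on the concrete system; the point is that time horizon and discount rate are normative choices and should be inspectable as such.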
Using AI in contexts and decisions affecting different persons at different times, especially future generations, thus adds to the general challenge of creating AI that is transparent and explainable. Explainability addresses “the need to understand and hold to account the decision-making processes of AI” [14] (p. 700). The principle of explainability has been established as a genuine principle for the normative evaluation of AI, alongside the established bioethical principles of beneficence, non-maleficence, autonomy, and justice. Impacts on future persons constitute a yet-underestimated societal area that ought to be assessed using this principle. This will also contribute to the critical assessment of using AI within policy-making, which has so far largely focused on issues of acceptance and trust [23] (p. 33f).
4. Uncertain Preferences and “Intergenerational Transfer Bias”
Intergenerational relations are characterised by uncertainty in important domains, such as uncertainty about the preferences of future persons. Consequently, there are no data, or only fragmentary data, that AI can use in this regard. Using AI for assessments with large time frames will accordingly involve assumptions about the preferences that future persons will have and about how these can be ‘translated’ into opportunities that present persons should leave open for them. For example, the implications for the use of IAMs in the context of climate mitigation can be described as follows:
“People making decisions today on behalf of those not yet alive need to make collective ethical choices about what kind of opportunities (usually characterized as a particular state of the climate system measured by global mean temperature, GHG concentration, or maximum climate damages allowable by some future date) they want to leave future inhabitants of planet Earth […]”
It is these choices that have normative implications. Take, for example, a study [30] that uses machine learning algorithms to forecast both the CO2 emissions and the energy demand arising from the transportation sector in Turkey until 2050. Such a forecast necessarily includes assumptions about the preferences that persons living in the time frame from 2022 to 2050 will pursue, preferences that are tied to emissions, energy use, and the choice of transportation means. However, the longer the time frame of the forecast, the more difficult it becomes to anticipate both these preferences and what their pursuit will require, e.g., regarding the use of energy, the emission of greenhouse gases, or the choice of transportation means. This is because the use of these broadly understood resources, such as energy, is tied to the pursuit of preferences but does not represent preferences in itself. People usually do not enjoy emitting CO2 but partake in activities that can stand in a causal relation to emissions, such as living in adequately heated buildings when the outside temperature is low. Over longer periods of time, both these causal relations and the preferences themselves can change.
A simple approach to these assumptions about future preferences within AI-supported assessments could be to presuppose that the preferences of persons in the distant future, including future persons, broadly overlap with those of current persons. However, this way of proceeding may raise the challenge of a so-called transfer of data bias [31] (p. 4), a challenge especially important in machine learning, given its reliance on historic data for training purposes [32] (p. 6f). Simply ‘transferring’ present preferences bears the risk of providing insufficiently for the opportunities that should be left open for future persons, because either future persons’ preferences or the circumstances in which these preferences can be satisfied may change significantly. Most importantly, the satisfaction of preferences such as mobility may rely on very different sets of resources under differing circumstances, thus leaving future persons with different opportunities. The fact that resources may provide different individuals in different circumstances with highly heterogeneous opportunities has been extensively discussed as the issue of “conversion factors” within the literature on the Capabilities Approach [33]. Besides the potentially differing individual conversion of resources, it is even unclear from a philosophical point of view whether future persons should be provided with the same opportunities. This has been a matter of debate between the adherents of the four most discussed intergenerational principles of justice: equality, proportionality, priority, and sufficiency [5] (p. 7448). To date, no consensus has emerged within this philosophical debate, nor is AI technology suited to integrating all (theoretical) facets of it. However, this specific type of transfer bias, which I have framed as intergenerational transfer bias, as well as the encompassing questions regarding the choice and extent of opportunities that should be left open for future persons, requires AI applied in these contexts once again to be open for revision. Similar solutions have been proposed for the difficulty of including AI’s potential impacts on non-human animals [31] (p. 6). In this way, adapted preferences or changed circumstances can later be incorporated into the algorithms. In other cases, considering the uncertainty about future persons’ preferences may require present persons to provide for broader “choice options” that leave the realisation of different preferences in the distant future open (see [7] (p. 53) and [34] (p. 206ff)). How this can be realised within AI-based assessments will constitute a challenge for those involved in the design and implementation of these systems.
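One conceivable design response, sketched below with invented names and figures, is to treat preference and conversion-factor assumptions as explicit scenario parameters of a long-term forecast, so that later modellers can revise them instead of inheriting a single profile silently ‘transferred’ from present data:

```python
# Hypothetical sketch: preference assumptions as explicit, revisable scenario
# parameters in a long-term demand forecast. All names and figures are invented.

from dataclasses import dataclass

@dataclass
class PreferenceScenario:
    name: str
    demand_growth: float       # assumed yearly change in mobility preferences
    emissions_per_unit: float  # assumed 'conversion factor' linking the
                               # preference to emissions; may itself change

def forecast_emissions(base_demand: float, s: PreferenceScenario, years: int) -> float:
    demand = base_demand * (1.0 + s.demand_growth) ** years
    return demand * s.emissions_per_unit

scenarios = [
    PreferenceScenario("present preferences transferred", 0.01, 1.00),
    PreferenceScenario("changed preferences", -0.01, 1.00),
    PreferenceScenario("changed conversion factors", 0.01, 0.40),
]

for s in scenarios:
    print(f"{s.name:33s} -> {forecast_emissions(100.0, s, 30):6.1f} emission units")
```

Such a scenario spread does not resolve the philosophical question of which opportunities to leave open, but it keeps the intergenerational transfer bias visible instead of hard-coding it.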
5. Non-Reciprocity and Indirect Involvement
Unlike with other issues of fairness or justice raised by using AI [3] (p. 71f), the involvement of stakeholders cannot contribute solutions to the presented issues of intergenerational justice. As future persons are yet unborn, there is no reciprocity between future and present persons. An involvement of future persons can thus only be accomplished indirectly.
The success of indirectly involving future persons through present persons’ concern for their well-being can, however, be rather limited [35] (p. 19). A more promising way to take aspects of intergenerational justice into account when using AI is to develop a set of evaluative criteria. From the normative challenges described above, a list of questions emerges that guides the potential revision of AI used in contexts with long-term impacts (cf. Table 1). The first category of questions is targeted at shaping AI in a way that makes especially those features accessible for potential revision that can have negative impacts on future persons. In this way, the threat of having no data on potential detrimental impacts [36] (p. 9) ought to be avoided. Further aspects and data will have to be added. For instance, in the environmental context, a specific focus on irreversible costs, such as the acceleration of biodiversity loss or the generation of hazardous waste, may have to be added to the evaluation.
The second category of questions supporting the use and assessment of AI in contexts with long-term impacts is targeted at assessing whether the use of AI itself negatively impacts future persons. Whereas most of the questions raised above reveal the necessity to revise tools of assessment that are also operated without AI, the use of AI may itself raise additional challenges to the realisation of intergenerational justice. Here, it is the threat of overlooking insights into potentially detrimental impacts [36] (p. 9) on future persons within available data, as well as the occurrence of unintended adverse impacts [32] (p. 8), that is being targeted. The environmental costs of running AI are an example of a negative impact that refers to AI itself, i.e., a genuine impact on future persons caused by using AI.
Overall, this list of assessment questions will have to be adapted and revised on a regular basis, as it serves to ethically accompany nascent technologies [31] (p. 8). The hope is to provide a normatively informed standard for using AI “properly”, i.e., in accordance with intergenerational justice:
“If AI is underutilised or misused, it may undermine existing environmental policies, slow down efforts to foster sustainability, and impose severe environmental costs on current and future generations. However, if used properly, AI can be a powerful tool to develop effective responses to the climate emergency. Policymakers and the research community must act urgently to ensure that this impact is as positive as possible, in the interest of an equitable and sustainable future”
The list of normative questions adds to this endeavour of realising sustainable AI, where intergenerational justice, as one of the two ethical dimensions of sustainability, provides a central normative standard for assessing AI’s sustainability. Starting from the question of whether and, if so, to what extent AI can be sustainable, the presented research developed a normative framework that integrates major aspects of intergenerational justice and can, in turn, be applied to assess different uses of AI. The application of this framework to specific uses of AI with potentially significant long-term impacts, namely, decision support for climate mitigation and environmental protection policies, resulted in the list of assessment questions presented above. A major implication that has been deduced is the necessity to make AI transparent and open for revision, especially with regard to the setting of a social discount rate and the assumptions about future persons’ preferences, whenever it is used in this context.
6. Discussion and Outlook: Towards the Sustainability of AI
Measuring the use of AI against the standard of intergenerational justice may overburden the involved technologies. If current decision-making procedures, especially those concerning policies with important impacts on future persons, do not fulfil this standard, why should AI? For instance, the German Federal Constitutional Court ruled in March 2021 that the provisions of the Federal Climate Change Act governing national climate targets and the corresponding emission regulations are insufficient because they shift an excessively large part of the mitigation burden onto future persons [38]. The standard of intergenerational justice thus already presents severe challenges to policy-making in general. In addition, the normative approaches to intergenerational justice are highly debated and “[…] fall astonishingly short of expectations in attempting to deal with the normative issues raised by environmental and resource depletion problems” [16] (p. 61). This may impede the attempt to use them as guidelines for AI design.
Two replies are in order. First, even if intergenerational justice is a contested issue, this does not rule out normative guidance. Rather, it urges us to disclose the choice of, and the reasons for, the specific normative premises adopted regarding future persons (see, for a similar point regarding sustainability, [7] (p. 50)). The presented list of guideline questions constitutes a framework that supports this endeavour. Impacts on future persons and their normative evaluation thus constitute a further application context for the criteria of transparency and explainability within the debate about AI.
Second, AI technology may even facilitate the application of intergenerational justice as a normative standard. AI’s potential to reduce institutional inefficiency in the context of environmental degradation, climate mitigation, or sustainability policies has already been noted (see e.g., [3] (p. 69) and [32]). Regarding the intergenerational impact of policies, AI that has been designed and developed in accordance with normative criteria such as those described above may even be employed as a corrective tool by disclosing settings that refer to contested issues of intergenerational justice.
For the time being, however, the use of AI faces several constraints regarding intergenerational justice: “[…] AI system adoption practices are heavily technologically determined and reductionist in nature, and do not envisage and develop long-term, ethical, responsible and sustainable solutions” [39] (p. 3) (see also [32]). One such reduction is the narrowing of the standard of sustainability to the mere attempt to cut environmental costs. Unsurprisingly, AI will thus not be able to realise sustainability by itself and instead needs to be embedded in an encompassing vision, as “[…] many of our current sustainability interventions via IT are measures to reduce unsustainability instead of creating sustainability, which means that we have to significantly shift our thinking towards a transformation mindset for a joint sustainable vision of the future” [4] (p. 11).
Within such a technological approach to a sustainable future, the elaborated normative framework provides a list of assessment questions that explore normative issues regarding impacts on future persons and, subsequently, the potential need for revising AI techniques. In so doing, it yields insights into how AI can be made more sustainable. This way, AI may contribute to the pervasive political effort of promoting sustainable development.
To this end, topics for future research span different scientific disciplines. As an addendum to the ethically informed analysis, the future AI-based support for policies on climate mitigation and environmental protection, and its conformity with the concept of sustainability, ought to be assessed from the perspective of policy research. The framework and assessment guide developed above are conceptualised as a normative module that can be complemented by further normative modules. These would have to represent, for example, issues of intragenerational justice and the use of natural resources as the second ethical dimension of sustainability. Furthermore, they would have to be interlinked with more empirically oriented sustainability assessments of AI to form an encompassing standard for assessing the sustainability of AI. Attempts at more encompassing evaluations of AI and its impacts on sustainability have been made against the UN’s Sustainable Development Goals (SDGs) [40,41,42]; however, these do not represent issues of intergenerational justice. Future research topics also include the question of how the policy decision support provided by AI can be designed to be open for revision in the relevant way described above.