1. Introduction
The complex natures of humanity and the environment, and their intertwined connections, make almost every aspect of climate change and its policy uncertain. Hence, grasping the relevant set of uncertainties and interpreting them in ways that are informative for policymakers is a focal issue in the economics of climate change. To this end, climate-economy integrated assessment models (IAMs) are helpful tools. A standard IAM combines a simplified model of the Earth's climate system with a social-science or economic model and is increasingly used to inform policymakers about the likely societal and economic consequences of climate change. The United Nations Framework Convention on Climate Change (UNFCCC) regards IAM analysis as successful in supporting climate policy to counteract the harmful effects of global warming [
1]. IAMs can employ normative decision theories to derive welfare-optimal climate policy scenarios [
2,
3,
4,
5,
6]. Among those decision-making frameworks are Cost–Benefit Analysis (CBA), Cost-Effectiveness Analysis (CEA), Chance-Constrained Programming (CCP) (as the probabilistic version of CEA), and Cost-Risk Analysis (CRA).
In CBA, the welfare losses induced by mitigation strategies are weighted against the welfare benefits from avoiding climate change damages [
7]. CBA is the approach favored by many economists because it rests on sound axiomatic foundations [
8,
9]. However, the uncertainty of the damage function poses challenges for the reliability of CBA [
3,
10,
11]. Further studies aimed at improving aggregate and sectoral impact functions are necessary, but such studies are costly and may not be completed in the available time. A switch from the notion of optimality to the notions of robust decision making and scenario frameworks seems to be one solution [
5,
12]. Accordingly, CEA and CCP were proposed to find the most cost-effective action that complies with a chosen target. Society will regard the action as worth taking if its costs are small enough [
13]. Indeed, CEA and CCP try to circumvent the difficulties of defining a robust damage function by operating under a (climatic) guardrail such as a 2 °C increase in atmospheric temperature compared to the pre-industrial level [
14]. The 2 °C target has recently become a crucial point for policy-making [
14,
15] and is perceived as a viable option [
4] for following the precautionary principle in the presence of deep uncertainty [
16]. Indeed, CEA and CCP seek to separate policy analysis from the evaluation problem [
13,
17].
Climate sensitivity is defined as the equilibrium change in global mean atmospheric temperature resulting from a doubling of the atmospheric carbon dioxide concentration. Temperature targets cannot be met with certainty due to the fat-tailed probability distribution of climate sensitivity [
18]; hence the true target must be probabilistic [
19,
20]. CEA and CCP suffer from other conceptual problems, most notably in the case of receiving new information (called learning) about an uncertain climate variable [
21,
22]. In the context of climate change, such an improvement in our knowledge may happen through further observations about the Earth system’s response to emissions [
23], the assimilation of paleoclimate information [
24,
25], and theory and modeling improvements [
5]. If further information arrives, the optimal policy can substantially change [
26,
27] and, consequently, the expected value of information may become negative [
21], a counter-intuitive phenomenon. In addition, the fat-tailed distribution of some climatic variables, such as climate sensitivity, implies that compliance with the target either becomes impossible (that is, infeasibility issues arise) or is associated with a considerable cost after learning [
5]. A subset of IAMs has been used to calculate the value of information [
5,
28,
29]. The value of information signals whether a project aimed at, for example, improving our knowledge of the climate system is worthwhile.
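As an aside for readers who prefer a computational reading, the sketch below illustrates the generic logic behind the (expected) value of information: it is the gain in maximized expected welfare when the policy may be conditioned on the resolved uncertainty. The two-state, two-policy setup and all numbers are hypothetical and serve illustration only.

```python
# Hedged sketch (hypothetical numbers): the value of information is the gain in
# maximized expected welfare when the policy may condition on the resolved state.
import numpy as np

p = np.array([0.5, 0.5])              # probabilities of two states of the world
# welfare[i, j]: welfare in state i if policy j is chosen (j=0 lenient, j=1 stringent)
welfare = np.array([[10.0, 9.0],      # low climate sensitivity state
                    [ 4.0, 7.0]])     # high climate sensitivity state

best_single_policy = max(p @ welfare[:, j] for j in range(welfare.shape[1]))  # no learning
state_contingent   = p @ welfare.max(axis=1)                                   # learning
print("value of information:", state_contingent - best_single_policy)          # >= 0
```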
Trying to overcome the problems mentioned above, Schmidt et al. [
22] suggested CRA in which the risk of overshooting a climate target is traded off for the economic cost of reducing emissions. The expected utility framework in CRA guarantees non-negative expected values of information [
30]. CRA represents the preference order of the proponents of a temperature target when damage functions are inadequately diagnosed [
22]. To operate CRA, a target and its compliance probability must be given to the model. The number of studies using CRA is growing. Neubersch et al. [
5] developed the first integrated assessment model using CRA and measured the value of information. They indicated that the functional form of the risk metric matters because a concave risk function violates the Axiom of Sacrifice Inhibition (see Neubersch et al. [
5] for details). Neubersch et al. [
5] state that CRA is appealing to climate economists who currently opt for CEA rather than CBA. Based on Neubersch et al. [
5], Roth et al. [
31] investigated some delayed policy scenarios, showing that the infeasibility problem associated with a delayed investment in renewables is resolved by using CRA. Mintenig et al. [
32] employed CRA to analyze delayed scenarios when bio-energy and carbon capture and storage (BECCS) are added to the technology portfolio. Roshan et al. [
33] and Roshan et al. [
34] applied CRA in the context of geoengineering in regional and global settings, respectively.
Held [
35] summarizes a procedure for how to use CRA. In short, using a kinked-linear risk function that is positive for temperatures above 2 °C and zero otherwise, a trade-off parameter is calibrated such that the likelihood of the temperature path complying with the guardrail equals a selected safety level. Held [
35] stated that the calibration of trade-off parameters must be done to ensure that CRA mimics CCP as much as possible. However, using such a setup of the CRA method, Figure 5 in the supplement to Neubersch et al. [
5] shows that the temperature trajectories in CRA and CCP (as the probabilistic version of CEA) diverge after 2050, with CCP reaching a higher temperature through less mitigation. Although Held [
35] provides a proof that CRA mimics CCP in the period when the temperature peaks, why CRA is unable to do so afterward has not been further discussed.
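For concreteness, the sketch below shows one possible form of the kinked-linear risk metric described above; the names alpha (trade-off parameter) and T_GUARD (2 °C guardrail) are illustrative choices rather than the notation of Held [35] or Neubersch et al. [5].

```python
# Hedged sketch of a kinked-linear risk metric: zero below the guardrail,
# growing linearly with the overshoot above it. 'alpha' stands for the
# trade-off parameter that is calibrated so that the welfare-optimal path
# complies with the guardrail with the chosen safety level.

T_GUARD = 2.0  # guardrail in degrees Celsius above the preindustrial level

def kinked_linear_risk(temperature: float, alpha: float) -> float:
    """Risk contribution of a single temperature value (illustrative form)."""
    return alpha * max(0.0, temperature - T_GUARD)

# Example: 2.4 degC overshoots by 0.4 degC, 1.8 degC carries no risk.
print(kinked_linear_risk(2.4, alpha=1.5), kinked_linear_risk(1.8, alpha=1.5))
```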
Mathematically, the effects of damages and the calibrated risk function are similar: both decrease welfare. In the 2015 Paris Agreement, in addition to keeping the global mean temperature (relative to its preindustrial level) below 2 °C, striving for 1.5 °C was agreed upon. The 1.5 °C aspiration reflects the fact that there can also be risks below 2 °C. Most IAMs, such as the Dynamic Integrated Climate-Economy (DICE) model [
36], employ damage functions that calculate positive damage below 2 °C. In addition, the calibrated risk function can be applied (as a post-calculation) to measure the risk associated with any scenario and thereby compare different decision-analytic frameworks on the level of risk. Likewise, damages can be post-calculated under CEA, CCP, or CRA to compare the different decision-analytic frameworks on the level of damages. Such comparisons have not been conducted. Moreover, the deviations between the temperature paths of CCP and CRA appear normatively unappealing because they induce more costs in the CRA analysis than in the CCP analysis. Such additional costs are "undesired" if it is possible to satisfy the normative restrictions on policy while avoiding them. Furthermore, carbon prices, arguably the most important policy instrument, have not been reported in the context of CRA, so this important message for policymakers is missing. Finally, the literature lacks a discussion of whether there is a criterion under which CRA cannot be applied.
This study makes the following significant contributions: (i) it updates the Probabilistic Integrated model of Climate and the Economy (PRICE) by Nordhaus and Popp [
28] to a probabilistic version of DICE 2016, the latest version of DICE by Nordhaus [
36], and extends it further to conduct CCP and CRA. The PRICE model in this paper can readily be used to run all decision-analytic frameworks discussed. (ii) This study suggests that the target for risk and the target for which the probability of compliance is measured can differ while still ensuring a successful calibration. (iii) It highlights that the current method of applying CRA, which uses only one trade-off parameter, leads to an extra cost compared with the outcome under CCP, a cost that can be avoided by introducing and calibrating time-specific trade-off parameters. (iv) Hence, this study proposes revised instructions for using CRA. (v) This study simulates all decision-analytic frameworks discussed in the paper and compares their results concerning the levels of risk, damages, and carbon prices. (vi) This study measures the value of information using risk-based methods, that is, the current (old) and suggested (new) risk definition and calibration, and compares them with the value of information calculated using the damage-based method. (vii) The carbon prices for CRA scenarios are measured for the first time in this study.
The results show that the assumption about which disutility function is employed governs the magnitude of the value of information. If the damage function is used, the values of information are between 0.1% and 0.08% of baseline welfare for new information arriving between 2020 and 2060. If the Old CRA is applied, the benefits of new information arriving between 2020 and 2060 are between 1.6% and 1.2% of baseline welfare. However, if New CRA is used, such benefits are negligible: the values of receiving information in 2020 and 2060 are almost 0.07% and 0% of welfare in the baseline scenario, respectively. If damages or Old CRA are used, the carbon price diverges after learning. However, if New CRA is applied, the carbon prices in the learning scenarios closely follow those of the no-learning case and suggest that the action does not change in response to receiving new information. Overall, the results support maximum mitigation, which was suggested by Weitzman et al. [
37].
The rest of the paper is organized as follows: Section 2 describes the model and the scenarios that can be run with the current disutility measure, i.e., the damage function. Section 3 reviews CRA and elaborates on the assumptions that need to be modified for the suggested (New) version of the method. Section 4 first presents the results for the no-learning and learning scenarios and then offers a discussion. Section 5 concludes the paper.
2. Model
The primary purpose of this study is to introduce the new specification of CRA (New CRA) and compare the decision-analytic frameworks described in the introduction that are based on an expected-utility-optimization framework. The analysis is performed using a well-known stylized probabilistic IAM. The model is an updated version of the probabilistic model PRICE by Nordhaus and Popp [
28]. PRICE is updated based on the latest specifications in DICE 2016 [
36]. Interested readers are highly encouraged to read Nordhaus and Popp [
28] and Nordhaus [
36] for details of the models.
The PRICE model in this paper is a version of DICE 2016 with an additional dimension representing the different states of the world. In this paper, the states of the world differ only in the magnitude of climate sensitivity. The attractiveness of using DICE 2016 as the base for the probabilistic model is threefold: firstly, the model is freely available; secondly, the model readily calculates the carbon price; thirdly, the model is computationally inexpensive. The model is further extended to conduct CCP and CRA.
In the model, climate sensitivity is assumed to follow a log-normal distribution [
38]. From this distribution, twenty different climate sensitivities are selected with the same probability (i.e., 5% each), ranging between 1.01 °C and 7.15 °C. This approach is consistent with Neubersch et al. [
5], Roth et al. [
31], Roshan et al. [
33], Roshan et al. [
34], and Mintenig et al. [
32].
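A minimal sketch of such a discretization is shown below; the log-normal parameters (median and log-standard deviation) are placeholders, since the calibrated values of the fitted distribution are not repeated here.

```python
# Hedged sketch: 20 equally probable climate sensitivities from a log-normal
# distribution, taken at the midpoints of 5%-wide probability bins.
# 'median' and 'sigma' are illustrative placeholders, not the calibrated values
# used in the PRICE model.
import numpy as np
from scipy.stats import lognorm

median, sigma = 3.0, 0.4                      # placeholder log-normal parameters
dist = lognorm(s=sigma, scale=median)         # scipy's parameterization

quantiles = (np.arange(20) + 0.5) / 20        # 2.5%, 7.5%, ..., 97.5%
climate_sensitivities = dist.ppf(quantiles)   # one value per state of the world
probabilities = np.full(20, 0.05)             # each state has probability 5%

print(np.round(climate_sensitivities, 2))
```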
Regarding the time horizon, for computational purposes, the model runs for 60 timesteps, with each timestep representing five years (i.e., the model runs for 300 years, from 2015 to 2315). This time horizon is long enough to show the differences in the paths of temperature and carbon prices when different decision-analytic tools are applied. Nevertheless, it must be emphasized that a time horizon relevant for investments is likely to be shorter, perhaps only a few decades from now.
All initializations of parameters and variables are kept as described in DICE 2016 [
36]. For example, in DICE 2016, the emission control rate (MIU) is capped at 1 before timestep 30 and at 1.2 thereafter. An increase in MIU beyond 1 represents technologies that allow for negative emissions. The specifications necessary for this study are added, overwriting existing values where they exist. The model can switch flexibly between the different decision-analytic frameworks. The modifications required for running the different scenarios are described in the following subsections, starting from the original DICE specifications. CRA, specifically, is described in the next section.
2.1. Baseline and Optimal Scenarios
Baseline and optimal scenarios are the probabilistic versions of the original DICE. The model maximizes the expected welfare in Equation (1):

W = \sum_{s} p_{s} \sum_{t} \frac{U(x_{t,s})}{(1+\rho)^{t}}    (1)

where W is the to-be-maximized welfare, x is the control variable, x_{t,s} is the control variable path, U is the consumption-based utility function, t is the index for the set of time periods, s is the index for the set of states of the world, p_{s} is the probability of each state of the world, and \rho is the rate of social time preference (the average utility social discount rate).
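The following minimal sketch illustrates how the objective in Equation (1) can be evaluated numerically; the CRRA utility form, the parameter values, and the constant consumption path are illustrative assumptions, not the model's calibrated inputs.

```python
# Hedged sketch of Equation (1): expected, discounted, consumption-based welfare
# summed over time periods t and states of the world s.
import numpy as np

def utility(consumption, eta=1.45):
    """CRRA utility as used in DICE-type models (eta is illustrative)."""
    return consumption ** (1.0 - eta) / (1.0 - eta)

def expected_welfare(consumption_ts, probabilities, rho=0.015, step_years=5):
    """consumption_ts[t, s]: consumption path per time period and state."""
    T, S = consumption_ts.shape
    discount = (1.0 + rho) ** (-step_years * np.arange(T))   # (1 + rho)^(-t)
    per_state = (utility(consumption_ts) * discount[:, None]).sum(axis=0)
    return float(probabilities @ per_state)                  # expectation over states

# Illustrative call with a constant consumption path in two states:
c = np.full((60, 2), 10.0)
print(expected_welfare(c, probabilities=np.array([0.5, 0.5])))
```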
The damages are the same as in the original DICE and are represented in Equation (2). If the damages are active, the result is reported as the optimal scenario; if the damages are not active, the simulation result is the baseline scenario. The interested reader is encouraged to read Nordhaus [36] for details.

D_{t,s} = a \, Y_{t,s} \, T_{t,s}^{2}    (2)

where D_{t,s}, Y_{t,s}, and T_{t,s} are, respectively, the damages, the gross output, and the atmospheric temperature for each timestep and state of the world, and a is the scaling coefficient of damages. Notice that the dimension s, representing different climate sensitivities (the set of states of the world), is added to the original equations of DICE.
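As a numerical illustration of Equation (2), the sketch below evaluates quadratic damages for two temperatures; the coefficient a is illustrative rather than the calibrated DICE 2016 value.

```python
# Hedged sketch of Equation (2): quadratic damages proportional to gross output.
# The coefficient 'a' below is illustrative, not the calibrated DICE 2016 value.
def damages(gross_output: float, temperature: float, a: float = 0.00236) -> float:
    return a * gross_output * temperature ** 2

# At 2 degC roughly four times the damages at 1 degC, for the same gross output:
print(damages(100.0, 1.0), damages(100.0, 2.0))
```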
Equation (3) determines the atmospheric temperature for each state of the world in time t + 1, T_{t+1,s}, based on the previous period's temperature and the current period's forcing:

T_{t+1,s} = T_{t,s} + \xi_{1} \left[ F_{t+1,s} - \frac{\eta}{CS_{s}} T_{t,s} - \xi_{2} \left( T_{t,s} - T^{LO}_{t,s} \right) \right]    (3)

where T_{t,s} and T^{LO}_{t,s} are the atmospheric and oceanic temperatures, respectively, in time t, F_{t+1,s} is the radiative forcing in time t + 1, CS_{s} is the climate sensitivity for the state of the world s, and \xi_{1}, \eta, and \xi_{2} are climatic parameters (please see Nordhaus [36] for details).
Equation (4) shows the radiative forcing in time t, depending on the carbon concentration increase in the atmosphere in time t:

F_{t,s} = \eta \, \log_{2}\!\left( \frac{M_{t,s}}{M_{pre}} \right) + F^{EX}_{t}    (4)

with M_{t,s} as the carbon concentration increase in the atmosphere (measured against the preindustrial reference level M_{pre}) and F^{EX}_{t} as the exogenous forcing for other greenhouse gases in time t.
Equation (5) shows the carbon concentration increase in the atmosphere in time t + 1, depending on the carbon concentration increase in time t and on E_{t,s} as the emissions in time t:

M_{t+1,s} = \phi_{11} M_{t,s} + \phi_{21} M^{UP}_{t,s} + E_{t,s}    (5)

where M^{UP}_{t,s} is the shallow-ocean concentration of carbon dioxide, and \phi_{11} and \phi_{21} are climatic parameters.
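Read together, Equations (3)-(5) define one recursive update of the climate state per period. The sketch below shows such an update per state of the world; the parameter values are placeholders, and the equation forms follow the reconstruction above rather than the exact DICE 2016 code.

```python
# Hedged sketch of one climate-module step per state of the world, following the
# reconstructed Equations (3)-(5). All parameter values are placeholders.
import math

def climate_step(T_atm, T_lo, M_atm, M_up, emissions, forcing_exo, cs,
                 xi1=0.1005, xi2=0.088, eta=3.68, M_pre=588.0,
                 phi11=0.88, phi21=0.196):
    """Advance atmospheric temperature and carbon concentration by one period."""
    # Equation (5): atmospheric carbon, fed by emissions and exchange with the ocean
    M_atm_next = phi11 * M_atm + phi21 * M_up + emissions
    # Equation (4): radiative forcing from the CO2 concentration plus exogenous forcing
    forcing = eta * math.log2(M_atm_next / M_pre) + forcing_exo
    # Equation (3): temperature response, scaled by the state's climate sensitivity cs
    T_atm_next = T_atm + xi1 * (forcing - (eta / cs) * T_atm - xi2 * (T_atm - T_lo))
    return T_atm_next, M_atm_next

print(climate_step(T_atm=1.0, T_lo=0.3, M_atm=851.0, M_up=460.0,
                   emissions=40.0, forcing_exo=0.5, cs=3.1))
```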
Notice that there is no learning of the climate sensitivity in the baseline and optimal scenarios. Hence, there is only one path of the investment decision, resulting in only one path of emissions. Equation (6) serves to satisfy this condition:

E_{t,s} = E_{t,s_{1}} \quad \forall t, \forall s    (6)

where s_{1} is the first state of the world; s_{1} can be replaced by any of the states of the world without loss of generality.
2.2. Learning Scenarios
In a learning scenario, all decisions are state-dependent after the new information arrives. The climate sensitivity probability distribution is the only new information obtained in this model. For this purpose, Equation (6) is replaced by Equation (7), which applies only to the time periods before learning. After learning, the emission paths can differ for each state of the world.

E_{t,s} = E_{t,s_{1}} \quad \forall t \in T_{L}, \forall s    (7)

where T_{L} is the set of periods before learning, and E_{t,s} is the emissions in the times before learning for each state of the world.
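The conditions in Equations (6) and (7) can be checked directly on an emissions array indexed by time and state; the array layout and example numbers below are illustrative assumptions.

```python
# Hedged sketch: Equations (6) and (7) as non-anticipativity conditions on the
# emissions array E[t, s] (time period x state of the world).
import numpy as np

def satisfies_no_learning(E: np.ndarray) -> bool:
    """Equation (6): one emission path shared by all states at all times."""
    return bool(np.allclose(E, E[:, [0]]))

def satisfies_learning(E: np.ndarray, learning_period: int) -> bool:
    """Equation (7): emissions identical across states only before learning."""
    before = E[:learning_period]
    return bool(np.allclose(before, before[:, [0]]))

E = np.tile(np.linspace(40, 0, 60)[:, None], (1, 20))  # common path, 20 states
E[30:, 5:] *= 0.5                                       # states diverge after period 30
print(satisfies_no_learning(E), satisfies_learning(E, learning_period=30))
```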
2.3. Chance Constraint Programming (CCP)
In CCP, the minimum probability of the atmospheric temperature complying with a target is fixed. To run a CCP scenario, the damages are inactive (assumed to be zero, with a = 0 in Equation (2)) and the expected welfare in Equation (1) is maximized subject to Equation (8):

\sum_{s} p_{s} \, \Theta(T_{t,s}) \geq \pi \quad \forall t    (8)

where \pi is the so-called safety, which expresses the probability with which the temperature must stay below the target (here 2 °C), and \Theta is the Heaviside function represented in Equation (9):

\Theta(T_{t,s}) = \begin{cases} 1, & T_{t,s} \le T^{guard} \\ 0, & T_{t,s} > T^{guard} \end{cases}    (9)

where T^{guard} is the guardrail at which the Heaviside function switches to 1.
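Operationally, the chance constraint amounts to computing, for every period, the probability mass of states whose temperature stays at or below the guardrail and requiring it to be at least the chosen safety. A small sketch (with illustrative temperature paths) follows.

```python
# Hedged sketch of Equations (8)-(9): the per-period compliance probability
# ("safety") computed from temperature paths T[t, s] and state probabilities p[s].
import numpy as np

def safety_per_period(T: np.ndarray, p: np.ndarray, guardrail: float = 2.0) -> np.ndarray:
    """Probability-weighted share of states with T <= guardrail, for each period."""
    heaviside = (T <= guardrail).astype(float)   # Equation (9): 1 if compliant, else 0
    return heaviside @ p                         # left-hand side of Equation (8), per period

def chance_constraint_holds(T: np.ndarray, p: np.ndarray,
                            guardrail: float = 2.0, pi: float = 0.5) -> bool:
    return bool(np.all(safety_per_period(T, p, guardrail) >= pi))

# Illustrative temperature paths: 60 periods x 20 states, warming over time
T = np.linspace(1.2, 2.4, 60)[:, None] + np.linspace(-0.5, 0.5, 20)[None, :]
p = np.full(20, 0.05)
print(safety_per_period(T, p)[[0, -1]], chance_constraint_holds(T, p))
```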
DICE 2016 is unable to comply with 2 °C in its deterministic version. Furthermore, PRICE cannot comply with 2 °C with a safety of 66%. In this paper, 2 °C with a safety of 50% is reachable; hence, the rest of the article is based on this combination of target and safety. In the IPCC Fifth Assessment Report (AR5), a 50% likelihood of an outcome is interpreted as "more likely than not" [
39]. Khabbazan and Held [
40] show that the simple climate modules may need to be re-calibrated against the atmosphere-ocean general circulation models (AOGCMs). In this regard, the simple climate module of DICE 2016 in this paper is not tested or re-calibrated against AOGCMs.
4. Results and Discussion
In this section, the results for the scenarios without learning and their major differences are presented first. Then, the learning scenarios are presented, and their results are compared with the no-learning cases to clarify the reasoning behind the major effects.
4.1. No Learning Scenarios
Figure 2 shows the atmospheric temperature (ATEM) evolution from 2015 to 2315 for five scenarios: baseline, optimal, CCP, New CRA, and Old CRA. In each scenario, the expected welfare is maximized according to the descriptions in the previous sections. The respective carbon prices in dollars per ton of carbon dioxide ($/tCO2) for the scenarios are presented in Figure 3. Because the damages are ignored in the baseline scenario, it represents the situation in which the temperature soars highest, to 12 °C above its preindustrial level for the highest climate sensitivity (see panel a in Figure 2). The respective carbon price in the baseline scenario is the lowest among all scenarios (indeed, the baseline carbon price is limited to a predefined baseline path, whereas in the other scenarios, the carbon price is free to increase). In the optimal scenario, where the damages are taken into account, ATEM is reduced compared to the baseline and peaks at around 8 °C above its preindustrial level for the highest climate sensitivity (see panel b in Figure 2). The carbon price is higher in the optimal scenario than in the baseline scenario. In line with economic principles, the higher the carbon (emission) price, the lower the temperature.
In the current version of the PRICE model, the combination of a 2 °C target and a 50% safety is one of the most extreme feasible combinations; that is, any lower (temperature) target or higher safety would be infeasible. As shown in panel c of Figure 2, CCP poses an extreme case because, to reach such a scenario, emissions must decrease very quickly and hence the carbon price must increase extremely fast. The carbon price for the CCP scenario rises to more than 500 $/tCO2 (much higher than in the baseline and optimal scenarios) within one period (from 2015 to 2020) and gradually decreases after that. The temperature evolution in panel d of Figure 2 for the New CRA scenario and its carbon price path in Figure 3 almost perfectly mimic the respective CCP paths. However, as shown in panel e of Figure 2, Old CRA comes with a rapid reduction in temperature after the peak in 2165; while the Old CRA carbon price follows that of CCP and New CRA until 2165, it jumps after that, decreasing emissions even further. Figure A1 in Appendix A shows the ATEM evolution from 2015 to 2510 for New and Old CRA.
4.2. Learning Scenarios and Value of Information
Figure 4 shows the temperature evolution in six scenarios. The upper panels use the original DICE damage function, where the risk function is not activated. The middle panels use the calibrated risk in New CRA, and damages are off. The lower panels employ the calibrated risk in Old CRA without damages. The left panels demonstrate a situation where the new information on climate sensitivity arrives in 2020, while in the right panels, the new information arrives in 2060.
Figure 5 shows the carbon prices in the respective scenarios.
Table 1 shows the percentage change of the expected value of welfare, utility, damages, old risk, and new risk across all the scenarios compared to the baseline scenario. The percentage changes are calculated based on
Table A1 in the
Appendix A, which shows the magnitude of the expected value of welfare, utility, damages, old risk, and new risk for each scenario. The amount of Safety achieved in each scenario is given in the last column of
Table A1. Throughout this section, when referring to any of the measures in the table, the reference is to the expected value of that measure. Notice that while welfare refers to the objective (goal) in each scenario, utility refers to only that part of the welfare that does not include risk. Hence, welfare and utility are the same for baseline, optimal, CCP, and learning scenarios that use damages. Still, they differ for risk-based scenarios in which the risk is part of the to-be-maximized welfare.
A general rule of thumb can be suggested: the sooner the information arrives, the better. “Better” in this context means that the optimized welfare increases when the new information arrives. The increase in welfare is generally due to decreases in the temperatures for high climate sensitivity states of the world and increases in the temperatures for low climate sensitivity states of the world. This effect is fairly visible when the associated carbon prices in
Figure 5 are compared with the carbon price in
Figure 3.
However, the magnitudes of change and the dominant motivations for such changes differ for each scenario, which implies that the choice of damage or risk function matters. The reasoning for each scenario is as follows. Suppose the disutility measure is a damage function (panels a and b). In that case, the benefit of decreasing a higher temperature (for example, from 7 °C to 6 °C) exceeds the harm of increasing a lower temperature by the same amount (for example, from 1 °C to 2 °C). The reason is the strictly convex form of the damage function: it induces increasing magnitudes of harm when temperatures rise at the same rate (indeed, this is the definition of a strictly convex function). However, this effect vanishes when the disutility function is linear, as in the old and new risk. Yet, the kink at 2 °C in the old risk function has a dominant effect: with the old risk function, all the temperature paths below 2 °C can increase up to 2 °C without any risk (i.e., no harm for such increases), while the higher temperature paths, which already decline towards the end of the simulation, decrease slightly further.
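This asymmetry can be verified with a one-line computation comparing a quadratic (strictly convex) damage metric with a linear one; the coefficient is illustrative.

```python
# Hedged numeric check: with strictly convex (quadratic) damages, cooling a hot
# state by 1 degC saves more than warming a cool state by 1 degC costs; with a
# linear metric the two exactly cancel. The coefficient a is illustrative.
a = 0.00236
quadratic = lambda T: a * T ** 2
linear    = lambda T: a * T

saving_hot = quadratic(7.0) - quadratic(6.0)   # benefit of cooling 7 -> 6 degC
cost_cool  = quadratic(2.0) - quadratic(1.0)   # harm of warming 1 -> 2 degC
print(saving_hot, cost_cool)                   # 13a vs 3a: net gain from the swap

print(linear(7.0) - linear(6.0), linear(2.0) - linear(1.0))  # both equal a: no net gain
```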
What dominant effect drives the changes in the shape of the temperature paths after learning in the case of the new risk? The function used in the new risk has neither a strictly convex form nor a kink at 2 °C. This means that the harm of the increased risk from a given increase in temperature in the lower climate sensitivity states would be offset by an equal decrease in temperature in the higher climate sensitivity states of the world. Instead, the role of climate sensitivity in the model justifies such changes: if the climate sensitivity is higher, a fixed change in emissions induces quicker temperature changes, and vice versa for the lower climate sensitivities.
The shape of the carbon prices in
Figure 5 reveals an interesting phenomenon. In the learning cases with damages and with the old risk, the carbon prices start to diverge after learning. However, if the new risk is used, the carbon prices keep their position until 2165 (when the cap on MIU increases from 1 to 1.2) and diverge only slightly after that. Indeed, this phenomenon implies that the benefit from learning depends heavily on the functional form of the disutility measures in the model. A policy recommendation that can be derived from this is that if the goal is to keep the temperature below a specific limit (here, 2 °C) with a particular minimal likelihood (here, 50%), then the prospect of new information on climate sensitivity should not substantially influence the carbon price. Hence, quick and significant action must be taken, in line with the no-learning scenarios.
4.3. Value of Information
According to the results, welfare in the optimal scenario is almost 0.5% higher than in the baseline because the damages are reduced by more than 45%. Likewise, the risks decrease under both the old and the new specification. While the Safety in the baseline scenario is less than 1% (almost all states of the world transgress 2 °C), it increases to nearly 10% in the optimal scenario. In both learning cases, receiving new information increases welfare by 0.56% and 0.54%, respectively, for learning in 2020 and 2060 compared to the baseline scenario; that is, by only 0.1% and 0.08%, respectively, compared to the optimal scenario. Even though such increases are achieved by reducing damages, they worsen the Safety (it is almost 0% in both cases). This effect is visible in panels a and b of Figure 4, where the temperatures for the higher climate sensitivities are lower than in the optimal case, whereas the temperatures for the lower climate sensitivities rise to around 2 °C or slightly above. The increases in welfare by 0.1% and 0.08% in these scenarios are the value of information against the no-learning (optimal) scenario. In addition, the value of receiving information earlier can be measured by comparing the two learning scenarios: if the information arrives in 2020 instead of 2060, the benefit is almost 0.02% of the welfare in the baseline. In the baseline, optimal, and learning scenarios, Safety falls far short of the 50% target.
In the CCP scenario, perfect compliance with the target (precisely 50%) is achieved. Yet, quick and significant actions are necessary for such compliance, implying a considerable decrease in temperature paths, as shown in
Figure 2 panel c. CCP requires a 1.8% reduction in welfare (and utility), which reduces the damages by about 80%. Depending on the type of risk, the risk is reduced by nearly 59% and 87% for the New and Old risk, respectively. New CRA and Old CRA both have the same safety of almost 50%. The deviation from the Safety target in these scenarios stems from the ±0.3% tolerance given to the numerical optimizer in the calibration step, which is why Safety is not exactly 50% in the CRA analyses. Measured by both the old and the new risk, the temperature paths in New CRA carry more risk than those in Old CRA. Likewise, the damages in New CRA are larger than in Old CRA. Although the utility measure is higher for New CRA than for Old CRA, the overall effect is that the welfare in Old CRA is higher than the welfare in New CRA.
In the case of the new risk, the values of information are almost negligible. The value of receiving information in 2060 and 2020 is nearly 0% (rounded to 2 decimals) and 0.07%, respectively, compared to the CCP scenario. Notice that the simulation is done with an accuracy of 1 × 10. Considering this, the value of information in any learning scenario is positive, though by a tiny amount. This effect is visible in Figure 5 panels b and c, where the carbon prices mimic those of the no-learning scenario in
Figure 3 before 2165 and diverge slightly after that. For the case of the old risk, however, the value of learning in 2020 compared to a no-learning scenario is 1.6% of baseline welfare, and the value of learning in 2060 is nearly 1.2%.
4.4. Discussion
This paper may be appealing to those who consider target-based approaches appropriate for decision-makers when there is a lack of sufficient knowledge about damage functions. In this regard, this paper acknowledges the usefulness of Cost-Risk Analysis (CRA) as a hybrid decision-analytic framework that joins the ability of Cost-Effectiveness Analysis (CEA) to comply with climate targets with the formal capability of Cost-Benefit Analysis (CBA) to employ an expected-welfare optimization framework and thus handle scenarios with possible learning. CRA trades off the risk of transgressing a climate target against the expected welfare losses caused by abatement efforts.
CRA is capable of calculating the value of information. Neubersch et al. [
5] implemented the first application of CRA in an integrated assessment model (IAM) and suggested that the risk function must be non-concave to prevent emission behaviors mimicking a baseline scenario after learning. Such behavior would violate the preferences of those seeking to limit temperature increases. Held [
35] reviewed the steps needed to apply CRA, in which a linear risk function is kinked at the same target for which compliance is measured. A few studies have applied that method (Old CRA) after Neubersch et al. [
5].
This paper highlights that the Old CRA cannot mimic CCP perfectly, as it abates emissions more than necessary. Such unnecessary abatement violates the preference order of a decision-maker who follows the 2 °C warming policy and wants the highest economic utility possible while maintaining the guardrail. Furthermore, this paper suggests that the risk and safety targets need not be the same. For example, there can be risks at positive temperatures below 2 °C even when the safety is measured at 2 °C. Hence, this paper proposes an axiomatically more appealing version of the method (New CRA) to satisfy climate- and economics-focused decision-makers simultaneously.
New CRA is applied in an updated version of the model PRICE [
28] as a probabilistic version of DICE 2016 [
36]. The model is further extended to conduct CCP and CRA analysis. The attractiveness of using DICE 2016 as the base for the probabilistic model is threefold: Firstly, the model is freely available. Secondly, the model is computationally inexpensive. Thirdly, the model readily calculates the carbon price, an important policy instrument. At the time of writing, none of the CRA studies have reported carbon prices.
Drawing on the procedure suggested by Held [
35], a new procedure to universally apply New CRA is proposed. After calibrating time-specific trade-off parameters, it is shown that the new method is capable of mimicking CCP perfectly at all times for a demonstrative scenario that complies with 2 °C with a probability of 50%. This way, the difficulty of finding an explicit value function in line with society’s value system, for example, protecting the ecological environment, is circumvented when the extra costs are avoided simultaneously. Furthermore, this way, the analytic proof of CRA mimicking CCP before the temperature peak by Held [
35] is embedded into the calibration goal at all times. Notably, there is not necessarily a temperature peak in New CRA (as there is no temperature peak in CCP).
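The sketch below outlines, in schematic form, one way such a period-by-period calibration could be organized as a bisection on each time-specific trade-off parameter; solve_cra and ccp_temperature are hypothetical stand-ins for the model's optimization routine and the pre-computed CCP temperature path, and this is not the authors' actual calibration code.

```python
# Hedged, schematic sketch of calibrating time-specific trade-off parameters so
# that the CRA-optimal temperature path tracks the CCP path in every period.
# 'solve_cra' and 'ccp_temperature' are hypothetical stand-ins for the model's
# optimization routines; this is not the authors' actual calibration procedure.
import numpy as np

def calibrate_tradeoffs(solve_cra, ccp_temperature, n_periods, tol=1e-3,
                        lo=0.0, hi=1e3, max_iter=50):
    alphas = np.full(n_periods, 1.0)
    for t in range(n_periods):
        a_lo, a_hi = lo, hi
        for _ in range(max_iter):                      # bisection on alpha_t
            alphas[t] = 0.5 * (a_lo + a_hi)
            gap = solve_cra(alphas)[t] - ccp_temperature[t]
            if abs(gap) < tol:
                break
            # too warm in period t -> weight the risk term more heavily, and vice versa
            a_lo, a_hi = (alphas[t], a_hi) if gap > 0 else (a_lo, alphas[t])
    return alphas
```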
It must be emphasized that the carbon price calculated in this article corresponds to a global carbon price, which is in turn highly influenced by the model assumptions. Therefore, learning scenarios using other models, such as a multi-regional multi-sectoral model, would bring further insights into CRA scenarios and the value of information on climate sensitivity. This research path is left for future research.
5. Conclusions
The main focus of this paper is to introduce a new method of applying CRA (New CRA) that avoids the extra welfare costs incurred under the Old CRA scenario. The article suggests an effective calibration procedure in which time-specific trade-off parameters are used to mimic the CCP scenario. The paper employs an updated version of the model PRICE as a probabilistic version of DICE 2016, further extended to compute CCP and CRA. The carbon prices for CRA scenarios have been measured for the first time using the model PRICE.
The new specification of CRA is applied to the learning scenarios to measure the value of information on climate sensitivity and to compare those results with simulations employing Old CRA and using the original damage function of DICE 2016 in its probabilistic version. The value of information is an important measure that describes how much society should invest in researching the actual value of climate sensitivity. For a thorough investigation, simulations employing all decision analytic frameworks listed in the paper are compared concerning their levels of risk, damage, and economic indicators such as welfare and utility.
The results emphasize that the assumption about the functional form representing risk is crucial in deciding whether funding aimed at reducing the uncertainty about climate response parameters over the following decades is valuable. If damages are considered, it might be worthwhile to invest in projects to determine the accurate probability distribution of climate sensitivity, as the benefits are between 0.1% and 0.08% of baseline welfare for new information arriving between 2020 and 2060. If the Old CRA is applied, the benefits of new information arriving between 2020 and 2060 are between 1.6% and 1.2% of baseline welfare. These numbers support funding for reducing such uncertainty. However, if New CRA is applied, the benefits are much lower: the value of receiving information in 2020 and 2060 is almost 0.07% and less than 0.01% of welfare in the baseline scenario, respectively.
The carbon-price results for these scenarios support these findings. If damages or the old risk are used, the carbon price diverges after learning. However, if New CRA is applied, the carbon prices in the learning scenarios are very close to those in the no-learning case, suggesting that the mitigation effort does not change in response to new information, especially over the next few decades. Overall, the results suggest maximum mitigation, which supports the findings of Weitzman et al. [
37].
This paper re-emphasizes that until improved damage and impact functions are available, New CRA can be employed as an alternative to CBA, CEA, and CCP. Such a newly calibrated CRA can also analyze delayed-action scenarios where the late action may violate the targets. These research paths are left for the future.