Article

Machine Learning a Probabilistic Structural Equation Model to Explain the Impact of Climate Risk Perceptions on Policy Support

by Asim Zia 1,2,*, Katherine Lacasse 3, Nina H. Fefferman 4,5,6, Louis J. Gross 4,5 and Brian Beckage 2,7

1 Department of Community Development and Applied Economics, University of Vermont, Burlington, VT 05405, USA
2 Department of Computer Science, University of Vermont, Burlington, VT 05405, USA
3 Department of Psychology, Rhode Island College, Providence, RI 02908, USA
4 Department of Ecology and Evolutionary Biology, University of Tennessee, Knoxville, TN 37996, USA
5 Department of Mathematics, University of Tennessee, Knoxville, TN 37996, USA
6 National Institute for Modeling Biological Systems, University of Tennessee, Knoxville, TN 37996, USA
7 Department of Plant Biology, University of Vermont, Burlington, VT 05405, USA
* Author to whom correspondence should be addressed.
Sustainability 2024, 16(23), 10292; https://doi.org/10.3390/su162310292
Submission received: 18 August 2024 / Revised: 18 November 2024 / Accepted: 22 November 2024 / Published: 25 November 2024

Abstract
While a flurry of studies and Integrated Assessment Models (IAMs) have independently investigated the impacts of switching mitigation policies in response to different climate scenarios, little is understood about the feedback effect of how human risk perceptions of climate change could contribute to switching climate mitigation policies. This study presents a novel machine learning approach, utilizing a probabilistic structural equation model (PSEM), for understanding complex interactions among climate risk perceptions, beliefs about climate science, political ideology, demographic factors, and their combined effects on support for mitigation policies. We use machine learning-based PSEM to identify the latent variables and quantify their complex interaction effects on support for climate policy. As opposed to a priori clustering of manifest variables into latent variables that is implemented in traditional SEMs, the novel PSEM presented in this study uses unsupervised algorithms to identify data-driven clustering of manifest variables into latent variables. Further, information theoretic metrics are used to estimate both the structural relationships among latent variables and the optimal number of classes within each latent variable. The PSEM yields an R2 of 92.2% derived from the “Climate Change in the American Mind” dataset (2008–2018 [N = 22,416]), which is a substantial improvement over a traditional regression analysis-based study applied to the CCAM dataset that identified five manifest variables to account for 51% of the variance in policy support. The PSEM uncovers a previously unidentified class of “lukewarm supporters” (~59% of the US population), different from strong supporters (27%) and opposers (13%). These lukewarm supporters represent a wide swath of the US population, but their support may be capricious and sensitive to the details of the policy and how it is implemented. 
Individual survey items clustered into latent variables reveal that the public does not respond to “climate risk perceptions” as a single construct in their minds. Instead, PSEM path analysis supports dual processing theory: analytical and affective (emotional) risk perceptions are identified as separate, unique factors, which, along with climate beliefs, political ideology, and race, explain much of the variability in the American public’s support for climate policy. The machine learning approach demonstrates that complex interaction effects of belief states combined with analytical and affective risk perceptions, as well as political ideology, party, and race, will need to be considered for informing the design of feedback loops in IAMs that endogenously feed back the impacts of global climate change on the evolution of climate mitigation policies.

1. Introduction

Global climate change is projected to have a variety of local-to-regional scale impacts on human societies and ecosystems. The severity of these impacts (risk magnitude) depends upon the extent to which nations at the global scale mitigate Greenhouse Gas (GHG) emissions through more effective implementation of climate policies [1]. While a flurry of studies and Integrated Assessment Models (IAMs) have independently investigated the impacts of switching mitigation policies in response to different climate scenarios, little is understood about the feedback effect of how human risk perceptions of climate change could contribute to switching climate policies for a substantive reduction in GHGs [2,3,4]. Standard Global Climate Models assume a disconnect between risk/adaptation and mitigation policies. IAMs typically ignore human risk perceptions and focus on economic dynamics to predict the effect of adaptation on mitigation policies [5,6,7]. While public opinion is a strong driver of policy change in democratic societies [8,9], the complex interactions of climate risk perceptions, beliefs about climate science, and their combined effects on support for policies designed to mitigate climate change are not very well understood. A recent systematic review of 46 empirical studies [10], for example, found that factors influencing policy support could be divided into three general categories: (i) social psychological factors such as risk perceptions, beliefs, political ideology, values, knowledge, and emotions [11,12,13,14,15,16]; (ii) perception of climate policy and its design, such as carrots vs. sticks, perceived efficacy, and fairness [11,17,18,19]; and (iii) contextual factors such as trust, norms, economic, political and geographic aspects, and media events and communications [20,21,22,23,24].
A major finding of the systematic review was that the factors within and across these three categories “cannot be considered in total isolation, as factors are connected and may interact in various ways”, yet little is known about how these factors and drivers of policy support interact [10] (page 868).
This study presents a novel machine learning approach that utilizes a probabilistic structural equation model (PSEM) to analyze the complex interactions among climate risk perceptions, beliefs about climate science, political ideology, demographic factors, and their combined effects on support for mitigation policies. With foundations in Bayesian Network theory [25,26] and information theory [27], PSEMs can use the principle of Kullback–Leibler divergence [28] to rank the relative importance of factors that explain structural drivers and dynamics of support for climate policies among different segments of populations. An advantage of this approach is that it does not require a priori assumptions about which of the many possible factors should be considered as the underlying latent variables in a model, but rather provides a formalized method to determine which are most appropriate to include.
To examine how social psychological factors such as risk perceptions, beliefs, and political ideology drive policy support, researchers frequently design surveys. Respondents answer multiple questions to measure each variable of interest, and then researchers test their hypotheses about which factors are related to higher or lower policy support. Using these survey datasets, the theoretical predictions are tested with regression modeling, mediation analyses, or relative weight analyses by estimating the relative predictive weight of each variable, e.g., [29,30,31,32] or with structural equation modeling (SEM) to test the fit of the model structure, e.g., [12,14,33]. Either way, the theory itself is used to determine the model structure that the data are fit to. Similarly, more recent meta-analyses have also merged findings from dozens of studies together to test the relationships between variables by conducting structural equation models (SEMs), for which the structure of the model was determined a priori [34,35].
Critically, however, traditional SEMs rely on a priori groupings of measured variables to analyze their combined influence as latent variable drivers of policy support. The researchers group measured variables into latent variables based on a theory, and then analyze how correlated each latent variable is with each policy outcome. As a result, these models estimate correlations between these hypothesized latent variables and policy outcomes, but do not naturally serve as independent generators of hypotheses or new theories.
To extend the capability of modeling approaches to generate new theories, we apply other methodological tools of machine learning [26,28,36,37,38,39]. Machine learning allows “generative” induction of PSEMs. That is, the models derived do not rely upon a priori assumptions about key latent variables to include but rather use the data itself to develop them. These PSEM methods can then elucidate novel patterns across the data, as measured variables from surveys may combine in unexpected ways that cross explanatory categories of existing theories to drive policy support. Applying machine learning to survey data acts as a hypothesis generator and allows us to reconceptualize, integrate, and strengthen the predictive power of existing theories to explain and predict climate policy support.
Calls to expand the methodological diversity in research on understanding public support for climate policy [40] note that methods such as machine learning have been underutilized, with some exceptions [41,42,43]. For much of the past work examining public opinions, the initial explanatory theorizing on risk perception and subsequent behavioral and policy choices assumed people were making rational, calculated decisions. They would conduct cost-benefit analyses, weighing the probability of various negative and positive outcomes, using these calculations to come to conclusions (see review [44]). However, the importance of emotions and other affective processes in risky decision-making is now quite clear (for a review, see [45]). Research on the affect heuristic finds that positive affect tends to lead people to perceive high benefits and low risk, whereas negative affect leads to the assumption of low benefits and high risks [46]. The risk-as-feelings hypothesis proposes that emotions play at least as large a role as rational accounting in making decisions about a risk [47], and research has found that these emotions often drive different decisions than the rational calculations [48]. Pre-existing fears can lead to risks being more easily amplified through media coverage and public debates about the issue [49]. Climate change risk perceptions are an area in which both affective and analytical risk processes play important roles [50,51] and both routes help predict policy support [30,32]. Analytical risk perception processes can be distinguished from affective risk perception, and under the dual processing model, both types of risk perception are used to process climate risk and determine individuals’ support or opposition for climate mitigation policies.
The machine learning PSEM approach employed in this paper can help disentangle the relative influence of analytical versus affective risk perceptions. Other work using machine learning has highlighted the insights that this approach can offer. For example, Hasanaj and Stadelmann-Steffen [41] utilized a random forest machine learning technique on Swiss and U.S. survey data to estimate the factors that best predict support for a carbon tax. They found that small variations in the risk perceptions related to climate change led to large variations in policy support, suggesting that communications aimed at emphasizing the risks from continuing climate change may help overcome people’s concerns about the costs of mitigation and the risks related to the solutions.
We analyze the complex interactions among analytical versus affective climate risk perceptions, beliefs, political ideology, demographic factors, and their combined effects on support for climate policies using a PSEM derived from the publicly available mixed pool “Climate Change in the American Mind” (CCAM) survey dataset. The CCAM dataset was collected between 2008 and 2018 (N = 22,416) from a representative sample of the U.S. population each year [52]. We provide details on data and methodology further below. The estimated PSEM enables us to generate an integrative model that combines multiple factors to explain the observed policy support. Further, we use the generative structure of measured and latent variables derived from the PSEM to estimate a standard SEM [53,54] and evaluate its fitness with the observed data. The PSEM yields an R2 of 92.2%, which is a substantial improvement over a traditional regression analysis-based study applied to the CCAM dataset [24] that identified five manifest variables to account for 51% of the variance in policy support. Novel theoretical findings derived from machine learning-derived PSEM approaches can inform the design of feedback loops in Global Climate Models (GCMs) and Integrated Assessment Models (IAMs) for dynamically updating the evolution of climate policies in response to the coevolving emergence of climate risk perceptions under different pathways of global climate change. Findings from our study demonstrate that feedbacks embedded in GCMs and IAMs driven purely by economic rationality theory will misrepresent the evolution of national climate policies, as analytical risk perceptions explain a relatively smaller fraction of the variance in support for climate policy in the case of the USA.
The machine learning approach also demonstrates that complex interaction effects of belief states combined with analytical and affective risk perceptions, as well as political ideology, party, and race, will need to be considered for informing the design of feedback loops in GCMs and IAMs that endogenously feed back the impacts of global climate change on the evolution of climate mitigation policies.

2. Materials and Methods

2.1. Sample and Dataset

We analyze the publicly available mixed-pool CCAM survey dataset [52] collected between 2008 and 2018 (N = 22,416) to identify the PSEM and test four alternate specifications of SEMs derived from PSEM. We chose the CCAM dataset since this survey systematically measured climate change beliefs and risk perceptions as well as climate policy support, which were key variables necessary for best understanding public support for climate policy.
We sampled 33 public opinion and sociodemographic questions from the CCAM survey dataset (Table 1). We selected the 14 items depicting public opinion statements from the dataset that had at least N = 17,000 responses across all years of available data, from 2008 to 2018. The 14 public opinion statements were originally designed by the survey team to measure a variety of constructs, including beliefs (items 1–3), risk perceptions (items 4–10), policy support (items 11–13), and behaviors (item 14) related to climate change [52]. We also selected 19 sociodemographic items that assessed the survey respondents’ gender, age, race, political ideology, political party, and household factors, as well as the U.S. region in which they live and the year of the survey. Table 1 provides the variable name, survey question, response options, and descriptive statistics for these 33 survey items. Tables S1 and S2 in the Supporting Information (SI) show frequencies of responses for these 33 survey items.
Figure 1 shows the observed distribution of ideology plotted against the climate policy variable, measuring the level of support for regulating CO2 emissions. While very liberal respondents strongly support the regulation of CO2 emissions, very conservative respondents “somewhat oppose” the proposed climate policy. Somewhat liberal, somewhat conservative, and moderate respondents weakly support the proposed climate policy. Figure 2 shows that the respondents who are affiliated with the Republican party, refuse to identify their political party, or designate their party as “other” are generally not very worried about global warming. The probability density functions have, however, very different shapes for these three types of respondents. In contrast, Figure 2 shows that the majority of Democrats, Independents, and respondents with no party affiliation are “somewhat worried” about global warming. Some respondents, however, are also very worried and not at all worried across all party affiliations. Figure 3 shows the distribution of support for regulating CO2 emissions over multiple waves of the survey from 2008 to 2018. Although there is some variability in the probability density functions for each level of support during this observation period, the sample median value remains at the “somewhat support” level.

2.2. PSEM Machine Learning

Methodologically, Structural Equation Modeling (SEM) is a statistical technique for testing and estimating causal relations using a combination of statistical data and qualitative causal assumptions, a definition originally articulated by the geneticist Sewall Wright [55], refined by the economist Trygve Haavelmo [56] and the policy scientist Herbert Simon [57], and formally defined by Judea Pearl [58]. Structural Equation Models (SEMs) allow both confirmatory and exploratory modeling, meaning they are suited for both theory testing and theory development. While Probabilistic Structural Equation Models (PSEMs) are conceptually similar to traditional SEMs, PSEMs are based on a hierarchical Bayesian network structure as opposed to a series of equations. More specifically, PSEMs can be distinguished from SEMs in terms of key characteristics [28]: (i) All relationships in a PSEM are probabilistic—hence the name—as opposed to the deterministic relationships plus error terms of traditional SEMs. (ii) PSEMs are nonparametric, which facilitates the representation of nonlinear relationships, as well as relationships between categorical variables. (iii) The structure of PSEMs is partially or fully machine-learned from data. In addition, we posit that (iv) PSEMs can be used as exploratory tools to machine learn the relationship of manifest variables with latent variables; (v) PSEMs apply machine learning techniques to estimate best-fit structural relationships among latent variables; and (vi) PSEMs apply machine learning to identify the optimal number of classes within each latent variable.
In this paper, we follow the PSEM procedure, as developed by [28] and implemented in BayesiaLab Version 10.2. The PSEM procedure involved four sequential estimation tasks, summarized in Table 2, and explained in more detail below.
Step 1: Estimation of latent variables through unsupervised hierarchical Bayesian network clustering of respondent beliefs.
Unsupervised learning is applied to discover the strongest relationships between the manifest variables, which are measured through direct survey questions (Table 1). Six unsupervised structural learning algorithms were applied to the 14 manifest variables that measured respondent beliefs/opinions. SI Figures S1–S6 show the learned network structure, respectively, for the six algorithms and their associated Minimum Description Length (MDL) scores. As explained in [28], the MDL score is a two-component score, which has to be minimized to obtain the best solution. The MDL score has long been used in the machine learning community to estimate the number of bits required for representing a “model” and the “data given this model”. In this machine learning application, the “model” is a Bayesian network, consisting of a graph and probability tables. The second component is the log-likelihood of the data given the model, which is inversely proportional to the probability of the observations (data) given the Bayesian network (model). The following six unsupervised structural learning algorithms, available in BayesiaLab Version 10.2, were applied and their associated MDL scores measured: (1) Max spanning tree + taboo learning (final MDL score = 435,672.512); (2) Taboo learning (final MDL score = 435,622.442); (3) EQ + taboo learning (final MDL score = 435,598.507); (4) Taboo-EQ learning (final MDL score = 435,428.195); (5) SopLEQ + taboo learning (final MDL score = 435,672.508); (6) Taboo Order learning (final MDL score = 434,211.298). The Taboo Order learning network (Figure S6) with the lowest MDL score was selected for further analysis. The Taboo Order algorithm examines node order to assess the most parsimonious causal relationships between variables.
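To make the two-component score concrete, the following is a minimal sketch of an MDL-style score for a discrete model. It is illustrative only: BayesiaLab's actual scoring of Bayesian networks (graphs plus probability tables) is more elaborate, and the response counts below are hypothetical.

```python
import math

def mdl_score(counts, num_params, n):
    """Two-part MDL: bits to encode the model plus bits to encode
    the data given the model (negative log2-likelihood)."""
    # Model description length: (log2 n)/2 bits per free parameter
    model_bits = 0.5 * math.log2(n) * num_params
    # Data description length: -sum over observations of log2 P(obs)
    total = sum(counts.values())
    data_bits = -sum(c * math.log2(c / total) for c in counts.values())
    return model_bits + data_bits

# Toy example: a single categorical variable with hypothetical observed
# counts, modeled by its empirical distribution (k - 1 free parameters).
counts = {"oppose": 130, "lukewarm": 590, "support": 280}
n = sum(counts.values())
score = mdl_score(counts, num_params=len(counts) - 1, n=n)
```

Minimizing such a score trades off model complexity (the first term grows with every extra arc or parameter) against fit (the second term shrinks as the model assigns higher probability to the data), which is why the lowest-MDL network is preferred above.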
Next, variable clustering, based on the learned Bayesian network with the lowest MDL score, is applied to identify groups of variables that are strongly connected. In PSEM theory, the strong intracluster connections identified in the variable clustering step are ascribed to measure a “hidden common cause”. Finally, for each cluster of variables, a data clustering algorithm is applied to the variables within the cluster only to induce a latent variable that represents the hidden cause.
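The intracluster connection strength driving variable clustering can be illustrated with mutual information, a standard information-theoretic measure of association between discrete variables. This is a sketch only: BayesiaLab's clustering criterion is more involved, and the toy survey responses below are hypothetical.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Mutual information (in bits) between two discrete variables,
    estimated from paired observations."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        mi += p_joint * math.log2(p_joint / ((px[x] / n) * (py[y] / n)))
    return mi

# Hypothetical responses: two belief items that track each other closely
# should score higher MI than a belief item paired with an unrelated item.
happening = ["yes", "yes", "no", "yes", "no", "yes", "yes", "no"]
consensus = ["yes", "yes", "no", "yes", "no", "yes", "no", "no"]
gender    = ["f", "m", "f", "m", "f", "m", "f", "m"]
```

Variables whose pairwise association is strong would land in the same cluster, consistent with the "hidden common cause" interpretation described above.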
Step 2: Estimation of Bayesian network of latent variables that minimizes the description length.
Unsupervised learning on the latent variables identified in Step 1 was implemented to identify the most likely relationships among the latent variables. At this stage, only the Taboo and EQ algorithms can be tested. Both Taboo and EQ produced similar results, shown in the left portion of Figure 4, without the demographic variables. Figure S7 shows the sensitivity of the latent class model to the choice of the maximum clustering size parameter equal to 5, 6, or 7. We chose the model with the maximum clustering size equal to 5 (Figure S7, panel a; Figure S8), which predicted four latent variables with relatively lower MDL scores compared with maximum clustering size parameter values of 6 or 7.
Next, we reviewed the wording of the measured survey items included in each latent variable. We labeled each latent variable with a name that best matched the survey items’ CCAM intended constructs, as well as considering what theoretically united the manifest variables measured through the survey items. Then, we reviewed the predicted classes for each latent variable. We labeled each class based on the survey respondents’ common answers to the survey items included within the latent variable. Table S3 shows a comparison of survey items intended by CCAM constructs and PSEM-derived latent variables. Figures S10–S13 show marginal and conditional probability distributions of estimated classes within each latent variable. Section 3.2 provides a more detailed interpretation of the marginal and conditional probability distributions of estimated classes within each of the four measured latent variables.
Step 3: Linking latent variable PSEM with sociodemographic measured variables.
In this step, we applied the Taboo Order algorithm to generate the final PSEM shown in Figure 4. Figure S9 displays the node forces for the same final PSEM. Node force is the sum of all incoming and outgoing arc forces from a node. Arc force is computed by using the Kullback–Leibler Divergence (KLD), which compares two joint probability distributions, P and Q, defined on the same set of variables X. In this step, the latent variable conditional dependencies and causal relationships with respect to the socio-demographic measured variables were estimated.
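The KLD computation underlying arc force can be sketched as follows. The distributions below are hypothetical marginals over policy-support classes; in BayesiaLab, P and Q are the joint distributions of the network with and without the arc under evaluation.

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(P || Q), in bits, between two
    discrete distributions defined over the same set of outcomes."""
    return sum(p[x] * math.log2(p[x] / q[x]) for x in p if p[x] > 0)

# Arc force compares the distribution implied by the network WITH an
# arc (P) against the same network with that arc removed (Q).
with_arc    = {"support": 0.55, "lukewarm": 0.35, "oppose": 0.10}
without_arc = {"support": 0.40, "lukewarm": 0.40, "oppose": 0.20}
force = kl_divergence(with_arc, without_arc)
```

Because KLD is zero only when the two distributions coincide, an arc carrying a genuine dependency always yields a positive force, and summing incoming and outgoing arc forces gives the node force reported in Figure S9.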
Step 4: Calibration and k-fold validation of PSEM with target variable as policy support.
In k-fold validation, a dataset is divided into k folds, where in each fold, the data is randomly split into a training and testing set. The training set is used to fit the model, and the testing set is used to determine the goodness of model fit and avoid model overfitting. Table S4 shows model calibration and validation results after Step 2. This includes the estimation of the Bayesian network of latent variables that minimizes MDL. Table S5 shows model calibration results after Step 3, after linking the latent variable PSEM with sociodemographic measured variables.
This PSEM started with an estimated R2 value of 92.9% on the training dataset. We used k-fold validation with k = 10 and a 70%–30% split into training and test datasets. Table S5 shows that the final estimated PSEM has an R2 of 92.2%, a mean precision rate of 96.8%, mean reliability of 96.0%, mean ROC index of 99.8%, and mean calibration index of 78.8%. The k-fold validation shows slightly lower skill in predicting strong opposers at 93.8%, but similar skill for lukewarm supporters at 97.2% and for strong supporters at 97.2%.
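The validation scheme can be sketched in a few lines, here with a trivial majority-class predictor standing in for the PSEM. The class labels and proportions are hypothetical stand-ins that loosely echo the three policy-support classes; the actual validation was performed in BayesiaLab.

```python
import random

def kfold_random_splits(data, k=10, train_frac=0.7, seed=0):
    """Repeat k random 70/30 train/test splits; fit a trivial
    majority-class model on each training set and score it on the
    held-out test set (a stand-in for fitting the PSEM)."""
    rng = random.Random(seed)
    accuracies = []
    for _ in range(k):
        shuffled = data[:]
        rng.shuffle(shuffled)
        cut = int(train_frac * len(shuffled))
        train, test = shuffled[:cut], shuffled[cut:]
        majority = max(set(train), key=train.count)  # the "model"
        accuracies.append(sum(y == majority for y in test) / len(test))
    return sum(accuracies) / k

# Hypothetical labels loosely echoing the three classes.
labels = ["lukewarm"] * 59 + ["support"] * 27 + ["oppose"] * 14
mean_acc = kfold_random_splits(labels, k=10)
```

Averaging the test-set skill over the k splits, as done here, is what guards against overfitting: a model that merely memorized its training set would score well in-sample but poorly on the held-out folds.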

2.3. Estimating a SEM Imputed from the PSEM

We estimated SEMs using structural equation modeling algorithms in STATA 17 for Windows (StataCorp LLC, College Station, TX, USA), specifically following SEM algorithms developed by [53,54]. In SEM#1, we applied standard maximum likelihood (the ML method of STATA's sem command), in which observations with missing values were excluded from the analysis and no sampling weights (the weight_aggregate variable in the CCAM dataset) were assigned to observations. SEM#2 is similar to SEM#1 except that sampling weights are assigned to the observations. In SEM#3, we applied the maximum likelihood with missing values (MLMV) method of the sem command, and no sampling weights were assigned to observations. SEM#4 is similar to SEM#3, except that sampling weights are assigned to the observations.
Tables S7–S10, respectively, show the output from SEM#1 to SEM#4, including STATA command code, estimated standardized coefficients, SEM fitness statistics, and standardized direct, indirect, and total effects. The analysis involved a standard linear SEM with a maximum likelihood estimation procedure and an observed information matrix. The measurement model tested the adequacy of the measured independent variable survey items as indicators of the latent variables they were purported to measure, and the structural model examined relationships among the latent variables shown in Figure 5.
We fitted a SEM (Figure 5) whose measurement and latent variable structure were derived from the PSEM. The SEM was tested with directly measured variables, in which the error terms associated with the measured variables were left free to be estimated and were assumed to be uncorrelated with each other. The error terms of exogenous latent variables were tested for covariance based on the recommendations from the analysis of modification indices of the initial model.

2.4. Goodness-of-Fit Statistics

Goodness-of-fit statistics were used to determine the fit of each estimated SEM to the sample data. Three approaches to measuring the goodness of fit were estimated: (1) population error, (2) baseline comparison, and (3) size of residuals. Table 3 provides model fitness statistics for four SEMs estimated with both MLMV and ML methods. Based on this analysis, especially the size of residuals measured through the coefficient of determination (CD) metric, both SEM#2 and SEM#4 appear to be reasonable fits of the underlying data. Therefore, SEM#4 was chosen due to its larger sample size. The SEM#2 coefficients are generally similar in magnitude and direction to SEM#4.
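The coefficient of determination used in the size-of-residuals comparison can be computed directly from residuals. The observed and predicted policy-support scores below are hypothetical; in practice, STATA reports the CD as part of its SEM fit statistics.

```python
def coefficient_of_determination(observed, predicted):
    """CD (R^2): 1 minus the ratio of the residual sum of squares
    to the total sum of squares around the observed mean."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1 - ss_res / ss_tot

# Hypothetical policy-support scores on a 1-4 scale.
obs  = [1, 2, 3, 4, 3, 2, 4, 3]
pred = [1.2, 2.1, 2.8, 3.9, 3.1, 2.2, 3.7, 2.9]
cd = coefficient_of_determination(obs, pred)
```

A CD near 1 means the model's residuals are small relative to the overall variability in the outcome, which is the sense in which SEM#2 and SEM#4 are judged reasonable fits above.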

3. Results

3.1. Data-Driven Machine Learning Models Can Account for Complex Interactions Among Measured and Latent Variables to Explain Climate Policy Support

The machine-learned PSEM structure of the latent and measured variables (Figure 4) shows the complex pathways of interactions of risk perceptions and beliefs on support for climate policy while accounting for sociodemographic factors in the U.S. population. Structurally, the PSEM finds that beliefs and affective risk perception each have a direct influence on policy support, while the effect of analytical risk perception on policy support is only indirect. Among the sociodemographic factors, political ideology also has a direct effect on policy support, with political party and race each having an indirect effect through their impact on affective risk perception. All other sixteen sociodemographic variables also have an indirect effect on policy support, mediated through race, party, and ideology. Table S6 shows standardized effect sizes and their significance levels for all latent and manifest variables shown in the PSEM (Figure 4). All variables are ranked from highest to lowest total effect size. Further, all latent and manifest variables have a statistically significant total effect on policy support, determined by G-test-derived p-values, except house_head, which signifies whether the respondent is head of the household. Generally, the magnitude of the total effect size for sociopsychological latent and manifest variables is relatively higher than for sociodemographic variables.
The standard SEM derived from the PSEM-generated structure of latent and measured variables indicates that the SEM has a reasonable fit with the sampled data for the latent variables derived from the measured survey variables (Figure 5; SEM#4 was chosen as the best fit, see more details in Section 2 Table 3). Again, ideology, political party, and race also significantly influence policy support, either directly or indirectly.
Both PSEM and SEM models indicate the importance of the main three social psychological variables (beliefs, affective risk perception, and analytical risk perception) in estimating climate policy support (Table 4). Similar to past research, these models show that these factors are more important drivers than most sociodemographic variables [31,41]. A key finding from both the PSEM and SEM models is that analytical risk perception matters relatively less than the other two factors and does not have a direct effect on policy support. Instead, analytical risk perception is largely driven by beliefs and only indirectly affects policy support through its impact on affective risk perception.
When comparing the magnitude of the standardized total effect sizes, the PSEM and SEM are substantively different (Table 4, with more details in Tables S6 and S10). The PSEM predicts that beliefs (42%) and affective risk perception (41%) have the largest standardized total effect sizes on support for climate policy, with analytical risk perception (36%) having a slightly smaller effect. In contrast, the SEM predicts affective risk perception is the most influential (53%), with beliefs mattering relatively less (35%), and analytical risk perception much less (11%). These are meaningful differences, with one model giving relatively equivalent weights to each of the social psychological factors and the other concluding that affective risk perception is a much stronger driver than analytical risk perception. Additionally, the PSEM standardized total effects are quite a bit larger for each of the sociodemographic variables than in the SEM, which leads to different conclusions about how much political ideology, political party, and race are relevant for estimating climate policy support. These differences arise due to underlying assumptions of a continuous linear scale in the standard SEM estimation of latent variables versus Bayesian probability distributions underlying the estimation of discrete “classes” in the PSEM latent variables (see Figures S10–S13 for a description of the estimated classes in the latent variables).

3.2. Machine-Learned PSEM Enables Data-Driven Configuration of Measured Variables in Identification of Latent Variables and Their Class Sizes

Our analyses with both PSEM (Figure 4) and standard SEM (Figure 5) show how the individual survey items cluster into latent variables. The primary clusters somewhat align with the social psychological factors that have been theorized and empirically demonstrated to predict support for climate policy: affective risk perception, analytical risk perception, and beliefs about climate change [30,31,32]. However, the way the individual survey items cluster does not fully match the latent variable each was intended to measure.
For example, seven of the survey items were intended to measure “risk perceptions” by assessing how much the respondent perceives that climate change will harm a range of groups [52]. Three items that assess how much climate change will (1) harm them personally, (2) harm the US, and (3) harm developing countries all cluster together as expected, and we name this latent variable “analytical risk perception”. However, the risk perception items assessing how much climate change will (1) harm future generations or (2) harm plants and animals cluster together with the three other items intended to measure climate change “beliefs”, i.e., (3) that climate change is happening, (4) what climate change is caused by, and (5) the scientific consensus about climate change. This may suggest that future generations or plants and animals are viewed as more distant from the self, and that harm to them is more associated with the scientific facts of climate change than with the respondent’s own perceived risk of its effects.
Another latent variable is “affective risk perception”, which includes (1) worry about climate change, as expected, but also contains items regarding (2) how soon respondents think climate change will harm the U.S. and (3) how frequently they discuss climate change with friends and family. This suggests that emotional reactions to climate change are associated with how soon its effects are expected to be felt, more so than with the perception of impacts that may be severe but will not be felt until the future. It also suggests that interpersonal discussions about climate change are more frequently associated with emotional rather than analytical concerns.

3.3. Marginal and Conditional Probability Analysis of Policy Support Uncovers a Previously Unidentified Class of “Lukewarm Supporters”, Different from Strong Supporters and Opposers

Finally, by examining the response classes for the climate policy support latent variable, we can gain insights into which types of survey respondents tend to provide each level of policy support. When examining the marginal distributions, this PSEM predicts three response classes, with 27% of the U.S. population as “strong supporters” of climate policy action, while 59% are “lukewarm supporters” and 13% are “strong opposers” to climate policies (see Figure S10, panel a). Predicted posterior probabilities of opposers, lukewarm supporters, and strong supporters of climate policy are conditional upon beliefs, affective risk perception, analytical risk perception, ideology, party, and race, and are shown in Table 5 and Figure 6.
Strong supporters of climate policy action follow an expected pattern. They are much more likely to have alarmed beliefs, worried affective risk perception, high or moderate analytical risk perception, and a liberal political orientation. In contrast, strong opponents of climate policy are much more likely to report dismissive beliefs, perceive no affective risk or analytical risk, and have a conservative political orientation.
The conditional probability distributions of the large category of lukewarm policy supporters reveal a novel finding of this PSEM. Lukewarm supporters are more likely to have middle-range beliefs classified as concerned or cautious, but they are divided in their affective risk perception with approximately equal numbers of “not worried” and “worried”. Their analytical risk perception is also lower with more people who perceive little risk. Finally, from a political ideology standpoint, lukewarm supporters represent a relatively larger segment of moderates and “somewhat conservatives”. In sum, although they do lean towards supporting climate policy, their risk perceptions and beliefs are more moderate or lower than those who strongly support climate policies. This indicates that conceptualizing climate policy as simply having “supporters” and “opposers” may lead to inaccurate understanding and prediction since supporters represent a wide and heterogeneous swath of the population, and most of the policy supporters are less concerned about climate change than may be assumed.
Beliefs (measured with the 5-item responses shown in Figure 4), as a latent variable, yielded six classes in the marginal probability distribution of the US population. As shown in Table 5 and Figure S13 (panel a), the PSEM predicts that 46.50% of the US population are alarmed by human-caused climate change (the strongest beliefs supporting human-caused climate change), while 19.41% are concerned. However, 11.37% are cautious, 7.04% disengaged, 5.21% doubtful, and 10.48% dismissive (the strongest beliefs denying human-caused climate change). The response classes in the belief latent variable reveal expected findings: the model defined six distinct classes within the beliefs latent variable that align quite well with the existing descriptions of the Climate Change Six Americas [6,45], and we named our classes after them. It is important to note that the generative PSEM recovered the “Climate Change Six Americas” as a “belief” state latent variable, which serves as a predictor of policy support. Table 5 and Figure S13 (panels b–f) show the conditional probability distributions of strong supporters, strong opponents, and lukewarm supporters for each of the six climate change belief states of the Six Americas. Notably, 77.95% of the policy supporters are alarmed, while less than 5% of the policy supporters are dismissive, doubtful, disengaged, or cautious. Conversely, 40.78% of the policy opposers are dismissive, and only 11% are alarmed or concerned.
Affective risk perception (measured with the three item responses shown in Figure 4) yielded only two classes in the US population: the PSEM predicts that 58.91% of the US population is worried about climate change, while the remaining 41.09% are not worried (Table 5; see also Figure S11, panel a). The conditional probability distributions show that 86.14% of policy supporters are worried and, conversely, 81% of the policy opposers are not worried. Among the lukewarm supporters, 55% are worried and 45% are not worried (Table 5 and Figure S11, panels b–c).
Analytical risk perception, a latent variable measured with three item responses, yielded five classes in the marginal probability distribution of the US population. As shown in Table 5 and Figure S12 (panel a), 22.11% of the US population perceived high risk from climate change, 30.46% perceived moderate risk, 18.73% perceived little risk, and 15.94% perceived no risk, while 12.77% did not know about climate change risk. From the analysis of conditional probability distributions (Table 5 and Figure S12, panels b–f), we estimate that 37% of strong policy supporters perceive high risk and 38% perceive moderate risk from climate change. Conversely, we find that 47% of strong opponents of climate policy are risk deniers. Among the lukewarm supporters, 31% perceive moderate risk and 22% perceive little risk, and this class also includes 15% risk deniers and 13.5% “don’t knowers”.
Further, the marginal and conditional probability distributions for ideology, party, and race (Table 5) are consistent with findings of the previous literature [16]. Notably, 28% of the strong opposers have a very conservative ideology and another 28% a conservative ideology, while conversely, 40% of the strong supporters are moderate and 27% are somewhat liberal. Lukewarm supporters are predicted to be 45% moderate and 24% somewhat conservative. We also find that 38% of the strong opposers are Republicans, 18% Democrats, and 20% Independents (Table 5). Among the strong supporters are 16% Republicans, 46% Democrats, and 23% Independents. Lukewarm policy supporters are 26% Republican, 32% Democrat, and 24% Independent.
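The marginal and conditional distributions reported throughout this section follow directly from the joint distribution that a Bayesian network encodes. A minimal sketch with a toy joint table (illustrative numbers collapsed to three belief classes for brevity, not the fitted PSEM probabilities):

```python
import numpy as np

# Toy joint distribution P(policy class, belief class).
# Rows: strong opposer, lukewarm supporter, strong supporter.
# Columns: dismissive, cautious, alarmed (illustrative values only).
joint = np.array([
    [0.08, 0.04, 0.01],
    [0.10, 0.30, 0.19],
    [0.01, 0.05, 0.22],
])
joint /= joint.sum()  # normalize to a proper probability table

# Marginal distribution over policy-support classes (cf. Figure S10, panel a).
marg_policy = joint.sum(axis=1)

# Conditional distribution P(belief | policy class) via Bayes' rule
# (cf. the per-class panels of Figure S13).
cond_belief = joint / marg_policy[:, None]

print(marg_policy.round(3))    # e.g., [0.13 0.59 0.28]
print(cond_belief.sum(axis=1))  # each row sums to 1
```

The same two operations, marginalizing the joint table and renormalizing each row, generate every marginal and conditional percentage of the kind reported in Table 5.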

4. Discussion

By utilizing machine learning to better understand how social psychological processes interact to generate support for climate policy, this model demonstrates how the various aspects of risk perception coalesce in the minds of Americans when considering climate change. The PSEM identified latent variables that can be interpreted through a dual-processing lens [59,60,61,62], in which affective and analytical risk perceptions emerge as separate but related factors, with affective risk perception as the more proximal predictor of climate policy support. This further supports the importance of considering emotional processes in understanding how people perceive and respond to climate change [31], confirms the “risk as feelings” hypothesis [47,48], and supports work finding that “worry” and “concern” about climate change are regularly some of the strongest predictors of policy support [32]. The primacy of emotional and affective risk has also been observed in various real-world risk-taking contexts, such as finance [63] and health decision-making [64].
The PSEM also indicated that beliefs have a large direct effect on policy support, as well as an indirect effect through their influence on analytical and affective risk perceptions. This is consistent with research showing that climate change belief is often predictive of policy support (see meta-analysis [65]). This may be partially due to political ideology feeding into beliefs, or beliefs driving political ideology, with these political elements shaping how beliefs affect the other risk perception processes. On the other hand, since analytical and affective risk perceptions are not directly related to political ideology, this offers some evidence that while beliefs are highly polarized in the U.S., emotional concerns about climate change and estimates of the likelihood of climate change affecting humans may not be polarized to the same extent.
Additionally, the clustering of survey items revealed that affective risk perception included both emotional worry and the perception that climate change is already affecting the U.S. The latent variable we label “analytical risk perception”, on the other hand, includes perceptions about how much the respondent will be impacted by climate change, without a timeline for when these impacts will occur. This demonstrates that the sense of urgency about immediate risk aligns with emotional responses, whereas ratings of the likelihood of more distant impacts form their own distinct set of risk perceptions. Past work on climate change risk perceptions has similarly theorized that worry represents a more emotional state that links more directly to adaptive behavioral responses such as policy support, while ratings of how likely it is for climate-related impacts to occur reflect a more cognitive component with less direct motivational influence [51].
Other work on psychological distance highlights how perceptions about the temporal and spatial distance of climate change can alter people’s risk perceptions and support for climate policies [66,67]. For example, personally experiencing climate change effects through extreme weather events leads to greater concern and support for climate policies (for review, see [51]), perhaps due to the psychological distance from climate change’s effects being reduced. Although we cannot determine whether those who scored higher in affective risk perception necessarily experienced climate change impacts directly (nor if those with lower affective risk perception had not), this may be a factor that differentiates individuals’ experiences and opinions.
Reviewing the response classes for each of the latent variables also revealed some new ways of conceptualizing these factors. For one, it demonstrates the value of moving beyond a linear or two-category partitioning of support for climate policy. Some work has suggested that a majority of people support climate policy (e.g., see [30,68]), but this analysis indicates that the key distinction is not simply between support and opposition. Our unsupervised machine learning model revealed a third category of “lukewarm supporters”. The strong supporters and opposers have clearer profiles in terms of their beliefs, risk perceptions, and political ideologies, while the lukewarm supporters vary considerably in these factors. They tend towards the middle range in beliefs and risk perceptions, but this group also contains a fair number of respondents at both extremes.
Indeed, while they are more likely to “somewhat support” these policies, they are also more likely to “somewhat oppose” climate policies than the other two opinion groups (see Figure S10, panel c). They are likely inconsistent in their policy support, which may vary depending on the details of the policy and how it is implemented. This may be the group of people most influenced by policy factors beyond concern about climate change itself, such as trust in those implementing the policy or perceptions of fairness that have been found to alter policy support in previous research [10,42].
Climate change is a global threat, but there are vast differences in the ways Americans perceive and respond to this risk. Much of the initial theorizing on risk perception and subsequent behavior choices, including voting for pro-environmental policies, assumed that these were calculated responses: decision-making about supporting public policies addressing environmental risk was treated as the result of cost-benefit analyses estimating the relative probability and severity of a risk’s negative and positive outcomes, in which knowledge and logical calculation led to conclusions (see review [44]). However, research in the last few decades has demonstrated the importance of emotions and other affective processes in influencing risk perceptions [46,47]. These findings support the broader dual-processing models of thinking and decision-making, in which evaluations from the emotionally driven and intuitive “experiential” route and the analytically driven and deliberate “rational” route work together to produce outcomes [59,60].
There are also emotion-specific effects on risk perceptions. Anger leads to more optimistic appraisals and fear to more pessimistic appraisals [69], and emotions associated with certainty (anger, happiness, disgust) lead to the use of heuristics in decision-making, while emotions associated with uncertainty (fear, hope, surprise) lead to more systematic decision-making [62]. Anticipation of emotions also alters risk perceptions: anticipated regret increases the perception of risk and reduces risky behavior [61], while anticipation of positive feelings leads to less risk aversion [70]. There is also a growing body of evidence that analytical reasoning is often less effective in people with emotional dysfunction [71], and that intuitive processes can actually help people make decisions about complex problems that reflect their best interests [72], although this point is debated in the field [73]. Indeed, research on emotional responses to risk has demonstrated the power of the “affect heuristic” in helping people make balanced decisions [74].
This research contributes to ongoing efforts to accelerate the transition to a sustainable planet by reducing the threat of climate change through effective, yet publicly supported, climate mitigation policies. This is especially timely given heightened societal awareness of and concern about connections between climate change, global security, and sustainable development goals. This work provides specific insights into which psychological factors differ between those who perceive greater or lesser risk from climate change and between those who do and do not support climate policy, and it indicates where public outreach efforts may make the most difference.
This study is limited by the cross-sectional nature of the survey data used to train and test the machine-learned PSEM. While measured over time, the CCAM survey data are not a panel dataset in which the same respondents are repeatedly measured. Future research efforts should focus on collecting panel data. More advanced, dynamic machine learning models, such as dynamic Bayesian network models, latent transition analysis, Long Short-Term Memory (LSTM) networks, and transformer models, could then be applied to panel datasets to model the impact of climate risk perceptions on support for climate policy. Another limitation of the dataset is its focus on the USA; similar studies need to be conducted for the 200+ other countries that are also implicated in producing GHG emissions. A third limitation is the design of the survey instruments. Alternate social psychological, behavioral, economic, political economy, and policy design theories need to be systematically incorporated into the design of data collection protocols, which in turn will likely generate more robust and even foundational machine learning models. A fourth limitation is the measurement error inherent in survey instruments. Quantification of theoretical constructs on ordinal or cardinal scales may induce biases, which in turn decrease both the internal and external validity of conclusions derived from survey-based models. Recent advancements in generative Artificial Intelligence, e.g., Large Language Models (LLMs) built on transformer architectures, may provide more sophisticated and nuanced approaches to theorizing and testing the measurement of climate risk perceptions and public support for climate policies.

5. Conclusions

In conclusion, the PSEM approach allows the construction of a model that explains support for climate policy without a priori assumptions about the relationships between survey items representing social and psychological factors. Our results show the model is generally consistent with some aspects of theory while generating novel insights. We find that the survey items frequently cluster in ways that align with the targeted social psychological factors, but in other cases, certain risk perception items align more with beliefs than with the other affective and analytical risk perceptions. Additionally, we find that beliefs and affective risk perception each have a larger and more direct influence on climate policy support than analytical risk perception, highlighting the importance of both scientific understanding and emotional responses to the climate crisis. We also find that quite a large portion of the population consists of lukewarm supporters of climate policy, who differ meaningfully from strong supporters. This offers hope for the future, because this group may be particularly persuadable and open to new climate policies if they are proposed in ways that resonate with them. The machine-learned PSEM presented in this study can be tested as a feedback loop in next-generation GCMs and IAMs to represent the impact of global climate change-induced variability in climate risk perceptions and their dynamic, coevolutionary impacts on the emergence of public support for climate mitigation policies.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/su162310292/s1, Table S1. Frequencies of survey responses to 14 public opinion statements included in this study from CCAM dataset. Table S2. Frequencies of survey responses to 19 socio-demographic questions included in this study from CCAM dataset. Table S3. The original CCAM intended constructs, PSEM latent variable groupings and classes for each of the 14 public opinion statement variables. Table S4. PSEM Calibration Indices for predicting “Policy Support”. Table S5. Expanded PSEM validation indices after testing with k-fold validation. Utilizing k=10 for 70% training and 30% test sample splits of the data. Table S6. PSEM G-test statistics for total effect sizes on climate policy support. Table S7. Estimated Parameters of SEM#1 for both measurement and structural components with Maximum Likelihood (ML) estimation method without sampling weights. Table S8. Estimated Parameters of SEM#2 for both measurement and structural components with Maximum Likelihood (ML) estimation method with sampling weights. Table S9. Estimated Parameters of SEM#3 for both measurement and structural components with Maximum Likelihood with Missing Value (MLMV) estimation method without sampling weights. Table S10. Estimated Parameters of SEM#4 for both measurement and structural components with Maximum Likelihood with Missing Value (MLMV) estimation method without sampling weights. Figure S1. Max spanning tree + taboo learning (final mdl score = 435,672.512). Node sizes are scaled by Node Force and arc widths are scaled by Symmetric Relative Mutual Information. Node Force is the sum of all incoming and outgoing arc forces from a node. Arc Force is computed by using the Kullback-Leibler Divergence (KLD), Dkl, which compares two joint probability distributions, P and Q, defined on the same set of variables X. Figure S2. Taboo learning (final mdl score = 435,622.442). 
Node sizes are scaled by Node Force and arc widths are scaled by Symmetric Relative Mutual Information. Figure S3. EQ + taboo learning (final mdl score = 435,598.507). Node sizes are scaled by Node Force and arc widths are scaled by Symmetric Relative Mutual Information. Figure S4. Taboo-EQ learning (final mdl score = 435,428.195). Node sizes are scaled by Node Force and arc widths are scaled by Symmetric Relative Mutual Information. Figure S5. SopLEQ +taboo learning (final mdl score = 435,672.508). Node sizes are scaled by Node Force and arc widths are scaled by Symmetric Relative Mutual Information. Figure S6. Taboo Order learning (final mdl score = 434,211.298). Node sizes are scaled by Node Force and arc widths are scaled by Symmetric Relative Mutual Information. Figure S7. Testing PSEM latent variable estimation sensitivity to the choice of maximum clustering class size: 5 (panel a), 6 (panel b), 7 (panel c). Figure S8. PSEM predicted four latent variables with the assumption of maximum clustering size of 5 that minimizes description length. Figure S9. Shows node force for each latent variable and normalized symmetric mutual information (NSMI) for the links among each variable in the expanded PSEM. Node Force is the sum of all incoming and outgoing arc forces from a node. Arc Force is computed by using the Kullback-Leibler Divergence (KLD), which compares two joint probability distributions, P and Q, defined on the same set of variables X. Figure S10. Interpretation of 3 classes identified in Policy Support latent variable: Panel a shows marginal probability distribution of three classes of Policy Support; and Panels b, c and d respectively show conditional probability distribution of Strong Opposers, Lukewarm Supporters and Strong Supporters of Climate Policy with respect to three survey items: reg_CO2_pollutant; reg_utilities and fund_research. Arrows in panels b-d show conditional probability variations compared with marginal probabilities. Figure S11. 
Interpretation of 2 classes identified in Affective Risk Perception latent variable: Panel a shows marginal probability distribution of two classes of Affective Risk Perceptions; and Panels b and c respectively show conditional probability distribution of not worried (affective risk perception = 0) and worried (affective risk perception = 1) with respect to three survey items: worry; when_harm_US and discuss_GW. Arrows in panels b and c show conditional probability variations compared with marginal probabilities. Figure S12. Interpretation of 5 classes identified in Analytical Risk Perception latent variable: Panel a shows marginal probability distribution of five classes of Analytical Risk Perception; and Panels b to f respectively show conditional probability distribution of Don’t Knowers, Risk Deniers, Little Risk Perceivers, Moderate Risk Perceivers and High Risk Perceivers with respect to three survey items: harm_US, harm_dev_countries and harm_personally. Arrows in panels b-f show conditional probability variations compared with marginal probabilities. Figure S13. Interpretation of 6 classes identified in Belief latent variable: Panel a shows marginal probability distribution of six classes of Belief; and Panels b to g respectively show conditional probability distribution of Doubtful, Disengaged, Dismissive, Ambivalent/Cautious, Concerned and Alarmed with respect to five survey items: harm_future_gen, harm_plants_animals, happening, cause_recoded, and sci_consensus. Arrows in panels b-g show conditional probability variations compared with marginal probabilities.
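The arc-force measure referenced in Figures S1–S9 is based on the Kullback-Leibler divergence between joint distributions with and without a given arc; for a single arc X→Y, removing the arc imposes independence, so the arc force reduces to the mutual information between X and Y. A minimal sketch, assuming a toy 2×2 joint table (illustrative values only):

```python
import numpy as np

def kl_divergence(p, q):
    """D_KL(P || Q) in bits, over matching flattened distributions."""
    p = np.asarray(p, float).ravel()
    q = np.asarray(q, float).ravel()
    mask = p > 0  # terms with p = 0 contribute nothing
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

# Toy joint distribution P(X, Y) over two binary variables.
joint = np.array([[0.30, 0.10],
                  [0.10, 0.50]])
px, py = joint.sum(axis=1), joint.sum(axis=0)

# Deleting the arc X -> Y forces independence: Q(X, Y) = P(X) P(Y).
indep = np.outer(px, py)

# Arc force = D_KL(joint || product of marginals) = I(X; Y).
arc_force = kl_divergence(joint, indep)
print(round(arc_force, 4))  # positive whenever X and Y are dependent
```

For an already-independent joint table the arc force is zero, which is why weak arcs (thin lines in Figures S1–S6) correspond to near-independent variable pairs.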

Author Contributions

Conceptualization, A.Z., K.L., N.H.F., L.J.G. and B.B.; methodology, A.Z., K.L. and N.H.F.; software, A.Z.; validation, A.Z., K.L., N.H.F., L.J.G. and B.B.; formal analysis, A.Z., K.L. and N.H.F.; investigation, A.Z., K.L., N.H.F., L.J.G. and B.B.; data curation, A.Z.; writing—original draft preparation, A.Z. and K.L.; writing—review and editing, A.Z., K.L., N.H.F., L.J.G. and B.B.; visualization, A.Z.; supervision, A.Z., K.L., N.H.F., L.J.G. and B.B.; project administration, A.Z., K.L., N.H.F., L.J.G. and B.B.; funding acquisition, A.Z., K.L., N.H.F., L.J.G. and B.B. All authors have read and agreed to the published version of the manuscript.

Funding

This work resulted from a working group jointly supported by both the National Institute for Mathematical Biological Synthesis sponsored by the National Science Foundation through award DBI-1300426 and the National Socio-Environmental Synthesis Center under funding received from National Science Foundation award DBI-1052875. BB was supported in part by NASA Grant Number 80NSSC20M0122 and by the USDA National Institute of Food and Agriculture Hatch, Project Number 1025208. AZ also acknowledges the support from NSF #2026431, USDA#2021-67015-35236 and NOAA #NA22NWS4320003. The statements, findings, conclusions, and recommendations are those of the author(s) and do not necessarily reflect the views of the funding agencies.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Publicly available dataset used in this study [52].

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zia, A. Post-Kyoto Climate Governance: Confronting the Politics of Scale, Ideology and Knowledge; Routledge: London, UK, 2013. [Google Scholar]
  2. Beckage, B.; Gross, L.J.; Lacasse, K.; Carr, E.; Metcalf, S.S.; Winter, J.M.; Howe, P.D.; Fefferman, N.; Franck, T.; Zia, A. Linking models of human behaviour and climate alters projected climate change. Nat. Clim. Chang. 2018, 8, 79–84. [Google Scholar] [CrossRef]
  3. Beckage, B.; Lacasse, K.; Winter, J.M.; Gross, L.J.; Fefferman, N.; Hoffman, F.M.; Metcalf, S.S.; Franck, T.; Carr, E.; Zia, A. The Earth has humans, so why don’t our climate models? Clim. Chang. 2020, 163, 181–188. [Google Scholar] [CrossRef]
  4. Zia, A. Synergies and Trade-Offs between Climate Change Adaptation and Mitigation across Multiple Scales of Governance. In Adaptiveness: Changing Earth System Governance; Siebenhuener, B., Djalante, R., Eds.; Cambridge University Press: Cambridge, UK, 2021. [Google Scholar]
  5. Rising, J.A.; Taylor, C.; Ives, M.C.; Ward, R.E. Challenges and innovations in the economic evaluation of the risks of climate change. Ecol. Econ. 2022, 197, 107437. [Google Scholar] [CrossRef]
  6. Wilson, C.; Guivarch, C.; Kriegler, E.; Van Ruijven, B.; Van Vuuren, D.P.; Krey, V.; Schwanitz, V.J.; Thompson, E.L. Evaluating process-based integrated assessment models of climate change mitigation. Clim. Chang. 2021, 166, 1–22. [Google Scholar] [CrossRef]
  7. Van Beek, L.; Oomen, J.; Hajer, M.; Pelzer, P.; van Vuuren, D. Navigating the political: An analysis of political calibration of integrated assessment modelling in light of the 1.5 C goal. Environ. Sci. Policy 2022, 133, 193–202. [Google Scholar] [CrossRef]
  8. Burstein, P. The impact of public opinion on public policy: A review and an agenda. Political Res. Q. 2003, 56, 29–40. [Google Scholar] [CrossRef]
  9. Shapiro, R.Y. Public opinion and American democracy. Public Opin. Q. 2011, 75, 982–1017. [Google Scholar] [CrossRef]
  10. Drews, S.; Van den Bergh, J.C. What explains public support for climate policies? A review of empirical and experimental studies. Clim. Policy 2016, 16, 855–876. [Google Scholar] [CrossRef]
  11. Attari, S.Z.; Schoen, M.; Davidson, C.I.; DeKay, M.L.; de Bruin, W.B.; Dawes, R.; Small, M.J. Preferences for change: Do individuals prefer voluntary actions, soft regulations, or hard regulations to decrease fossil fuel consumption? Ecol. Econ. 2009, 68, 1701–1710. [Google Scholar] [CrossRef]
  12. Dietz, T.; Dan, A.; Shwom, R. Support for climate change policy: Social psychological and social structural influences. Rural Sociol. 2007, 72, 185–214. [Google Scholar] [CrossRef]
  13. Leiserowitz, A. Climate change risk perception and policy preferences: The role of affect, imagery, and values. Clim. Chang. 2006, 77, 45–72. [Google Scholar] [CrossRef]
  14. McCright, A.M.; Dunlap, R.E.; Xiao, C. Perceived scientific agreement and support for government action on climate change in the USA. Clim. Chang. 2013, 119, 511–518. [Google Scholar] [CrossRef]
  15. Steg, L.; Dreijerink, L.; Abrahamse, W. Factors influencing the acceptability of energy policies: A test of VBN theory. J. Environ. Psychol. 2005, 25, 415–425. [Google Scholar] [CrossRef]
  16. Zia, A.; Todd, A.M. Evaluating the effects of ideology on public understanding of climate change science: How to improve communication across ideological divides? Public Underst. Sci. 2010, 19, 743–761. [Google Scholar] [CrossRef]
  17. Bostrom, A.; O’Connor, R.E.; Böhm, G.; Hanss, D.; Bodi, O.; Ekström, F.; Halder, P.; Jeschke, S.; Mack, B.; Qu, M. Causal thinking and support for climate change policies: International survey findings. Glob. Environ. Chang. 2012, 22, 210–222. [Google Scholar] [CrossRef]
  18. Patt, A.G.; Weber, E.U. Perceptions and communication strategies for the many uncertainties relevant for climate policy. Wiley Interdiscip. Rev. Clim. Chang. 2014, 5, 219–232. [Google Scholar] [CrossRef]
  19. Steg, L.; Dreijerink, L.; Abrahamse, W. Why are energy policies acceptable and effective? Environ. Behav. 2006, 38, 92–111. [Google Scholar] [CrossRef]
  20. Adaman, F.; Karalı, N.; Kumbaroğlu, G.; Or, İ.; Özkaynak, B.; Zenginobuz, Ü. What determines urban households’ willingness to pay for CO2 emission reductions in Turkey: A contingent valuation survey. Energy Policy 2011, 39, 689–698. [Google Scholar] [CrossRef]
  21. Franzen, A.; Vogl, D. Two decades of measuring environmental attitudes: A comparative analysis of 33 countries. Glob. Environ. Chang. 2013, 23, 1001–1008. [Google Scholar] [CrossRef]
  22. O’Connor, R.E.; Bard, R.J.; Fisher, A. Risk perceptions, general environmental beliefs, and willingness to address climate change. Risk Anal. 1999, 19, 461–471. [Google Scholar] [CrossRef]
  23. Owen, A.L.; Conover, E.; Videras, J.; Wu, S. Heat waves, droughts, and preferences for environmental policy. J. Policy Anal. Manag. 2012, 31, 556–577. [Google Scholar] [CrossRef]
  24. Petrovic, N.; Madrigano, J.; Zaval, L. Motivating mitigation: When health matters more than climate change. Clim. Chang. 2014, 126, 245–254. [Google Scholar] [CrossRef]
  25. Marcot, B.G.; Penman, T.D. Advances in Bayesian network modelling: Integration of modelling technologies. Environ. Model. Softw. 2019, 111, 386–393. [Google Scholar] [CrossRef]
  26. Pearl, J. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference; Elsevier: Amsterdam, The Netherlands, 2014. [Google Scholar]
  27. Kullback, S. Information Theory and Statistics; Courier Corporation: North Chelmsford, MA, USA, 1997. [Google Scholar]
  28. Conrady, S.; Jouffe, L. Bayesian Networks and BayesiaLab: A Practical Introduction for Researchers; Bayesia USA: Nashville, TN, USA, 2015. [Google Scholar]
  29. Bouman, T.; Verschoor, M.; Albers, C.J.; Böhm, G.; Fisher, S.D.; Poortinga, W.; Whitmarsh, L.; Steg, L. When worry about climate change leads to climate action: How values, worry and personal responsibility relate to various climate actions. Glob. Environ. Chang. 2020, 62, 102061. [Google Scholar] [CrossRef]
  30. Ding, D.; Maibach, E.W.; Zhao, X.; Roser-Renouf, C.; Leiserowitz, A. Support for climate policy and societal action are linked to perceptions about scientific agreement. Nat. Clim. Chang. 2011, 1, 462–466. [Google Scholar] [CrossRef]
  31. Goldberg, M.H.; Gustafson, A.; Ballew, M.T.; Rosenthal, S.A.; Leiserowitz, A. Identifying the most important predictors of support for climate policy in the United States. Behav. Public Policy 2021, 5, 480–502. [Google Scholar] [CrossRef]
  32. Smith, N.; Leiserowitz, A. The role of emotion in global warming policy support and opposition. Risk Anal. 2014, 34, 937–948. [Google Scholar] [CrossRef]
  33. Marquart-Pyatt, S.T.; Qian, H.; Houser, M.K.; McCright, A.M. Climate change views, energy policy preferences, and intended actions across welfare state regimes: Evidence from the European Social Survey. Int. J. Sociol. 2019, 49, 1–26. [Google Scholar] [CrossRef]
  34. Bamberg, S.; Möser, G. Twenty years after Hines, Hungerford, and Tomera: A new meta-analysis of psycho-social determinants of pro-environmental behaviour. J. Environ. Psychol. 2007, 27, 14–25. [Google Scholar] [CrossRef]
  35. Klöckner, C.A. A comprehensive model of the psychology of environmental behaviour—A meta-analysis. Glob. Environ. Chang. 2013, 23, 1028–1038. [Google Scholar] [CrossRef]
  36. Barber, D. Bayesian Reasoning and Machine Learning; Cambridge University Press: Cambridge, UK, 2012. [Google Scholar]
  37. Binder, J.; Koller, D.; Russell, S.; Kanazawa, K. Adaptive probabilistic networks with hidden variables. Mach. Learn. 1997, 29, 213–244. [Google Scholar] [CrossRef]
  38. Cui, G.; Wong, M.L.; Lui, H.-K. Machine learning for direct marketing response models: Bayesian networks with evolutionary programming. Manag. Sci. 2006, 52, 597–612. [Google Scholar] [CrossRef]
  39. Frey, B.J.; Brendan, J.F.; Frey, B.J. Graphical Models for Machine Learning and Digital Communication; MIT Press: Cambridge, MA, USA, 1998. [Google Scholar]
  40. Kallbekken, S. Research on public support for climate policy instruments must broaden its scope. Nat. Clim. Chang. 2023, 13, 206–208. [Google Scholar] [CrossRef]
  41. Hasanaj, V.; Stadelmann-Steffen, I. Is the problem or the solution riskier? Predictors of carbon tax policy support. Environ. Res. Commun. 2022, 4, 105001. [Google Scholar] [CrossRef]
  42. Levi, S. Why hate carbon taxes? Machine learning evidence on the roles of personal responsibility, trust, revenue recycling, and other factors across 23 European countries. Energy Res. Soc. Sci. 2021, 73, 101883. [Google Scholar] [CrossRef]
  43. Povitkina, M.; Jagers, S.C.; Matti, S.; Martinsson, J. Why are carbon taxes unfair? Disentangling public perceptions of fairness. Glob. Environ. Chang. 2021, 70, 102356. [Google Scholar] [CrossRef]
  44. Yates, J. Risk-Taking Behavior; John Wiley & Sons: Hoboken, NJ, USA, 1992. [Google Scholar]
  45. Visschers, V.; Wiedemann, P.M.; Gutscher, H.; Kurzenhäuser, S.; Seidl, R.; Jardine, C.; Timmermans, D. Affect-inducing risk communication: Current knowledge and future directions. J. Risk Res. 2012, 15, 257–271. [Google Scholar] [CrossRef]
  46. Slovic, P.; Finucane, M.L.; Peters, E.; MacGregor, D.G. The affect heuristic. Eur. J. Oper. Res. 2007, 177, 1333–1352. [Google Scholar] [CrossRef]
  47. Loewenstein, G.F.; Weber, E.U.; Hsee, C.K.; Welch, N. Risk as feelings. Psychol. Bull. 2001, 127, 267. [Google Scholar] [CrossRef]
48. Weber, E.U. “Risk as feelings” and “perception matters”: Psychological contributions on risk, risk taking and risk management. In Future Risk and Risk Management; University of Pennsylvania Press: Philadelphia, PA, USA, 2018; pp. 30–47. [Google Scholar]
  49. Kasperson, R.E.; Renn, O.; Slovic, P.; Brown, H.S.; Emel, J.; Goble, R.; Kasperson, J.X.; Ratick, S. The social amplification of risk: A conceptual framework. Risk Anal. 1988, 8, 177–187. [Google Scholar] [CrossRef]
  50. Van der Linden, S. The social-psychological determinants of climate change risk perceptions: Towards a comprehensive model. J. Environ. Psychol. 2015, 41, 112–124. [Google Scholar] [CrossRef]
  51. Van der Linden, S. Determinants and measurement of climate change risk perception, worry, and concern. In The Oxford Encyclopedia of Climate Change Communication; Oxford University Press: Oxford, UK, 2017. [Google Scholar]
  52. Ballew, M.T.; Leiserowitz, A.; Roser-Renouf, C.; Rosenthal, S.A.; Kotcher, J.E.; Marlon, J.R.; Lyon, E.; Goldberg, M.H.; Maibach, E.W. Climate change in the American mind: Data, tools, and trends. Environ. Sci. Policy Sustain. Dev. 2019, 61, 4–18. [Google Scholar] [CrossRef]
  53. Acock, A. Discovering Structural Equation Modeling Using Stata; Stata Press: College Station, TX, USA, 2013; Volume 1. [Google Scholar]
  54. Ullman, J.B.; Bentler, P.M. Structural equation modeling. In Handbook of Psychology, 2nd ed.; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2012. [Google Scholar]
  55. Wright, S. Correlation and causation. J. Agric. Res. 1921, 20, 557. [Google Scholar]
  56. Haavelmo, T. The statistical implications of a system of simultaneous equations. Econometrica 1943, 11, 1–12. [Google Scholar] [CrossRef]
  57. Simon, H.A. Notes on the observation and measurement of political power. J. Politics 1953, 15, 500–516. [Google Scholar] [CrossRef]
  58. Pearl, J. Causality; Cambridge University Press: Cambridge, UK, 2009. [Google Scholar]
  59. Epstein, S. Integration of the cognitive and the psychodynamic unconscious. Am. Psychol. 1994, 49, 709. [Google Scholar] [CrossRef]
  60. Kahneman, D. Thinking, Fast and Slow; Macmillan: New York, NY, USA, 2011. [Google Scholar]
  61. Lagerkvist, C.J.; Okello, J.; Karanja, N. Consumers’ evaluation of volition, control, anticipated regret, and perceived food health risk. Evidence from a field experiment in a traditional vegetable market in Kenya. Food Control 2015, 47, 359–368. [Google Scholar] [CrossRef]
  62. Tiedens, L.Z.; Linton, S. Judgment under emotional certainty and uncertainty: The effects of specific emotions on information processing. J. Personal. Soc. Psychol. 2001, 81, 973. [Google Scholar] [CrossRef]
  63. Weber, M.; Weber, E.U.; Nosić, A. Who takes risks when and why: Determinants of changes in investor risk taking. Rev. Financ. 2013, 17, 847–883. [Google Scholar] [CrossRef]
  64. Ferrer, R.A.; Klein, W.M. Risk perceptions and health behavior. Curr. Opin. Psychol. 2015, 5, 85–89. [Google Scholar] [CrossRef]
  65. Hornsey, M.J.; Harris, E.A.; Bain, P.G.; Fielding, K.S. Meta-analyses of the determinants and outcomes of belief in climate change. Nat. Clim. Chang. 2016, 6, 622–626. [Google Scholar] [CrossRef]
  66. Rickard, L.N.; Yang, Z.J.; Schuldt, J.P. Here and now, there and then: How “departure dates” influence climate change engagement. Glob. Environ. Chang. 2016, 38, 97–107. [Google Scholar] [CrossRef]
  67. Spence, A.; Poortinga, W.; Pidgeon, N. The psychological distance of climate change. Risk Anal. Int. J. 2012, 32, 957–972. [Google Scholar] [CrossRef] [PubMed]
68. Stokes, B.; Wike, R.; Carle, J. Global Concern about Climate Change, Broad Support for Limiting Emissions. Pew Research Center’s Global Attitudes Project. 2015. Available online: https://www.pewresearch.org/global/2015/11/05/global-concern-about-climate-change-broad-support-for-limiting-emissions/ (accessed on 17 August 2024).
  69. Lerner, J.S.; Keltner, D. Beyond valence: Toward a model of emotion-specific influences on judgement and choice. Cogn. Emot. 2000, 14, 473–493. [Google Scholar] [CrossRef]
  70. Mellers, B.A.; McGraw, A.P. Anticipated emotions as guides to choice. Curr. Dir. Psychol. Sci. 2001, 10, 210–214. [Google Scholar] [CrossRef]
  71. Bechara, A.; Damasio, H.; Tranel, D.; Damasio, A.R. Deciding advantageously before knowing the advantageous strategy. Science 1997, 275, 1293–1295. [Google Scholar] [CrossRef]
  72. Strick, M.; Dijksterhuis, A.; Bos, M.W.; Sjoerdsma, A.; Van Baaren, R.B.; Nordgren, L.F. A meta-analysis on unconscious thought effects. Soc. Cogn. 2011, 29, 738–762. [Google Scholar] [CrossRef]
  73. Acker, F. New findings on unconscious versus conscious thought in decision making: Additional empirical data and meta-analysis. Judgm. Decis. Mak. 2008, 3, 292–303. [Google Scholar] [CrossRef]
74. Slovic, P. What’s fear got to do with it? It’s affect we need to worry about. Mo. L. Rev. 2004, 69, 971. [Google Scholar]
Figure 1. The violin plot shows ideology on the x-axis and the level of support for regulating CO2 on the y-axis. The white dot marks the median, and the thick line shows the interquartile range, with whiskers extending to the upper and lower adjacent values. This is overlaid with a density plot of the data.
Figure 2. The violin plot shows political party affiliation on the x-axis and respondent level of worry about global warming on the y-axis. The white dot marks the median, and the thick line shows the interquartile range, with whiskers extending to the upper and lower adjacent values. This is overlaid with a density plot of the data.
Figure 3. The violin plot shows observation year on the x-axis and respondent level of support for regulating CO2 on the y-axis. The white dot marks the median, and the thick line shows the interquartile range, with whiskers extending to the upper and lower adjacent values. This is overlaid with a density plot of the data.
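The summary elements drawn in the violin plots of Figures 1–3 (median, interquartile range, and Tukey's upper/lower adjacent values as whisker ends) can be computed directly. A minimal sketch in Python; the data are hypothetical, not the survey responses:

```python
import numpy as np

def violin_summary(values):
    """Statistics drawn inside a violin plot: median, interquartile
    range (the thick bar), and the upper/lower adjacent values (the
    most extreme observations within 1.5*IQR of the quartiles)."""
    values = np.asarray(values, dtype=float)
    q1, med, q3 = np.percentile(values, [25, 50, 75])
    iqr = q3 - q1
    upper_adjacent = values[values <= q3 + 1.5 * iqr].max()
    lower_adjacent = values[values >= q1 - 1.5 * iqr].min()
    return {"median": med, "q1": q1, "q3": q3,
            "lower_adjacent": lower_adjacent,
            "upper_adjacent": upper_adjacent}

# Example: a 4-point policy-support scale with hypothetical counts.
responses = np.repeat([1, 2, 3, 4], [150, 250, 400, 200])
print(violin_summary(responses))
```

The kernel-density outline that gives the violin its shape is then overlaid on these markers, as the captions describe.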
Figure 4. Machine-learned structure of the PSEM (R2 = 92.2%). Nodes are scaled to represent the standardized total effect of all latent variables (red-outlined nodes) and measured variables (nodes with no outline) on policy support (the target node). The total effect is estimated as the derivative of the target node with respect to the driver node. The standardized total effect is the total effect multiplied by the ratio of the standard deviation of the driver node to the standard deviation of the target node (see [27,28]). The width of the links between nodes shows the strength of the Symmetric Relative Mutual Information (SRMI) between variables in the PSEM. Node names for survey items are explained in Table 1.
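The two quantities in Figure 4's caption can be stated concretely: the standardized total effect is the derivative of the target with respect to the driver scaled by the ratio of their standard deviations, and link strength is based on mutual information (SRMI is a normalized variant; the exact BayesiaLab normalization is not reproduced here). A hedged numpy sketch with illustrative data, using a least-squares slope as a linear stand-in for the derivative:

```python
import numpy as np

def standardized_total_effect(driver, target):
    """Least-squares slope d(target)/d(driver), scaled by
    sd(driver)/sd(target), mirroring the caption's definition."""
    slope = np.polyfit(driver, target, 1)[0]
    return slope * driver.std() / target.std()

def mutual_information(joint):
    """Mutual information (in bits) from a joint contingency table."""
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)   # marginal of rows
    py = p.sum(axis=0, keepdims=True)   # marginal of columns
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
y = 2.0 * x + rng.normal(size=10_000)
print(standardized_total_effect(x, y))  # equals corr(x, y) for a linear relation

joint = np.array([[40, 10], [10, 40]])  # two dependent binary variables
print(mutual_information(joint))
```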
Figure 5. Standardized direct effect coefficients, estimated for the SEM #4 structure learned from the PSEM, are shown for all measured and latent variables. *** indicates significance at p < 0.001. Results of the SEM estimated by applying the Maximum Likelihood with Missing Values (MLMV) algorithm in STATA are presented in Table S10. Model fitness scores and the decomposition of direct, indirect, and total effect sizes and their relative statistical significance are also shown.
Figure 6. PSEM Posterior Mean Analysis: Normalized Mean Values Conditional on Policy Support. The figure displays how strong supporters, lukewarm supporters, and strong opposers of climate policy differ in their responses for each measured survey item and derived latent variable. The “prior” (red line) represents the normalized means for the whole sample. Means for strong opposers are shown in green, lukewarm supporters in blue, and strong supporters in pink. Node names for survey items are explained in Table 1.
Table 1. Variable name, survey question, response options, and descriptive statistics of the survey sample. Survey methods, codebook, and data tables are available in a public repository [52]. (Obs = Observations; S.D. = standard deviation).
Public Opinion Statements

1. happening. “Recently, you may have noticed that global warming has been getting some attention in the news. Global warming refers to the idea that the world’s average temperature has been increasing over the past 150 years, may be increasing more in the future, and that the world’s climate may change as a result. What do you think: Do you think that global warming is happening?” Options: −1 Refused; 1 No; 2 Don’t know; 3 Yes. Obs: 22,416; Mean: 2.49; S.D.: 0.78.

2. cause_recoded. “Assuming global warming is happening, do you think it is...” (recoded to include open ends). Options: −1 Refused; 1 Don’t know; 2 Other; 3 Neither because global warming isn’t happening; 4 Caused mostly by natural changes in the environment; 5 Caused by human activities and natural changes; 6 Caused mostly by human activities. Obs: 22,416; Mean: 4.97; S.D.: 1.21.

3. sci_consensus. “Which comes closest to your own view?” Options: −1 Refused; 1 Don’t know enough to say; 2 There is a lot of disagreement among scientists about whether or not global warming is happening; 3 Most scientists think global warming is not happening; 4 Most scientists think global warming is happening. Obs: 21,086; Mean: 2.74; S.D.: 1.21.

4. worry. “How worried are you about global warming?” Options: −1 Refused; 1 Not at all worried; 2 Not very worried; 3 Somewhat worried; 4 Very worried. Obs: 22,416; Mean: 2.53; S.D.: 0.97.

5. harm_personally. “How much do you think global warming will harm: You personally” Options: −1 Refused; 1 Don’t know; 2 Not at all; 3 Only a little; 4 A moderate amount; 5 A great deal. Obs: 22,416; Mean: 1.99; S.D.: 1.20.

6. harm_US. “How much do you think global warming will harm: People in the United States” Same options as item 5. Obs: 22,416; Mean: 2.35; S.D.: 1.32.

7. harm_dev_countries. “How much do you think global warming will harm: People in developing countries” Same options as item 5. Obs: 22,416; Mean: 2.47; S.D.: 1.43.

8. harm_future_gen. “How much do you think global warming will harm: Future generations of people” Same options as item 5. Obs: 22,416; Mean: 2.75; S.D.: 1.46.

9. harm_plants_animals. “How much do you think global warming will harm: Plant and animal species” Same options as item 5. Obs: 21,086; Mean: 2.75; S.D.: 1.43.

10. when_harm_US. “When do you think global warming will start to harm people in the United States?” Options: −1 Refused; 1 Don’t know; 2 Not at all; 3 Only a little; 4 A moderate amount; 5 A great deal. Obs: 22,416; Mean: 3.87; S.D.: 1.96.

11. reg_CO2_pollutant. “How much do you support or oppose the following policies? Regulate carbon dioxide (the primary greenhouse gas) as a pollutant.” Options: −1 Refused; 1 Strongly oppose; 2 Somewhat oppose; 3 Somewhat support; 4 Strongly support. Obs: 21,406; Mean: 2.84; S.D.: 1.09.

12. reg_utilities. “How much do you support or oppose the following policies? Require electric utilities to produce at least 20% of their electricity from wind, solar, or other renewable energy sources, even if it costs the average household an extra $100 a year.” Same options as item 11. Obs: 17,390; Mean: 2.61; S.D.: 1.16.

13. fund_research. “How much do you support or oppose the following policies? Fund more research into renewable energy sources, such as solar and wind power.” Same options as item 11. Obs: 22,416; Mean: 3.09; S.D.: 1.06.

14. discuss_GW. “How often do you discuss global warming with your family and friends?” Options: −1 Refused; 1 Never; 2 Rarely; 3 Occasionally; 4 Often. Obs: 22,416; Mean: 2.11; S.D.: 0.89.

Sociodemographic variables

1. gender. “Are you…?” Options: 1 Male; 2 Female. Obs: 22,416; Mean: 1.51; S.D.: 0.49.

2. age_category. “How old are you?” [recoded] Options: 1 18–34 years; 2 35–54 years; 3 55+ years. Obs: 22,416; Mean: 2.23; S.D.: 0.78.

3. educ_category. “What is the highest level of school you have completed?” [recoded] Options: 1 Less than high school; 2 High school; 3 Some college; 4 Bachelor’s degree or higher. Obs: 22,416; Mean: 2.90; S.D.: 0.96.

4. income_category. Responses to “income” were categorized into three groups: 1 Less than $50,000; 2 $50,000 to $99,999; 3 $100,000 or more. Obs: 22,416; Mean: 1.87; S.D.: 0.80.

5. race. Responses to “race” were categorized into four groups: 1 White, non-Hispanic; 2 Black, non-Hispanic; 3 Other, non-Hispanic; 4 Hispanic. Obs: 22,416; Mean: 1.51; S.D.: 0.98.

6. ideology. “In general, do you think of yourself as...” Options: −1 Refused; 1 Very liberal; 2 Somewhat liberal; 3 Moderate, middle of the road; 4 Somewhat conservative; 5 Very conservative. Obs: 22,416; Mean: 3.04; S.D.: 1.20.

7. party. “Generally speaking, do you think of yourself as a...” Options: −1 Refused; 1 Republican; 2 Democrat; 3 Independent; 4 Other (please specify); 5 No party/not interested in politics. Obs: 22,416; Mean: 2.32; S.D.: 1.26.

8. registered_voter. “Are you currently registered to vote, or not?” Options: −1 Refused; 1 Registered; 2 Not registered; 3 Not sure; 4 Don’t know; 5 Prefer not to answer. Obs: 22,416; Mean: 1.24; S.D.: 0.82.

9. region9. Computed based on state of residence: 1 New England; 2 Mid-Atlantic; 3 East-North Central; 4 West-North Central; 5 South Atlantic; 6 East-South Central; 7 West-South Central; 8 Mountain; 9 Pacific. Obs: 22,416; Mean: 6.06; S.D.: 5.25.

10. religion. “What is your religion?” Options: −1 Refused; 1 Baptist–any denomination; 2 Protestant; 3 Catholic; 4 Mormon; 5 Jewish; 6 Muslim; 7 Hindu; 8 Buddhist; 9 Pentecostal; 10 Eastern Orthodox; 11 Other Christian; 12 Other–non-Christian; 13 Agnostic; 14 Atheist; 15 None of the above. Obs: 22,416; Mean: 6.06; S.D.: 5.25.

11. evangelical. “Would you describe yourself as ‘born-again’ or evangelical?” Options: −1 Refused; 1 Yes; 2 No; 3 Don’t know. Obs: 22,416; Mean: 1.79; S.D.: 0.64.

12. service_attendance. “How often do you attend religious services?” Options: −1 Refused; 1 Never; 2 Once a year or less; 3 A few times a year; 4 Once or twice a month; 5 Once a week; 6 More than once a week. Obs: 22,416; Mean: 3.08; S.D.: 1.80.

13. marit_status. “Are you now…?” Options: 1 Married; 2 Widowed; 3 Divorced; 4 Separated; 5 Never married; 6 Living with partner. Obs: 22,416; Mean: 2.36; S.D.: 1.80.

14. employment. “Do any of the following currently describe you?” Options: 1 Working—as a paid employee; 2 Working—self-employed; 3 Not working—on temporary layoff from a job; 4 Not working—looking for work; 5 Not working—retired; 6 Not working—disabled; 7 Not working—other. Obs: 22,416; Mean: 2.93; S.D.: 2.18.

15. house_head. Respondents were asked “Is your residence in…” with response options “Your name only”, “Your name with someone else’s name (jointly owned or rented)”, or “Someone else’s name only”. Respondents who said “Someone else’s name only” were coded as 0 = “Not head of household”; the other two responses were coded as 1 = “Head of household”. Options: 1 Not head of household; 2 Head of household. Obs: 22,416; Mean: 1.83; S.D.: 0.38.

16. house_size. “How many people live in your household?” [recoded] Open-ended. Obs: 22,416; Mean: 2.67; S.D.: 1.47.

17. house_type. “Which best describes the building where you live?” Options: 1 One-family house detached from any other house; 2 One-family house attached to one or more houses (such as a condo or townhouse); 3 Building with 2 or more apartments; 4 Mobile home; 5 Boat, RV, van, etc. Obs: 22,416; Mean: 1.53; S.D.: 0.92.

18. house_own. “Are your living quarters…” Options: 1 Owned by you or someone in your household; 2 Rented; 3 Occupied without payment of rent. Obs: 22,416; Mean: 1.27; S.D.: 0.49.

19. year. Year of survey data collection: 1 = 2008; 2 = 2010; …; 10 = 2018. Obs: 22,416; Mean: 5.72; S.D.: 2.88.

weight_wave. Sampling weight specific to each wave. Obs: 22,416; Mean: 0.99; S.D.: 0.66.

weight_aggregate. Sampling weight if aggregating multiple waves. Obs: 22,416; Mean: 0.99; S.D.: 0.71.
Table 2. Four sequential estimation steps implemented to estimate the PSEM.
Step 1: Estimation of latent variables through unsupervised hierarchical Bayesian network clustering of respondent beliefs.
Step 2: Estimation of a Bayesian network of latent variables that minimizes the description length.
Step 3: Linking the latent-variable PSEM with sociodemographic measured variables.
Step 4: Calibration and k-fold validation of the PSEM with policy support as the target variable.
Table 3. Goodness-of-fit (GoF) statistics estimated for four alternate specifications of the standard SEM.
Model specifications: SEM#1 = standard SEM, ML method, no sampling weights; SEM#2 = ML method, sampling weights; SEM#3 = MLMV method, no sampling weights; SEM#4 = MLMV method, sampling weights. Recommended GoF values follow [47]. Entries below are SEM#1 | SEM#2 | SEM#3 | SEM#4.

Sample size: 16,380 | 16,380 | 22,416 | 22,416
(1) Population error, Root Mean Squared Error of Approximation (RMSEA; recommended: less than 0.1): 0.08 | not reported (model fit with vce(robust)) | 0.08 | not reported (model fit with vce(robust))
(2A) Baseline comparison, Comparative Fit Index (CFI; recommended: closer to 1): 0.92 | not reported (model adds sampling weights) | 0.92 | not reported (model adds sampling weights)
(2B) Baseline comparison, Tucker-Lewis Index (TLI; recommended: closer to 1): 0.90 | not reported (model adds sampling weights) | 0.90 | not reported (model adds sampling weights)
(3) Size of residuals, Standardized Root Mean Squared Residual (SRMR; recommended: less than 0.08): 0.06 | 0.06 | not reported (missing-value treatment) | not reported (missing-value treatment)
(4) Size of residuals, Coefficient of determination (CD; recommended: less than 0.08): 0.10 | 0.07 | 0.10 | 0.08
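The fit indices in Table 3 are standard functions of the model and baseline chi-square statistics. A sketch using the textbook formulas; the chi-square inputs below are made-up values for illustration, not the paper's:

```python
import math

def rmsea(chi2, df, n):
    """Root Mean Squared Error of Approximation."""
    return math.sqrt(max(chi2 - df, 0) / (df * (n - 1)))

def cfi(chi2_m, df_m, chi2_b, df_b):
    """Comparative Fit Index: model vs. independence baseline."""
    d_m = max(chi2_m - df_m, 0)
    d_b = max(chi2_b - df_b, d_m)
    return 1 - d_m / d_b

def tli(chi2_m, df_m, chi2_b, df_b):
    """Tucker-Lewis Index (non-normed fit index)."""
    return ((chi2_b / df_b) - (chi2_m / df_m)) / ((chi2_b / df_b) - 1)

# Illustrative chi-square values only:
print(round(rmsea(500, 100, 22416), 3),
      round(cfi(500, 100, 5000, 120), 3),
      round(tli(500, 100, 5000, 120), 3))  # 0.013 0.918 0.902
```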
Table 4. Estimated standardized total effect sizes predicting policy support. In parentheses, G-test scores are reported for PSEM and z-test scores for SEM. All estimated effects are significant at p < 0.001. This is SEM#4 from Table 3—see Methods for explanation.
Variable: PSEM (N = 22,416) | SEM (N = 22,416)
Affective Risk Perception: 0.41 (4221.64) | 0.53 (24.35)
Analytical Risk Perception: 0.36 (4188.44) | 0.11 (11.64)
Beliefs: 0.42 (6431.69) | 0.35 (37.76)
Ideology: −0.19 (3361.94) | −0.05 (−4.39)
Party: 0.06 (2142.29) | 0.02 (7.06)
Race: 0.05 (56.49) | 0.02 (7.60)
Table 5. PSEM predicted marginal (a priori) and conditional (posterior) probabilities of strong opposers, lukewarm supporters, and strong supporters of climate policy conditional upon beliefs, affective risk perception, analytical risk perception, ideology, political party, and race.
Each row lists: Marginal (a priori) probability (%) | Conditional (posterior) probability for Strong Opposers (%) | for Lukewarm Supporters (%) | for Strong Supporters (%).

Policy Support
  • Strong Opposers: 13.15 | 100.00 | - | -
  • Lukewarm Supporters: 59.46 | - | 100.00 | -
  • Strong Supporters: 27.38 | - | - | 100.00
Beliefs
  • Dismissive: 10.48 | 40.78 | 7.92 | 1.49
  • Doubtful: 5.21 | 8.22 | 4.88 | 4.46
  • Disengaged: 7.04 | 12.79 | 8.01 | 2.19
  • Cautious: 11.37 | 15.99 | 14.25 | 2.89
  • Concerned: 19.41 | 11.03 | 25.12 | 11.02
  • Alarmed: 46.50 | 11.19 | 39.82 | 77.95
Affective Risk Perception
  • Not Worried: 41.09 | 81.04 | 44.79 | 13.86
  • Worried: 58.91 | 18.96 | 55.21 | 86.14
Analytical Risk Perception
  • Do not Knowers: 12.77 | 19.70 | 13.54 | 7.77
  • Risk Deniers: 15.94 | 47.13 | 14.71 | 3.62
  • Little Risk: 18.73 | 15.95 | 22.06 | 12.82
  • Moderate Risk: 30.46 | 11.63 | 30.92 | 38.51
  • High Risk: 22.11 | 5.59 | 18.77 | 37.28
Ideology
  • Refused: 2.54 | 9.94 | 1.66 | 0.92
  • Very liberal: 7.28 | 3.36 | 4.28 | 15.67
  • Somewhat liberal: 17.78 | 5.95 | 16.02 | 27.30
  • Moderate, middle of the road: 41.14 | 25.03 | 45.37 | 39.69
  • Somewhat conservative: 21.03 | 27.87 | 23.79 | 11.76
  • Very conservative: 10.22 | 27.86 | 8.89 | 4.66
Party
  • Refused: 1.34 | 4.63 | 0.96 | 0.57
  • Republican: 24.56 | 38.23 | 25.61 | 15.72
  • Democrat: 34.13 | 18.22 | 32.15 | 46.07
  • Independent: 23.35 | 20.41 | 24.13 | 23.05
  • Other (please specify): 2.48 | 3.86 | 2.42 | 1.94
  • No party/not interested in politics: 14.15 | 14.66 | 14.73 | 12.65
Race
  • White, Non-Hispanic: 66.05 | 71.81 | 66.48 | 62.36
  • Black, Non-Hispanic: 11.72 | 9.73 | 11.65 | 12.83
  • Other, Non-Hispanic: 7.43 | 6.19 | 7.32 | 8.25
  • Hispanic: 14.80 | 12.26 | 14.56 | 16.55
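The marginal and conditional probabilities in Table 5 are linked by Bayes' rule, which can be checked directly. Using the "Alarmed" belief class as an example, the class-conditional percentages weighted by the policy-support marginals recover the 46.50% marginal for Alarmed (up to rounding), and inverting gives the posterior class membership of an Alarmed respondent:

```python
# Class marginals and P(Alarmed | class), read from Table 5 (percent -> prob).
p_class = {"opposer": 0.1315, "lukewarm": 0.5946, "supporter": 0.2738}
p_alarmed_given = {"opposer": 0.1119, "lukewarm": 0.3982, "supporter": 0.7795}

# Law of total probability: P(Alarmed) = sum_c P(Alarmed | c) * P(c).
p_alarmed = sum(p_class[c] * p_alarmed_given[c] for c in p_class)
print(round(p_alarmed, 4))  # 0.4649, matching the 46.50% marginal up to rounding

# Bayes' rule: P(c | Alarmed) = P(Alarmed | c) * P(c) / P(Alarmed).
posterior = {c: p_class[c] * p_alarmed_given[c] / p_alarmed for c in p_class}
print({c: round(v, 3) for c, v in posterior.items()})
```

This doubles as a consistency check on the published table: roughly 46% of Alarmed respondents are strong supporters, while about half remain lukewarm.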