Identifying key psychological factors influencing the acceptance of emerging technologies – a multi-method approach to inform climate policy

The best combination of possible climate policy options (mitigation, adaptation, and different climate engineering technologies) to tackle climate change is unknown. Climate policy faces a hard decision: should climate engineering technologies be researched, deployed in a limited fashion, or even deployed at global scale? Such technologies bear large epistemic and ethical uncertainties, and their use as well as their non-use might have severe consequences. To deal with such uncertainties, the (ethical) assessment of climate engineering technologies should include the perspectives of various stakeholders, including laypersons, to inform climate policy. To facilitate (ethical) technology assessment, we propose a novel two-step methodology to collect and analyze data on ethical concerns and the acceptability of climate engineering technologies, focusing on Stratospheric Aerosol Injection (SAI) as a use case. We propose an innovative combination of newly developed methods consisting of two data collection tools (Cognitive-Affective Mapping and a large-scale survey) and two types of data analyses (using graph theory and factor analysis). Applying this multi-method approach, we were able (1) to identify central ethical and governance-related concerns regarding SAI (via Cognitive-Affective Maps) and (2) to estimate the relative importance of core constructs (positive and negative affect, risk and benefit perception, trust) for the acceptability of SAI (via the large-scale survey).


Introduction
At the 21st Conference of the Parties in Paris in 2015, over 190 states decided to limit global warming to well below 2 degrees, at best to 1.5 degrees Celsius [1]. However, according to the Intergovernmental Panel on Climate Change (IPCC), the current "Nationally Determined Contributions" to reduce greenhouse gas emissions bear a "greater than 50% likelihood" that global warming will exceed 1.5 degrees Celsius during the 21st century. The first part of this article motivates the involvement of public perspectives and concerns (upstream engagement) and theoretically derives an integrative model to predict the acceptability of SAI. In the second part of the article the study design is described, which includes two central procedures: (a) the step I procedure, "Cognitive-Affective Mapping", which was conducted as a pre-study to extend the theoretically derived integrative model, and (b) the step II procedure, a large-scale survey to measure the acceptability of SAI. These procedures combine different tools of data collection and different types of statistical analyses, which are explained in detail in the respective sections. The final discussion (General Discussion) section summarizes the utility of the proposed methodology for future research on climate engineering technologies (CET) to inform climate policy.

Importance of upstream engagement
It has been argued that possible risks and benefits of CET should be assessed in a centralized, top-down approach by experts. This "classical technology assessment" [e.g. 29, 30] was rather popular and mainly driven by the American Office of Technology Assessment [31]. Yet, to face the situation of post-normal science [28], which is characterized by high systems uncertainties (like the complexity of the climate system) and high decision stakes (how humankind should counter climate change), a more inclusive, participatory approach of public engagement seems necessary [32–34]. The involvement of public perspectives and concerns on CET, which are at an early stage of development, is called "upstream engagement". Upstream engagement is future-oriented and works from the bottom up: the concerns of all stakeholders affected by controversial technologies are heard and, at best, lead to changes in the research and implementation process of climate technologies [35]. Considering CET, the Royal Society report "Geoengineering the climate" (2009) emphasized that "the acceptability of geoengineering will be determined as much by social, legal and political factors, as by scientific and technical factors" [10 p ix].
Therefore, we propose a highly economical methodology to collect and analyze data on ethical concerns and the acceptability of CET, focusing on SAI as a use case. Applying online tools like surveys and an exploratory method called Cognitive-Affective Maps (explained later) enables us to elicit broad public opinions on SAI without requiring a strong form of active public engagement [cf. 33].

Model to predict acceptability of SAI
In this study, we propose and test an integrative model to predict the acceptability of SAI based on central constructs (assessed by survey scales), see Fig 1. Acceptability is understood here as an evaluative judgment (attitude) and not as real behavior, like concrete support of or resistance towards SAI [cf. 36]. According to the theory of planned behavior [37], behavioral intentions (acceptability) should be predictive of the performance of a behavior under sufficient perceived behavioral control (the so-called sufficiency assumption) [38]. However, as SAI is an emerging technology, one may argue that behavioral control, "people's perception of the ease or difficulty of performing the behavior of interest" [37], is purely speculative here, and potential real behavior can only be imagined. Our proposed integrative model, which focuses on an emerging technology, therefore cannot be mainly motivated by "classical models" for measuring the acceptance of socially entrenched and well-established technologies (e.g., the "Technology Acceptance Model" [39]). For this reason, we decided to base our proposed model on central value theories like the "theory of basic individual values" [e.g. 40] and the "value-belief-norm theory" [e.g. 41], which emphasize the role of values in creating a predisposition towards certain behavior, like the acceptability of SAI.
To derive this model, in the following we present the central empirical research on SAI and the reviews supporting and reflecting the proposed structure of the integrative model in Fig 1. An overview of the single constructs is given in Table 2 and in the online supplementary material (see B in https://osf.io/vb5qe). Based on multiple studies [42–47], which jointly assessed the influence of one or all of the identified constructs (see Table 1 for an overview), we derived Positive and Negative Affect, Risk and Benefit Perception, and Trust as central constructs predicting the acceptability of SAI (see Fig 1).
In the identified studies, acceptability was measured with instruments ranging from a single-item question [42] to a 19-item scale that also included aspects of benefit perception [44]. The other central constructs were likewise operationalized quite differently, with only limited overlap in content for the respective constructs. To test the integrative model in this article, we therefore selected and adjusted published scales carefully, based on theoretical and statistical arguments.
PLOS CLIMATE

The proposed structure of the integrative model (also called its nomological structure) in Fig 1 is justified by (1) existing empirical literature and (2) central reviews. By nomological structure we mean the empirically observable relations (correlations) among the proposed core constructs [cf. 48]. In empirical studies, the effect of trust on risk and benefit perception has been shown [46,49], and perceived risks and benefits are influenced by positive and negative affect [46,50]. An impact of trust on risk and benefit perception as well as on positive and negative affect was proposed by Huijts et al. [36]. Further, the proposed relations are discussed and reflected in multiple reviews [36, 51–55]. Overall, it is argued that laypersons, especially if their knowledge of a surveyed topic is low, rely mainly on heuristics: initial affect influences risk and benefit perceptions of a technology (affect heuristic), and a stronger negative affect leads to higher perceived risks and fewer perceived benefits. The reverse holds true for initial positive affect, which explains why perceived risk and benefit are inversely related [e.g. 53,55,56]. If knowledge is low, laypersons can also rely on trust (trust heuristic), based on a perceived similarity with (or perceived competencies of) relevant stakeholders or institutions (e.g., scientists who are researching SAI). Thereby, trust and affect are substantially correlated [cf. 52].
Please note that trust is a complex construct [cf. 57]: humans often draw inferences about unobservable properties of potential informants, such as value similarity, intent, or competence [58]. In general, scientists or environmental groups are trusted more than institutions or companies [59]; for CET, however, scientists are trusted more than companies only in the early stages of technology development [cf. 60,61].
Additional factors without proposed specific structural relations. Fig 1 also depicts influential factors outside of the core structural model. These predictors were derived in step I of our proposed procedure (see the Study Design section for a detailed description). Using Cognitive-Affective Maps (CAMs), we identified ecological, ethical and trust-related concerns that might also influence the acceptability of SAI (see the constructs highlighted in gray in Fig 1). To cover the identified concerns, we included scales for Climate Change Concern [62], Moral Hazard, and Tampering with Nature [47, cf. 61,63]. Additionally, we included our recently developed "Empirical Ethics Scale for Technology Assessment" [64]. Because the structural relations of these factors are less clear, we present them as potential influential factors outside of the central core model. All scales empirically investigated in this study are shown with their reliability coefficients in Table 2.

Study design
The study is composed of two central procedures: (a) the step I procedure, "Cognitive-Affective Mapping", conducted as a pre-study to inform (b) the step II procedure, a large-scale survey to measure the acceptability of SAI (see Fig 2). Please note that 56 out of 58 participants of the pre-study also participated in the survey study and answered both studies within two weeks. Multiple reviews have shown that more than 80% of respondents are not familiar with SAI or CET in general [60, 69–71]. Therefore, we decided to use a scenario text to inform participants about SAI prior to collecting CAM or survey data. The scenario text, which describes the SAI technology and presents it in the context of climate change (see A in the online supplementary material; https://osf.io/vb5qe), resembles a classical scenario-based approach. In the terminology of climate engineering scenarios, it is a strategic conversation scenario, which frames key messages (operating principle, pros and cons) to stimulate discussion and research [72]. While designing the SAI scenario, we followed the steps for developing scenarios described by P. Schwartz [73] and relied on empirical SAI studies that already used a scenario-based approach and tested their scenario texts in advance [44,46]. To enhance the quality of the scenario text and to check whether laypersons understood the described SAI technology, the scenario text was pretested and slightly adjusted (see C in the online supplementary material; https://osf.io/vb5qe). Additionally, we considered theoretical quality criteria such as the readability, plausibility and ambivalence of the scenario text [see 74,75].

Online data collection
The step I procedure "Cognitive-Affective Mapping" and the step II procedure, a large-scale survey, were conducted online using the participant marketplace Prolific. The only prerequisites for participation were to speak English fluently and to live in the United Kingdom. With the increased anonymity of an online study, we aimed to also collect beliefs that might otherwise be perceived as socially undesirable [cf. 76,77]. In both studies, we adhered to established standards of web-based research [78,79]. The online studies were programmed in lab.js [80], which gave us the flexibility to also collect paradata (e.g., recording if and when participants left the fullscreen of the online study). The studies were hosted on a local university JATOS server [81], which guaranteed high privacy standards.

Technical aspects of the analyses
All of the following analyses were conducted using the statistical software R [82] and Mplus [83]. Using two statistical software packages allows us to consider the influence of different implementations of the statistical procedures (e.g., estimation) and thereby increases the robustness of the results [cf. 84,85]. Additionally, we could use procedures that have not been fully implemented in R [e.g. 86]. The R package "MplusAutomation" [87] made it possible to write the complete analysis scripts in R with text annotations. The "rmarkdown" R package [88] allowed us to write all analysis scripts as reproducible dynamic documents, which combine code, output and text in one document, so that the proposed methodology can easily be transferred to future studies. In the following sections, the procedures of the novel two-step methodology are each motivated by their general relevance, followed by a more technical analysis section, the presentation of the results, and a discussion.
Step I procedure-Cognitive-Affective Maps

Motivation of methodology
By applying CAMs, it is possible to explore the public's possible concerns regarding emerging technologies in a relatively general and unrestricted way. In doing so, we could iteratively extend our proposed integrative model [cf. 89]. As an explorative and time-efficient method, CAMs allow us to identify concerns regarding CET similar to those found by more time-consuming interview or focus-group based methods [e.g. 90–94].
CAMs are a quantitative and qualitative research tool to identify, visually represent and analyze existing belief structures [e.g. 95,96]. From a theoretical perspective, CAMs are appealing for this study because they "provide an immediate gestalt of the whole system and of the simultaneous interactions between, and relationships among, its parts" [97 p2]. This emphasizes the high face validity of the method. CAMs have already been applied as a tool to study political conflicts and general belief systems [97–101], and it has been shown that CAMs provide added value to empirical survey studies [102,103]. CAMs have also been applied to identify ethical values [cf. 104,105] and were proposed in the context of the theory of "ethical coherence" [106]. As an exploratory method, CAMs offer added value compared to a survey, as the latter can only consider pre-identified potential factors influencing SAI acceptability [cf. 107,108]. This method enabled us to enrich the theoretically proposed integrative model with ethical and climate change related concerns (see Fig 1).

Methods
Participants. In the step I procedure, 58 participants were recruited in total (mean age 38, SD = 10.40, 47% female). Participants were compensated with GBP 9.08 per hour.
Procedure. After providing written informed consent, reading an instruction on how to draw CAMs, and reading the scenario text on SAI, participants were asked to draw a CAM. After creating their CAM, participants were asked for feedback on what they had drawn. One participant (2%) faced technical problems, but no one had to stop drawing the CAM for technical reasons. The pre-study took on average 46 minutes (SD = 16.94). Using CAMs, participants can freely add and connect concepts they associate with SAI (see the Results section for details). CAMs incorporate so-called affective valences by representing whether a person associates positive, negative, neutral or ambivalent emotions with a drawn concept (positive and negative ratings ranging from −3 to 3). Furthermore, it is possible to connect concepts with dashed lines for inhibitory connections and solid lines for supporting/strengthening connections, and to specify connections of different strengths. Additionally, CAMs can contain directional arrows, which represent a directional effect.
Participants drew a CAM online using the recently developed tool "Cognitive-Affective Maps extended logic" [109]. Six concepts were predefined: positive feelings, negative feelings, trust in political institutions, perceived risks, perceived benefits, and acceptability of SAI; the latter concept was placed in the center of the CAM. Participants were technically prevented from deleting, changing or moving the central concept "acceptability of SAI", but were able to move and delete the other predefined concepts. Participants could freely add further concepts, change their valence, and add comments to them. After a concept was drawn, it could be connected to others via connections of different strengths and types (i.e., inhibitory vs. supportive). The CAMs of the pre-study with the lowest and highest average mean valences are depicted for illustrative purposes in S1 Text.
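The CAM elements just described (valenced concepts plus typed, weighted connections) can be captured in a small data structure. The following Python sketch is purely illustrative; the actual CAM tools are not implemented this way, and all names and values here are our own assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Concept:
    label: str
    valence: int  # rated from -3 to 3; ambivalent/neutral concepts score 0

@dataclass
class Connection:
    source: str
    target: str
    strength: int      # connection strength, e.g. 1..3
    supporting: bool   # True = solid/strengthening, False = dashed/inhibitory

@dataclass
class CAM:
    concepts: dict = field(default_factory=dict)   # label -> Concept
    connections: list = field(default_factory=list)

    def add_concept(self, label, valence):
        self.concepts[label] = Concept(label, valence)

    def connect(self, source, target, strength=1, supporting=True):
        self.connections.append(Connection(source, target, strength, supporting))

    def mean_valence(self):
        # average valence over all drawn concepts, as reported in the Results
        vals = [c.valence for c in self.concepts.values()]
        return sum(vals) / len(vals)

# Toy CAM with one predefined central concept and two valenced concepts
cam = CAM()
cam.add_concept("acceptability of SAI", 0)
cam.add_concept("perceived risks", -2)
cam.add_concept("perceived benefits", 2)
cam.connect("perceived risks", "acceptability of SAI", strength=2, supporting=False)
print(cam.mean_valence())  # 0.0
```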

Results
Data preparation and analysis. Data preparation. We summarized the CAMs using the dedicated CAM-App [110], a Shiny application [111]. The CAM-App generates a protocol that tracks every summarizing step, so that the qualitative process of summarizing CAM data is completely transparent. Summarizing CAM data follows a five-step procedure, theoretically motivated by the existing CAM literature [cf. 107,108,112,113] and qualitative handbooks [114–116]: (1) In a first deductive step, superordinate and subordinate categories are derived from the existing literature; semantically identical terms can be positive or negative depending on the context. (2) In a subsequent inductive step, all CAMs are studied separately and superordinate and subordinate categories are recorded in the form of memos. Further, overview-like case summaries are created by looking at the CAMs that are most positive and most negative in mean valence. (3) Subcategories are formed inductively, taking existing theories into account: in the first coding step (3a), categories are formed and their respective frequencies are noted; in the second coding step (3b), the existing category system is reduced by combining subcategories that thematically refer to a similar subject. This process (3a, 3b) is repeated until all terms in the CAMs have been coded. (4) All subcategories that have been formed are combined into topics at a higher level of abstraction, and finally (5) the results are presented in the form of tables and graphics.
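Coding step (3a) can be illustrated with a toy example: raw concept labels are mapped to subcategories via a codebook, and subcategory frequencies are counted. The codebook entries and labels below are invented for illustration and are not the study's actual category system.

```python
from collections import Counter

# Hypothetical codebook mapping raw CAM concept labels to subcategories
codebook = {
    "acid rain": "risks for nature",
    "ocean acidity": "risks for nature",
    "flooding": "risks for weather",
    "protect polar ice caps": "benefits for nature",
}

# Raw concepts as they might appear across several participants' CAMs
raw_concepts = ["acid rain", "ocean acidity", "flooding",
                "protect polar ice caps", "ocean acidity"]

# Count how often each subcategory occurs (step 3a)
frequencies = Counter(codebook[c] for c in raw_concepts)
print(frequencies.most_common())
```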
Analysis. In total we collected 58 CAMs, in which participants drew on average 25.40 (SD = 2.06) concepts and 44.21 (SD = 32.49) connections (please note that the technical settings required participants to draw at least 24 concepts in total). The valences of the concepts range from −3 to −1 for negative and from 1 to 3 for positive concepts, with ambivalent and neutral concepts being assigned a value of 0. The mean average valence over all CAMs was −.33 (SD = .51). In 14% of the CAMs, one or more (up to 4) of the other predefined concepts were deleted. The valence of the predefined concepts was changed in only 69% of the cases, despite participants being instructed to do so. After drawing the CAM, participants indicated to what extent they felt that the CAM they had just drawn reflected their thoughts and feelings about the SAI technology, on a scale ranging from 1 = completely unrepresentative to 7 = fully representative. With an average value of 5.90 (SD = .95), participants rated their drawn CAMs as relatively representative.
Following the five-step procedure, the 1063 unique concepts (1473 in total) were reduced to 52 concepts (see the detailed list of the 52 concepts in D in the online supplementary material; https://osf.io/vb5qe). Negative concepts are especially pronounced for the different perceived risks for health, nature, society and weather (e.g., SAI could lead to more rain and therefore flooding), wherein participants highlighted the possible increased acidity of the oceans. Positive concepts are the perceived benefits for health, society and nature (e.g., SAI could protect polar ice caps). Some of these concepts, like the possibility of acid rain, were partly included in the scenario text on SAI, but interestingly participants came up with two additional clusters of concepts relevant for climate policy, namely ethical arguments and governance.
Ethical arguments. In 38 drawn concepts, the ethical argument of the "Termination Problem" was highlighted, which states that an abrupt termination of SAI could lead to accelerated heating due to the large concentrations of atmospheric carbon dioxide [e.g. 17]. The "Termination Problem" was perceived as strongly negative on average (M = −2.00, SD = 1.01), and the problem was emphasized in comments (e.g., "Once started SAI would be required for decades", "if there are problems can [SAI] be stopped once d[e]ployed?"). Another negative ethical argument is the "Moral Hazard" argument, which one participant explains in the comments as "humans may feel like they do not need to combat global warming actively and put the responsibility down to SRM", while another states that "companies, governments will carry on polluting and using bad technology/fossil fuels". "Playing God" (also called the "Hubris argument") is a further negative ethical argument, whereby participants see SAI as an "unacceptable natural interference"; one participant questions whether "[SAI is] another way in which humans mess things up by playing God?". The "Emergency Argument" is seen as rather ambivalent: participants emphasized that SAI should only be used as a "last chance" and only as an emergency solution. The "Arming the Future" and "Buying Time" arguments are perceived positively, the first arguing that SAI could "creat[e] a better future for future generations", while the second states that SAI could "give time for better technology to be deployed, allowing a 'pause' in the effects of global warming until a solution is found". Finally, in the "Innovation argument", participants emphasized that SAI is an innovative, pioneering idea that could inspire other ideas.
Governance. Multiple summarized concepts emerged that relate to questions of SAI governance. They were rated negatively overall: within the summarized concept of accountability, participants asked who is responsible and accountable for SAI deployment. A possible agreement between countries was seen as rather unrealistic (see the concept "disagreement countries" in the online supplementary material), because countries fight and distrust each other, although an international agreement would have to be found before SAI is deployed. The role of national governments was perceived rather diversely, because "[a]fter elections new government might have new priorities", "government focuses policy on electoral cycles", and "controversial topics [like SAI] cause division of political groups and nothing gets done". Summarized in the concept of "wrong motivations", participants argued that "those in power" are "self centered", have "vested interest" and are "extreme campaigners" who cannot be trusted to make moral judgments. The overall mean valence of the predefined concept "trust in political institutions" was negative as well, which was emphasized in the summarized concept trust lost: due to "corruption", "lack of credibility" or "political selfishness", experts and governments are not trusted. In sharp contrast to the concepts mentioned above, participants argued that SAI could "bring parties and countries together with a universal mission" and could foster a "united world" and "political harmony" (see the summarized concept collaboration). Within the positive concept of "trust", participants expressed "faith in politics" or in "scientists opinion".
To highlight the interrelatedness of all summarized concepts, the CAMs were aggregated by creating a so-called "canonical adjacency matrix": a symmetric matrix of integers, in which the off-diagonal entries represent the number of connections between two concepts and the main diagonal represents how often a specific concept was drawn. This matrix can be represented as a network (see Fig 3), which is an overall graphical representation of the 58 drawn CAMs. The size of a concept and the thickness of a connection are proportional to the frequency of the drawn concept and of the pairwise connection, respectively. This process of summarizing and depicting CAM data was motivated by the literature on graph theory [e.g. 117], semantic networks [e.g. 118,119] and Prior's [120] book section on content analysis. Pictures of all CAMs, a searchable Excel file (https://osf.io/fnkjp) and the protocol file can be found on OSF.
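The construction of such a canonical adjacency matrix can be sketched as follows. The concept vocabulary and the two toy CAMs below are invented for illustration; the study's real matrix spans 52 summarized concepts over 58 CAMs.

```python
import numpy as np

# Hypothetical vocabulary of summarized concepts
vocab = ["acceptability of SAI", "perceived risks", "termination problem"]
index = {c: i for i, c in enumerate(vocab)}

# Each CAM, after summarizing, is a (concepts, undirected connections) pair
cams = [
    (["acceptability of SAI", "perceived risks"],
     [("perceived risks", "acceptability of SAI")]),
    (["acceptability of SAI", "perceived risks", "termination problem"],
     [("perceived risks", "acceptability of SAI"),
      ("termination problem", "perceived risks")]),
]

A = np.zeros((len(vocab), len(vocab)), dtype=int)
for concepts, connections in cams:
    for c in concepts:
        A[index[c], index[c]] += 1   # main diagonal: how often a concept was drawn
    for u, v in connections:
        A[index[u], index[v]] += 1   # off-diagonal: connection counts,
        A[index[v], index[u]] += 1   # symmetrized for the undirected network

print(A)
```

The resulting symmetric matrix can then be passed to any network-drawing library, with node size taken from the diagonal and edge thickness from the off-diagonal counts, as in Fig 3.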

Discussion
Comparing the central concerns identified in the CAMs to the existing literature, we found similar concerns regarding efficacy/feasibility (we used the term "effectiveness"), governance issues, economical aspects, ethical and political concerns, and possible risks [cf. 90,92,121]. The CAM analysis additionally identified perceived benefits (like benefits for nature, or hope for possible transnational cooperation), which have barely been reported in existing deliberative studies [e.g. 90–94].
The results regarding governance could be in some ways alarming for climate policy: participants in our study indicated a strong pessimism regarding a possible global agreement on implementing SAI (concept "disagreement countries") and noted that it is unclear who is responsible for implementing such a technology (concept "accountability"). Without solving these issues and without the agreement of the broader public, participants fear future conflicts. Similar governance-related issues have also been identified by deliberative studies [e.g. 32, 33, 91, 122]. Such pessimism about implementing governance structures seems to be related to the slightly negative trust in political institutions in our dataset.
Most importantly for the purpose of the current study, we identified central ethical arguments that we had not included in the theoretically derived core model to predict acceptability (see Fig 1). The ethical concerns regarding SAI in the CAMs resemble central arguments of the existing ethical literature on CET, for example that SAI does nothing against the underlying cause of climate change [e.g. 17,18,22]. The overall results of the pre-study emphasize the importance of including technology-specific influential factors, in our case two additional ethical scales, Tampering with Nature and Moral Hazard. The identified "risks for nature" indicate the need to include a scale for Climate Change Concern. These scales were included in the model tested in the next section, the step II procedure.
Step II procedure-Survey

Motivation of methodology
We conducted a large-scale survey to test the empirical support for the theoretically derived integrative model (see Fig 1) by means of a so-called structural equation model. This model empirically tests all indirect and direct effects, and the strengths of the estimated parameters indicate the relative importance of the identified constructs for predicting the acceptability of SAI. In general, by applying so-called "latent variable models" [e.g. 123,124], it is possible to account for the unreliability of scales (caused by measurement error), to test for certain response patterns, and to formulate simple exploratory or measurement models as well as more complex structural equation models.

Methods
Participants. In the large-scale survey (step II procedure), a total of 600 participants were recruited. The pre-study was technically linked to the survey study, and 56 of its 58 participants (a 3% drop-out rate) participated in both studies. Participants were compensated with GBP 9.99 per hour.
We tested for insufficient effort responding, which indicates reduced effort by participants when answering survey questions [125]. Participants were flagged on five criteria: needing extremely long to answer the complete survey or single components, showing low intra-individual response variability or no variability at all, or being identified as multivariate outliers when answering the survey scales. On these criteria we conducted a latent class analysis [cf. 125,126]. By doing so, we identified 21 participants (3.5%) who showed suspicious response behavior and subsequently removed them from the analysis.
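Two of these screening criteria, intra-individual response variability (IRV, the standard deviation of a participant's own answers) and complete straight-lining, can be illustrated with a short sketch. The response data and the low-variability cut-off below are invented for illustration and are not the study's actual flagging thresholds.

```python
import numpy as np

# Rows: participants; columns: Likert-type items (toy data)
responses = np.array([
    [1, 5, 2, 4, 3, 1, 5],   # varied responding
    [3, 3, 3, 3, 3, 3, 3],   # no variability at all (straight-lining)
    [4, 4, 3, 4, 4, 4, 3],   # low intra-individual variability
])

irv = responses.std(axis=1)      # SD of each participant's own answers
flag_no_var = irv == 0           # straight-lining flag
flag_low_var = irv < 0.5         # assumed low-variability cut-off

for i in range(len(responses)):
    print(f"participant {i}: IRV={irv[i]:.2f}, "
          f"no_var={flag_no_var[i]}, low_var={flag_low_var[i]}")
```

In the study, such binary flags (together with response-time flags) served as indicators for a latent class analysis rather than being applied as hard cut-offs on their own.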
The final sample of the survey study included in the analyses consisted of N = 579 participants, 47% were female (2 participants preferred to not say / provide their gender) with a mean age of 40 years (SD = 13.26; range 18 to 87).
Procedure. Participants first provided written informed consent and were motivated to provide thoughtful answers, which was additionally checked by a commitment check. Subsequently, participants read the scenario text on SAI and answered the generic question "When, in your opinion, is the described 'Stratospheric Aerosol Injection' technology morally right?", intended to increase reflective thinking and thereby enhance data quality [cf. 34, 61]. Participants then answered multiple survey scales (see Table 2). To avoid order effects, the survey scales were presented in randomized order for each participant. After answering all survey scales, participants answered socio-demographic questions. Participants were surveyed regarding prior knowledge and socio-demographic variables (age, gender, education, left-right scale, religiosity scale) as control variables. Using three items (e.g., "I have a clear opinion about stratospheric aerosol injection."), we also asked participants for the perceived certainty of their evaluation of the SAI technology [cf. 127]. Finally, participants gave feedback on the study. We had no missing data, because answering all items was technically required and participants were motivated to answer each question even if they were not completely sure. To check the quality of the resulting data, based on the literature on opinion quality [128,129], we tested for opinion consistency (the extent to which opinions are consistent with theoretically related variables), opinion stability (whether the overall evaluation of SAI is stable) and opinion confidence (the subjective validity of one's given evaluations) (cf. [130]; for results see D in the online supplementary material; https://osf.io/vb5qe). The main study took on average 29.30 minutes (SD = 11.27).

Results
The statistical models were analyzed using the R packages psych [131], lavaan [132] and regsem [133] and the statistical software Mplus [83]. The univariate skewness and kurtosis of the items within the proposed integrative model indicated modest to severe deviations from normality, with an average skewness of .16 (maximum 1.6) and an average kurtosis of 2.8 (maximum 4.8). By applying the MLR estimator in Mplus, we corrected the standard errors by Huber-White sandwich estimation [134] and corrected the test statistics to account for non-normality.
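This univariate screening step can be sketched in Python: sample skewness should be near 0 and (non-excess) kurtosis near 3 for normally distributed items. The data below are simulated, not the study's; the analyses themselves were run in R/Mplus.

```python
import numpy as np

rng = np.random.default_rng(1)

def skewness(x):
    # standardized third moment; ~0 for a normal distribution
    z = (x - x.mean()) / x.std()
    return (z ** 3).mean()

def kurtosis(x):
    # standardized fourth moment (non-excess); ~3 for a normal distribution
    z = (x - x.mean()) / x.std()
    return (z ** 4).mean()

item = rng.exponential(scale=1.0, size=5000)   # a clearly right-skewed "item"
normal = rng.normal(size=5000)

print(round(skewness(item), 2), round(kurtosis(item), 2))      # well above 0 / 3
print(round(skewness(normal), 2), round(kurtosis(normal), 2))  # close to 0 / 3
```

Large deviations on such checks motivate robust estimation (like MLR with sandwich standard errors) instead of plain maximum likelihood.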
Exploratory and confirmatory analysis. Exploratory factor analysis. We conducted exploratory factor analyses on all scales included in the proposed integrative model (within the dashed line in Fig 1) and used parallel analysis [135] to determine the number of factors for each scale. The exploratory factor analyses and the tests of the Fornell-Larcker criterion, which was met, can be found in the central analysis script (see the section "exploratory factor analysis" in the analysis script in the online supplementary material; https://osf.io/a9xpy).
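The logic of parallel analysis can be sketched as follows: factors are retained as long as the observed eigenvalues exceed the mean eigenvalues obtained from random data of the same dimensions. The simulated two-factor data below are illustrative only; the study used the implementation in the psych R package.

```python
import numpy as np

rng = np.random.default_rng(42)
n, p = 500, 6

# Simulate a crude two-factor structure: two clusters of three items each
f1, f2 = rng.normal(size=(2, n))
X = np.column_stack([f1 + rng.normal(scale=0.6, size=n) for _ in range(3)] +
                    [f2 + rng.normal(scale=0.6, size=n) for _ in range(3)])

# Eigenvalues of the observed item correlation matrix, descending
obs_eig = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]

# Mean eigenvalues of many random data sets of the same shape
rand_eigs = np.array([
    np.sort(np.linalg.eigvalsh(
        np.corrcoef(rng.normal(size=(n, p)), rowvar=False)))[::-1]
    for _ in range(200)
])

# Retain factors whose observed eigenvalue exceeds the random benchmark
n_factors = int((obs_eig > rand_eigs.mean(axis=0)).sum())
print(n_factors)  # 2 for this simulated two-factor structure
```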
Analysis measurement model. To prepare the structural equation model, we conducted confirmatory factor analyses and identified so-called local item dependencies, indicated by remaining inter-item correlations after accounting for the latent variable, which violate the assumption of local independence [cf. 136]. The main driver of these correlations were meaning similarities between items (e.g., in the trust scale participants rated "politicians" and "political parties" quite similarly), which we addressed by allowing for correlated residuals [cf. 137,138]. The pitfall of improving model fit by allowing for correlated residuals is capitalizing on chance, which limits the generalizability of the model to out-of-sample data (the specified model overfits the data) [e.g. 139,140]. To rigorously allow only for correlated residuals between items with strong statistical support, we applied regularization to the problematic measurement models, which penalizes the complexity of the model until a more parsimonious model is achieved [133,141]. To evaluate the model fit of the proposed measurement models and the structural equation model in line with recommendations [e.g. 142, 143], we considered point estimates of the SRMR and RMSEA (along with their 90% confidence intervals) of around .08 and .06, respectively, to indicate well-fitting models. Additionally, the TLI and CFI are reported, whereby a value close to .95 indicates a well-fitting model [144].
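For reference, the RMSEA and CFI indices used as fit criteria above can be computed from the model chi-square. The following Python sketch shows the standard point-estimate formulas; the example numbers are made up, not taken from the study, whose indices were produced by lavaan/Mplus.

```python
import math

def rmsea(chi2, df, n):
    """Point estimate of the RMSEA: population-corrected misfit
    (chi-square minus df) per degree of freedom, scaled by sample size."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2, df, chi2_base, df_base):
    """Comparative Fit Index: non-centrality of the target model
    relative to the baseline (null) model."""
    d_model = max(chi2 - df, 0.0)
    d_base = max(chi2_base - df_base, 0.0)
    return 1.0 - d_model / d_base if d_base > 0 else 1.0

# Hypothetical illustration (all numbers invented):
print(round(rmsea(chi2=300.0, df=100, n=500), 3))                     # → 0.063
print(round(cfi(chi2=300.0, df=100, chi2_base=2100.0, df_base=120), 3))  # → 0.899
```

Under these made-up numbers the RMSEA of .063 would be near the .06 cut-off, while the CFI of .899 would fall short of the .95 benchmark, illustrating why several indices are considered jointly.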
In the following, we report analyses only for scales where we deleted items or allowed for correlated residuals. To test the robustness of the specified correlated residuals, we also conducted tests of regularization and cross-validation, using 60% of the data as training data [145]; these are not reported here but are available on OSF (see section "confirmatory factor analysis" in the analysis script in the online supplementary; https://osf.io/a9xpy).
Acceptability. Two items of the acceptability scale, which surveyed the acceptability of research on SAI, were deleted because, in contrast to the other items, they received strong support. Including these two items slightly worsened the fit of the measurement model; e.g., the RMSEA was .09 (90% CI: [.08, .095]) with them compared to .08 ([.069, .086]) without. These results, in addition to substantive reasons, indicate that these items measure something other than general acceptability (e.g., "I am a supporter of SAI") or behavioral intentions (e.g., "In a national referendum I would vote for deployment."). Multiple studies have likewise shown high support for research compared to significantly lower support for deployment [e.g. 45, 60,127,146].
Negative and positive affect. The error variances between three item pairs for perceived negative affect (guilty and ashamed; scared and afraid; hostile and irritable) and three item pairs for perceived positive affect (interested and attentive; attentive and active; strong and active) regarding SAI were allowed to correlate freely. Additionally, the item "alert" of the perceived positive affect scale was deleted because it showed positive correlations with the items of the negative affect scale. Substantively, we assume that in the context of SAI "alert" could be interpreted as a negative emotion (e.g., a sign of alarm / emergency). Without allowing for correlated residuals, the model fit of the measurement models of negative affect (e.g., RMSEA = .126, CFI = .873) and positive affect (e.g., RMSEA = .13, CFI = .882) was not sufficient. In a validation study by Crawford and Henry [147], the authors also had to allow for correlated residuals between items from the same mood categories, an adjustment quite similar to the one in this study.
Trust. The error variances between three item pairs of the trust-in-public-institutions scale (legal system and police; politicians and political parties; European Parliament and United Nations) were allowed to correlate freely. The three item pairs are highly similar regarding wording and meaning. Without allowing for the correlated residuals, the fit was insufficient (e.g., RMSEA = .199, CFI = .83). To the best of our knowledge, this scale has never been tested by means of latent variable models (e.g., [148] only reported Principal Component Analyses).
Analysis structural equation model. First, we determined the latent correlations of all central constructs of our proposed model by fitting a first-order confirmatory factor model, which specifies one latent factor for each construct and allows the latent variables to correlate freely. Although the global likelihood-ratio test statistic was significant, Χ²(1854) = 3932.59, p < .01, the SRMR = .067 and the RMSEA = .044 (90% CI: [.042, .046]) indicated an acceptable fit. The CFI = .917 and TLI = .912 were also close to .95. The estimated latent correlations among the central constructs, along with Cronbach's alpha and descriptive statistics, are shown in Table 3. The majority of correlations exceeded r = .40. Risk Perception correlated strongly negatively (r = -.81) and Benefit Perception strongly positively (r = .89) with Acceptability. The correlation between Acceptability and Tampering with Nature was also strongly negative (r = -.8); other factors were less strongly (albeit moderately) associated with Acceptability, e.g., Positive Affect (r = .57). Benefit Perception and Risk Perception were strongly negatively correlated (r = -.84).
Given the substantial intercorrelations among certain constructs and the substantive theory explained before, we continued by specifying a structural equation model. The structural equation model additionally contains the construct Tampering with Nature, which we justify by the results of the step I procedure and the strong latent correlations of Tampering with Nature with multiple constructs (a fitted structural equation model without Tampering with Nature can be found in E in the online supplementary; https://osf.io/vb5qe). Again, although the global likelihood-ratio test statistic was significant, Χ²(1348) = 2914.90, p < .01, the SRMR = .068 and the RMSEA = .043 (90% CI: [.043, .047]) indicated an acceptable fit. The CFI = .923 and TLI = .919 were also close to .95. The belief that SAI disturbs the natural balance in the environment (Tampering with Nature) was strongly related to an increased Risk Perception (standardized coefficient = .66) and a decreased Benefit Perception (-.64). Further, Tampering with Nature was predictive of a higher Negative Affect (.44) and a lower Positive Affect (-.45). A person's trust in public institutions was only slightly predictive of Positive Affect (.13). Benefit Perception had the strongest direct effect on Acceptability (.46), followed by a moderately negative direct effect of Tampering with Nature on Acceptability (-.22). The indirect effect of Tampering with Nature on Acceptability mediated by Benefit Perception (multiplying the standardized coefficients -.64 and .46 and testing for statistical significance) was somewhat stronger (-.29). Finally, Risk Perception and Benefit Perception were strongly inversely related (-.50). The R-squared of Acceptability shows that 79% of its variance can be explained by the included scales.
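The mediated effect reported above follows the standard product-of-coefficients logic; a minimal sketch is given below, with the coefficients taken from the text. The accompanying significance test (e.g., a delta-method or bootstrap standard error) is omitted here.

```python
# Standardized paths as reported in the text.
a = -0.64  # Tampering with Nature -> Benefit Perception
b = 0.46   # Benefit Perception -> Acceptability

# Product-of-coefficients estimate of the indirect (mediated) effect.
indirect = a * b
print(round(indirect, 2))  # → -0.29
```

The indirect effect (-.29) is larger in magnitude than the direct effect of Tampering with Nature on Acceptability (-.22), which is why the mediated path is described as somewhat stronger.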

Discussion
To the best of our knowledge, the constructs within our proposed model had never been tested by means of latent variable models (confirmatory factor analyses). The necessity to specify correlated residuals for single scales indicates possible substantive problems with these scales. Merk and Pönitzsch [46] fitted a model similar to the specified structural equation model (see Fig 4), a so-called path model. Their results, however, differed: they found multiple significant effects of Trust on the other constructs and a significant direct effect of Risk Perception on Acceptability. As indicated by the dotted lines, in our data Trust had a significant effect only on Positive Affect, and excluding Trust from the model would not reduce the amount of explained variance of Acceptability. Therefore, personal trust in political institutions did not have a substantive effect on the Acceptability of SAI in our study. Nor did Risk Perception have a significant direct effect on Acceptability in the proposed model. In contrast to Merk and Pönitzsch [46], we included a correlation between Risk and Benefit Perception, which rendered the direct effect of Risk Perception on Acceptability insignificant (in the "structural equation model (SEM)" section of the analysis script we fitted a model that excluded a correlation between Risk and Benefit Perception; https://osf.io/a9xpy). By including this correlation, we allow for the possibility that participants do not differentiate between risk and benefit perceptions of SAI. Such a strong inverse relation of risk and benefit perceptions has been explained in studies by the affect heuristic [55,56]. In addition, the strong latent correlations (all greater than .80) of Tampering with Nature, Risk Perception and Benefit Perception with Acceptability (see Table 3) indicate that these strongly interrelated constructs are the main drivers of the acceptability of SAI.
In particular, Tampering with Nature could be a highly relevant concept for this specific technology and could indicate a strong moral heuristic (see also the "Playing God" concept in the Step I procedure section).

General discussion
In this article, we proposed a multi-method approach to inform climate policy, consisting of two central analysis steps. In the step I procedure, using CAMs, we identified two central clusters of concerns relevant for climate policy. Participants in our study voiced the ethical concerns that SAI is a possible "Moral Hazard" (summarized concept), that using this technology is like "Playing God" and that the technology would need to be deployed for a long time ("Termination Problem"). On the positive side, participants argued that SAI could be "Buying Time" and could be an option for future generations ("Arming the Future"). Participants had mixed feelings about whether SAI should be deployed in a case of "Emergency". Regarding "Governance", participants had slightly negative feelings; for example, it is not clear who would be accountable for deploying such a technology, and participants highlighted that it is difficult to achieve an agreement ("disagreement countries"). The concept "trust in political institutions" was overall negative, and some participants argued that they have no trust in experts and that institutions (like governments) have wrong motivations. However, SAI was seen as a highly effective technology with multiple benefits for nature, society and health. Based on the pre-study, we included Moral Hazard and Tampering with Nature in the subsequent step II procedure. Tampering with Nature in particular was of central importance and predicted all central constructs (Trust, Negative and Positive Affect, Risk and Benefit Perception, and Acceptability) of our proposed integrative model. From our perspective, the survey study revealed three central insights: (1) a possible integrative model to predict the acceptability of CETs seems to be technology specific, because moral heuristics like "Tampering with Nature" are probably not influential for all kinds of CETs (e.g., large-scale afforestation).
(2) The measurements we used in the survey study need to be revised, because we found multiple local dependencies. Not accounting for these dependencies within the measurement models could lead, for example, to overweighting certain aspects of a scale (e.g., overweighting the politics aspect by asking about both politicians and political parties in the Trust scale). Finally, (3) besides the relative importance of the single constructs in predicting acceptability, an important finding was that participants do not seem to differentiate between risk and benefit perceptions of SAI.

Suggestion for future research and potential policy applications
Possible positive and negative (side-) effects of emerging technologies are uncertain, because the future is mostly unknown and only (partly) predictable in the case of quantifiable risks [e.g. 149-151]. Therefore, there is a call for "anticipatory governance", aimed at improving the ability to manage emerging technologies while there is still room for adjustment. Such an approach is future-oriented yet does not attempt to predict the future, because the future is actively constructed by political action, technological innovation and pluralistic worldviews [152,153]. A central aspect of anticipatory governance is the inclusion of various stakeholders to combine diverse types of knowledge, consider local contexts and enrich perspectives on possible futures [154,155]. Nicholson et al. [153] propose the establishment of a global forum to consider public input and initiate dialog between different stakeholders concerning the control and research of SRM technologies, with the goal of advising policy makers. Our proposed methodology (combining CAM and survey data, using a scenario-based approach followed by various statistical analyses) can inform climate policy: the methodology is highly economical and allows for the inclusion of various stakeholders in real time. Once identified, ethical, trust-related, emotional or cost/benefit-related concerns could serve as a broad early warning system [cf. 64] and could inform the adaptive management of CETs to handle uncertainties through active feedback [156]. As described in the general discussion section, the proposed methodology makes it possible (1) to identify ethical (unconsidered) concerns that were not included in the initial integrative model (CAM results) and (2) to measure the relative importance of core constructs for the acceptability of SAI (survey results).
However, the proposed methodology is not able to answer in detail why exactly there are concerns regarding a CET, and thus we emphasize that such correlative results (survey) and standardized qualitative results (CAM) should be further informed by additional ethical tools [e.g. 157,158] or by general tools of technology assessment [e.g. 159-161]. For future research we suggest applying our proposed methodology to all currently discussed CETs, because every single CET implies different social, political and ethical concerns [e.g. 42, 162]. A possible result of such a comprehensive study would be to further inform all possible climate policy options (mitigation, adaptation and different CETs) and their best combination to tackle climate change [17]. For such an ambitious comprehensive study, it could be interesting to test more complex study and survey designs: the correlational research design applied here, using only one measurement point (cross-sectional), does not allow us to identify any cause-and-effect relationships. Sophisticated experimental and longitudinal designs should be conducted in the future to validate the proposed integrative model [see critiques 32, 33, 52]. Experimentally, it could be tested whether different framings, like using or not using natural analogies in scenario texts describing CETs [e.g. 71,163,164], have an effect on the evaluation. Additionally, limited deployment scenarios could be researched when such applications of SAI become more realistic in the future [27]. To investigate information-seeking strategies more deeply, informative complex survey designs like the decision pathway survey [165] or the informed choice questionnaire [166] could be applied.
We hope that our proposed methodology and the online resources provided with this article will help to inform climate policy in the future by considering the attitudes and concerns of multiple stakeholders. In our view, discourse on all possible climate policy options should be encouraged using methods developed within the social sciences, which is in line with current calls from psychologists [167] and philosophers [168].
Supporting information
S1 Text. CAMs of the pre-study with the lowest and highest average mean valences.