Abstract
The best combination of possible climate policy options (mitigation, adaptation and different climate engineering technologies) to tackle climate change is unknown. Climate policy faces a hard decision in answering whether climate engineering technologies should be researched, deployed on a limited scale or even deployed globally. Such technologies bear large epistemic and ethical uncertainties, and their use as well as their non-use might have severe consequences. To deal with these uncertainties, the (ethical) assessment of climate engineering technologies should include the perspectives of various stakeholders, including laypersons, to inform climate policy. To facilitate (ethical) technology assessment, we propose a novel two-step methodology to collect and analyze data on ethical concerns and the acceptability of climate engineering technologies, focusing on Stratospheric Aerosol Injection (SAI) as a use case. We propose an innovative combination of newly developed methods consisting of two data collection tools (Cognitive-Affective Mapping and a large-scale survey) and two types of data analyses (using graph theory and factor analysis). Applying this multi-method approach, we were able (1) to identify central ethical and governance-related concerns regarding SAI (via Cognitive-Affective Maps) and (2) to estimate the relative importance of core constructs (positive and negative affect, risk and benefit perception, trust) for the acceptability of SAI (via the large-scale survey).
Citation: Fenn J, Helm JF, Höfele P, Kulbe L, Ernst A, Kiesel A (2023) Identifying key-psychological factors influencing the acceptance of yet emerging technologies–A multi-method-approach to inform climate policy. PLOS Clim 2(6): e0000207. https://doi.org/10.1371/journal.pclm.0000207
Editor: Muhammad Irfan Ashraf, UAAR: University of Arid Agriculture, PAKISTAN
Received: December 18, 2022; Accepted: April 23, 2023; Published: June 6, 2023
Copyright: © 2023 Fenn et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All the study files and analyses described in the research article are on OSF: https://osf.io/zn7vy/ (Identifier: DOI 10.17605/OSF.IO/ZN7VY). A detailed explanation of all the uploaded files can be found in the Wiki of the OSF project.
Funding: The authors (J.F., J.H., L.K., A.K.) were supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy – EXC-2193/1 – 390951807 and the grant 2277, Research Training Group “Statistical Modeling in Psychology” (SMiP). One author (P.H.) received support from the PRIME program of the German Academic Exchange Service (DAAD) with funds from the German Federal Ministry of Education and Research (BMBF). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Introduction
At the 21st Conference of the Parties in Paris in 2015, over 190 states decided to limit global warming to well below 2 degrees Celsius, at best to 1.5 degrees Celsius [1]. However, according to the Intergovernmental Panel on Climate Change (IPCC), the current “Nationally Determined Contributions” to reduce greenhouse gas emissions bear a “greater than 50% likelihood that global warming will reach or exceed 1.5°C in the near-term [2022–2040], even for the very low greenhouse gas emissions scenario” [2 p44]. This indicates that the 2 degrees Celsius climate target might not be reachable without Climate Engineering Technologies (CET), which have already been proposed in the 5th IPCC report [cf. 3–6]. There are two different approaches to CET [see reports 7–10]: Carbon Dioxide Removal (CDR) and Solar Radiation Management (SRM). CDR technologies remove carbon dioxide from the atmosphere and thus address the root cause of climate change. SRM aims to reflect a small percentage of the solar radiation back into space before it reaches the Earth.
In this article, we focus on one specific SRM technology, called Stratospheric Aerosol Injection (SAI). SAI can reduce incoming solar radiation, e.g. by releasing sulfur particles into higher regions of the atmosphere (the stratosphere), which enhances the reflective properties of the aerosol layer. The technology mimics the mechanism of past volcanic eruptions, which produced a cooling effect [cf. 11] by releasing sulfur particles into the atmosphere (see also the scenario text in A in the online supplementary; https://osf.io/vb5qe). We decided to focus on SAI because simulation studies predicted a larger effect on reducing the global mean temperature for this technology than for other CET [e.g. 12, 13]. Furthermore, compared to other CET, SAI is highly affordable and could be deployed in a timely manner [10]. In addition, SAI is one of the most studied CET and seems to be the most likely SRM approach to be implemented [14]. Nevertheless, there is an ongoing debate whether SAI, or more generally any other CET, should be researched or even deployed.
On the one hand, there are two central ethical arguments to justify research on CET, especially SRM. First, the “arming-the-future” argument states that we are morally obliged to explore all options in order to provide future generations with an optimal basis for decision-making, especially in the case of a climate emergency. Second, the “buying time” argument promotes the idea that SRM could be a stopgap measure to buy time until an “aggressive” abatement of emissions shows an effect on a global scale [15–17]. On the other hand, the “arming-the-future” argument is highly contested [18], because the characteristics of a climate emergency (like potential tipping points) are hardly predictable [19] and an emergency framing in general can lead to adverse climate policy effects [20, 21]. Several further arguments against climate engineering have been brought forward, like the “moral hazard” argument, which states that the mere prospect of CET will encourage many actors to continue emitting large amounts of carbon dioxide [for an overview see 16, 17, 22]. Furthermore, the IPCC 2022 “Summary for Policymakers” highlights that SRM approaches “introduce a widespread range of new risks to people and ecosystems, which are not well understood”, and that there are “[l]arge uncertainties and knowledge gaps” [23 p19]. In line with these claims, a recent simulation study showed that the potential ecological impacts of using SAI are for the most part unknown [14]. Moreover, two expert surveys identified SAI not only as the climate technology with the highest temperature-reduction potential, but also as the technology with the highest composite risks [24, 25]. All these uncertainties and possible issues (especially the consequences of maladaptation) lead some scientists to argue for an international non-use agreement for SAI [26].
Climate policy is facing a hard decision in answering the question whether a technology like SAI should be researched, deployed on a limited scale or even deployed globally [cf. 27]. Decisions under such high uncertainty are characteristic of ‘post-normal’ science and require innovative methodologies [28]. Here, we propose an innovative combination of newly developed methods consisting of two data collection tools (Cognitive-Affective Mapping and a large-scale survey) and two types of data analyses (using graph theory and factor analyses). This combination is intended to assess the acceptability of and potential ethical concerns related to SAI.
The article is divided into three parts: in the first part, we motivate the importance of including laypersons in the process of (ethical) technology assessment (Section: Importance of upstream engagement) and theoretically derive an integrative model to predict the acceptability of SAI. The second part describes the study design, which includes two central procedures: (a) the step I procedure, “Cognitive-Affective Mapping”, which was conducted as a pre-study to extend the theoretically derived integrative model, and (b) the step II procedure, a large-scale survey to measure the acceptability of SAI. These procedures combine different tools of data collection and different types of statistical analyses, which are explained in detail in the respective sections. The final section (General Discussion) summarizes the utility of the proposed methodology for future research on CET to inform climate policy.
Importance of upstream engagement
It has been argued that possible risks and benefits of CET should be assessed in a centralized top-down approach by experts. This “classical technology assessment” [e.g. 29, 30] was rather popular and mainly driven by the American Office of Technology Assessment [31].
Yet, to face the situation of post-normal science [28], which is characterized by high systems uncertainties (like the complexity of the climate system) and high decision stakes (how humankind should counter climate change), a more inclusive, participatory approach of public engagement seems necessary [32–34]. The involvement of public perspectives and concerns on CET, which are at an early stage of development, is called “upstream engagement”. Upstream means future-oriented and bottom-up, i.e. the concerns of all stakeholders affected by controversial technologies are heard and, at best, lead to changes in the research and implementation process of climate technologies [35]. Considering CET, the Royal Society report “Geoengineering the climate” (2009) emphasized that “the acceptability of geoengineering will be determined as much by social, legal and political factors, as by scientific and technical factors” [10 p ix].
Therefore, we propose a highly economical methodology to collect and analyze data on ethical concerns and the acceptability of CET, whereby we focus on SAI as a use case. Applying online tools like surveys and an exploratory method called Cognitive-Affective Maps (explained later) enables us to elicit broad public opinions on SAI without requiring a strong form of active public engagement [cf. 33].
Model to predict acceptability of SAI
In this study, we propose and test an integrative model to predict the acceptability of SAI based on central constructs (assessed by survey scales), see Fig 1. Acceptability is thereby understood as an evaluative judgment (attitude) and not as actual behavior, like concrete support of or resistance to SAI [cf. 36]. (According to the theory of planned behavior [37], behavioral intentions (acceptability) should be predictive of the performance of a behavior given sufficient perceived behavioral control, the so-called sufficiency assumption [38]. However, as SAI is an emerging technology, one may argue that behavioral control, the “people’s perception of the ease or difficulty of performing the behavior of interest” [37], is purely speculative, and potential real behavior can only be imagined. Our proposed integrative model, which focuses on an emerging technology, therefore cannot be mainly motivated by “classical models” for measuring the acceptance of socially entrenched and well-established technologies, such as the “Technology Acceptance Model” [39].) For this reason, we decided to base our proposed model on central value theories like the “theory of basic individual values” [e.g. 40] and the “value-belief-norm theory” [e.g. 41], which emphasize the role of values in creating a predisposition towards certain behavior, like the acceptability of SAI.
Note. Based on the results of the step I procedure using “Cognitive-Affective Maps”, measures for Climate Change Concern, Moral Hazard, and Tampering with Nature (highlighted in gray) were included. These additional factors, as well as the Ethical Factors, are depicted without specifying the effects of the single factors on the central constructs included in the core model (within the dotted rectangle).
To derive this model, we summarize in the following the central empirical research on SAI and the reviews that support and reflect the proposed structure of the integrative model in Fig 1. An overview of the single constructs is given in Table 2 and in the online supplementary (see B in the online supplementary; https://osf.io/vb5qe). We derived Positive and Negative Affect, Risk and Benefit Perception and Trust as central constructs predicting the acceptability of SAI (see Fig 1) based on multiple studies [42–47], which jointly assessed the influence of one or all of the identified constructs (see Table 1 for an overview).
In the identified studies, acceptability was measured with instruments ranging from a single-item question [42] to a 19-item scale that also included aspects of benefit perception [44]. The other central constructs were also operationalized quite differently, with only limited content-wise overlap between the respective constructs. To test the integrative model in this article, we carefully selected and adjusted published scales based on theoretical and statistical arguments.
The proposed structure of the integrative model (also called nomological structure) in Fig 1 is justified by (1) existing empirical literature and (2) central reviews. By nomological structure we mean the empirically observable quantities (correlations) relating the proposed core constructs to each other [cf. 48]. Empirical studies have shown an effect of trust on risk and benefit perception [46, 49]. The perceived risks and benefits are influenced by positive and negative affect [46, 50]. An impact of trust on risk and benefit perception and on positive and negative affect was proposed by Huijts et al. [36]. Further, the proposed relations are discussed and reflected in multiple reviews [36, 51–55]. Overall, it is argued that laypersons, especially if their knowledge of a surveyed topic is low, rely mainly on heuristics: initial affect influences risk and benefit perceptions of a technology (affect heuristic), and a stronger negative affect leads to higher perceived risks and fewer perceived benefits. The reverse holds true for initial positive affect, which explains why perceived risk and benefit are inversely related [e.g. 53, 55, 56]. If knowledge is low, laypersons can also rely on trust (trust heuristic), based on a perceived similarity with (or perceived competencies of) relevant stakeholders or institutions (e.g. scientists who are researching SAI). Thereby, trust and affect are substantially correlated [cf. 52].
Please note that trust is a complex construct [cf. 57] and humans often draw inferences about unobservable properties of the informants they trust, such as value similarities, intent or competences [58]. In general, scientists or environmental groups are trusted more than institutions or companies [59], whereby for CET scientists are trusted more than companies only in the early stages of technology development [cf. 60, 61].
Additional factors without proposing specific structural relations.
Fig 1 also depicts influential factors outside of the core structural model. These predictors were derived based on step I of our proposed procedure (see the Study design section for a detailed description). Using Cognitive-Affective Maps (CAMs), we identified ecological, ethical and trust-related concerns that might also influence the acceptability of SAI (see constructs highlighted in gray in Fig 1). To capture the identified concerns, we added scales for Climate Change Concern [62], Moral Hazard, and Tampering with Nature [47, cf. 61, 63]. Additionally, we included our recently developed “Empirical Ethics Scale for Technology Assessment” [64]. Because the structural relations of these factors are less clear, we present them as potential influential factors outside of the central core model. All scales empirically investigated in this study are shown with their reliability coefficients in Table 2.
Study design
The study is composed of two central procedures: (a) the step I procedure, “Cognitive-Affective Mapping”, was conducted as a pre-study to inform (b) the step II procedure, a large-scale survey to measure the acceptability of SAI (see Fig 2). Please note that 56 out of 58 participants of the pre-study also participated in the survey study and completed both studies within two weeks.
Multiple reviews have shown that more than 80% of the respondents were not familiar with SAI or CET in general [60, 69–71]. Therefore, we decided to use a scenario text to inform participants about SAI prior to collecting CAMs or survey data: the scenario text, which describes the SAI technology and presents the technology in the context of climate change (see A in the online supplementary; https://osf.io/vb5qe), resembles a classical scenario-based approach. In the context of climate engineering scenarios, the scenario text is a strategic conversation scenario, which aims to frame key messages (operation principle, pros and cons) to stimulate discussion and research [72]. While designing the SAI scenario, we considered the steps to develop scenarios described by P. Schwartz [73] and relied on empirical SAI studies, which already used a scenario-based approach and tested their scenario texts in advance [44, 46]. To enhance the quality of the scenario text and check if laypersons understood the described SAI technology, the scenario text was pretested and slightly adjusted (see C in the online supplementary; https://osf.io/vb5qe). Additionally, we considered theoretical quality criteria, like the readability, plausibility and ambivalence of the scenario text describing the SAI technology [see 74, 75].
Online data collection
The step I procedure, “Cognitive-Affective Mapping”, and the step II procedure, a large-scale survey, were conducted online using the participant marketplace Prolific. The only prerequisites for participation were to speak English fluently and to live in the United Kingdom. With the increased anonymity of an online study, we aimed to collect beliefs that could otherwise be perceived as socially undesirable [cf. 76, 77]. In the two studies, we adhered to established standards of web-based research [78, 79]. The online studies were programmed in lab.js [80], which gave us the flexibility to also collect paradata (e.g., recording if and when participants left the fullscreen of the online study). The studies were hosted on a local university JATOS server [81], which guaranteed high privacy standards.
Technical aspects of the analyses
All of the following analyses were conducted using the statistical software R [82] and Mplus [83]. Using two statistical software packages allows us to consider the influence of different implementations of the statistical procedures (e.g., estimation) and thereby increase the robustness of the results [cf. 84, 85]. Additionally, we could use procedures that have not yet been fully implemented in R [e.g. 86]. Using the R package “MplusAutomation” [87], it was possible to write the complete analysis scripts in R with text annotations. Using the “rmarkdown” R package [88] allowed us to write all the analysis scripts as reproducible dynamic documents, which combine code, output and text in one document, so that the proposed methodology can easily be transferred to future studies.
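For illustration, the following is a minimal sketch of the kind of R-to-Mplus workflow just described, assuming a hypothetical single-factor model with invented item names (acc1 to acc4) and simulated data; it is not the model syntax or data used in the study.

```r
# Minimal sketch: specifying and running an Mplus model from within R via MplusAutomation,
# so that model syntax, estimation settings and output live in one reproducible R script.
# The data and item names (acc1-acc4) are invented placeholders.
library(MplusAutomation)

set.seed(1)
dat <- data.frame(acc1 = rnorm(100), acc2 = rnorm(100),
                  acc3 = rnorm(100), acc4 = rnorm(100))

cfa_model <- mplusObject(
  TITLE    = "CFA for a single hypothetical acceptability factor;",
  ANALYSIS = "ESTIMATOR = MLR;",                 # robust maximum likelihood
  MODEL    = "accept BY acc1 acc2 acc3 acc4;",
  OUTPUT   = "STANDARDIZED; MODINDICES;",
  rdata    = dat
)

# Writes the .inp/.dat files, calls Mplus (if installed) and reads the results back into R
fit <- mplusModeler(cfa_model, modelout = "cfa_accept.inp", run = 1L)
fit$results$summaries   # model fit summaries read back from the Mplus output
```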
In the following sections, each procedure of the novel two-step methodology is first motivated by its general relevance, followed by a more technical analysis section, the presentation of the results, and a discussion.
Step I procedure—Cognitive-Affective Maps
Motivation of methodology
By applying CAMs, it is possible to explore possible concerns of the public regarding emerging technologies in a relatively general and unrestricted way. In doing so, we could iteratively extend our proposed integrative model [cf. 89]. As an explorative and time-efficient method, CAMs allow the identification of similar concerns regarding CET as more time-consuming interview- or focus-group-based methods [e.g. 90–94].
CAMs are a quantitative and qualitative research tool to identify, visually represent and analyze existing belief structures [e.g. 95, 96]. From a theoretical perspective, CAMs are appealing for this study because they “provide an immediate gestalt of the whole system and of the simultaneous interactions between, and relationships among, its parts” [97 p2]. This emphasizes the high face validity of the method. CAMs have already been applied as a tool to study political conflicts and general belief systems [97–101] and it has been shown that CAMs provide added value to empirical survey studies [102, 103]. CAMs have also been applied to identify ethical values [cf. 104, 105] and were proposed in the context of the theory of “ethical coherence” [106]. As an exploratory method, CAMs offer added value compared to a survey, as the latter can only consider pre-identified potential factors influencing SAI acceptability [cf. 107, 108]. This method enabled us to enrich the theoretical proposed integrative model by ethical and climate change related concerns (see Fig 1).
Methods
Participants.
In the step I procedure, 58 participants were recruited in total (mean age 38, SD = 10.40, 47% female). Participants were compensated with GBP 9.08 per hour for their participation.
Procedure.
After providing written informed consent, reading an instruction on how to draw CAMs and reading the scenario text on SAI, participants were asked to draw a CAM. After creating their CAM, participants were asked for feedback on what they had drawn. One participant (2%) faced technical problems, but no one had to stop drawing the CAM for technical reasons. The pre-study took on average 46 minutes (SD = 16.94).
Using CAMs, participants can freely add and connect concepts they associate with SAI (see the Results section for details). As such, CAMs are an exploratory quantitative and qualitative research tool to identify existing belief structures [e.g. 95, 96]. CAMs incorporate so-called affective valences by representing whether a person associates positive, negative, neutral or ambivalent emotions with a drawn concept (ratings ranging from –3 to +3). Furthermore, it is possible to connect concepts with dashed lines for inhibitory connections and solid lines for supporting/strengthening connections, and to specify the connections with different strengths. Additionally, CAMs contain directional arrows, which represent a directional effect.
Participants drew a CAM online using the recently developed tool “Cognitive-Affective Maps extended logic” [109]. Six concepts were predefined: positive feelings, negative feelings, trust in political institutions, perceived risks, perceived benefits and acceptability of SAI; the latter concept was placed in the center of the CAM. Participants were technically prevented from deleting, changing or moving the central concept “acceptability of SAI”, but were able to move and delete the other predefined concepts. Participants could freely add further concepts, change their valence and add comments to them. After a concept was drawn, it could be connected to others via connections of different strengths and types (i.e. inhibitory vs. supportive connections). The CAMs of the pre-study with the lowest and highest average mean valences are depicted for illustrative purposes in S1 Text.
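To make the structure of such data concrete, the following is a minimal sketch of how a single CAM could be represented and summarized as a network in R; the concepts, valences and connections are invented for illustration and do not come from the study data.

```r
# Minimal sketch of representing one CAM as a graph: nodes carry affective valences
# (-3 to +3), edges carry the connection type (inhibitory vs. supporting).
library(igraph)

nodes <- data.frame(
  name    = c("acceptability of SAI", "perceived risks", "perceived benefits",
              "acid rain", "buying time"),
  valence = c(0, -2, 2, -3, 2)
)
edges <- data.frame(
  from = c("perceived risks", "perceived benefits", "acid rain", "buying time"),
  to   = c("acceptability of SAI", "acceptability of SAI",
           "perceived risks", "perceived benefits"),
  type = c("inhibitory", "supporting", "supporting", "supporting")
)

cam <- graph_from_data_frame(edges, vertices = nodes, directed = TRUE)

mean(V(cam)$valence)   # average valence of this CAM
degree(cam)            # how strongly each concept is embedded in the map
```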
Results
Data preparation and analysis.
Data preparation. We summarized the CAMs using the dedicated CAM-App [110], which is a Shiny application [111]. The CAM-App generates a protocol that tracks every summarizing step, so that the qualitative process of summarizing CAM data is completely transparent. Summarizing CAM data follows a five-step procedure, which was theoretically motivated by the existing CAM literature [cf. 107, 108, 112, 113] and qualitative handbooks [114–116]: (1) In a first deductive step, super- and subordinate categories are derived from existing literature; thereby, semantically identical terms can be positive or negative depending on the context. (2) In a subsequent inductive step, all CAMs are studied separately and superordinate and subordinate categories are recorded in the form of memos. Further, overview-like case summaries are created by looking at the CAMs with the most positive and most negative mean valence. (3) Subcategories are formed inductively, taking existing theories into account. In the first coding step (3a), categories are formed and their respective frequencies are noted. In the second coding step (3b), the existing category system is reduced by combining subcategories that thematically refer to a similar subject. This process (3a, 3b) is repeated until all terms in the CAMs have been coded. (4) All subcategories that have been formed are combined into topics at a higher level of abstraction, and finally (5) the results are presented in the form of tables and graphics.
Analysis. In total we collected 58 CAMs, in which participants drew on average 25.40 (SD = 2.06) concepts and 44.21 (SD = 32.49) connections (please note that the technical settings required participants to draw at least 24 concepts in total). The valences of the concepts range from –3 to –1 for negative and from 1 to 3 for positive concepts, with ambivalent and neutral concepts being assigned a value of 0. The mean average valence over all CAMs was −.33 (SD = .51). In 14% of the CAMs, one or more (up to 4) of the other predefined concepts were deleted. Although participants were instructed to do so, the valence of the predefined concepts was changed in only 69% of the cases. After drawing the CAM, participants gave feedback on the extent to which they felt that the CAM they had just drawn reflected their thoughts and feelings about the SAI technology on a scale ranging from 1 = completely unrepresentative to 7 = fully representative. With an average value of 5.90 (SD = .95), participants indicated a relatively high representativeness of their drawn CAM.
Following the five-step procedure, the 1063 unique concepts (1473 in total) were reduced to 52 concepts (see the detailed list of the 52 concepts in D in the online supplementary; https://osf.io/vb5qe). Negative concepts are especially pronounced for the different perceived risks for health, nature, society and weather (e.g., SAI could lead to more rain and therefore flooding), wherein participants highlighted the possible increased acidity of the oceans. Positive concepts are the perceived benefits for health, society and nature (e.g., SAI could protect polar ice caps). Some of these concepts, like the possibility of acid rain, were partly included in the scenario text on SAI, but interestingly participants came up with two additional clusters of concepts relevant for climate policy, namely ethical arguments and governance.
Ethical arguments. In 38 drawn concepts, the ethical argument of the “Termination Problem” was highlighted, which states that an abrupt termination of SAI can lead to accelerated heating due to large concentrations of atmospheric carbon dioxide [e.g. 17]. The “Termination Problem” was perceived as strongly negative on average (M = −2.00, SD = 1.01) and the problem was emphasized in comments (e.g., “Once started SAI would be required for decades”, “if there are problems can [SAI] be stopped once d[e]ployed?”). Another negative ethical argument is the “Moral Hazard” argument, which one participant explains in the comments as “humans may feel like they do not need to combat global warming actively and put the responsibility down to SRM”, while another participant states that “companies, governments will carry on polluting and using bad technology/fossil fuels”. “Playing God” (also called the “Hubris argument”) is a further negative ethical argument, whereby participants see SAI as an “unacceptable natural interference” and one participant questions whether “[SAI is] another way in which humans mess things up by playing God?”. The “Emergency Argument” is seen as rather ambivalent, and participants emphasized that SAI should only be used if it is the “last chance” and only as an emergency solution. The “Arming the Future” and “Buying Time” arguments are perceived positively, the first arguing that SAI could “creat[e] a better future for future generations”, while the second states that SAI could “give time for better technology to be deployed, allowing a ‘pause’ in the effects of global warming until a solution is found”. Finally, in the “Innovation argument” participants emphasized that SAI is an innovative / pioneering idea, which could inspire other ideas.
Governance. Multiple summarized concepts emerged that relate to questions of the governance of SAI. They were rated negatively overall: within the summarized concept of accountability, participants asked who is responsible and accountable for SAI deployment. A possible agreement between countries was seen as rather unrealistic (see the concept “disagreement countries” in the online supplementary), because countries are fighting and distrusting each other, although an international agreement must be found before SAI is deployed. Taken together, the role of national governments is perceived as rather diverse, because “[a]fter elections new government might have new priorities” or “government focuses policy on electoral cycles” and “controversial topics [like SAI] cause division of political groups and nothing gets done”. Summarized in the concept of “wrong motivations”, participants argued that “those in power” are “self centered”, have “vested interest” and are “extreme campaigners”, who cannot be trusted to make moral judgments. The overall mean valence of the predefined concept “trust in political institutions” was negative as well, which was emphasized in the summarized concept trust lost: due to “corruption”, “lack of credibility” or “political selfishness”, experts / governments are not trusted. In sharp contrast to the concepts mentioned above, participants argued that SAI could “bring parties and countries together with a universal mission” and could foster a “united world” and “political harmony” (see the summarized concept collaboration). Within the positive concept of “trust”, participants expressed a “faith in politics” or in “scientists opinion”.
To highlight the interrelatedness of all summarized concepts, the CAMs were aggregated by creating a so-called “canonical adjacency matrix”. A “canonical adjacency matrix” is a symmetric matrix composed only of integers, whereby the off-diagonal entries represent the number of connections between two concepts and the main diagonal represents the frequency with which a specific concept was drawn. This matrix can be represented as a network (see Fig 3), which is an overall graphical representation of the 58 drawn CAMs. The size of a concept and the thickness of a connection are proportional to the frequency of the drawn concept and of the pairwise connection, respectively. This process of summarizing and depicting CAM data was motivated by literature on graph theory [e.g. 117], semantic networks [e.g. 118, 119] and Prior’s [120] book section on content analysis. Pictures of all CAMs, a searchable Excel file (https://osf.io/fnkjp) and the protocol file can be found on OSF.
Note. A PDF file to zoom in and out can be found on OSF (https://osf.io/jb3vk). Yellow represents neutral, green positive and red negative concepts. A concept was drawn as neutral if the average valence of the respective concept is within [-.5,.5] to indicate that the emotional feelings regarding the concept are mixed.
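As an illustration of this aggregation step, the following is a minimal sketch of turning a toy canonical adjacency matrix into a network graph; the concepts and counts are invented and are not the aggregated values behind Fig 3.

```r
# Toy canonical adjacency matrix: the diagonal counts how often a concept was drawn,
# the off-diagonal entries count how often two concepts were connected across all CAMs.
library(igraph)

concepts <- c("acceptability of SAI", "perceived risks", "termination problem")
A <- matrix(c(58, 40,  5,
              40, 55, 12,
               5, 12, 20),
            nrow = 3, byrow = TRUE, dimnames = list(concepts, concepts))

g <- graph_from_adjacency_matrix(A, mode = "undirected",
                                 weighted = TRUE, diag = FALSE)

# Node size proportional to concept frequency, edge width proportional to connection count
plot(g,
     vertex.size = diag(A),
     edge.width  = E(g)$weight / 4)
```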
Discussion
Comparing the central concerns identified in the CAMs to the existing literature, we found similar concerns regarding efficacy / feasibility (we used the term “effectiveness”), governance issues, economic aspects, ethical and political concerns and possible risks [cf. 90, 92, 121]. The CAM analysis additionally identified perceived benefits (like benefits for nature, or hope for possible transnational cooperation), which have barely been reported by existing deliberative studies [e.g. 90–94].
The results regarding governance could be somewhat alarming for climate policy: participants in our study indicated strong pessimism regarding a possible global agreement on implementing SAI (concept “disagreement countries”) and noted that it is unclear who is responsible for implementing such a technology (concept “accountability”). Without resolving these issues and without the agreement of the broader public, participants fear future conflicts. Similar governance-related issues have also been identified by deliberative studies [e.g. 32, 33, 91, 122]. Such pessimism about implementing governance structures seems to be related to the slightly negative trust in political institutions in our dataset.
Most importantly for the purpose of the current study, we identified central ethical arguments that we had not included in the theoretically derived core model to predict acceptability (see Fig 1). The ethical concerns regarding SAI in the CAMs resemble central arguments of the existing ethical literature on CET, for example that SAI does nothing against the inherent cause of climate change [e.g. 17, 18, 22]. The overall results of the pre-study emphasize the importance of including technology-specific influential factors, in our case the inclusion of two additional ethical scales, the Tampering with Nature and Moral Hazard scales. The identified “risks for nature” indicated the need to include a scale for Climate Change Concern. These scales were empirically included in the model tested in the next section, the step II procedure.
Step II procedure—Survey
Motivation of methodology
We conducted a large-scale survey to test the empirical support for the theoretically derived integrative model (see Fig 1) by means of a so-called structural equation model. This model empirically tests all indirect and direct effects, and the strengths of the estimated parameters indicate the relative importance of the identified constructs in predicting the Acceptability of SAI. In general, by applying statistical approaches of so-called “latent variable models” [e.g. 123, 124], it is possible to account for the unreliability of scales (caused by measurement error), to test for certain response patterns and to formulate simple exploratory or measurement models as well as more complex structural equation models.
Methods
Participants.
In the large-scale survey (step II procedure), a total of 600 participants were recruited. The pre-study was technically linked to the survey study and 56 of 58 participants (3% drop-out rate) participated in both studies. Participants were compensated with GBP 9.99 per hour.
We tested for Insufficient Effort Responding, which indicates reduced effort by participants when answering survey questions [125]. Participants were flagged on five criteria: needing an extremely long time to answer the complete survey or single components, showing low intra-individual response variability or no variability at all, or being identified as multivariate outliers when answering the survey scales. On these criteria, we conducted a latent class analysis [cf. 125, 126]. By doing so, we identified 21 participants (3.5%) who showed suspicious response behavior and subsequently removed these participants from the analysis.
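As a sketch of this screening step, the following shows a latent class analysis on hypothetical binary quality flags using the poLCA package; the article does not specify the software used for this step, so this is only one possible implementation with invented flag names and simulated data.

```r
# Latent class analysis on response-quality flags (coded 1 = inconspicuous, 2 = flagged),
# with simulated data; participants in the smaller "careless" class would be excluded.
library(poLCA)

set.seed(1)
flags <- data.frame(
  long_total      = sample(1:2, 600, replace = TRUE, prob = c(.90, .10)),
  long_components = sample(1:2, 600, replace = TRUE, prob = c(.90, .10)),
  low_variability = sample(1:2, 600, replace = TRUE, prob = c(.90, .10)),
  no_variability  = sample(1:2, 600, replace = TRUE, prob = c(.95, .05)),
  mv_outlier      = sample(1:2, 600, replace = TRUE, prob = c(.95, .05))
)

lca <- poLCA(cbind(long_total, long_components, low_variability,
                   no_variability, mv_outlier) ~ 1,
             data = flags, nclass = 2, verbose = FALSE)

table(lca$predclass)   # class sizes; the smaller class collects suspicious response patterns
```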
The final sample of the survey study included in the analyses consisted of N = 579 participants, 47% of whom were female (2 participants preferred not to say / provide their gender), with a mean age of 40 years (SD = 13.26; range 18 to 87).
Procedure.
Participants first provided written informed consent and were motivated to provide thoughtful answers, which was additionally reinforced by a commitment check. Subsequently, participants read the scenario text on SAI and answered the generic question “When, in your opinion, is the described ‘Stratospheric Aerosol Injection’ technology morally right?”, intended to increase reflective thinking and thereby enhance data quality [cf. 34, 61]. Participants then answered multiple survey scales (see Table 2). To avoid order effects, the survey scales were presented in randomized order for each participant. After answering all survey scales, participants answered socio-demographic questions. Participants were surveyed regarding prior knowledge and socio-demographic variables (age, gender, education, left-right scale, religious scale) as control variables. We also asked participants, using three items (e.g., “I have a clear opinion about stratospheric aerosol injection.”), about the perceived certainty of their evaluation of the SAI technology [cf. 127]. Finally, participants gave feedback on the study. We had no missing data, because it was technically required to answer all items and participants were motivated to answer each question, even if they were not completely sure. To check the quality of the resulting data based on the literature on opinion quality [128, 129], we tested for opinion consistency (the extent to which opinions are consistent with theoretically related variables), opinion stability (whether the overall evaluation of SAI is stable) and opinion confidence (the subjective validity of one’s given evaluations) (cf. [130]; for results see D in the online supplementary; https://osf.io/vb5qe). The main study took on average 29.30 minutes (SD = 11.27).
Results
The statistical models were analyzed using the R packages psych [131], lavaan [132] and regsem [133] and the statistical software Mplus [83]. The univariate skewness and kurtosis of the items within the proposed integrative model indicate modest to severe deviations from normality, with an average skewness of .16 (maximum 1.6) and an average kurtosis of 2.8 (maximum 4.8). By applying MLR estimation in Mplus, we corrected the standard errors (Huber-White sandwich estimation [134]) and test statistics to account for non-normality.
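The following is a minimal sketch of this kind of distributional check and robust estimation, using lavaan instead of Mplus and simulated data with hypothetical item names; the MLR estimator in lavaan likewise yields robust (Huber-White) standard errors and a scaled test statistic.

```r
# Check univariate skewness/kurtosis per item, then fit a CFA with a robust estimator.
library(psych)
library(lavaan)

set.seed(2)
items <- data.frame(replicate(4, sample(1:7, 300, replace = TRUE)))
names(items) <- paste0("risk", 1:4)

describe(items)[, c("skew", "kurtosis")]   # univariate skewness and kurtosis

model <- 'risk =~ risk1 + risk2 + risk3 + risk4'
fit <- cfa(model, data = items, estimator = "MLR")   # robust SEs and scaled test statistic
fitMeasures(fit, c("chisq.scaled", "rmsea.robust", "srmr", "cfi.robust"))
```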
Exploratory and confirmatory analysis.
Exploratory factor analysis. We conducted multiple exploratory factor analyses on all scales included in the proposed integrative model (within the dashed line, see Fig 1) and used parallel analysis [135] to determine the number of factors for each single scale. The exploratory factor analyses and the tests of the Fornell–Larcker criterion, which was met, can be found in the central analysis script (see the section “exploratory factor analysis” in the analysis script in the online supplementary; https://osf.io/a9xpy).
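A minimal sketch of such a parallel analysis for a single hypothetical scale, using the psych package, could look as follows; the simulated responses stand in for the actual scale data.

```r
# Parallel analysis compares the observed eigenvalues with eigenvalues from random data
# to suggest the number of factors, followed by an exploratory factor analysis.
library(psych)

set.seed(3)
scale_items <- data.frame(replicate(6, sample(1:7, 300, replace = TRUE)))

fa.parallel(scale_items, fa = "fa")        # suggested number of factors
fa(scale_items, nfactors = 1, fm = "ml")   # EFA with the suggested number of factors
```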
Analysis measurement model. To prepare the structural equation model, we conducted confirmatory factor analyses and identified so-called local item dependencies, indicated by remaining inter-item correlations after accounting for the latent variable, which violate the assumption of local independence [cf. 136]. The main driver of these correlations was similarity in meaning between items (e.g., in the trust scale participants rated “politicians” and “political parties” quite similarly), which was addressed by allowing for correlated residuals [cf. 137, 138]. The pitfall of improving model fit by allowing for correlated residuals is that the adjustment may capitalize on chance, which limits the generalizability of the model to “out-of-sample” data (the specified model overfits the data) [e.g. 139, 140]. To rigorously allow only for correlated residuals between items with strong statistical support, we applied regularization procedures to the problematic measurement models, which penalize the complexity of the model until a more parsimonious model is achieved [133, 141]. To evaluate the model fit of the proposed measurement models and the structural equation model in line with recommendations [e.g. 142, 143], we considered point estimates of around .08 for the SRMR and .06 for the RMSEA (along with its 90% confidence interval) to indicate well-fitting models. Additionally, the TLI and CFI are reported, whereby values close to .95 indicate a well-fitting model [144].
In the following, we report analyses only for scales where we deleted items or allowed correlated residuals. To test the robustness of the specified correlated residuals, we also conducted tests of regularization and cross-validation, using 60% of the data as training data [145]; these are not reported here but are available on OSF (see the section “confirmatory factor analysis” in the analysis script in the online supplementary; https://osf.io/a9xpy).
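To illustrate what allowing a single correlated residual looks like in practice, the following lavaan sketch uses hypothetical trust items (trust1 to trust6) and simulated data; the regularization and cross-validation steps mentioned above are not reproduced here.

```r
# CFA for one latent factor with one correlated residual between two nearly
# identically worded items; modification indices point to remaining local dependencies.
library(lavaan)

set.seed(4)
trust_items <- data.frame(replicate(6, sample(1:7, 300, replace = TRUE)))
names(trust_items) <- paste0("trust", 1:6)

model <- '
  trust =~ trust1 + trust2 + trust3 + trust4 + trust5 + trust6
  trust2 ~~ trust3      # correlated residual for two items with very similar wording
'
fit <- cfa(model, data = trust_items, estimator = "MLR")
fitMeasures(fit, c("rmsea", "srmr", "cfi", "tli"))
modificationIndices(fit, sort. = TRUE)[1:5, ]   # largest remaining local dependencies
```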
Acceptability. Two items of the acceptability scale, which surveyed the acceptability of research on SAI, were deleted because, in contrast to the other items, they were strongly endorsed. The fit of the measurement model was slightly worse when including these items, e.g., the RMSEA with these two items was .09 (90% CI [.08, .095]) compared to .08 (90% CI [.069, .086]) without them. These results, in addition to substantive reasons, indicate that these items measure something other than general acceptability (e.g., “I am a supporter of SAI”) or behavioral intentions (e.g., “In a national referendum I would vote for deployment.”). Multiple studies have also shown high support for research compared to significantly lower support for deployment [e.g. 45, 60, 127, 146].
Benefit perception. The error variances of two items with similar wording and meaning (item 2 “. . .is effective in terms of reducing the Earth’s temperature.” and item 3 “. . .is a cost-efficient measure for reducing the Earth’s temperature.”) were allowed to correlate freely, which significantly improved model fit, i.e., the RMSEA was reduced from .165 (90% CI [.135, .197]) to .07 (90% CI [.035, .108]) and the CFI improved from .894 to .985.
Negative and positive affect. The error variances between three item pairs of the perceived negative affect scale (guilty and ashamed; scared and afraid; hostile and irritable) and three item pairs of the perceived positive affect scale (interested and attentive; attentive and active; strong and active) were allowed to correlate freely. Additionally, the item “alert” of the perceived positive affect scale was deleted, because the item showed positive correlations with the items of the negative affect scale. Substantively, we assume that in the context of SAI “alert” could be interpreted as a negative emotion (e.g., as a sign of alarm or emergency). Without allowing for correlated residuals, the model fit of the measurement models of negative affect (e.g., RMSEA = .126, CFI = .873) and positive affect (e.g., RMSEA = .13, CFI = .882) was not sufficient. In a validation study by Crawford and Henry [147], the authors also had to allow for correlated residuals between items from the same mood categories, which is an adjustment quite similar to the one in this study.
Trust. The error variances between three item pairs of the trust-in-public-institutions scale (legal system and police; politicians and political parties; European Parliament and United Nations) were allowed to correlate freely. The three item pairs are highly similar in wording and meaning. Without allowing for the correlated residuals, the fit was insufficient (e.g., RMSEA = .199, CFI = .83). To the best of our knowledge, this scale has never been tested by means of latent variable models (e.g., [148] only reported principal component analyses).
Analysis structural equation model. First, we determined the latent correlations of all central constructs of our proposed model by fitting a first-order confirmatory factor model, which specifies one latent factor for each construct and allows the latent variables to correlate freely. Although the global likelihood-ratio test statistic was significant, χ2(1854) = 3932.59, p < .01, the SRMR of .067 and the RMSEA of .044 (90% CI [.042, .046]) indicated an acceptable fit; the CFI of .917 and the TLI of .912 were also close to .95. The estimated latent correlations among the central constructs, along with Cronbach’s alpha and descriptive statistics, are shown in Table 3. The majority of correlations exceeded r = .40. Risk Perception is strongly negatively (r = -.81) and Benefit Perception strongly positively (r = .89) correlated with Acceptability. The correlation between Acceptability and Tampering with Nature is also strongly negative (r = -.80); other factors were less strongly (albeit moderately) associated with Acceptability, e.g., Positive Affect (r = .57). Benefit Perception and Risk Perception are strongly negatively correlated (r = -.84).
Given the substantial intercorrelations among certain constructs and the substantive theory explained before, we continued by specifying a structural equation model. The structural equation model additionally contains the construct Tampering with Nature, which we justify by the results of the step I procedure and the strong latent correlations of Tampering with Nature with multiple constructs (a fitted structural equation model without Tampering with Nature can be found in E in the online supplementary; https://osf.io/vb5qe). The fit of the model was acceptable: again, the global likelihood-ratio test statistic was significant, χ2(1348) = 2914.90, p < .01, but the SRMR of .068 and the RMSEA of .043 (90% CI [.043, .047]) indicated an acceptable fit; the CFI of .923 and the TLI of .919 were also close to .95. The belief that SAI disturbs the natural balance in the environment (Tampering with Nature) was strongly related to an increased Risk Perception (standardized coefficient = .66) and a decreased Benefit Perception (-.64). Further, Tampering with Nature was predictive of higher Negative Affect (.44) and lower Positive Affect (-.45). Personal trust in public institutions was only slightly predictive of Positive Affect (.13). Benefit Perception had the strongest direct effect on Acceptability (.46), followed by a moderately negative direct effect of Tampering with Nature on Acceptability (-.22). The indirect effect of Tampering with Nature on Acceptability mediated by Benefit Perception (multiplying the standardized coefficients -.64 and .46 and testing for statistical significance) was somewhat stronger (-.29). Finally, Risk Perception and Benefit Perception were strongly inversely related (-.50). The R-squared for Acceptability shows that 79% of its variance can be explained by the included constructs.
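As a sketch of how such direct and indirect effects can be specified, the following lavaan model defines a labeled indirect effect of Tampering with Nature on Acceptability via Benefit Perception and fits it to simulated data; all item names and population values are invented, and the full model of Fig 4 contains further constructs not shown here.

```r
# Structural equation model with a defined indirect effect (a*b), fitted to simulated data.
library(lavaan)

model <- '
  twn     =~ twn1 + twn2 + twn3        # Tampering with Nature (hypothetical items)
  benefit =~ ben1 + ben2 + ben3        # Benefit Perception
  accept  =~ acc1 + acc2 + acc3        # Acceptability

  benefit ~ a * twn
  accept  ~ b * benefit + c * twn

  indirect := a * b                    # effect of twn on accept via benefit
  total    := c + a * b
'

set.seed(5)
survey_sim <- simulateData('
  twn     =~ .7*twn1 + .7*twn2 + .7*twn3
  benefit =~ .7*ben1 + .7*ben2 + .7*ben3
  accept  =~ .7*acc1 + .7*acc2 + .7*acc3
  benefit ~ (-0.6)*twn
  accept  ~ 0.5*benefit + (-0.2)*twn
', sample.nobs = 500)

fit <- sem(model, data = survey_sim, estimator = "MLR")
standardizedSolution(fit)   # standardized direct, indirect and total effects
```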
Discussion
To the best of our knowledge, the constructs within our proposed model have never been tested by means of latent variable models (confirmatory factor analyses). The necessity to specify correlated residuals for single scales indicates possible substantive problems with these scales. Compared to the specified structural equation model (see Fig 4), Merk and Pönitzsch [46] fitted a similar model (a so-called path model). Their results, however, differed, because they found multiple significant effects of Trust on the other constructs and a significant direct effect of Risk Perception on Acceptability. As indicated by the dotted lines, in our data Trust had only a significant effect on Positive Affect, and excluding Trust from the model would not reduce the amount of explained variance of Acceptability. Therefore, personal trust in political institutions did not have a substantive effect on the Acceptability of SAI in our study. Nor did Risk Perception have a significant direct effect on Acceptability in the proposed model. In contrast to Merk and Pönitzsch [46], we included a correlation between Risk and Benefit Perception, which rendered the direct effect of Risk Perception on Acceptability non-significant (in the “structural equation model (SEM)” section of the analysis script, we fitted a model that excluded the correlation between Risk and Benefit Perception; https://osf.io/a9xpy). By including this correlation, we allow for the possibility that participants do not differentiate between risk and benefit perceptions of SAI. Such a strong inverse relation of risk and benefit perceptions has been explained in previous studies by the affect heuristic [55, 56]. In addition, the strong latent correlations (all greater than .80) of Tampering with Nature, Risk Perception and Benefit Perception with Acceptability (see Table 3) indicate that these strongly interrelated constructs are the main drivers of the acceptability of SAI. In particular, Tampering with Nature could be a highly relevant concept for this specific technology and could indicate a strong moral heuristic (see also the “Playing God” concept in the step I procedure section).
Note. Only the structural model is shown. Each circle represents a measurement model. Dotted lines indicate a non-significant direct path.
General discussion
In this article, we proposed a multi-method approach to inform climate policy, consisting of two central analysis steps. In the step I procedure, using CAMs, we identified two central clusters of concerns relevant for climate policy. Participants in our study raised the ethical concerns that SAI is a possible “Moral Hazard” (summarized concept), that using this technology is like “Playing God” and that the technology would need to be deployed for a long time (“Termination Problem”). On the positive side, participants argued that SAI could be “Buying Time” and could be an option for future generations (“Arming the Future”). Participants had mixed feelings about whether SAI should be deployed in a case of “Emergency”. Regarding “Governance”, participants had slightly negative feelings; for example, it was unclear to them who is accountable for deploying such a technology, and they highlighted that it is difficult to achieve an agreement (“disagreement countries”). The concept “trust in political institutions” was overall negative, and some participants argued that they have no trust in experts and that institutions (like governments) have wrong motivations. However, SAI was seen as a highly effective technology with multiple benefits for nature, society and health.
Based on the pre-study, we included Moral Hazard and Tampering with Nature in the subsequent step II procedure. Tampering with Nature in particular was of central importance and predicted all central constructs (Trust, Negative and Positive Affect, Risk and Benefit Perception, and Acceptability) of our proposed integrative model. From our perspective, the survey study revealed three central insights: (1) a possible integrative model to predict the acceptability of CET seems to be technology-specific, because moral heuristics like “Tampering with Nature” are probably not influential for all kinds of CET (e.g., large-scale afforestation). (2) The measurements we used in the survey study need to be revised, because we found multiple local dependencies. Not accounting for these dependencies within the measurement models could lead, for example, to some kind of overweighting of certain aspects of a scale (e.g., overweighting the politics aspect by asking about politicians and political parties in the Trust scale). Finally, (3) an important finding was, besides the relative importance of the single constructs in predicting acceptability, that participants do not seem to differentiate between risk and benefit perceptions of SAI.
Suggestion for future research and potential policy applications
Possible positive and negative (side) effects of emerging technologies are uncertain, because the future is mostly unknown and only (partly) predictable in the case of quantifiable risks [e.g. 149–151]. Therefore, there is a call for “anticipatory governance”, aimed at improving the ability to manage emerging technologies while there is still room for adjustment. Such an approach is future-oriented without attempting to predict the future, because the future is something that is actively constructed by political action, technological innovation and pluralistic worldviews [152, 153]. A central aspect of anticipatory governance is the inclusion of various stakeholders to combine diverse types of knowledge, consider local contexts and enrich perspectives on possible futures [154, 155]. Nicholson et al. [153] propose the establishment of a global forum to consider public input and initiate dialog between different stakeholders concerning the control and research of SRM technologies, with the goal of advising policy makers. With our proposed methodology (combining CAM and survey data, using a scenario-based approach followed by various statistical analyses), climate policy can be informed: the methodology is highly economical and allows for the inclusion of various stakeholders in real time. Identified ethical, trust-related, emotional or cost-benefit concerns could serve as a broad early warning system [cf. 64] and could inform the adaptive management of CET to handle uncertainties through active feedback [156]. As described in the General discussion section, the proposed methodology makes it possible (1) to identify (previously unconsidered) ethical concerns that were not included in the initial integrative model (CAM results) and (2) to measure the relative importance of core constructs for the acceptability of SAI (survey results).
However, the proposed methodology is not able to answer in detail why exactly there are concerns regarding a given CET; we therefore emphasize that such correlative results (survey) and standardized qualitative results (CAM) should be further informed by additional ethical tools [e.g. 157, 158] or by general tools of technology assessment [e.g. 159–161]. For future research, we suggest applying our proposed methodology to all currently discussed CET, because every single CET implies different social, political and ethical concerns [e.g. 42, 162]. A possible result of such a comprehensive study would be to further inform all possible climate policy options (mitigation, adaptation and different CET) and their best combination to tackle climate change [17]. For such an ambitious comprehensive study, it could be interesting to test more complex study and survey designs: the correlational research design applied in this study, using only one measurement point (cross-sectional), does not allow the identification of any cause-and-effect relationships. Sophisticated experimental and longitudinal designs should be conducted in the future to validate the proposed integrative model [see critiques in 32, 33, 52]. Experimentally, it could be tested whether different framings, like using or not using natural analogies in scenario texts describing CET [e.g. 71, 163, 164], have an effect on the evaluation. Additionally, limited deployment scenarios could be researched when such applications of SAI become more realistic in the future [27]. To investigate information-seeking strategies more deeply, informative complex survey designs like the decision pathway survey [165] or the informed choice questionnaire [166] could be applied.
We hope that our proposed methodology and the online resources provided to this article will be helpful in the future to inform climate policy by considering the attitudes and concerns of multiple stakeholders. In our view, discourse on all possible climate policy options should be encouraged using methods developed within social sciences, which is in line with current calls from psychologists [167] and philosophers [168].
Supporting information
S1 Text. CAMs of the pre-study with the lowest and highest average mean valences.
https://doi.org/10.1371/journal.pclm.0000207.s001
(DOCX)
Acknowledgments
This work was inspired by discussions with our colleagues at the University of Freiburg and the University of Kassel. We thank Katja Pollak for the illuminating discussions and James Fisher for the correction of the English scenario text.
References
- 1. United Nations. Adoption of the Paris agreement [Internet]. 2015. Available from: https://unfccc.int/process-and-meetings/the-paris-agreement/the-paris-agreement
- 2. Pörtner HO, Roberts DC, Tignor M, Poloczanska ES, Mintenbeck K, Alegría A, et al. IPCC, 2022: Climate Change 2022: Impacts, Adaptation, and Vulnerability. Contribution of Working Group II to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change [Internet]. Cambridge University Press; 2022. Available from: https://www.ipcc.ch/report/sixth-assessment-report-working-group-ii/
- 3. Hajima T, Kawamiya M, Watanabe M, Kato E, Tachiiri K, Sugiyama M, et al. Modeling in Earth system science up to and beyond IPCC AR5. Prog Earth Planet Sci. 2014 Dec 18;1(1):29.
- 4. Höhne N, Kuramochi T, Warnecke C, Röser F, Fekete H, Hagemann M, et al. The Paris Agreement: resolving the inconsistency between global goals and national contributions. Clim Policy. 2017 Jan 2;17(1):16–32.
- 5. Masson-Delmotte V, Zhai P, Pörtner HO, Roberts D, Skea J, Shukla PR, et al. IPCC, 2018: Global Warming of 1.5°C. An IPCC Special Report on the impacts of global warming of 1.5°C above pre-industrial levels and related global greenhouse gas emission pathways, in the context of strengthening the global response to the threat of climate change, sustainable development, and efforts to eradicate poverty [Internet]. Cambridge University Press; 2018. Available from: https://www.ipcc.ch/sr15/
- 6. Pachauri RK, Meyer LA. IPCC, 2014: Climate Change 2014: Synthesis Report. Contribution of Working Groups I, II and III to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change [Internet]. 2014. Available from: https://www.ipcc.ch/report/ar5/syr/
- 7. Caviezel C, Revermann C. Climate Engineering: Kann und soll man die Erderwärmung technisch eindämmen? edition sigma; 2014. 338 p.
- 8. Dowling A. Greenhouse Gas Removal [Internet]. Royal Society; 2018 [cited 2022 Jul 7]. Available from: https://royalsociety.org/topics-policy/projects/greenhouse-gas-removal/
- 9. National Research Council. Climate Intervention: Reflecting Sunlight to Cool Earth. National Academies Press; 2015. 276 p.
- 10. Shepherd JG. Geoengineering the climate: science, governance and uncertainty [Internet]. Royal Society; 2009 Sep [cited 2022 Jul 7] p. 98. Available from: https://eprints.soton.ac.uk/156647/
- 11. Zhang H, Wang F, Li J, Duan Y, Zhu C, He J. Potential Impact of Tonga Volcano Eruption on Global Mean Surface Air Temperature. J Meteorol Res. 2022 Feb 1;36(1):1–5.
- 12. Keller DP, Feng EY, Oschlies A. Potential climate engineering effectiveness and side effects during a high carbon dioxide-emission scenario. Nat Commun. 2014 Feb 25;5(1):3304. pmid:24569320
- 13. Sonntag S, Ferrer González M, Ilyina T, Kracher D, Nabel JEMS, Niemeier U, et al. Quantifying and Comparing Effects of Climate Engineering Methods on the Earth System. Earths Future. 2018;6(2):149–68.
- 14. Zarnetske PL, Gurevitch J, Franklin J, Groffman PM, Harrison CS, Hellmann JJ, et al. Potential ecological impacts of climate intervention by reflecting sunlight to cool Earth. Proc Natl Acad Sci. 2021 Apr 13;118(15):e1921854118. pmid:33876741
- 15. Neuber F. Buying Time with Climate Engineering? An analysis of the buying time framing in favor of climate engineering [PhD Thesis]. Karlsruher Institut für Technologie (KIT); 2018.
- 16. Ott K. Argumente für und wider „Climate Engineering“. In: Fallstudien zur Ethik in Wissenschaft, Wirtschaft, Technik und Gesellschaft. KIT Scientific Publishing; 2011. p. 198–210.
- 17. Ott K, Neuber F. Climate engineering. In: Oxford Research Encyclopedia of Climate Science. Oxford University Press; 2020.
- 18. Gardiner S. Is “Arming the Future” with Geoengineering Really the Lesser Evil? Some Doubts about the Ethics of Intentionally Manipulating the Climate System. In: Climate Ethics: Essential Readings. 2010. p. 284–312.
- 19. Sillmann J, Lenton TM, Levermann A, Ott K, Hulme M, Benduhn F, et al. Climate emergencies do not justify engineering the climate. Nat Clim Change. 2015 Apr;5(4):290–2.
- 20. McHugh LH, Lemos MC, Morrison TH. Risk? Crisis? Emergency? Implications of the new climate emergency framing for governance and policy. WIREs Clim Change. 2021;12(6):e736.
- 21. Patterson J, Wyborn C, Westman L, Brisbois MC, Milkoreit M, Jayaram D. The political effects of emergency frames in sustainability. Nat Sustain. 2021 Oct;4(10):841–50.
- 22. Betz G, Cacean S. Ethical Aspects of Climate Engineering. KIT Scientific Publishing; 2012. 170 p.
- 23. Pörtner HO, Roberts DC, Poloczanska ES, Mintenbeck K, Tignor M, Alegría A, et al. IPCC, 2022: Summary for Policymakers [Internet]. Cambridge University Press; 2022. Available from: https://www.ipcc.ch/report/sixth-assessment-report-working-group-ii/
- 24. Sovacool BK, Baum CM, Low S. Determining our climate policy future: expert opinions about negative emissions and solar radiation management pathways. Mitig Adapt Strateg Glob Change. 2022 Oct 3;27(8):58. pmid:36200076
- 25. Sovacool BK, Baum CM, Low S. Beyond climate stabilization: Exploring the perceived sociotechnical co-impacts of carbon removal and solar geoengineering. Ecol Econ. 2023 Feb 1;204:107648.
- 26. Biermann F, Oomen J, Gupta A, Ali SH, Conca K, Hajer MA, et al. Solar geoengineering: The case for an international non-use agreement. WIREs Clim Change. 2022;13(3):e754.
- 27. Sugiyama M, Arino Y, Kosugi T, Kurosawa A, Watanabe S. Next steps in geoengineering scenario research: limited deployment scenarios and beyond. Clim Policy. 2018 Jul 3;18(6):681–9.
- 28. Funtowicz SO, Ravetz JR. Science for the post-normal age. Futures. 1993 Sep 1;25(7):739–55.
- 29. Floridi L, Strait A. Ethical Foresight Analysis: What it is and Why it is Needed? Minds Mach. 2020 Mar 1;30(1):77–97.
- 30. Lucivero F. Ethical Assessments of Emerging Technologies: Appraising the moral plausibility of technological visions. Springer; 2016. 216 p.
- 31. Sadowski J. Office of Technology Assessment: History, implementation, and participatory critique. Technol Soc. 2015;42:9–20.
- 32. Bellamy R, Chilvers J, Vaughan NE, Lenton TM. A review of climate geoengineering appraisals. WIREs Clim Change. 2012;3(6):597–615.
- 33. Corner A, Pidgeon N, Parkhill K. Perceptions of geoengineering: public attitudes, stakeholder perspectives, and the challenge of ‘upstream’ engagement. WIREs Clim Change. 2012;3(5):451–66.
- 34. Pidgeon N. Engaging publics about environmental and technology risks: frames, values and deliberation. J Risk Res. 2021 Jan 2;24(1):28–46.
- 35. Frumhoff PC, Stephens JC. Towards legitimacy of the solar geoengineering research enterprise. Philos Trans R Soc Math Phys Eng Sci. 2018 May 13;376(2119):20160459. pmid:29610369
- 36. Huijts NMA, Molin EJE, Steg L. Psychological factors influencing sustainable energy technology acceptance: A review-based comprehensive framework. Renew Sustain Energy Rev. 2012 Jan 1;16(1):525–31.
- 37. Ajzen I. The theory of planned behavior. Organ Behav Hum Decis Process. 1991 Dec 1;50(2):179–211.
- 38. Ajzen I. The theory of planned behaviour: Reactions and reflections. Psychol Health. 2011 Sep 1;26(9):1113–27. pmid:21929476
- 39. Venkatesh V, Bala H. Technology Acceptance Model 3 and a Research Agenda on Interventions. Decis Sci. 2008;39(2):273–315.
- 40. Schwartz SH, Cieciuch J, Vecchione M, Davidov E, Fischer R, Beierlein C, et al. Refining the theory of basic individual values. J Pers Soc Psychol. 2012;103:663–88. pmid:22823292
- 41. Stern PC, Dietz T, Abel T, Guagnano GA, Kalof L. A Value-Belief-Norm Theory of Support for Social Movements: The Case of Environmentalism. Hum Ecol Rev. 1999;6(2):81–97.
- 42. Braun C, Merk C, Pönitzsch G, Rehdanz K, Schmidt U. Public perception of climate engineering and carbon capture and storage in Germany: survey evidence. Clim Policy. 2018 Apr 21;18(4):471–84.
- 43. Jobin M, Siegrist M. Support for the Deployment of Climate Engineering: A Comparison of Ten Different Technologies. Risk Anal. 2020;40(5):1058–78. pmid:32112448
- 44. Klaus G, Ernst A, Oswald L. Psychological factors influencing laypersons’ acceptance of climate engineering, climate change mitigation and business as usual scenarios. Technol Soc. 2020 Feb 1;60:1–16.
- 45. Mercer AM, Keith DW, Sharp JD. Public understanding of solar radiation management. Environ Res Lett. 2011 Oct;6(4):1–9.
- 46. Merk C, Pönitzsch G. The Role of Affect in Attitude Formation toward New Technologies: The Case of Stratospheric Aerosol Injection. Risk Anal. 2017;37(12):2289–304. pmid:28244119
- 47. Visschers VHM, Shi J, Siegrist M, Arvai J. Beliefs and values explain international differences in perception of solar radiation management: insights from a cross-country survey. Clim Change. 2017 Jun 1;142(3):531–44.
- 48. Cronbach LJ, Meehl PE. Construct validity in psychological tests. Psychol Bull. 1955;52(4):281–302. pmid:13245896
- 49. Siegrist M. The Influence of Trust and Perceptions of Risks and Benefits on the Acceptance of Gene Technology. Risk Anal. 2000;20(2):195–204. pmid:10859780
- 50. Sütterlin B, Siegrist M. Public perception of solar radiation management: the impact of information and evoked affect. J Risk Res. 2016;20(10):1292–307.
- 51. Hoogendoorn G, Sütterlin B, Siegrist M. Tampering with Nature: A Systematic Review. Risk Anal. 2021;41(1):141–56. pmid:33141501
- 52. Siegrist M. Trust and Risk Perception: A Critical Review of the Literature. Risk Anal. 2021;41(3):480–90. pmid:31046144
- 53. Siegrist M, Árvai J. Risk Perception: Reflections on 40 Years of Research. Risk Anal. 2020;40(S1):2191–206. pmid:32949022
- 54. Siegrist M, Hartmann C. Consumer acceptance of novel food technologies. Nat Food. 2020 Jun;1(6):343–50. pmid:37128090
- 55. Slovic P, Finucane ML, Peters E, MacGregor DG. The affect heuristic. Eur J Oper Res. 2007 Mar 16;177(3):1333–52.
- 56. Finucane ML, Alhakami A, Slovic P, Johnson SM. The affect heuristic in judgments of risks and benefits. J Behav Decis Mak. 2000;13(1):1–17.
- 57. Hoff KA, Bashir M. Trust in Automation: Integrating Empirical Evidence on Factors That Influence Trust. Hum Factors. 2015 May 1;57(3):407–34. pmid:25875432
- 58. Landrum AR, Eaves BS, Shafto P. Learning to trust and trusting to learn: a theoretical framework. Trends Cogn Sci. 2015 Mar 1;19(3):109–11. pmid:25563822
- 59. Cologna V, Siegrist M. The role of trust for climate change mitigation and adaptation behaviour: A meta-analysis. J Environ Psychol. 2020 Jun 1;69:101428.
- 60. Merk C, Pönitzsch G, Kniebes C, Rehdanz K, Schmidt U. Exploring public perceptions of stratospheric sulfate injection. Clim Change. 2015 May 1;130(2):299–312.
- 61. Raimi KT. Public perceptions of geoengineering. Curr Opin Psychol. 2021 Dec 1;42:66–70. pmid:33930833
- 62. Shi J, Visschers VHM, Siegrist M. Public Perception of Climate Change: The Importance of Knowledge and Cultural Worldviews. Risk Anal. 2015;35(12):2183–201. pmid:26033253
- 63. Raimi KT, Wolske KS, Hart PS, Campbell-Arvai V. The Aversion to Tampering with Nature (ATN) Scale: Individual Differences in (Dis)comfort with Altering the Natural World. Risk Anal. 2020;40(3):638–56. pmid:31613025
- 64. Fenn J, Höfele P, Livanec S, Reuter L, Kiesel A. An Empirical Ethics Scale for Technology Assessment: Challenges and Perspectives for a Real Time Ethics for Emerging Technologies. Institute of Psychology, University of Freiburg; Manuscript in preparation.
- 65. Sugiyama M, Asayama S, Kosugi T. The North–South Divide on Public Perceptions of Stratospheric Aerosol Geoengineering?: A Survey in Six Asia-Pacific Countries. Environ Commun. 2020 Jul 3;14(5):641–56.
- 66. Watson D, Clark LA, Tellegen A. Development and validation of brief measures of positive and negative affect: The PANAS scales. J Pers Soc Psychol. 1988;54:1063–70. pmid:3397865
- 67. ESS. ESS Round 10: European Social Survey Round 10 Data [Internet]. 2020. Available from: https://dx.doi.org/10.21338/NSD-ESS10-2020
- 68. Bollen KA. Structural Equations with Latent Variables. John Wiley & Sons; 1989. 528 p.
- 69. Burns ET, Flegal JA, Keith DW, Mahajan A, Tingley D, Wagner G. What do people think when they think about solar geoengineering? A review of empirical social science literature, and prospects for future research. Earths Future. 2016;4(11):536–42.
- 70. Carlisle DP, Feetham PM, Wright MJ, Teagle DAH. The public remain uninformed and wary of climate engineering. Clim Change. 2020 May 1;160(2):303–22.
- 71. Cummings CL, Lin SH, Trump BD. Public perceptions of climate geoengineering: a systematic review of the literature. Clim Res. 2017 Sep 13;73(3):247–64.
- 72. Talberg A, Thomas S, Christoff P, Karoly D. How geoengineering scenarios frame assumptions and create expectations. Sustain Sci. 2018 Jul 1;13(4):1093–104.
- 73. Schwartz P. The Art of the Long View: Planning for the Future in an Uncertain World. Crown; 1991. 290 p.
- 74. Kosow H, Gaßner R. Methods of Future and Scenario Analysis: Overview, Assessment, and Selection Criteria [Internet]. Bonn: Deutsches Institut für Entwicklungspolitik gGmbH; 2008. 133 p. (DIE Studies; vol. 39). Available from: https://www.ssoar.info/ssoar/handle/document/19366
- 75. Mietzner D, Reger G. Advantages and Disadvantages of Scenario Approaches for Strategic Foresight. Int J Technol Intell Plan. 2005;1(2):220–39.
- 76. Larson RB. Controlling social desirability bias. Int J Mark Res. 2019 Sep 1;61(5):534–47.
- 77. Reiber F, Pope H, Ulrich R. Cheater Detection Using the Unrelated Question Model. Sociol Methods Res. 2020 Apr 10;1–23.
- 78. Reips UD. Web-based research in psychology: A review. Z Für Psychol. 2021;229(4):198–213.
- 79. Sauter M, Draschkow D, Mack W. Building, Hosting and Recruiting: A Brief Introduction to Running Behavioral Experiments Online. Brain Sci. 2020 Apr;10(4):251. pmid:32344671
- 80. Henninger F, Shevchenko Y, Mertens UK, Kieslich PJ, Hilbig BE. lab.js: A free, open, online study builder. Behav Res Methods. 2022 Apr 1;54(2):556–73. pmid:34322854
- 81. Lange K, Kühn S, Filevich E. "Just Another Tool for Online Studies” (JATOS): An Easy Solution for Setup and Management of Web Servers Supporting Online Studies. PLOS ONE. 2015 Jun 26;10(6):1–14.
- 82. R Core Team. R: A Language and Environment for Statistical Computing [Internet]. Vienna, Austria: R Foundation for Statistical Computing; 2020. Available from: https://www.R-project.org/
- 83. Muthén LK, Muthén BO. Mplus User’s Guide. Eighth Edition [Internet]. Muthén & Muthén; 2017. Available from: https://www.statmodel.com/html_ug.shtml
- 84. Robitzsch A, Dörfler T, Pfost M, Artelt C. Die Bedeutung der Itemauswahl und der Modellwahl für die längsschnittliche Erfassung von Kompetenzen. Z Für Entwicklungspsychologie Pädagog Psychol. 2011 Oct;43(4):213–27.
- 85. Shadish WR, Cook TD, Campbell DT. Experimental and quasi-experimental designs for generalized causal inference. Boston, MA, US: Houghton, Mifflin and Company; 2002. 623 p.
- 86. Nylund-Gibson K, Grimm R, Quirk M, Furlong M. A Latent Transition Mixture Model Using the Three-Step Specification. Struct Equ Model Multidiscip J. 2014 Jul 3;21(3):439–54.
- 87. Hallquist MN, Wiley JF. MplusAutomation: An R Package for Facilitating Large-Scale Latent Variable Analyses in Mplus. Struct Equ Model Multidiscip J. 2018 Jul 4;25(4):621–38.
- 88. Xie Y, Allaire JJ, Grolemund G. R Markdown: The Definitive Guide [Internet]. New York: Chapman and Hall/CRC; 2018. 338 p. Available from: https://bookdown.org/yihui/rmarkdown/
- 89. Jaccard J, Jacoby J. Theory Construction and Model-Building Skills, Second Edition: A Practical Guide for Social Scientists. Guilford Publications; 2020. 546 p.
- 90. Bellamy R, Chilvers J, Vaughan NE. Deliberative Mapping of options for tackling climate change: Citizens and specialists ‘open up’ appraisal of geoengineering. Public Underst Sci. 2016 Apr 1;25(3):269–86. pmid:25224904
- 91. Bellamy R, Lezaun J, Palmer J. Public perceptions of geoengineering research governance: An experimental deliberative approach. Glob Environ Change. 2017 Jul 1;45:194–202.
- 92. McLaren D, Parkhill KA, Corner A, Vaughan NE, Pidgeon NF. Public conceptions of justice in climate engineering: Evidence from secondary analysis of public deliberation. Glob Environ Change. 2016 Nov 1;41:64–73.
- 93. Parkhill K, Pidgeon N, Corner A, Vaughan N. Deliberation and Responsible Innovation: A Geoengineering Case Study. In: Responsible Innovation [Internet]. John Wiley & Sons, Ltd; 2013 [cited 2022 Oct 31]. p. 219–39. Available from: https://onlinelibrary.wiley.com/doi/abs/10.1002/9781118551424.ch12
- 94. Pidgeon N, Parkhill K, Corner A, Vaughan N. Deliberating stratospheric aerosols for climate geoengineering and the SPICE project. Nat Clim Change. 2013 May;3(5):451–7.
- 95. Thagard P. EMPATHICA: A Computer Support System with Visual Representations for Cognitive-Affective Mapping. In: Workshops at the Twenty-Fourth AAAI Conference on Artificial Intelligence [Internet]. 2010 [cited 2022 Jul 7]. p. 79–81. Available from: https://www.aaai.org/ocs/index.php/WS/AAAIW10/paper/view/1981
- 96. Thagard P. Mapping minds across cultures. In: Grounding Social Sciences in Cognitive Sciences. MIT Press; 2012. p. 35–60.
- 97. Homer-Dixon T, Milkoreit M, Mock SJ, Schröder T, Thagard P. The Conceptual Structure of Social Disputes: Cognitive-Affective Maps as a Tool for Conflict Analysis and Resolution. SAGE Open. 2014 Jan 1;4(1):1–20.
- 98. Homer-Dixon T, Maynard JL, Mildenberger M, Milkoreit M, Mock SJ, Quilley S, et al. A Complex Systems Approach to the Study of Ideology: Cognitive-Affective Structures and the Dynamics of Belief Systems. J Soc Polit Psychol. 2013 Dec 16;1(1):337–63.
- 99. Milkoreit M. Mindmade Politics—The Role of Cognition in Global Climate Change Governance [Internet] [PhD Thesis]. UWSpace; 2013. Available from: http://hdl.handle.net/10012/7711
- 100. Thagard P. The Cognitive–Affective Structure of Political Ideologies. In: Martinovsky B, editor. Emotion in Group Decision and Negotiation [Internet]. Dordrecht: Springer Netherlands; 2015 [cited 2022 Nov 18]. p. 51–71. (Advances in Group Decision and Negotiation). Available from: https://doi.org/10.1007/978-94-017-9963-8_3
- 101. Thagard P. The cognitive science of COVID-19: Acceptance, denial, and belief change. Methods. 2021 Nov 1;195:92–102. pmid:33744395
- 102. Mansell J, Mock S, Rhea C, Tecza A, Piereder J. Measuring attitudes as a complex system: Structured thinking and support for the Canadian carbon tax. Polit Life Sci. 2021 ed;40(2):179–201. pmid:34825808
- 103. Mansell J, Reuter L, Rhea C, Kiesel A. A Novel Network Approach to Capture Cognition and Affect: COVID-19 Experiences in Canada and Germany. Front Psychol. 2021;12:1–14. pmid:34177719
- 104. Höfele P, Reuter L, Estadieu L, Livanec S, Stumpf M, Kiesel A. Connecting the methods of psychology and philosophy: Applying Cognitive-Affective Maps (CAMs) to identify ethical principles underlying the evaluation of bioinspired technologies. Philos Psychol. 2022 Sep 6;0(0):1–24.
- 105. Yilmaz L, Franco-Watkins A, Kroecker TS. Computational models of ethical decision-making: A coherence-driven reflective equilibrium model. Cogn Syst Res. 2017 Dec 1;46:61–74.
- 106. Thagard P. Ethical coherence. Philos Psychol. 1998 Dec 1;11(4):405–22.
- 107. Livanec S, Stumpf M, Reuter L, Fenn J, Kiesel A. Who’s gonna use this? Acceptance prediction of emerging technologies with Cognitive-Affective Mapping and transdisciplinary considerations in the Anthropocene. Anthr Rev. 2022 Mar 3;1–20.
- 108. Reuter L, Mansell J, Rhea C, Kiesel A. Direct assessment of individual connotation and experience: An introduction to cognitive-affective mapping. Polit Life Sci. 2022;41(1):131–9.
- 109. Fenn J, Gouret F, Kiesel A. Cognitive-Affective Maps extended logic [Internet]. 2022. Available from: https://camgalaxy.github.io/
- 110. Fenn J, Gouret F, Kiesel A. Shiny CAM application [Internet]. 2022. Available from: https://fennapps.shinyapps.io/shinyCAMEL_v02/
- 111. Wickham H. Mastering Shiny [Internet]. O’Reilly Media, Inc.; 2021. 372 p. Available from: https://mastering-shiny.org/index.html
- 112. Luthardt J, Schröder T, Hildebrandt F, Bormann I. “And Then We’ll Just Check If It Suits Us”–Cognitive-Affective Maps of Social Innovation in Early Childhood Education. Front Educ. 2020;5:1–19.
- 113. Wolfe SE. Water Cognition and Cognitive Affective Mapping: Identifying Priority Clusters Within a Canadian Water Efficiency Community. Water Resour Manag. 2012 Aug;26(10):2991–3004.
- 114. Kuckartz U, Rädiker S. Qualitative Inhaltsanalyse. Methoden, Praxis, Computerunterstützung [Internet]. Beltz Juventa; 2022. 273 p. Available from: https://content-select.com/de/portal/media/view/5e623532-20b8-4f33-b19e-4a1db0dd2d03?forceauth=1
- 115. Mayring P. Qualitative Content Analysis: A Step-by-Step Guide. SAGE; 2021. 160 p.
- 116. Mayring P, Fenzl T. Qualitative Inhaltsanalyse. In: Baur N, Blasius J, editors. Handbuch Methoden der empirischen Sozialforschung [Internet]. Wiesbaden: Springer Fachmedien; 2019 [cited 2022 Nov 18]. p. 633–48. Available from: https://doi.org/10.1007/978-3-658-21308-4_42
- 117. Newman M. Networks: An Introduction. Oxford University Press; 2018.
- 118. Bernard HR, Wutich A, Ryan GW. Analyzing Qualitative Data: Systematic Approaches. SAGE Publications; 2016. 577 p.
- 119. Borge-Holthoefer J, Arenas A. Semantic Networks: Structure and Dynamics. Entropy. 2010 May;12(5):1264–302.
- 120. Prior L. Content analysis. In: The Oxford Handbook of Qualitative Research. Oxford University Press; 2014. p. 359–79.
- 121. Wright MJ, Teagle DAH, Feetham PM. A quantitative evaluation of the public response to climate engineering. Nat Clim Change. 2014 Feb;4(2):106–10.
- 122. Parkhill K, Pidgeon N. Public Engagement on Geoengineering Research: Preliminary Report on the SPICE Deliberative Workshops. In: Public Engagement on Geoengineering Research [Internet]. 2011. Available from: https://eprints.whiterose.ac.uk/82892/
- 123. Skrondal A, Rabe-Hesketh S. Generalized Latent Variable Modeling: Multilevel, Longitudinal, and Structural Equation Models. New York: Chapman and Hall/CRC; 2004. 528 p.
- 124. Skrondal A, Rabe-Hesketh S. Latent Variable Modelling: A Survey*. Scand J Stat. 2007;34(4):712–45.
- 125. Hong M, Steedle JT, Cheng Y. Methods of Detecting Insufficient Effort Responding: Comparisons and Practical Recommendations. Educ Psychol Meas. 2020 Apr 1;80(2):312–45. pmid:32158024
- 126. Meade AW, Craig SB. Identifying careless responses in survey data. Psychol Methods. 2012;17:437–55. pmid:22506584
- 127. Merk C, Klaus G, Pohlers J, Ernst A, Ott K, Rehdanz K. Public perceptions of climate engineering: Laypersons’ acceptance at different levels of knowledge and intensities of deliberation. GAIA—Ecol Perspect Sci Soc. 2019 Dec 19;28(4):348–55.
- 128. Price V, Neijens P. Opinion quality in public opinion research. Int J Public Opin Res. 1997 Dec 1;9(4):336–60.
- 129. Price V, Neijens P. Deliberative polls: Toward improved measures of “informed” public opinion? Int J Public Opin Res. 1998 Jul 1;10(2):145–76.
- 130. ter Mors E, Terwel BW, Daamen DDL, Reiner DM, Schumann D, Anghel S, et al. A comparison of techniques used to collect informed public opinions about CCS: Opinion quality after focus group discussions versus information-choice questionnaires. Int J Greenh Gas Control. 2013 Oct 1;18:256–63.
- 131. Revelle W. psych: Procedures for Psychological, Psychometric, and Personality Research [Internet]. Evanston, Illinois: Northwestern University; 2021. Available from: https://CRAN.R-project.org/package=psych
- 132. Rosseel Y. lavaan: An R Package for Structural Equation Modeling. J Stat Softw. 2012;48(2):1–36.
- 133. Jacobucci R. regsem: Regularized Structural Equation Modeling [Internet]. arXiv; 2017 [cited 2022 Nov 26]. Available from: http://arxiv.org/abs/1703.08489
- 134. Yuan KH, Bentler PM. 5. Three Likelihood-Based Methods for Mean and Covariance Structure Analysis with Nonnormal Missing Data. Sociol Methodol. 2000 Aug 1;30(1):165–200.
- 135. Auerswald M, Moshagen M. How to determine the number of factors to retain in exploratory factor analysis: A comparison of extraction methods under realistic conditions. Psychol Methods. 2019;24:468–91.
- 136. Yen WM. Scaling Performance Assessments: Strategies for Managing Local Item Dependence. J Educ Meas. 1993;30(3):187–213.
- 137. Bandalos DL. Item Meaning and Order as Causes of Correlated Residuals in Confirmatory Factor Analysis. Struct Equ Model Multidiscip J. 2021 Nov 2;28(6):903–13.
- 138. Hoyle RH. Handbook of Structural Equation Modeling. Guilford Press; 2012. 754 p.
- 139. Preacher KJ. Quantifying Parsimony in Structural Equation Modeling. Multivar Behav Res. 2006 Sep 1;41(3):227–59. pmid:26750336
- 140. Yarkoni T, Westfall J. Choosing Prediction Over Explanation in Psychology: Lessons From Machine Learning. Perspect Psychol Sci. 2017 Nov 1;12(6):1100–22. pmid:28841086
- 141. Li X, Jacobucci R, Ammerman BA. Tutorial on the Use of the regsem Package in R. Psych. 2021 Dec;3(4):579–92.
- 142. Kline RB. Principles and Practice of Structural Equation Modeling, Fourth Edition. Guilford Publications; 2015. 553 p.
- 143. Moshagen M, Auerswald M. On congruence and incongruence of measures of fit in structural equation modeling. Psychol Methods. 2018 Jun;23(2):318–36. pmid:28301200
- 144. Hu L, Bentler PM. Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Struct Equ Model Multidiscip J. 1999 Jan 1;6(1):1–55.
- 145. Browne MW. Cross-Validation Methods. J Math Psychol. 2000 Mar 1;44(1):108–32. pmid:10733860
- 146. Merk C, Pönitzsch G, Rehdanz K. Do climate engineering experts display moral-hazard behaviour? Clim Policy. 2019 Feb 7;19(2):231–43.
- 147. Crawford JR, Henry JD. The Positive and Negative Affect Schedule (PANAS): Construct validity, measurement properties and normative data in a large non-clinical sample. Br J Clin Psychol. 2004;43(3):245–65.
- 148. Zmerli S, Newton K. Social Trust and Attitudes Toward Democracy. Public Opin Q. 2008 Nov 6;72(4):706–24.
- 149. Keller K, Helgeson C, Srikrishnan V. Climate Risk Management. Annu Rev Earth Planet Sci. 2021;49(1):95–116.
- 150. Mittelstadt BD, Stahl BC, Fairweather NB. How to Shape a Better Future? Epistemic Difficulties for Ethical Assessment and Anticipatory Governance of Emerging Technologies. Ethical Theory Moral Pract. 2015 Nov 1;18(5):1027–47.
- 151. Sollie P. Ethics, technology development and uncertainty: an outline for any future ethics of technology. J Inf Commun Ethics Soc. 2007 Jan 1;5(4):293–306.
- 152. Foley RW, Guston DH, Sarewitz D. Towards the anticipatory governance of geoengineering. In: Geoengineering Our Climate? Routledge; 2018. p. 223–43.
- 153. Nicholson S, Jinnah S, Gillespie A. Solar radiation management: a proposal for immediate polycentric governance. Clim Policy. 2018 Mar 16;18(3):322–34.
- 154. Bessette DL, Mayer LA, Cwik B, Vezér M, Keller K, Lempert RJ, et al. Building a Values-Informed Mental Model for New Orleans Climate Risk Management. Risk Anal. 2017;37(10):1993–2004. pmid:28084634
- 155. Blackstock JJ, Low S. Geoengineering our Climate? Ethics, Politics, and Governance. Routledge; 2018. 364 p.
- 156. MacMartin DG, Irvine PJ, Kravitz B, Horton JB. Technical characteristics of a solar geoengineering deployment and implications for governance. Clim Policy. 2019 Nov 26;19(10):1325–39.
- 157. Legault GA, Béland JP, Parent M, K.-Bédard S, Bellemare CA, Bernier L, et al. Ethical Evaluation in Health Technology Assessment: A Challenge for Applied Philosophy. Open J Philos. 2019 Jun 17;9(3):331–51.
- 158. Reijers W, Wright D, Brey P, Weber K, Rodrigues R, O’Sullivan D, et al. Methods for Practising Ethics in Research and Innovation: A Literature Review, Critical Analysis and Recommendations. Sci Eng Ethics. 2018 Oct 1;24(5):1437–81. pmid:28900898
- 159. Böschen S, Grunwald A, Krings BJ, Rösch C. Technikfolgenabschätzung: Handbuch für Wissenschaft und Praxis. Nomos Verlag; 2021. 497 p.
- 160. Grunwald A. Technikfolgenabschätzung Einführung. Nomos Verlag; 2022. 283 p.
- 161. Markus ML, Mentzer K. Foresight for a responsible future with ICT. Inf Syst Front. 2014 Jul 1;16(3):353–68.
- 162. Pereira JC. Geoengineering, Scientific Community, and Policymakers: A New Proposal for the Categorization of Responses to Anthropogenic Climate Change. SAGE Open. 2016 Jan 1;6(1):2158244016628591.
- 163. Bellamy R, Lezaun J. Crafting a public for geoengineering. Public Underst Sci. 2017 May 1;26(4):402–17. pmid:26315719
- 164. Corner A, Pidgeon N. Like artificial trees? The effect of framing by natural analogy on public perceptions of geoengineering. Clim Change. 2015 Jun 1;130(3):425–38.
- 165. Gregory R, Satterfield T, Hasell A. Using decision pathway surveys to inform climate engineering policy choices. Proc Natl Acad Sci U S A. 2016;113(3):560–5. pmid:26729883
- 166. Dowd AM, Rodriguez M, Jeanneret T, De Best-Waldhober M, Straver K, Mastop J, et al. Deliberating emission reduction options [Internet]. 2012 Jun [cited 2022 Sep 20]. Available from: https://www.globalccsinstitute.com/archive/hub/publications/53781/deliberating-emission-reduction-options.pdf
- 167. Converse BA, Hancock PI, Klotz LE, Clarens AF, Adams GS. If humans design the planet: A call for psychological scientists to engage with climate engineering. Am Psychol. 2021;76(5):768–80. pmid:33090814
- 168. Brownstein M, Levy N. Philosophy’s other climate problem. J Soc Philos. 2021 Dec;52(4):536–53.