University scientists conducting research on topics of potential health concern often want to partner with a range of actors, including government entities, non-governmental organizations, and private enterprises. Such partnerships can provide access to needed resources, including funding. However, those who observe the results of such partnerships may judge those results based on who is involved. This set of studies seeks to assess how people perceive two hypothetical health science research collaborations. In doing so, it also tests the utility of using procedural justice concepts to assess perceptions of research legitimacy as a theoretical way to investigate conflict of interest perceptions. Findings show that including an industry collaborator has clear negative repercussions for how people see a research partnership and that these perceptions shape people’s willingness to see the research as a legitimate source of knowledge. We suggest additional research aimed at identifying and communicating procedures that might mitigate the impact of industry collaboration.
Citation: Besley JC, McCright AM, Zahry NR, Elliott KC, Kaminski NE, Martin JD (2017) Perceived conflict of interest in health science partnerships. PLoS ONE 12(4): e0175643. https://doi.org/10.1371/journal.pone.0175643
Editor: Joshua L. Rosenbloom, Iowa State University, UNITED STATES
Received: June 30, 2016; Accepted: March 29, 2017; Published: April 20, 2017
Copyright: © 2017 Besley et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: Data will be uploaded to ICPSR if the manuscript is accepted (https://www.icpsr.umich.edu/icpsrweb/deposit/index.jsp).
Funding: A grant from the Science Studies at State (S3) program, Michigan State University, provided funding for this project.
Competing interests: Norbert Kaminski has received research funding from the Dow Chemical Company. That funding is not directly related to the current project. This does not alter our adherence to PLOS ONE policies on sharing data and materials.
Industry is an increasingly important but controversial source of funding for scientific research and development (R&D) in the United States. Whereas the federal government funded roughly two-thirds of R&D in the U.S. during the 1960s and 1970s, this relationship has since flipped: private industry now funds roughly two-thirds of U.S. R&D. Further, university scientists increasingly are participating in public-private partnerships with industry, government service or regulatory agencies, and/or non-governmental organizations (NGOs) to engage with stakeholders and secure more consistent research funding in the face of budgetary constraints on governmental research funding [2–4].
Nevertheless, public skepticism of industry-funded research (e.g., [5, 6]) might constrain the efficacy and influence of its results. The tobacco industry’s efforts to deny the detrimental public-health effects of its products are a well-known example of activities that have raised public awareness of the possibility that industry-funded science could be less than fully reliable. More recently, publications about the safety testing of industrial chemicals and pharmaceuticals [8–12], as well as about anthropogenic climate change [13–15], may have further contributed to public concerns about the legitimacy of science in which industry has played a role. On the other hand, the simple solution of asking scientific researchers to eschew all industry funding and research collaborations seems unfeasible and undesirable, as a significant amount of important and scientifically sound research would not be performed—to the detriment of society.
The question of how researchers might benefit from available industry funding without diminishing the real or perceived quality of their research suggests the potential for a multidisciplinary research program that examines the procedures that might be used to protect research. The current set of studies represents one piece of such a work program and is built on the assumption that efforts to design procedures for protecting research should be informed by information about the types of procedures and/or criteria that generate public confidence in the legitimacy of the resulting research. More generally, the current set of studies seeks to develop an initial understanding of how laypeople’s perceptions of a scientific research collaboration are influenced by concerns about conflicts of interest. As is discussed below, the choice of research partners is conceptualized as a basic way to communicate to others that a research team is concerned about taking multiple viewpoints into account and that no one set of interests will dominate. Other procedures to limit conflict of interest (see the discussion section) are also possible but an initial focus on research partners is meant to help establish a foundation for future work.
Past scholarship suggests that procedural justice concepts from social psychology may offer insights for assessing how laypeople perceive research partnerships, given that such collaborations can be understood as processes that researchers use to control biases and consider alternative viewpoints. After reviewing existing work in this area, we identify selected research questions and derive key hypotheses from procedural justice scholarship. Specifically, we are interested in how different collaborations (i.e., processes) can affect perceptions that research will be done appropriately (i.e., procedural fairness) and the degree to which such perceptions may also shape overall perceptions of the research (i.e., perceived legitimacy). We then discuss our research protocols and present the results from three experiments designed to examine how the inclusion of different partners in a health-related research collaboration influences the perceived procedural fairness and perceived legitimacy of the resulting research. In each experiment, subjects considered a hypothetical research collaboration studying the health effects of transfats in foods (Studies 1 and 3) or the health effects of genetically modified (GM) food (Study 2) that included an agrifood business, a research university, a government agency, and/or an NGO. In our conclusion, we outline a further research agenda for investigating the extent to which certain safeguards designed to reduce conflicts of interest may facilitate the production of high-quality industry-funded research that laypeople perceive as legitimate. In our current set of three studies, we focus on perceptions of conflict of interest and do not make specific claims about the degree to which procedures such as partnership arrangements are actually effective at limiting the effects of conflicted interests.
Given the paucity of theoretically driven work in this area, our project conceptualizes collaboration as a conflict of interest mitigation procedure. It does so to build upon and extend a study that employs well-established procedural justice scholarship (e.g., ) to examine perceived conflict of interest in science. Analyzing data from a cross-sectional survey of people who attended Food and Drug Administration (FDA) advisory meetings, McComas et al. find that attendees’ fairness perceptions were associated both with satisfaction with the committee process and willingness to accept the outcomes from the process (i.e., perceived legitimacy). Our project builds upon that study, as well as the broader procedural justice literature, by using experimental procedures to examine how the inclusion of different partners affects procedural fairness perceptions and, ultimately, willingness to use the research as a legitimate basis for personal or public decision-making.
Defining conflict of interest
Most scholarship characterizes a conflict of interest as a situation in which an individual or organization has a decision-making role that might allow them to benefit improperly from the decisions they make. This benefit could be financial, but it also could be personal (e.g., helping a friend) or a combination of the two (e.g., helping a work colleague). A benefit also might include avoidance of harm. Importantly, the existence of a conflict of interest does not require any specific behavior to occur. A conflict of interest is only the potential or tendency for biased behavior to occur (for a longer discussion, see [19–21]). A relatively recent U.S. Institute of Medicine (IOM) report similarly defines conflicts of interest as “circumstances that create a risk that professional judgments or actions regarding a primary interest will be unduly influenced by a secondary interest” (p. 6). Primary interests here include issues such as the protection of research integrity, students, and patients, while secondary interests include personal and professional concerns.
Most academic writing about conflicts of interest focuses on issues of disclosure. For example, leading medical journals such as the Journal of the American Medical Association (e.g., [23–26]) and the New England Journal of Medicine (e.g., [27, 28]) feature numerous articles reviewing existing disclosure policies and their impacts. Commentaries also suggest potential issues and recommend policy changes (e.g., [29–32]). Such recommendations are indeed a central part of the IOM report cited above. Although disclosure policies are clearly important, they are not a panacea; indeed, some research demonstrates that disclosure policies may decrease the quality of advice that experts provide [34, 35]. Less common are studies on the origins and impact of conflict of interest perceptions.
Experimental research on conflict of interest
As expected, the scant available scholarship suggests that conflict of interest perceptions have negative impacts on how people view research. A 2010 systematic review identifies 20 relevant peer-reviewed journal articles focused on perceptions of ‘financial ties’ in the context of health and medicine. Most studies find that, when asked directly, research participants report worrying about the impact of such conflicts and wanting to know about such conflicts. More important for our project, the limited experimental research reviewed finds that disclosure (versus non-disclosure) of industry funding decreases perceived research quality [37, 38] and decreases trust in the researchers. More recent research confirms such effects. Further, increasing the relative size of potential financial conflicts decreases trust perceptions. Nevertheless, medical research participants do not appear to be substantially less likely to enroll in clinical trials in the face of conflicts of interest [36, 42].
These earlier experimental studies typically ask subjects to review a research product (e.g., a paper) written by authors in either an industry or an academic position. Our project is novel in that we more closely approximate the current reality of research funding and partnerships. Briefly, in our three experiments we randomly assign subjects to one of 15 different combinations of research partners—rather than simply asking subjects to consider a single study or research outcome.
Procedural fairness and the study of conflict of interest
We further bolster the novelty of our project by integrating insights from procedural justice scholarship into the study of conflict of interest in science. Research in this area consistently shows that people often perceive decisions as legitimate—even if they feel a decision may go against their interests—if they believe that such decisions result from fair procedures (i.e., ones that allow stakeholders to have a voice and are free from biases that might arise from conflicts of interest) employed by decision-makers who are seen as interpersonally fair (i.e., actors who are respectful) [43, 44]. Indeed, this focus on the effects of ‘procedural’ rather than ‘distributive’ forms of fairness emerged from a critique of the rational choice approach to decision-making, which assumes that people care only about outcomes.
The three studies reported below conceptualize research involving potential risks as a ‘decision’ about which an observer (e.g., someone who is trying to decide about whether to consider new evidence) might make legitimacy judgments. The work further views the choice of research partners as a conflict of interest mitigation process that researchers can use to, at least partly, make the decision process more procedurally fair. The logic of considering collaboration as a conflict of interest mitigation process is similar to why one might want to include representatives from different parts of an organization when making decisions about major hiring. Likewise, the logic underlies why a political leader might want to build a coalition that includes different constituencies in a cabinet. The expectation is that having multiple voices participate is likely to limit the effect of any one voice.
The adaptation of procedural justice concepts seems important for the current set of studies because past research on conflict of interest does not feature theory prominently (e.g., ). Building on past procedural justice theory can both provide a framework for measurement and analysis in the current work and suggest additional areas of future research. As noted above, McComas and her colleagues apply procedural justice theory and find that the degree to which attendees saw FDA advisory committees as following fair procedures was associated with overall satisfaction and willingness to accept committee decisions. However, whereas this earlier study examines perceptions of an existing process, the current work seeks to identify processes that researchers might use to mitigate perceptions of bias.
Although not tested here, two related mechanisms likely underlie the impact of fairness perceptions. The most prominent is that people use procedural fairness as a heuristic to assess an outcome when they are unsure about what would constitute a correct outcome [47, 48]. This argument is consistent with findings that fair processes have little impact on those who see a decision as morally mandated—and thus not uncertain (e.g., ). Second, other scholarship posits that fair processes and fair treatment matter because they communicate to those affected by a decision that they are valued members of a social identity group. Research supporting this assertion shows that fair process perceptions have a greater impact within groups than between groups [50, 51]. These underlying mechanisms suggest that a person might use information about the partners in a research collaboration to make heuristic sense of a scientific study that she or he may not have the time or ability to assess on its scientific merits. As such, we assume that our experimental subjects use partnership information as a heuristic cue to help make sense of the likely fairness of the partnership to which they are randomly assigned.
Although much fairness research centers on workplace settings, scholars also consider activities in legal (e.g., [45, 52, 53]) and political (e.g., [54, 55, 56]) contexts. Of particular relevance here is recent research that adapts the study of fairness to public perceptions of scientific topics involving perceived health and environmental risk [57–62].
The current research
We perform three experimental studies to examine how different combinations of partners in a research collaboration influence subjects’ perceived procedural fairness and perceived legitimacy for that collaborative partnership. Although it is possible to imagine a range of ways to characterize a research partnership, we focus here on what we think is the most basic information about a partnership that might be communicated to create or alleviate concerns about conflict of interest: who is performing the research.
First, procedural justice scholarship and past research on conflict of interest suggest that a collaboration with industry or supported by industry funding will be seen as less fair and, through fairness, less legitimate. We formally state this as a two-part hypothesis describing fairness perceptions as a mediator.
Including an industry partner in a research collaboration (Hypothesis 1a, H1a) directly reduces perceived fairness and (H1b) indirectly reduces perceived legitimacy.
It is less clear, however, how including other partners will influence fairness and legitimacy perceptions. As mentioned earlier, we consider three additional types of collaborators—university scientists, government agency scientists, and NGO scientists—on their own and in combination. Regarding university scientists, some recent research finds that they are generally rated positively on trust and fairness [63, 64]. Thus, it seems likely that including university scientists in a collaboration will have a positive effect on fairness and legitimacy perceptions.
Including a university partner in a research collaboration (H2a) directly increases perceived fairness and (H2b) indirectly increases perceived legitimacy.
Past research on conflict of interest does not, however, examine how people perceive government agency or NGO partners. The limited research on trust in scientists reveals that both of these groups are seen more positively than are industry scientists but less positively than are university scientists (e.g., [65, 66]). Given the lack of clear theoretical or empirical direction, we pose the effects of these partners as research questions.
What effect does including a government agency partner in a research collaboration have (Research Question 1a) directly on perceived fairness and (RQ1b) indirectly on perceived legitimacy?
What effect does including an NGO partner in a research collaboration have (RQ2a) directly on perceived fairness and (RQ2b) indirectly on perceived legitimacy?
The Institutional Review Board of the Human Research Protection Program at Michigan State University approved this research (IRB #x15-167e; exempt).
Study 1 subjects were told that we wanted to learn their views about “a potential new cooperative research partnership aimed at studying the possible negative health impacts of low levels of transfats in food.” (The full stimulus text is provided in S1 Text.) The only part of the message that varied across the experimental conditions was the specific combination of the following partners:
- Kellogg’s (a food company);
- Purdue University;
- The U.S. Centers for Disease Control and Prevention (CDC, a government agency); and
- The Union of Concerned Scientists (UCS, a non-governmental organization).
The 15 experimental conditions included each possible combination of one, two, three, and four of these partners. We selected these four organizations as representatives of agrifood businesses, research universities, governmental agencies, and NGOs, respectively, given the results of a pre-test with undergraduate students at a large Midwestern university (see S1 Table). Briefly, the pre-test participants viewed these four organizations more positively and less negatively than they viewed similar organizations in these four sectors.
We administered our experiment via Qualtrics to subjects we recruited via Amazon Mechanical Turk (AMT), a crowdsourcing website where “requesters” solicit “workers” to perform “human intelligence tasks” (HITs) for pay. AMT has emerged as a practical means of recruiting a large number of participants from a reasonably wide cross-section of the general public—considerably more diverse than the traditional experiment recruitment pools of university undergraduates or surrounding community residents—either for online experiments (e.g., ) or for designing and testing new measurement instruments (e.g., ) across the social sciences (e.g., [69, 70]). Although the external validity of crowdsourced samples is limited (especially when compared to the generalizability of rigorously drawn representative samples), several studies confirm the value of AMT samples for assessing internal validity related to cause and effect, especially with between-subject experiments [70–72]. Further, we see no a priori reason to expect that the pattern of results found using our AMT sample would vary substantially with a different sample.
To solicit a broad cross-section of research subjects and minimize self-selection by AMT workers highly interested in food issues, we advertised a HIT titled “Industry Views Survey.” We limited participation to adults residing in the United States. Because this was an online experiment, the first page of the questionnaire included the approved consent statement, and subjects were asked to indicate consent by continuing with the questionnaire.
Subjects completed our experiment between April 19 and May 15, 2015. They earned $0.40 for participating, and the average response time was approximately seven minutes. We used two factual comprehension questions to screen out subjects who failed to correctly identify the collaborative partners mentioned in their stimulus text. Our final sample includes 526 subjects who correctly answered both questions. An a priori power analysis for one-way ANOVA (main effects and interactions) with 15 groups, conducted using G*Power with a desired alpha of .05 and a power of .90 for a small/medium effect size (f = 0.15), shows that 469 respondents were needed for the current design. Approximately 53% of these subjects are female, 80% are white, and the average age is 39 (SD = 13). Also, about 27% of these subjects have only a high school diploma or GED, another 16% have earned up to an Associate’s degree, 43% hold a Bachelor’s degree, and an additional 4% have earned a doctorate or PhD.
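This kind of a priori power calculation can be approximated directly from the noncentral F distribution. The sketch below is an illustrative Python reconstruction rather than the authors’ actual G*Power procedure; it assumes, consistent with the reported target of 469 subjects, that the test of interest is a single main effect (numerator df = 1) within the 15-group design, with noncentrality λ = f²·N.

```python
from scipy.stats import f, ncf

def required_n(effect_f=0.15, k_groups=15, df_num=1, alpha=0.05, target=0.90):
    """Smallest total N at which the fixed-effects F test reaches `target` power.

    Uses the G*Power convention for the noncentrality parameter,
    lambda = f^2 * N, with denominator df = N - k_groups.
    """
    n = k_groups + 2  # smallest N giving a positive denominator df
    while True:
        df_den = n - k_groups
        crit = f.ppf(1 - alpha, df_num, df_den)            # central-F critical value
        power = ncf.sf(crit, df_num, df_den, effect_f**2 * n)  # P(F > crit | lambda)
        if power >= target:
            return n
        n += 1
```

With these inputs the search lands near the 469 subjects reported above; raising `df_num` to 14 (the omnibus test across all 15 conditions) would demand a substantially larger sample for the same effect size.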
As noted, we operationalized conflict of interest as perceived procedural fairness with a scale of seven items adapted from Leventhal. Subjects indicated the extent to which they disagreed (“strongly disagree” = 1) or agreed (“strongly agree” = 7) that “the research partnership” will “ignore stakeholders that they disagree with” [reverse-coded], “listen to each other’s views,” “draw on the best available evidence,” “keep the best interests of consumers in mind,” “hide important findings if they don’t support their organizations” [reverse-coded], “work hard to avoid biasing their results,” and “slant their research to favor industry needs” [reverse-coded]. The order of these items was randomized. This perceived procedural fairness scale is highly reliable (Cronbach’s alpha = .93).
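The scale construction described above can be illustrated with a short sketch. The response matrix below is simulated (it stands in for the actual survey data), but the reverse-coding of negatively worded 1–7 items and the Cronbach’s alpha formula match the description in the text.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Simulated stand-in for the seven-item fairness battery (526 respondents, 1-7 scale):
# four positively worded items and three negatively worded ones keyed to the
# same latent fairness perception.
rng = np.random.default_rng(1)
latent = rng.normal(4, 1, size=(526, 1))
pos = latent + rng.normal(0, 0.7, size=(526, 4))
neg = (8 - latent) + rng.normal(0, 0.7, size=(526, 3))
raw = np.clip(np.rint(np.column_stack([pos, neg])), 1, 7)

# Reverse-code the negatively worded items (columns 4-6) onto the 1-7 scale,
# so that higher scores consistently indicate greater perceived fairness.
raw[:, 4:] = 8 - raw[:, 4:]
alpha = cronbach_alpha(raw)
```

Without the reverse-coding step the negatively worded items would correlate negatively with the rest of the battery and alpha would collapse, which is why the step precedes scale construction.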
The perceived procedural fairness scale correlated highly (r = -.67, p < .01) with a single-item indicator of conflict of interest included in the experiment. For the latter, subjects reported whether they believed the partnership described would create “NO conflict of interest” (= 1) to “a COMPLETE conflict of interest” (= 7) (M = 3.47, SD = 2.05). We used the perceived procedural fairness scale rather than this single-item indicator because of the measurement advantages of multi-item scales and the conceptual desire to advance procedural fairness as a theoretically grounded way to study conflict of interest. The results of additional analyses using the single-item indicator (available upon request) are similar to what we present below.
We measured perceived legitimacy with a scale of three items asking subjects about how the proposed research should be used. Subjects indicated the extent to which they disagreed (“strongly disagree” = 1) or agreed (“strongly agree” = 7) with these three items: “I would make decisions about my diet based on the results of this partnership,” “government should pay attention to the results of this research partnership,” and “I would share the results of this research with people I know.” The order of these items was randomized. This perceived legitimacy scale is reliable (Cronbach’s alpha = .79).
The results of exploratory factor analysis using Maximum Likelihood Estimation and Varimax rotation (available upon request) indicate that the three perceived legitimacy items are distinct from the seven perceived procedural fairness items. Yet, the two scales are highly correlated (r = .64, p < .01).
We conducted most of our analyses with SPSS 22. We first performed a one-way ANOVA with post-hoc tests examining only main effects. The results of a General Linear Model (GLM) analysis (not shown) suggested no substantial interaction effects between the various conditions, so we focus here on main effects in the context of the hypothesized mediation. To this end, after examining means, we conducted a more detailed analysis using the PROCESS macro for testing simple mediation. PROCESS is a well-established add-on to statistical packages such as SPSS that uses ordinary linear regression to estimate direct and indirect relationships and test for mediation. It can also be used to probe interactions. We ran this mediation model (with 1,000 bootstrap samples) four times to obtain estimates of both direct and indirect effects associated with having each type of organization involved in the partnership. Bootstrapping allows for the calculation of confidence intervals for indirect effects. These confidence intervals can be used to assess statistical significance, something not possible in older approaches to assessing mediation such as those popularized by Baron and Kenny. Mediation analyses such as these cannot confirm that fairness perceptions cause legitimacy perceptions, but the ordering is consistent with the underlying literature. Both variables are treated as continuous for the purpose of linear modeling and, because the independent variables are dichotomous, the coefficients can be understood as the effect of including a given partner in the partnership on the dependent variable. We include the PROCESS analyses as the primary assessment of the degree to which the results are at least consistent with our hypotheses and research questions.
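The bootstrap logic behind the indirect-effect test can be sketched with plain OLS. This is an illustrative reconstruction on simulated data, not the authors’ SPSS/PROCESS analysis: X is a dichotomous partner indicator, M the fairness mediator, Y legitimacy; the indirect effect is the product of the X→M and M→Y (controlling for X) coefficients, with a percentile bootstrap confidence interval.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
x = rng.integers(0, 2, n).astype(float)              # 1 = industry partner included
m = 0.5 - 0.9 * x + rng.normal(0, 1, n)              # perceived fairness (mediator)
y = 0.3 + 0.7 * m + 0.1 * x + rng.normal(0, 1, n)    # perceived legitimacy

def ols(dep, preds):
    """OLS with an intercept prepended; returns the coefficient vector."""
    design = np.column_stack([np.ones(len(dep)), preds])
    beta, *_ = np.linalg.lstsq(design, dep, rcond=None)
    return beta

def indirect_effect(x, m, y):
    a = ols(m, x[:, None])[1]                  # path a: X -> M
    b = ols(y, np.column_stack([m, x]))[1]     # path b: M -> Y, controlling for X
    return a * b

point = indirect_effect(x, m, y)

# Percentile bootstrap (1,000 resamples, matching the PROCESS runs described above).
boot = np.empty(1000)
for i in range(1000):
    idx = rng.integers(0, n, n)
    boot[i] = indirect_effect(x[idx], m[idx], y[idx])
ci_lo, ci_hi = np.percentile(boot, [2.5, 97.5])
# The indirect effect is deemed statistically significant when the 95% CI excludes zero.
```

Because the simulated industry indicator lowers fairness, the indirect effect here comes out negative with a CI excluding zero, mirroring the pattern the hypotheses anticipate.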
Additional ways of analyzing the data (e.g., including a measure for the number of partners, or a measure for the number of non-industry partners in a regression model, not shown) similarly provided little additional explanatory power. Also, as noted above, we focus here on the separate main effect of each potential collaborator in the research partnership on both perceived procedural fairness and, ultimately, perceived legitimacy. Using Mplus 7, we also performed structural equation modeling (SEM) using latent variables for perceived procedural fairness and perceived legitimacy. These SEM results—which are displayed in “S2 Table”—are similar to PROCESS analyses.
Table 1 reports post-hoc analyses associated with an initial one-way, between-subjects ANOVA for Study 1. The one-way ANOVA test for main effects found statistically significant but somewhat small differences between the various types of partnerships, both for perceived procedural fairness of the research process [F(14, 511) = 6.90, p < .01, η² = .16] and for perceived legitimacy of the resulting research [F(14, 511) = 4.17, p < .01, η² = .10].
Table 1 provides the results of post-hoc Tukey HSD tests for the perceived procedural fairness comparisons and the results of Games-Howell tests for the perceived legitimacy comparisons. We used two different tests to assess pairwise differences between the various conditions because Levene tests suggested equal variances across conditions for the perceived procedural fairness variable [F(14, 511) = .80, p = .67] but unequal variances for the perceived legitimacy variable [F(14, 511) = 1.94, p = .02]. The results of these tests indicate that including Kellogg’s in a research partnership investigating the health effects of small amounts of transfats in foods decreases both the perceived procedural fairness and the perceived legitimacy of the proposed research. Indeed, the partnerships with the eight lowest means for perceived procedural fairness and for perceived legitimacy each include Kellogg’s. The impacts of including other partners are less clear in Table 1, though it does appear that partnerships including the UCS are perceived as fairer and more legitimate than partnerships that do not include the NGO.
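The decision rule described above can be reproduced with scipy’s Levene test. The sketch below uses simulated group scores (not the study data) to show the logic: a significant Levene result points to heterogeneous variances and hence Games-Howell, while a non-significant result permits Tukey HSD.

```python
import numpy as np
from scipy.stats import levene

rng = np.random.default_rng(7)

# Simulated 1-7 scale scores for three hypothetical experimental conditions.
homogeneous = [rng.normal(4.0 + d, 1.0, 200) for d in (0.0, 0.2, 0.4)]
heterogeneous = [rng.normal(4.0, s, 200) for s in (1.0, 1.0, 3.0)]

stat_het, p_het = levene(*heterogeneous)  # variances clearly differ
stat_hom, p_hom = levene(*homogeneous)    # variances drawn equal

# p < .05 -> variances differ -> use Games-Howell for pairwise comparisons;
# otherwise Tukey HSD (which assumes equal variances) is appropriate.
use_games_howell = p_het < 0.05
```

Games-Howell itself is not in scipy or statsmodels core (it is available in packages such as pingouin); Tukey HSD is available via `statsmodels.stats.multicomp.pairwise_tukeyhsd`.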
Table 2 presents the results of the PROCESS mediation model for Study 1. Such models are based on Ordinary Least Squares regression but include two parts; the first part involves predicting the initial outcome that is hypothesized as the mediator (perceived procedural fairness, in this case) and the second part involves predicting the outcome that is the ultimate dependent variable (perceived legitimacy, in this case). In both cases, the estimates can be understood as unstandardized regression coefficients. The PROCESS macro then also provides a calculation of the overall direct relationship between the various independent variables and the indirect effect that is accounted for through the mediator with bootstrapping used to provide 95% confidence intervals in the absence of a statistical test for indirect effects. Statistical significance for the indirect effects, in this regard, can thus be understood as confidence intervals that do not include zero.
Overall, the Study 1 model explains 15% of the variation in perceived procedural fairness and 42% of the variation in perceived legitimacy. These results, more than the mean comparisons, clearly convey the effect of including Kellogg’s in a research partnership investigating the health effects of small amounts of transfats in foods. Including Kellogg’s leads to an almost 1-point decrease on the 7-point perceived procedural fairness scale (supporting H1a). The test of indirect effects in the bottom part of Table 2 shows that the inclusion of Kellogg’s has a statistically significant negative indirect effect on perceived legitimacy through its negative effect on perceived procedural fairness (supporting H1b).
Further, including the UCS in a partnership leads to a 0.41-point increase in perceived procedural fairness (RQ2a). This inclusion also has a positive effect on perceived legitimacy via the indirect, positive path through perceived procedural fairness (RQ2b). Including Purdue University in a partnership has no statistically significant effect on either perceived procedural fairness or perceived legitimacy, offering no support for H2a and H2b, respectively. Also, including the CDC in a partnership has no statistically significant effect on perceived procedural fairness (RQ1a), but it does have a positive, direct effect on perceived legitimacy (RQ1b).
Overall, Study 1 clearly supports H1a and H1b. Including a private-sector representative in a research partnership decreases the extent to which people perceive the collaboration as procedurally fair. Further, the pattern of results is consistent with the idea that this negative impact may reduce the extent to which people perceive the research results as a legitimate source of knowledge for use in their lives. Of course, these results may be partly an artifact of the particular issue we examined in Study 1. To allay such concern, we conducted a second, nearly identical experiment that focused on a different health science issue.
For Study 2, we chose to focus on the health impacts of genetically modified (GM) food. Industry funding of GM food research has received considerable public attention (e.g., ). Further, science and risk communication scholars are devoting more attention to the topic, even applying procedural justice scholarship to understand public perceptions. To aim for wider generalizability across Studies 1 and 2, we further clarified that the proposed research on GM food would investigate possible positive health impacts of new GM grains (e.g., rice, wheat, corn) that are being designed to absorb less toxic substances from the soil than do current grains. (The full stimulus text is provided in S2 Text.) This focus on the potential positive health impacts of GM food in Study 2 may tap different dynamics than does the focus on the potential negative health impacts of small amounts of transfats in food in Study 1. (The consent process was identical to Study 1.)
As with Study 1, the Institutional Review Board of the Human Research Protection Program at Michigan State University approved this research (IRB #x15-167e, exempt).
Other than shifting the substantive focus from transfats to GM food and replacing the CDC with the U.S. Food and Drug Administration (FDA) as the governmental agency partner, Study 2 employed the same design and identical measures to those used in Study 1. We again administered our experiment via Qualtrics to adult U.S. residents we recruited via AMT with a HIT titled “Collaboration between Different Research Partners.” Subjects completed our experiment between July 8 and July 14, 2015, and earned $0.75 for participating. The average response time was approximately eleven minutes. After excluding subjects who participated in previous studies, our final sample includes 627 subjects who correctly identified the collaborative partners mentioned in their stimulus text. For a discussion of power, see Study 1. Approximately 55% of subjects are female, 88% are white, and the average age is 37 (SD = 13). Also, about 31% of subjects have only a high school diploma or GED, another 14% have earned up to an Associate’s degree, 42% hold a Bachelor’s degree, and an additional 3% have earned a doctorate or PhD.
We measured conflict of interest as perceived procedural fairness with the same 7-item scale (Cronbach’s alpha = .93) and perceived legitimacy with the same 3-item scale (Cronbach’s alpha = .81) that we used in Study 1. As in Study 1, the results of exploratory factor analysis using Maximum Likelihood Estimation and Varimax rotation (available upon request) indicate that the three perceived legitimacy items are distinct from the seven procedural fairness items. Yet, the two scales are highly correlated (r = .61, p < .01). Also, the correlation between a 7-point single-item indicator of conflict of interest and our perceived procedural fairness scale was again quite high (r = -.60, p < .01). We also performed SEM using latent variables for perceived procedural fairness and perceived legitimacy. These SEM results, which are displayed in S2 Table, are similar to those presented in Table 2.
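For readers unfamiliar with the reliability statistic reported here, Cronbach's alpha can be computed directly from an item-response matrix. The sketch below (in Python, with small illustrative data rather than the study's actual responses) implements the standard formula:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_subjects x n_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative data: 5 subjects rating 3 highly consistent 7-point items
demo = np.array([
    [5, 5, 4],
    [4, 4, 4],
    [2, 3, 2],
    [1, 1, 2],
    [3, 3, 3],
])
alpha = cronbach_alpha(demo)
```

Values above roughly .80, like the .93 and .81 reported here, are conventionally treated as acceptable scale reliability.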
Table 3 again reports means and post-hoc test results from a one-way, between-subjects ANOVA for Study 2. As with Study 1, the omnibus results reveal statistically significant main effect differences between the various types of partnerships both for perceived procedural fairness of the research process [F(14, 612) = 5.10, p < .01, η2 = .10] and for perceived legitimacy of the resulting research [F(14, 612) = 3.80, p < .01, η2 = .08]. This pattern of results, coupled with the post-hoc tests also reported in Table 3, indicates that including Kellogg’s in a research partnership investigating the health impacts of new GM grains decreases both the perceived procedural fairness (H1a) and perceived legitimacy (H1b) of the proposed research.
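The omnibus test reported above is a standard one-way, between-subjects ANOVA with eta-squared as the effect size. A minimal sketch of that computation, using toy data in place of the study's 15-condition design:

```python
import numpy as np

def one_way_anova(groups):
    """Return (F statistic, eta-squared) for a one-way, between-subjects design."""
    all_obs = np.concatenate(groups)
    grand_mean = all_obs.mean()
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_between = len(groups) - 1             # conditions - 1
    df_within = len(all_obs) - len(groups)   # N - conditions
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    eta_sq = ss_between / (ss_between + ss_within)  # proportion of variance explained
    return f_stat, eta_sq

# Toy ratings standing in for three experimental conditions
groups = [np.array([1., 2., 3.]), np.array([2., 3., 4.]), np.array([5., 6., 7.])]
f_stat, eta_sq = one_way_anova(groups)
```

With the study's 15 conditions and 627 subjects, the same logic yields the F(14, 612) tests reported above.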
Table 2 also presents the results of the PROCESS mediation model for Study 2. Overall this model explains 9% of the variation in perceived procedural fairness and 61% of the variation in perceived legitimacy. These results of the PROCESS mediation model clearly convey the effect of including Kellogg’s in a research partnership investigating GM grains. Including Kellogg’s leads to a 0.73-unit decrease in perceived procedural fairness (supporting H1a). The test of indirect effects in the bottom part of Table 2 suggests that the inclusion of Kellogg’s has a statistically significant negative indirect effect on perceived legitimacy through its negative effect on perceived procedural fairness (supporting H1b).
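The test of indirect effects amounts to estimating the a path (condition → fairness) and the b path (fairness → legitimacy, controlling for condition) and bootstrapping the product a×b. The following is a simplified sketch of that logic with simulated data; it is not the PROCESS macro itself, and the coefficients are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(42)

def indirect_effect(x, m, y):
    """a*b indirect effect: x -> m (path a), then m -> y controlling for x (path b)."""
    a = np.polyfit(x, m, 1)[0]                       # slope of mediator on condition
    design = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]  # slope of outcome on mediator
    return a * b

def percentile_ci(x, m, y, n_boot=2000):
    """95% percentile bootstrap CI for the indirect effect."""
    n = len(x)
    boots = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)  # resample subjects with replacement
        boots[i] = indirect_effect(x[idx], m[idx], y[idx])
    return np.percentile(boots, [2.5, 97.5])

# Simulated data: the condition (1 = industry partner present) lowers fairness,
# which in turn lowers legitimacy
n = 300
x = rng.integers(0, 2, n).astype(float)
m = -0.7 * x + rng.normal(0, 0.5, n)  # perceived procedural fairness
y = 0.8 * m + rng.normal(0, 0.5, n)   # perceived legitimacy
lo, hi = percentile_ci(x, m, y)
```

A confidence interval that excludes zero, as here, is the criterion for a statistically significant indirect effect.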
As in Study 1, including the UCS in a partnership leads to an increase in perceived procedural fairness (RQ2a). The results are again consistent with a potential positive effect of this partner on perceived legitimacy via the indirect, positive path through perceived procedural fairness (RQ2b). Including Purdue in a partnership has no statistically significant effect on either perceived procedural fairness or perceived legitimacy, offering no support to H2a and H2b, respectively. Finally, while including a governmental agency had a positive, direct effect on perceived legitimacy (RQ1b) in Study 1 (the CDC), it has no such effect in Study 2 (the FDA).
The results of Study 2 largely confirm those of Study 1, increasing our confidence in the patterns we have found. Given the size and consistency of this effect, we conducted a third experiment to gather some qualitative data to better understand why subjects perceive the inclusion of a private sector partner with such skepticism.
As with Studies 1 and 2, the Institutional Review Board of the Human Research Protection Program at Michigan State University approved this research (IRB #x15-167e, exempt).
Study 3 used the same transfats stimulus and between-subjects design with 15 randomly assigned conditions as in Study 1. After providing consent in the same manner as in Studies 1 and 2, subjects read their assigned description of a proposed partnership to examine the effects of small amounts of transfats in foods and answered the same three comprehension questions as used in Study 1. The remainder of the experiment simply asked subjects to respond to three open-ended questions about the partnership and its research. Below, we describe the results of a content analysis of responses to the following question.
Please share your views about the nature of the proposed research partnership itself. In your answer, please address the following. What are the strengths and limitations of the proposed partnership? Please fully explain your answer, including any positive or negative thoughts you have about any specific partners.
The other two questions addressed perceived impacts of the research and the types of activities the respondent thought might help mitigate potential problems and were meant to assist with future research. These are not addressed here.
We administered our experiment via Qualtrics to adult U.S. residents we recruited via AMT with a HIT titled “Answer a Few Questions about a Proposed Scientific Partnership.” Our experiment was completed by 305 U.S. residents on May 31, 2015. Subjects earned $0.75 for completing the experiment, which took slightly less than eight minutes on average. After excluding subjects who participated in Study 1, our final sample includes 222 subjects who provided correct answers to three comprehension questions. That is, they (a) knew that food producers use transfats to give food softer texture, (b) knew that the described study would examine if small amounts of transfats are unhealthy, and (c) correctly identified the presence or absence of Kellogg’s in the proposed partnership.
A co-author experienced in qualitative research performed the qualitative data analysis. Given that our specific objective (to better understand why inclusion of a private sector partner engendered heightened skepticism) was fairly straightforward and the data structure (brief responses to a single open-ended question) was relatively simple, we decided that multiple coders were unnecessary. Our co-author employed an inductive coding scheme to analyze responses to the italicized question above. First, he read all the responses, coding them for the presence or absence of negative, positive, and/or neutral comments. In doing so, he also identified 13 substantive themes that emerged from subjects’ responses. With this list of potential themes, he then re-read all the responses and coded them for the presence or absence of each substantive theme. Approximately 11% of responses contained no theme, nearly 49% contained just one, about 35% contained two, and approximately 6% contained three. Below, we report and discuss the seven substantive themes appearing in at least 10% of responses.
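The presence/absence coding described above reduces to simple tallies once each response is mapped to the set of themes it contains. A minimal sketch with hypothetical coded responses (theme labels invented for illustration):

```python
from collections import Counter

# Hypothetical coded data: each response mapped to the themes it contains
coded = [
    {"bias"}, {"bias", "checks"}, set(), {"good"},
    {"bias", "good"}, {"checks"}, {"bias", "checks", "good"},
]

# How often each theme appears across responses
theme_counts = Counter(t for themes in coded for t in themes)

# How many responses contain 0, 1, 2, or 3 themes
per_response = Counter(len(themes) for themes in coded)
```

Dividing each tally by the number of responses yields the percentages reported in the text.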
Given our interest in public reaction to the inclusion of a private sector representative in a scientific partnership, we analyzed subjects’ responses in two subsamples: subjects who responded to partnerships including Kellogg’s (120 or 54%) and those who responded to partnerships that did not include Kellogg’s (102 or 46%). When the proposed partnership included Kellogg’s, 77% (92 of 120) of responses included a negative comment, 54% (65) included a positive comment, and 4% (5) remained neutral with neither a clearly positive nor clearly negative comment. When the proposed partnership did not include Kellogg’s, only 28% (29) of responses included a negative comment, 79% (81) of responses included a positive comment, and 9% (9) of responses remained neutral. The rather large difference in the percentage of negative responses across the two subsamples (48%) generally confirms the patterns revealed in Studies 1 and 2.
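One quick way to gauge a gap of this size is a two-proportion z-test on the reported cell counts. The paper does not report this test; the sketch below simply illustrates, using the counts above, that a 48-point difference with these subsample sizes is far beyond what chance alone would produce:

```python
import math

def two_prop_z(k1, n1, k2, n2):
    """z statistic for the difference between two independent proportions."""
    p1, p2 = k1 / n1, k2 / n2
    p_pool = (k1 + k2) / (n1 + n2)                              # pooled proportion
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))   # pooled standard error
    return (p1 - p2) / se

# Negative comments: 92 of 120 with Kellogg's vs. 29 of 102 without
z = two_prop_z(92, 120, 29, 102)
```

Any |z| above roughly 1.96 is significant at the conventional .05 level; the value here is several times that.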
Table 4 reports the most prevalent themes—with italicized illustrative examples—about the proposed partnership that emerged from our content analysis. The percentages in Table 4 sum to more than 100, since subjects’ responses may contain more than one theme.
With Kellogg’s as a partner.
Three major themes emerged from responses by subjects exposed to partnerships including Kellogg’s. Almost 70% of these subjects noted that Kellogg’s is so problematic as a partner that its mere inclusion may call the entire partnership into question. The presence of Kellogg’s in the partnership provoked distrust and skepticism among many subjects. Indeed, when these subjects mentioned both positive and negative aspects of the partnership, the positives nearly always referred to partners or features other than Kellogg’s, and the negatives nearly always referred to Kellogg’s. Some assumed that Kellogg’s would likely try to skew the research results to their private interests and/or conceal undesirable results. Kellogg’s was described as “clearly,” “very,” “extremely,” “incredibly,” and “strongly” biased with an “ulterior motive,” “profit motive,” “vested financial interest,” and “conflict of interest.”
Almost one-third of responses (30%) claimed that other groups in the partnership might provide at least some checks on the financial influence and possibly dubious activities of Kellogg’s. These subjects believed that non-Kellogg’s partners would add scientific credibility and honesty to the research, while reducing the likelihood that Kellogg’s bias would influence methods and results. Subjects sometimes expected, and at other times merely hoped, that this would be the case. Generally, subjects seemed more confident in Purdue’s ability to counterbalance Kellogg’s influence than in the abilities of the CDC or UCS to do so. Indeed, a few subjects even acknowledged that other partners (notably, the CDC) may be easily manipulated by Kellogg’s.
Finally, about one-fourth of subjects (27%) mentioned that the partnership was at least minimally good, even while expressing reservations. These responses described the partnership as “good,” “great,” “worthwhile,” “strong,” and “balanced.” They noted that bringing together diverse organizations with different interests and points of view would strengthen the resulting research.
Without Kellogg’s as a partner.
Four major themes emerged from the responses of subjects exposed to partnerships not including Kellogg’s. The most prevalent theme, found in 67% of responses, was that the partnership was at least somewhat good—as illustrated by responses calling the partnership “a very good idea,” “fruitful,” “very beneficial,” “greatly beneficial,” and “great.” Several subjects voiced the adage that “more heads are better than one,” noting that multiple partners assure thoroughness and that partnerships can often achieve greater advances than when just one actor goes it alone. Overall, a good number of responses conveyed the expectation that the partnership would produce important, reliable, and useful results that would improve our understanding of the health risks of transfats.
Approximately one-fourth of subjects (25%) explicitly noted that the partners seemed immune to outside influence and had little or no conflict of interest or bias. Subjects claimed that the partners seem “fairly objective,” “like relatively neutral parties,” and “an unbiased group” with no clear agenda. Key here seemed to be the perception of no obvious interests in the outcome of the research, since the partners were not funded or employed by food companies. Because of this, subjects expected that the research would be performed in “an honest and unbiased manner.”
On the other hand, a third theme (found in 17% of responses) expressed concern about potential conflict of interest or bias. Subjects conveyed either general concerns about the broad influence of money or the potential for one partner to dominate another, or they focused more specifically on the potential financial conflicts of interests of individual scientists. Yet, a final theme (found in 12% of responses) claimed that the differences between the partners would balance out and their different points of view would likely neutralize any potential conflicts of interest or bias. Such subjects argued that one or more partners would serve as a counterbalance to any pressure another partner would potentially assert to bias the methods or results.
Private industry now dominates U.S. R&D funding, and university scientists are under increasing pressure to seek non-government funding and participate in public-private partnerships with private sector representatives [2–4]. Yet, much research has documented that in some cases industry-funded research may be less than credible, especially where the risk of regulation is nontrivial (e.g., [8, 9, 11, 79]). Further, public skepticism of industry-funded research—heightened by revelations about bias in such work—may constrain the efficacy and influence of its results (e.g., [5, 6]).
We performed three experiments to investigate public perception of industry-funded research. Building upon efforts to adapt procedural justice scholarship to the study of health and risk communication (e.g., [57, 59, 80]), we drew insights from procedural justice research to examine how the inclusion of an industry partner in a scientific collaboration influences the perceived procedural fairness of the partnership and the perceived legitimacy of the resulting research.
We found fairly consistent results across Studies 1 and 2. In both experiments, including Kellogg’s in a scientific partnership indirectly decreased perceived legitimacy of the partnership’s research via its negative effect on the perceived procedural fairness of the partnership. Also in both experiments, including an NGO (UCS) in a scientific partnership indirectly increased perceived legitimacy via its positive effect on perceived procedural fairness. Neither the inclusion of Purdue University nor the inclusion of a government agency (the CDC or the FDA) in the scientific partnership had a consistent effect on perceived procedural fairness or perceived legitimacy across both studies.
The results of Study 3 provided qualitative insights for better understanding the negative effect of including Kellogg’s on both the perceived procedural fairness and perceived legitimacy of the scientific partnership. Subjects exposed to a scientific partnership including Kellogg’s were nearly three times more likely to offer a negative comment about the partnership than were subjects exposed to a scientific partnership not including Kellogg’s. The former group of subjects expressed deep concern that Kellogg’s financial interests would lead it to skew the results of the partnership in ways favorable for the company. Although some subjects hoped that including other actors—whether research universities, government agencies, or NGOs—in the partnership might reduce this potential for bias, few reported with confidence that this would actually happen. In contrast, when Kellogg’s was not included in the partnership, subjects generally believed that the resulting research would be conducted fairly.
We end by briefly outlining an agenda for moving this scholarship forward. First, additional research should aim to replicate this study to more completely assess the consistency and generalizability of our results. Such future research could (a) explore a wider array of substantive areas (e.g., environmental impacts associated with new technologies, health impacts of industrial chemicals, effectiveness of drugs or supplements), (b) consider different kinds of potential research partners, and (c) examine whether it matters what the partnership ultimately finds. Procedural justice scholarship could provide additional insight here inasmuch as past results suggest that people may discount results they disagree with if they learn about those results before learning about the processes that produced them. Another important avenue of research would (d) consider additional procedures (e.g., independent advisory boards, transparency tools), beyond partnerships, that organizations might use either on their own or in combination. Also, as noted above, it may be worth examining interactions between partners or the effects of adding multiple partners. We conducted initial analyses of these questions with the present data but ultimately focused on the main effects of each partner; we saw little evidence that the combinations mattered, so the most parsimonious model made the most sense for this initial study. However, this should not be taken as strong evidence that partnership combinations are unimportant, either for research or in real life.
Second, given the sizable contribution of industry funding to scientific research in the U.S., and especially if additional studies replicate our results, future multidisciplinary work should investigate historical and contemporary efforts to implement procedures that aim to reduce conflicts of interest due to industry involvement. This may include both historical and sociological research into what organizations have done or are doing, as well as identification of additional procedures (or combinations of procedures) that might be tried. Insights from procedural justice scholarship may further guide this research. For example, those interested in the cognitive mechanisms behind fairness perceptions could test whether communicating about conflict of interest mitigation procedures is more effective when subjects are forced to think heuristically or when they are more uncertain about the underlying topics.
S1 Table. Descriptive statistics from the pre-test of potential research partners.
S2 Table. Unstandardized direct, indirect, and total effects from structural equation model predicting perceived legitimacy (latent variable) by presence of collaborative partner with perceived procedural fairness (latent variable) as mediator.
S1 Text. Stimulus.
Study 1 stimulus with manipulated text italicized.
- Conceptualization: JCB AMM KCE NEK JDM.
- Data curation: JCB AMM NZ.
- Formal analysis: JCB AMM NZ.
- Funding acquisition: KCE.
- Investigation: JCB AMM NZ.
- Methodology: JCB AMM KCE NEK JDM.
- Project administration: KCE.
- Visualization: JCB AMM NZ.
- Writing – original draft: JCB AMM.
- Writing – review & editing: JCB AMM KCE NZ NEK JDM.
- 1. National Science Board. Research and development: National trends and international comparisons (Chapter 4). Washington, DC: National Science Foundation; October 22, 2014.
- 2. Gulbrandsen M, Mowery D, Feldman M. Introduction to the special section: Heterogeneity and university—industry relations. Research Policy. 2011;40(1):1–5.
- 3. Nestle M. Corporate funding of food and nutrition research: Science or marketing? JAMA Internal Medicine. 2016;176(1):13–4. pmid:26595855
- 4. Perkmann M, Tartari V, McKelvey M, Autio E, Broström A, D’Este P, et al. Academic engagement and commercialisation: A review of the literature on university—industry relations. Research Policy. 2013;42(2):423–42.
- 5. Horel S, Bienkowski B. Special Report: Scientists Critical of EU Chemical Policy Have Industry Ties 2013 [updated October 8, 2015]. http://www.environmentalhealthnews.org/ehs/news/2013/eu-conflict
- 6. Kloor K. GM-crop opponents expand probe into ties between scientists and industry. Nature. 2015;524(7564):145. pmid:26268173
- 7. Proctor RN. The history of the discovery of the cigarette—lung cancer link: evidentiary traditions, corporate denial, global toll. Tobacco Control. 2012;21(2):87–91. pmid:22345227
- 8. Rosner D, Markowitz G. Deceit and Denial: The Deadly Politics of Industrial Pollution. Berkeley, CA: University of California Press; 2002.
- 9. Michaels D. Doubt is Their Product: How Industry's Assault on Science Threatens Your Health. New York, NY: Oxford University Press; 2008.
- 10. Elliott KC. Is a Little Pollution Good for You?: Incorporating Societal Values in Environmental Research. New York, NY: Oxford University Press; 2011. 246 p.
- 11. McGarity TO, Wagner W. Bending Science: How Special Interests Corrupt Public Health Research. Cambridge, MA: Harvard University Press; 2008. 384 p.
- 12. Michaels D, Monforton C. Manufacturing uncertainty: Contested science and the protection of the public’s health and environment. American Journal of Public Health. 2005;95(S1):S39–S48.
- 13. Dunlap RE, McCright AM. Challenging climate change: The denial countermovement. In: Dunlap RE, Brulle RJ, editors. Climate Change and Society: Sociological Perspectives. New York, NY: Oxford University Press; 2015. p. 300–32.
- 14. Brulle RJ. Institutionalizing delay: Foundation funding and the creation of U.S. climate change counter-movement organizations. Climatic Change. 2014;122(4):681–94.
- 15. Oreskes N, Conway EM. Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming. 1st U.S. ed. New York, NY: Bloomsbury Press; 2010. 355 p.
- 16. Malina D, Rosenbaum L. Conflicts of interest: Understanding bias, The case for careful study. The New England Journal of Medicine. 2015;372(20):1959–63.
- 17. Colquitt JA, Conlon DE, Wesson MJ, Porter C, Ng KY. Justice at the millennium: A meta-analytic review of 25 years of organizational justice research. J Appl Psychol. 2001;86(3):425–45. pmid:11419803
- 18. McComas KA, Tuite LS, Waks L, Sherman LA. Predicting satisfaction and outcome acceptance with advisory committee meetings: The role of procedural justice. Journal of Applied Social Psychology. 2007;37(5):905–27.
- 19. McComas KA, Simone LM. Media coverage of conflicts of interest in science. Sci Commun. 2003;24(4):395–419.
- 20. McComas KA, Tuite LS, Sherman LA. Conflicted scientists: the “shared pool” dilemma of scientific advisory committees. Public Underst Sci. 2005;14(3):285–303.
- 21. Davis M. Introduction. In: Davis M, Stark A, editors. Conflict of Interest in the Professions. New York, NY: Oxford University Press; 2001. p. 3–19.
- 22. Field MJ, Lo B, editors. Conflict of Interest in Medical Research, Education, and Practice. Washington, DC: National Academies Press; 2009.
- 23. Cho MK, Shohara R, Schissel A, Rennie D. Policies on faculty conflicts of interest at US universities. JAMA. 2000;284(17):2203–8. pmid:11056591
- 24. Bekelman JE, Li Y, Gross CP. Scope and impact of financial conflicts of interest in biomedical research: A systematic review. JAMA. 2003;289(4):454–65. pmid:12533125
- 25. Lurie P, Almeida CM, Stine N, Stine AR, Wolfe SM. Financial conflict of interest disclosure and voting patterns at Food and Drug Administration drug advisory committee meetings. JAMA. 2006;295(16):1921–8. pmid:16639051
- 26. Anderson TS, Dave S, Good CB, Gellad WF. Academic medical center leadership on pharmaceutical company boards of directors. JAMA. 2014;311(13):1353–5. pmid:24691612
- 27. McCrary SV, Anderson CB, Jakovljevic J, Khan T, McCullough LB, Wray NP, et al. A national survey of policies on disclosure of conflicts of interest in biomedical research. The New England Journal of Medicine. 2000;343(22):1621–6. pmid:11096171
- 28. Hampson L, Agrawal M, Joffe S, Gross CP, Verter J, Emanuel EJ. Patients' Views on Financial Conflicts of Interest in Cancer Research Trials. The New England Journal of Medicine. 2006;355(22):2330–7. pmid:17135586
- 29. Korn D, Carlat D. Conflicts of interest in medical education: Recommendations from the pew task force on medical conflicts of interest. JAMA. 2013;310(22):2397–8. pmid:24327035
- 30. Brennan TA, Rothman DJ, Blank L, Blumenthal D, Chimonas SC, Cohen JJ, et al. Health industry practices that create conflicts of interest: A policy proposal for academic medical centers. JAMA. 2006;295(4):429–33. pmid:16434633
- 31. Levinsky NG. Nonfinancial conflicts of interest in research. The New England Journal of Medicine. 2002;347(10):759–61. pmid:12213950
- 32. Studdert DM, Mello MM, Brennan TA. Financial Conflicts of Interest in Physicians' Relationships with the Pharmaceutical Industry—Self-Regulation in the Shadow of Federal Prosecution. The New England Journal of Medicine. 2004;351(18):1891–900. pmid:15509824
- 33. Elliott KC. Scientific judgment and the limits of conflict-of-interest policies. Accountability in Research. 2008;15(1):1–29. pmid:18298027
- 34. Cain Daylian M, Loewenstein G, Moore Don A. The Dirt on Coming Clean: Perverse Effects of Disclosing Conflicts of Interest. The Journal of Legal Studies. 2005;34(1):1–25.
- 35. Cain Daylian M, Loewenstein G, Moore Don A. When sunlight fails to disinfect: Understanding the perverse effects of disclosing conflicts of interest. Journal of Consumer Research. 2011;37(5):836–57.
- 36. Licurse A, Barber E, Joffe S, Gross Ca. The impact of disclosing financial ties in research and clinical care: A systematic review. Archives of Internal Medicine. 2010;170(8):675–82. pmid:20421551
- 37. Chaudhry S, Schroter S, Smith R, Morris J. Does declaration of competing interests affect readers' perceptions? A randomised trial. BMJ. 2002;325(7377):1391–2. pmid:12480854
- 38. Schroter S, Morris J, Chaudhry S, Smith R, Barratt H. Does the type of competing interest statement affect readers' perceptions of the credibility of research? Randomised trial. BMJ: British Medical Journal. 2004;328(7442):742–3. pmid:14980983
- 39. Goodwin RE, Mullan BA. Trust not in money: The effect of financial conflict of interest disclosure on dietary behavioural intention. British Food Journal. 2009;111(5):408–20.
- 40. Kesselheim AS, Robertson CT, Myers JA, Rose SL, Gillet V, Ross KM, et al. A randomized study of how physicians interpret research funding disclosures. New England Journal of Medicine. 2012;367(12):1119–27. pmid:22992075
- 41. Weinfurt K, Hall M, Dinan M, DePuy V, Friedman J, Allsbrook J, et al. Effects of disclosing financial interests on attitudes toward clinical research. J GEN INTERN MED. 2008;23(6):860–6. pmid:18386101
- 42. Agoritsas T, Deom M, Perneger TV. Study design attributes influenced patients' willingness to participate in clinical research: a randomized vignette-based study. Journal of Clinical Epidemiology. 2011;64(1):107–15. pmid:20558036
- 43. Colquitt JA, Greenberg J, Zapata-Phelan CP. What is organizational justice? A historical overview. In: Greenberg J, Colquitt JA, editors. Handbook of Organizational Justice. Mahwah, NJ: Lawrence Erlbaum Associates; 2005. p. 3–58.
- 44. Tyler TR. Social Justice: Outcome and Procedure. International Journal of Psychology. 2000;35(2):117–25.
- 45. Thibaut JW, Walker L. Procedural Justice: A Psychological Analysis. Mahwah, NJ: Lawrence Erlbaum Associates; 1975.
- 46. Blader SL, Tyler TR. How can theories of organizational justice explain the effects of fairness. In: Greenberg J, Colquitt JA, editors. Handbook of Organizational Justice. Mahwah, NJ: Lawrence Erlbaum Associates; 2005. p. 329–54.
- 47. van den Bos K, Vermunt R, Wilke HAM. Procedural and distributive justice: What is fair depends more on what comes first than on what comes next. Journal of Personality and Social Psychology. 1997;72(1):95–104.
- 48. van den Bos K, Miedema J. Toward understanding why fairness matters: The influence of mortality salience on reactions to procedural fairness. Journal of Personality and Social Psychology. 2000;79(3):355–66. pmid:10981839
- 49. Skitka LJ, Houston DA. When due process is of no consequence: Moral mandates and presumed guilt or innocence. Social Justice Research. 2001;14(3):305–26.
- 50. Tyler TR, Blader SL. The group engagement model: Procedural justice, social identity, and cooperative behavior. Personality and Social Psychology Review. 2003;7(4):349–61. pmid:14633471
- 51. Blader SL, Tyler TR. Testing and extending the group engagement model: Linkages between social identity, procedural justice, economic outcomes, and extrarole behavior. J Appl Psychol. 2009;94(2):445–64. pmid:19271800
- 52. Tyler TR. Trust and legitimacy: Policing in the USA and Europe. European Journal of Criminology. 2011;8(4):254–66.
- 53. Lind EA, Walker L, Kurtz S, Musante L, Thibaut JW. Procedure and outcome effects on reactions to adjudicated resolution of conflicts of interest. Journal of Personality and Social Psychology. 1980;39(4):643.
- 54. Rasinski KA, Tyler TR, Fridkin K. Exploring the function of legitimacy: Mediating effects of personal and institutional legitimacy on leadership endorsement and system support. Journal of Personality and Social Psychology. 1985;49(2):386–94.
- 55. Gangl A. Procedural justice theory and evaluations of the lawmaking process. Political Behavior. 2003;25(2):119–49.
- 56. Hibbing JR, Theiss-Morse E. Stealth Democracy: Americans' Beliefs about how Government Should Work. New York, NY: Cambridge University Press; 2002. 284 p.
- 57. McComas KA, Besley JC, Steinhardt J. Factors influencing U.S. consumer support for genetic modification to prevent crop disease. Appetite. 2014;78(1).
- 58. Besley JC, Oh S-H. The Impact of Accident Attention, Ideology, and Environmentalism on American Attitudes Toward Nuclear Energy. Risk Anal. 2014;35(5):949–64.
- 59. Besley JC, McComas KA. Fairness, public engagement and risk communication. In: Arvai JL, Rivers L, editors. Effective Risk Communication. New York, NY: Routledge/Earthscan; 2014. p. 108–23.
- 60. Joss S, Brownlea A. Considering the concept of procedural justice for public policy-and decision-making in science and technology. Science and Public Policy. 1999;26(5):321–30.
- 61. McComas KA, Trumbo CW, Besley JC. Public meetings about suspected cancer clusters: The impact of voice, interactional justice, and risk perception on attendees’ attitudes in six communities. J Health Commun. 2007;12(6):527–49. pmid:17763051
- 62. Thrasher JF, Besley JC, Gonzalez W. Perceived justice and popular support for public health laws: A case study around comprehensive smoke-free legislation in Mexico City. Soc Sci Med. 2010;70(5):787–93. pmid:20022682
- 63. Fiske ST, Dupree C. Gaining trust as well as respect in communicating to motivated audiences about science topics. Proceedings of the National Academy of Sciences. 2014;111(Supplement 4):13593–7.
- 64. McComas KA, Besley JC, Yang Z. Risky business: The perceived justice of local scientists and community support for their research. Risk Anal. 2008;28(6):1539–52. pmid:18808391
- 65. Huffman WE, Rousu M, Shogren JF, Tegene A. Who do consumers trust for information: The case of genetically modified foods? American Journal of Agricultural Economics. 2004;86(5):1222–9.
- 66. Frewer LJ, Howard C, Hedderley D, Shepherd R. What determines trust in information about food-related risks? Underlying psychological constructs. Risk Anal. 1996;16(4):473–86. pmid:8819340
- 67. Clements JM, McCright AM, Dietz T, Marquart-Pyatt ST. A behavioural measure of environmental decision-making for social surveys. Environmental Sociology. 2015;1(1):27–37.
- 68. McCright AM, Dentzman K, Charters M, Dietz T. The influence of political ideology on trust in science. Environmental Research Letters. 2013;8(4):044029.
- 69. Paolacci G, Chandler J. Inside the turk: Understanding Mechanical Turk as a participant pool. Current Directions in Psychological Science. 2014;23(3):184–8.
- 70. Weinberg JD, Freese J, McElhattan D. Comparing data characteristics and results of an online factorial survey between a population-based and a crowdsource-recruited sample. Sociological Science. 2014;1:292–310.
- 71. Berinsky AJ, Huber GA, Lenz GS. Evaluating online labor markets for experimental research: Amazon.com's Mechanical Turk. Polit Anal. 2012;20(3):351–68.
- 72. Paolacci G, Chandler J, Ipeirotis PG. Running experiments on Amazon Mechanical Turk. Judgment and Decision Making. 2010;5(5):411–9.
- 73. Faul F, Erdfelder E, Buchner A, Lang A. G*Power (version 3.1) [Computer software]. Bonn: Bonn University; 1992.
- 74. Leventhal GS. What should be done with equity theory? New approaches to the study of fairness in social relationships. In: Gergen K, Greenberg M, Wilis R, editors. Social Exchange: Advances in Theory and Research. New York, NY: Plenum Press; 1980.
- 75. Hayes AF. Introduction to Mediation, Moderation, and Conditional Process Analysis: A Regression-based Approach. New York, NY: The Guilford Press; 2013. xvii, 507 p.
- 76. Baron RM, Kenny DA. The moderator–mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology. 1986;51(6):1173. pmid:3806354
- 77. Brossard D, Shanahan J, Nesbitt TC. The Media, the Public and Agricultural Biotechnology. Cambridge, MA: CABI; 2007. xxiv, 405 p.
- 78. Guest G, MacQueen KM, Namey EE. Applied Thematic Analysis. Los Angeles, CA: Sage Publications; 2012. xx, 295 p.
- 79. Lundh A, Sismondo S, Lexchin J, Busuioc O, Bero L. Industry sponsorship and research outcome. Cochrane Database of Systematic Reviews. 2012(12).
- 80. Webler T. Why risk communicators should care about the fairness and competence of their public engagement processes. In: Arvai JL, Rivers L, editors. Effective Risk Communication. Earthscan; 2013. p. 124–41.
- 81. van den Bos K. Uncertainty management: The influence of uncertainty salience on reactions to perceived procedural fairness. Journal of Personality and Social Psychology. 2001;80(6):931–41.