
Does Stigmatized Social Risk Lead to Denialism? Results from a Survey Experiment on Race, Risk Perception, and Health Policy in the United States

  • Yarrow Dunham,

    Affiliation Department of Psychology, Yale University, New Haven, Connecticut, United States of America

  • Evan S. Lieberman,

    Affiliation Department of Political Science, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America

  • Steven A. Snell

    Affiliation Social Science Research Institute, Duke University, Durham, North Carolina, United States of America



Abstract

In this article, we report findings from an original survey experiment investigating the effects of different framings of disease threats on individual risk perceptions and policy priorities. We analyze responses from 1,946 white and African-American participants in a self-administered, web-based survey in the United States. We sought to investigate the effects of: 1) frames emphasizing disparities in the racial prevalence of disease and 2) frames emphasizing non-normative (blameworthy or stigmatized) behavioral risk factors. We find some evidence that when treated with the first frame, African-Americans are more likely to report higher risk of infection (compared to an African-American control group and to whites receiving the same treatment), and that whites are more likely to report trust in government data (compared to a white control group and to African-Americans receiving the same treatment). Nevertheless, we find no support for our hypotheses concerning the interactive effects of providing both frames, which was a central motivation for our study. We argue that this may be due to very large differences in risk perception at baseline (which limit possible treatment effects) and to the fact that, in the context of American race relations, it may not be possible to fully differentiate racialized and stigmatized frames.

1 Introduction

One of the most important welfare-preserving functions that government officials, international organizations, and other governance actors can perform is the provision of information about possible dangers and threats to citizens. Such information can be used by citizens to take actions to protect themselves from such dangers and possibly to support policies that would help to mitigate the consequences of those risks. Because public officials generally cannot provide individualized assessments of risk, they are routinely faced with a fundamental choice: present information that suggests that risks are universal across all individuals in the community, or emphasize distinctions in relative risk across particular groups. The latter is often a preferred strategy among public health officials, in particular because it is thought to support more efficient allocation of prevention resources to the subsets of the population where they are most needed.

The present research asks whether it is always a good idea to communicate risks in terms of group differences. Are there contexts in which doing so might link a group to socially stigmatized behavior and other negative outcomes, with consequences for intergroup conflict and political polarization? Our concerns are rooted in a theoretical appreciation of the social construction of risk perception and the implications of intergroup conflict and social identity theories. More broadly, although many everyday dangers are well characterized by experts and policy-makers, a long line of research demonstrates that how individuals perceive risks is far from straightforward: risk perception is not merely a function of cold, hard facts concerning the prevalence of certain dangers within a population. In particular, as several scholars have convincingly explained, perceptions of and responses to “objective” dangers tend to deviate from economic notions of expected utility [1–4].

In this article we consider the potential drivers of group-based disparities in risk perceptions and risk-related policy preferences through analysis of a survey experiment on a racially stratified sample of Americans that treats individuals with varying informational frames. Specifically, we focus on attitudes and preferences regarding Acquired Immune Deficiency Syndrome (AIDS) and diabetes. Beginning with the seminal contribution of [5], scholars have demonstrated important framing effects on risk perceptions. Informational frames provide alternative perspectives akin to different “visual scene(s)” for understanding the problem about which one must make a decision [5]. [6] distinguishes among different types of framing effects. On the one hand, scholars have routinely employed the types of “equivalency” frames discussed in [5], in which the same pieces of information are described in slightly different ways, often varying in terms of positive and negative portrayals. On the other hand, “emphasis” framing involves treating individuals with a different “subset of potentially relevant considerations” [6], i.e. actually providing different pieces of information.

We focus here on the differential impact of emphasis frames with respect to a set of disease threats. In particular, we investigate the effects of information about racial prevalence and about non-normative behavioral risk factors (which we also refer to as blameworthy or stigmatized behaviors). In both cases, the frames were chosen because they provide salient information for risk calculation that could lead individuals to rationally update their risk perceptions. Further, they are both frames that are routinely employed in actual public health campaigns and public discourse around disease threats. Importantly, however, these two frames are not merely informative. That is, in addition to providing information relevant to risk assessment, these frames, singly or in combination, potentially invoke intergroup concerns as well as other identity-relevant aspects of information processing. For example, racial prevalence frames may evoke social identity concerns, with citizens internalizing messages about whether their group is at relatively low or relatively high risk. Frames emphasizing non-normative behavioral risk factors link the disease to stigma [7], which individuals might be motivated to minimize or avoid. And combining these two frames by including both race prevalence and non-normative behavioral risk factors shifts the stigma from the level of the individual to the group.

How might these frames affect informational uptake? For members of the lower-risk group, such a combined frame might evoke negative stereotypes of the higher-risk group, which has now been linked to non-normative behavior via its higher disease prevalence. On the other hand, for members of the higher-risk group, a combined frame might evoke fear of being so stereotyped (a form of social identity threat) or anger that the presentation links negative connotations to their group. Much research in social psychology has documented the psychological power of these linkages in a variety of domains. While the broader pattern of results is somewhat complex, threats to the group can lead to rejection or denial of information and recommitment to the group among those who are highly identified with it, and to de-identification among those who are not [8]. Increasing the salience of potentially stigmatizing information at either the individual or group level also frequently leads to disengagement, for example by attributing the information to unfairness or discrimination [9], and effects of this nature have previously been described in the health domain [10, 11]. Thus, on the one hand, our research design allows us to test the sensitivity of individuals to new information that is objectively relevant to risk assessment (Is prevalence higher in my group? Do I engage in behaviors known to be high risk?). On the other hand, we also explore the possibility that the same factors that increase risk assessment on purely objective grounds might in some cases decrease risk assessment when more complex identity and intergroup processes are engaged.

To explore these issues, we proposed and pre-registered several hypotheses related to these concerns. Our hypotheses and analysis plan were pre-registered on April 24, 2014, before we analyzed our data. First, when an informational frame emphasizes disparities in disease prevalence across groups, members of those groups will rationally adjust their risk calculations and policy priorities in the direction of the prevalence information. Second, when prevalence information emphasizes non-normative behaviors, individuals will tend to reduce their risk perception as a means of distancing themselves from such behaviors. Third and finally, when both emphases are combined, we expect that the identity threat invoked in high-prevalence groups will lead members of those groups to distance themselves from the threat by denying the risk, while members of the low-prevalence group will also reduce their risk perception because of the additive effects described in our first two hypotheses.

As we discuss below, even after controlling for a wide range of factors that ought to be clear and proximate predictors of risk perception, we find that both race and gender were strong predictors of risk assessment at baseline. Perhaps in part due to these differences, we do not subsequently find substantial support for our core hypotheses concerning the effects of the different emphasis frames introduced in the experimental treatment. While many estimated raw treatment effects are large and in the predicted direction, standard errors are also large, and we do not find many more statistically significant effects than one might expect from chance. Nevertheless, several of the findings suggest avenues for future research. In particular, informational frames that highlight race disparities positively affect reported risk perceptions and negatively affect reported trust in government data (which indicate group disparities in prevalence or danger) for members of the high-danger group; and we find exactly the opposite for the low-danger group. In short, information about racial prevalence can intensify group-based polarization of beliefs.

2 Theory and hypotheses

Our central motivation for the study was to better understand the group-oriented or social-relational dynamics of risk perception. While any individual has a potentially unique risk profile given their knowledge, resources, genetics, social networks, and other factors, membership in ascriptively defined groups, such as race or gender, may also independently affect risk perception in contexts where those cleavages are socially salient. In particular, we raise the possibility that group-differentiated messages about particular dangers might have unintended consequences leading to risk denialism. We raise this concern based on a recognition that intergroup relations may strongly affect individual risk perceptions, particularly when a given danger carries negative social connotations—i.e., some form of social stigma, including blame or shame for non-normative behavior. Theoretically, we draw on a series of insights from social psychology [12–14], which emphasize the consequences of intergroup conflict for cognitions and behavior. Of particular relevance for our purposes is the consistent finding associated with social identity theory (SIT) that individuals will use group heuristics to interpret new information, which in turn biases how they process that information as well as their associated attitudes and behaviors. Moreover, individuals strive to develop and to maintain a positive self-image, and they will use and protect their group identity to that end [15]. While usually discussed in terms of “self-enhancing” effects such as preferring the ingroup, the same broad motivation should also lead group members to downplay or to deny information that paints their group in a negative light, for example by impugning its reliability or simply not attending to it as fully.

As depicted in Table 1, we develop our propositions with respect to how different types of information affect members of “high-danger” and “low-danger” social groups. Specifically, if information about a given danger is presented in a group-differentiated manner and has been framed (explicitly, or through previous socialization) as the product of potentially blameworthy behavior, individual members of the group identified as being at higher risk are likely to engage in efforts to distance themselves from the threatening portrayal of their group and thereby protect themselves from the shame of association [16]. Although it is theoretically possible for individuals to disidentify, i.e. to distance themselves from the group itself, in the case of salient, personally meaningful, and externally ascribed social identities such as those connected with race and ethnicity, disidentification will often not be a viable strategy. Thus, we anticipate that denialism will be the more common response to this kind of identity threat.

Following the same logic, we expect that members of the social group identified as being at lower risk of a stigmatized condition (and particularly those individuals who strongly identify with the group) are likely to more positively assess the reliability of the framing information precisely because it deflects the stigmatized risk away from their own group. This also can be interpreted in terms of identity maintenance, in that it also corresponds to perceiving their own risks as being very low.

From this theoretical foundation, our hypotheses pertain to subject populations in which three key conditions hold: there exist identifiable groups within the subject population who engage in a degree of intergroup conflict; there exists a substantial danger which increases in likelihood as a function of presumed voluntaristic/non-normative behavior (e.g., lung cancer and smoking in the American context); and there exists credible data suggesting that the danger is more prevalent in one group as compared with the other. We subsequently refer to these groups as high-danger (HD) and low-danger (LD), respectively. Such conditions are routinely met in the real world, and our examination focuses on two real world health concerns as well as a real world intergroup cleavage, but future research could induce similar conditions in a laboratory context.

Primarily, our research seeks to test the three-way interaction between group identity, the framing of the danger in terms of social group heuristics, and the framing of the danger as the product of stigmatized or “blameworthy” behavior. As such, we consider four different treatment arms applied to two different groups, for a total of eight treatment conditions (see Table 1).

We advance the following three hypotheses, which grow from our registered pre-analysis plan (in our pre-analysis plan, we specified H2 as our principal research hypothesis and H1 and H3 as auxiliary hypotheses):

  • H1: The Group Heuristic hypothesis: In the absence of stigmatization, group-differentiated information will cause individuals to adjust their risk perceptions in line with group membership. Members of the low-danger group should decrease, and those from the high-danger group should increase their risk perception in the case of group-differentiated information about prevalence. With respect to the outcomes of risk perception and support for policies and practices that protect against the specific danger (such as increased budget expenditures or special insurance benefits targeted at that danger), we predict the following relationships post-treatment: LD2 < LD1 and HD2 > HD1.
  • H2: The Denialism hypothesis: When high-danger groups are treated with both the stigmatized (blameworthy) frame and group-differentiated information, we expect to see various manifestations of denialism. In the context of the stigmatized frame, we should see the introduction of information about group-differentiated prevalence lead to a decrease in risk perception (denialism), lower levels of trust in the government data identifying group-differentiated risks, greater reported feelings of shame, less sympathy for those infected, and less support for protective policies among members of the high-danger group. With respect to these outcomes, we predict the following relationship post-treatment: HD4 < HD3.
  • H3: The Status Confirming hypothesis: When members of the low-danger group receive both the stigmatizing frame and group-differentiated information, we expect that this information will confirm pre-existing ingroup biases, because this combined treatment implies that members of their own group are at relatively lower risk (as compared with members of the outgroup) for a “blame-worthy” danger. Thus, we expect those from the low-danger group who receive the combined frame will report greater trust in government data identifying group-differentiated prevalence, as compared with low-danger group members who receive only the stigmatized frame. With respect to reported trust in government data, we predict the following relationship post-treatment: LD4 > LD3.

3 Methods

3.1 Design overview

The crux of the design is a block-randomized framing experiment, which incorporated the four treatment conditions described in Table 1. The experiment was fielded online with a national sample of roughly equal parts white and African-American respondents. Respondents self-reported their race using a multiple-response race question. We coded as white those who self-identified as “White/Caucasian,” including those who identified as some combination of white and some other non-African-American race. We coded as African-American respondents who self-identified as African-American, including those who identified as a combination of African-American and any other race.

All survey data were collected during the summer of 2014 through self-administered online surveys. The majority of responses were collected in a six-week window between late May and early July, but in order to increase the effective sample size of African-American respondents, we collected data from an additional 185 African-American respondents during August 2014. We are comfortable combining the cases because African-American respondents from the early and late summer look similar to one another on demographics and on pertinent attitudes about disease. Furthermore, an indicator variable recording when the data were collected is not a significant predictor of attitudes or policy preferences.

Prior to random assignment to treatment condition, subjects were randomly assigned to one of two disease conditions—AIDS or diabetes—which were included so that we could reach conclusions that were not necessarily disease-specific. Randomization of treatment was automated to achieve balance within groups.

Our research protocol was approved by Princeton University’s Institutional Review Board (#5708) as an expedited review because it was deemed to pose only “minimal risk” to participants. At the start of our online survey, we described the nature of the questions and the likely time commitment of participation, and subjects were informed that they could opt out at any time. Subjects were asked about their consent to proceed, and if they clicked affirmatively, the online survey would commence.

All survey participants were invited to participate in two survey waves. The first survey served as a baseline (pre-treatment) survey, primarily gauging demographic characteristics and pretreatment concern about disease. We measured baseline perceptions of risk in such a way as to minimize respondents’ awareness of our particular focus on AIDS and diabetes. The relevant question on the wave one survey asked, “How much of a concern do you think each of the following health and medical problems should be for the future development of the American health care system?” and we offered a sliding response scale with anchors of “Not a concern”; “Minor concern”; “Substantial concern”; “Important concern”; and “Critical concern.” Respondents were asked to select a level of concern for each of the following: asthma, cancer, poor hearing/deafness, diabetes, AIDS, influenza, obesity, poor vision/blindness. Based on this list, we identified a baseline concern with our two key disease conditions. The second wave survey included the experimental treatment, outlined below, and asked about policy preferences, perceived risks, and other outcomes related to public health and health care.

We programmed the survey software to balance random assignment to treatment within blocks of respondents based on responses to the wave 1 survey. Blocks were constructed in terms of self-reported racial identity, gender, income, and pre-treatment responses to questions eliciting concern about diabetes and AIDS. After receiving one of the four treatment conditions, all respondents were asked to respond to questions about policy preferences and perceptions of risk relevant to the particular disease. Because our predictions are distinct for each race group, we analyze the post-treatment data separately for each group and with respect to each disease condition.
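The block-randomization scheme described above can be sketched as follows. This is an illustrative implementation with hypothetical respondent records and block keys, not the survey software the authors actually used:

```python
import random
from collections import defaultdict

def block_randomize(respondents, arms, seed=0):
    """Assign respondents to treatment arms, balancing within blocks.

    Each respondent dict carries a 'block' key built from pre-treatment
    covariates (e.g. race, gender, income, baseline disease concern).
    Within each block, arms are dealt out in shuffled cycles, so arm
    counts within any block differ by at most one.
    """
    rng = random.Random(seed)
    by_block = defaultdict(list)
    for r in respondents:
        by_block[r["block"]].append(r)

    assignments = {}
    for members in by_block.values():
        rng.shuffle(members)
        order = list(arms)
        for i, r in enumerate(members):
            if i % len(arms) == 0:
                rng.shuffle(order)  # fresh random arm order for each cycle
            assignments[r["id"]] = order[i % len(arms)]
    return assignments

# Example: eight respondents in two (hypothetical) blocks, four arms.
people = [{"id": i, "block": "block-A" if i < 4 else "block-B"}
          for i in range(8)]
arms = ["control", "race", "behavior", "race+behavior"]
assign = block_randomize(people, arms)
```

With four arms and four respondents per block, each arm appears exactly once within each block, which is the covariate balance the blocking is meant to guarantee.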

This two-wave approach has several advantages. First, by measuring demographics separately from the survey experiment, the design minimizes the chances that questions pertaining to one would contaminate responses to the other. The design is also more efficient than simple randomization because it ensures greater balance of covariates across treatment arms [17–19]. However, this approach was also more expensive than a simple single-wave survey, and it involved some attrition of subjects (as discussed below).

3.2 Research subjects and data quality

Qualtrics Panels, acting on our behalf, recruited a national sample of roughly equal parts white and African-American respondents from standing online, non-probability-based panels. Prior to fielding the experiment, we conducted a small-scale pilot study (N = 100) to gather initial evidence about effect sizes of interest. This study yielded effect sizes for our hypothesized comparisons ranging from .10 < f < .18, and based on our design we prespecified a sample size of 1,000 per disease condition to provide us with acceptable power (.89) to detect the smaller of those effects.
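A calculation of this kind can be reproduced from the noncentral F distribution, under the assumption that the target comparison is between two conditions splitting the 1,000 respondents. This is a sketch of the standard one-way ANOVA power formula, not necessarily the authors' exact procedure:

```python
from scipy import stats

def anova_power(f, nobs, k_groups, alpha=0.05):
    """Power of the one-way ANOVA F-test for Cohen's effect size f,
    total sample size nobs, and k_groups groups."""
    df1 = k_groups - 1
    df2 = nobs - k_groups
    ncp = f ** 2 * nobs                          # noncentrality parameter
    f_crit = stats.f.ppf(1 - alpha, df1, df2)    # critical value under the null
    return stats.ncf.sf(f_crit, df1, df2, ncp)   # P(reject | true effect f)

# Power to detect the smaller pilot effect (f = .10) with 1,000
# respondents split across two conditions:
power = anova_power(0.10, 1000, 2)
```

With these inputs the computed power comes out close to the .89 reported above; larger values of k_groups (e.g. all four treatment arms at once) would yield lower power for the same f and nobs.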

Anticipating that many respondents would not return for the second wave, we collected 3,030 total responses in the first-wave survey (from 1,255 white and 1,775 African-American respondents) in order to obtain approximately 1,000 white and 1,000 African-American complete cases. Of these respondents, 1,946 (1,020 self-identified white and 926 self-identified African-American respondents) also completed the second wave, an attrition rate of approximately 36%. Consistent with research on panel attrition (e.g. [20]), we find higher attrition among African-American (48%) than white respondents (19%). Notably, the respondents who returned for the second wave were generally very similar to the larger pool of first-wave respondents: the only metric on which they diverge is race; many fewer African-Americans took the second survey. Nevertheless, since wave 1 alone provides so little information about each respondent, we drop from the analysis all respondents who failed to participate in the wave 2 survey. Descriptive statistics are presented in Table A in S1 Appendix.
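The reported attrition rates follow directly from the wave counts; a quick arithmetic check:

```python
# Wave counts as reported in the text.
wave1_total, wave2_total = 3030, 1946
wave1_black, wave2_black = 1775, 926
wave1_white, wave2_white = 1255, 1020

overall_attrition = 1 - wave2_total / wave1_total  # ~0.36
black_attrition = 1 - wave2_black / wave1_black    # ~0.48
white_attrition = 1 - wave2_white / wave1_white    # ~0.19
```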

While the data draw on a national pool of respondents, they are not, strictly speaking, nationally representative. By design, our sample includes an oversample of African-Americans. Furthermore, by virtue of the composition of the online pool of subjects from which our sample is drawn, our sample is disproportionately female (59% of respondents are female). Table 2 shows that our sample is not perfectly representative of the U.S. population: compared to the gold-standard Current Population Survey, our respondents are more likely to be female, and fewer of our respondents are very poor, have very low levels of education, or are very young. Nevertheless, our use of an online convenience sample, despite its limitations, is consistent with similar experimental research [21, 22], including research on risk perception (e.g. [23]).

Table 2. Comparison of survey subjects and national population.

We took precautions not to prime respondents to think about diabetes or AIDS before consenting to participate in the study. Our recruitment materials encouraged panel members to complete a survey on “Priorities in American Health Care.” Our consent script further directed, “Health care in America is an important political issue. Citizens hold many different opinions about which health problems should receive the most attention from government. In this survey, we ask you to share your views about what government should do.” As such, we believe that self-selection bias is minimal because respondents had only a vague notion of the study’s topic when they consented to participate.

We also included several data quality checks in the second, main survey in order to ensure the quality of responses. First, building on [24], we aimed to improve subject attention by training respondents to read carefully. Our training exercise showed all respondents a screen with information about pancreatic cancer and then on the next screen asked the respondents to identify the type of cancer that was previously referenced. This exercise proved relatively easy, as 93% of subjects passed. All respondents continued in the survey except for the 17 subjects (<1%) who failed the training exercise twice and were eliminated from subsequent analysis (these respondents were terminated before the actual experiment, so all following analyses use the reduced sample size of 1,946).

The survey also employed a manipulation check and a more standard attention check. After respondents were randomized to an experimental condition and presented with the condition-specific information about diabetes or HIV/AIDS, they were presented with two statements and asked which piece of information had appeared in the text on the previous screen. In the treatment groups, the correct answer was a statement that summarized the experimental elements. This serves as a manipulation check in the sense that the correct response identifies the elements that differ across the conditions. About 93% of respondents (1,804 of 1,946) passed this manipulation check. Those who failed the manipulation check were shown the control or treatment condition screen again, but were not further quizzed about its content. Finally, the more traditional attention check came in a 10-item matrix near the end of the survey. The last item in the matrix instructed the respondent to “Please select ‘not likely’.” Respondents were not informed whether they passed or failed this attention check. This final check had the highest failure rate of the three types of checks: 13% (259 of 1,946) of respondents either failed to respond to this item or selected some response other than what they were directed to select.

Generally speaking, respondents who failed the manipulation or attention checks exhibit other behaviors suggestive of satisficing or of poor attention to the survey, such as “straight-lining” (repeatedly selecting the same response to items presented in a matrix format) and poor differentiation across presumably opposing items (reporting equal or nearly equal levels of identification with opposing groups like Republicans and Democrats or young and old). Such inattentiveness increases random noise and therefore biases against finding reliable effects; nevertheless, eliminating such cases could limit our generalizability to only attentive respondents. We strike a balance between these poles by preserving inattentive respondents in our main analyses but controlling for inattentiveness with a dummy variable. We also draw attention to instances in which differences across the groups of attentive and inattentive respondents are most pronounced.
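Controlling for inattentiveness with a dummy variable amounts to adding one column to the regression design matrix. A minimal sketch on synthetic data (the variable names, effect sizes, and noise levels are invented for illustration and do not come from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
treat = rng.integers(0, 2, n)         # randomized treatment indicator
inattentive = rng.random(n) < 0.13    # ~13% fail the attention check
# Synthetic outcome: true treatment effect of 5 points, with inattentive
# respondents systematically 8 points lower on the outcome scale.
outcome = 50 + 5 * treat - 8 * inattentive + rng.normal(0, 10, n)

# Design matrix: intercept, treatment, inattentiveness dummy.
X = np.column_stack([np.ones(n), treat, inattentive.astype(float)])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)
treatment_effect = coef[1]  # treatment estimate, net of inattentiveness
```

Keeping inattentive respondents in the sample while including the dummy preserves generalizability while absorbing the systematic difference associated with inattention.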

3.3 Overview of experimental conditions

Based on respondents’ race, gender, income, and pre-treatment concern about diabetes or AIDS, we randomly assigned white and African-American subjects to one of four experimental conditions for each disease, based on crossing two emphasis frames: the first focused on the group-based distribution of disease burden, and the second on non-normative behavioral risk factors. The randomization worked such that subjects are balanced across treatment arms on these demographic characteristics (see Tables B and C in S1 Appendix). Both diseases were “racialized” by referencing “Black America” and by providing information about the higher rates of the disease among African-Americans. The non-normative behavior treatment reminded respondents that HIV infection can be caused by “unprotected sex with multiple partners” and that people who eat foods “high in sugar and fat,” as well as those who do not get adequate exercise, have an increased likelihood of getting diabetes. Absent these targeted pieces of information about non-normative behaviors, subjects were told that “researchers continue to study what causes HIV infection” and that “genetics play a role in who develops type II diabetes.” Full screenshots of the treatments are available in Figs A-H in S4 Appendix.

Both the racial and behavioral elements are informational emphasis frames because amidst the vast quantity of facts that one might know or learn about a particular problem, and specifically, the problem of the danger of becoming infected with HIV or contracting diabetes, these are two perceived realities that public health officials, the media, and individuals in personal networks may or may not choose to highlight. Both frames have been commonly available in popular discourse and in the media. Crucially, our research depends on our ability to, at least temporarily, affect how individuals come to understand various dangers. If they already hold firm views about these two dimensions, which cannot be even temporarily manipulated, we would not identify any treatment effects.

3.4 Outcome measures

Following receipt of the treatment, respondents were asked various questions about how they perceive the risks of being afflicted with cancer and with either AIDS or diabetes (depending on the disease condition to which they were randomly assigned). The risk perception question asked, “What is the likelihood that the following will be newly afflicted with [CANCER/AIDS/DIABETES] in the next five years? (That is, do not include individuals who already suffer from this disease.)” Respondents were asked to move a slider across a scale that records a quantity from 0 to 100. Respondents did not see the number; rather, they saw seven evenly spaced qualitative anchors, ranging from “No chance” to “Extremely High.” Respondents were required to consider their own risk as well as the likelihood that “Any member of your family,” “Any close friend,” and “Anyone you know personally” would be afflicted with the particular disease.

Beyond specific questions about risk, we also asked respondents their views on public policy, under the assumption that answers about policy would be a function of risk perceptions. First, we asked, “If you had a say in making up the federal budget this year, should federal spending on each kind of research be decreased, kept about the same, or increased?” And for each of “Cancer Research,” and “[EXPERIMENTAL CONDITION] research,” respondents were asked to provide a response on a 100-point sliding scale with 5 anchors ranging from substantially decreased to substantially increased. We attempt to cross-validate responses to this question by asking respondents to allocate the government’s health budget between just cancer and AIDS or cancer and diabetes, depending on condition. Here, the questions ask, “Imagine you have the opportunity to discuss with your senator how he/she should allocate a portion of the health budget to just these two problems. Assuming that overall effectiveness of prevention methods and treatments are similar on a dollar for dollar basis, how should health spending be allocated? (Total must equal 100%)”. This additional budget question was intended to sidestep the problem that would arise if certain segments of the population favor or oppose all spending increases.

As a final set of policy-related questions, we asked respondents to indicate how much they would be willing to pay for health insurance that would cover advanced diabetes or HIV/AIDS treatment. The question directed respondents to imagine that they were currently paying a $500 monthly premium and then asked how much more they would be willing to pay each month for additional coverage of advanced HIV/AIDS or diabetes treatment. We provided respondents a slider that ranged from $0 to $200. For a baseline, this question also asked about coverage for advanced cancer treatment. (See question wording and distributions in S2 Appendix).

In addition to our primary dependent measures described above, we included several exploratory measures to provide additional insight into the psychological effects of the different emphasis frames. First, to explore the hypothesis that our emphasis frames exert influence in part by cuing emotional responses such as anger, blame, or shame, we also included a set of items asking respondents to report on their emotional responses to learning that a friend had contracted the disease in question. Respondents were asked to respond on 5-point Likert scales reflecting how strongly they would experience each of the following: sympathy, anger, surprise, shame, disgust, worry about the individual’s health, worry about the individual’s friends and family, and wonder about whether the individual had engaged in unsafe or unhealthy behavior.

Second, to explore the possibility that a key mechanism for denialism would be mistrust of information, we asked a question about the respondent’s trust in government data: “How confident are you about the accuracy of official statistics concerning disparities in disease prevalence across RACE groups?” This question was asked in a battery that also asked about confidence in statistics making distinctions by age and gender, to disguise our primary interest in race. Notably, we asked this battery in both the pre-test and the endline study, which allows us to estimate within- and between-subjects treatment effects. (Specifically, we treat as our dependent variable the difference in reported trust between the endline and baseline surveys.)
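The within-subject change score described here is simply the endline rating minus the baseline rating for each respondent. A minimal sketch, using made-up ratings (not the study’s data):

```python
import numpy as np

# Hypothetical trust ratings for five respondents, measured on the same
# confidence scale at baseline (T1) and again at endline (T2).
trust_t1 = np.array([3, 4, 2, 5, 3])
trust_t2 = np.array([2, 4, 3, 3, 3])

# The dependent variable is the within-respondent change in trust:
# positive values indicate increased confidence in official statistics.
delta_trust = trust_t2 - trust_t1  # array([-1, 0, 1, -2, 0])
```

Regressing this change score on the treatment indicators then yields the between-subjects comparison of within-subject changes described in the text.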

Third and finally, in order to test whether our experimental manipulations were capable of affecting actual behavior, we concluded the study by offering respondents the opportunity to learn more about their assigned disease. A closing screen announced, “We have no more questions for you, but we would like to provide you an opportunity to learn more about [diabetes/AIDS].” We provided clickable screenshots labeled as information about charities, prevention, and testing relevant to diabetes or HIV/AIDS. Unbeknownst to the respondent, we recorded how long the respondent spent with the links and how many total links they clicked on. In the analyses below, we consider a variable that captures the number of items (ranging from 0 to 3) that the individual clicked.

We also included several supplemental measures focusing on whether participants identify with various groups including race, wealth, political, and age-related groups. (For brevity, analyses focusing on these measures are provided in S3 Appendix).

3.5 Additional covariates

In order to ensure that random assignment generated relatively comparable groups, we collected data on a number of covariates that arguably should directly affect reported risk perceptions. Specifically, during the baseline survey, we asked respondents to report their gender, age, level of education, and income.

We report summary statistics for these variables separately for African-American and white respondents in Table 3. Additionally, we show in the supplementary materials that the block randomization succeeded such that we have balance in these demographics and in pretreatment concern about disease across the treatment arms.

4 Analysis and findings

4.1 Baseline

In our baseline survey, we asked respondents to identify their level of concern with various health problems on a 0–100 scale, which we interpret as a preliminary measure of risk perception. Table 3 presents average responses across race groups for cancer, HIV/AIDS, and diabetes. We find a substantively large and highly significant relationship between racial identity and risk perception, with higher risk perceptions among African-American respondents for all three diseases. In further multivariate modeling, we find that this race difference persists in the presence of other demographic controls, and that gender also plays a role in who is concerned about disease, with women reporting higher estimates of risk (see Table 4). Perhaps the most substantively important finding of our study turns out to be the degree to which African-Americans express higher levels of concern even after we control for the large set of factors that distinguish whites and African-Americans in the American context (i.e., group differences in education, income, residence type, and likelihood of personally knowing someone who is afflicted with one of the diseases discussed in this study).

Table 4. OLS Estimates of Pre-Treatment Concern / Risk Perception.

Furthermore, while Table 4 shows that race has a substantively large and statistically significant relationship with all eight diseases, the largest relationship by far is with one of our focus diseases, HIV/AIDS. We find that our race dummy variable is associated with an 11.8 point increase on the 0-100 scale, which is approximately one-third of a standard deviation. The effect is also large for diabetes, representing a 7.0 point increase on the same scale. In both instances, we were surprised by the magnitude of these differences after having controlled for many of the factors that differentiate individuals associated with the respective race groups.

We also note consistent relationships with other demographic factors, most notably age and education, with age generally positively related to risk perception and education consistently negatively related to risk perception.

4.2 Analysis of treatment effects

Our primary goal was to estimate the treatment effects of our various informational emphasis frames. We analyze the two-wave dataset with ordinary least squares (OLS) regression, including each treatment arm as a separate binary regressor, with the control group as the omitted category. First, we estimate the treatment effects for each outcome variable in the full dataset (that is, not making distinctions between the two disease conditions), interacting each treatment arm with the race dummy variable, which allows us to estimate the effects for each race group.

Specifically, Y_i is our outcome measured at T2; X_1, X_2, and X_3 are dummy variables for the respective treatment arms (race frame only; stigma frame only; race + stigma frame); Black is a dummy variable that takes a value of 1 for respondents who self-reported as African-American and 0 for respondents who self-reported as white; Z is a vector of pre-treatment covariates (including race and gender) measured at T1; and ε_i is the error term.
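Assembled from these definitions, the full-sample estimating equation can be written as follows. This is our reconstruction from the variable definitions in the text; the coefficient labels are our own notation:

```latex
Y_i = \beta_0 + \beta_1 X_{1i} + \beta_2 X_{2i} + \beta_3 X_{3i}
      + \beta_4\,\mathit{Black}_i
      + \beta_5 (X_{1i} \times \mathit{Black}_i)
      + \beta_6 (X_{2i} \times \mathit{Black}_i)
      + \beta_7 (X_{3i} \times \mathit{Black}_i)
      + \boldsymbol{\gamma}' Z_i + \epsilon_i
```

Under this parameterization, the effect of treatment arm 1 relative to control is β₁ for whites and β₁ + β₅ for African-Americans, and analogously for the other two arms.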

After estimating this model, we generate bootstrapped estimates from that regression, calculating the effects of each treatment arm for each of eight outcomes relative to the control condition, first for African-Americans, then for whites (see Fig 1).

Fig 1. Estimated treatment effects, full sample.

Based on 1,000 draws with replacement from the observed data, we estimate responses conditional on having received each treatment. Points depict the average difference between each treatment condition and control (the treatment effect) among members of the indicated respondent group; lines represent 95 percent confidence intervals; horizontal crosses represent 90 percent confidence intervals. Estimated quantities are presented in terms of standard deviations of the outcome variable. Effects are averaged across disease conditions.
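The bootstrap procedure can be sketched as follows. This is a minimal illustration on simulated data, not the authors’ code; the sample size, data-generating values, and variable names are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-in for the survey data: each respondent has a treatment
# arm (0 = control; 1-3 = emphasis frames), a race indicator
# (1 = African-American, 0 = white), and a continuous outcome.
n = 2000
arm = rng.integers(0, 4, n)
black = rng.integers(0, 2, n)
y = 50 + 3.0 * (arm == 1) * black - 1.0 * (arm == 1) * (1 - black) \
    + rng.normal(0, 10, n)

def design(arm, black):
    """Intercept, treatment dummies, race dummy, and their interactions."""
    return np.column_stack([
        np.ones_like(black),
        (arm == 1), (arm == 2), (arm == 3),
        black,
        (arm == 1) * black, (arm == 2) * black, (arm == 3) * black,
    ]).astype(float)

def effects(arm, black, y):
    """OLS fit; arm-1 effect vs. control for each race group."""
    beta, *_ = np.linalg.lstsq(design(arm, black), y, rcond=None)
    eff_white = beta[1]            # treatment dummy alone
    eff_black = beta[1] + beta[5]  # dummy plus race interaction
    return eff_black, eff_white

# 1,000 resamples with replacement; percentile 95% confidence intervals.
draws = np.array([effects(arm[i], black[i], y[i])
                  for i in (rng.integers(0, n, n) for _ in range(1000))])
lo, hi = np.percentile(draws, [2.5, 97.5], axis=0)
```

The interval endpoints `lo` and `hi` correspond to the vertical lines in Fig 1; swapping in the 5th/95th percentiles would give the 90 percent intervals shown as crosses.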

Subsequently, in a series of OLS regressions, we estimate the effects of our experimental treatments separately for each disease condition (AIDS or diabetes) and for each race group (African-Americans and whites), which takes the following form:
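Within each disease-by-race subsample, the race dummy and its interactions drop out; reconstructed from the definitions above (with the same notation), the model reduces to:

```latex
Y_i = \beta_0 + \beta_1 X_{1i} + \beta_2 X_{2i} + \beta_3 X_{3i}
      + \boldsymbol{\gamma}' Z_i + \epsilon_i
```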

We report the results from these more disaggregated statistical tests in Tables 5–8. As we discuss below, while we generally do not find strong support for any of our main hypotheses, we do find some important relationships that merit further investigation. Unless stated otherwise, all effects are estimated relative to the no-treatment control group.

Table 5. OLS estimates of Treatment Effects, AIDS Condition (Blacks Only).

Table 6. OLS estimates of Treatment Effects, AIDS Condition (Whites Only).

Table 7. OLS estimates of Treatment Effects, Diabetes Condition (Blacks Only).

Table 8. OLS estimates of Treatment Effects, Diabetes Condition (Whites Only).

We find the most empirical support for hypothesis 1: we expected that the racialized framing treatment would positively affect African-American responses to questions about risk perceptions and protective policies for both disease conditions, and negatively affect white responses. And as depicted in the first plot of the first row of Fig 1, we find that risk perception does increase substantially for African-Americans, and decreases modestly for whites. Moreover, in response to the racialized frame treatment, we find, as expected, modest increases in support for increased budget allocations (for the relevant disease) among African-Americans and decreases among whites, though none of these estimates are statistically different from zero. Amongst whites who received the combined racialized-stigma treatment, we estimate a large and statistically significant reduction in preferences for percent expenditure on the given disease (as predicted).

These effects are also estimated in more disaggregated form in the first row of Tables 5–8. With respect to our total AIDS risk perception measure, the estimated effect of the racialized treatment (only) is relatively large and positive for African-Americans (model 1 in Table 5), and large and negative for whites (model 1 in Table 6), but in both cases the standard errors are large enough to limit interpretability. African Americans who received the racialized treatment also reported preferring a larger AIDS budget than those in the control condition, and the reverse was true for whites (models 2 and 3 in Tables 5 and 6), as predicted, but again, these effects were not consistently statistically different from zero. With respect to the question about payment for an extra insurance premium for AIDS, African-American responses were in the opposite direction of our prediction (model 4 in Table 5). Finally, our web click outcomes proved in general to be noisy measures of risk perception and responsiveness, and we find no significant results with respect to that outcome (model 5 in Tables 5–8).

When considering hypothesis 1 through analysis of data from respondents in the diabetes experiment, the results are also generally directionally consistent with our hypotheses, but the estimates are not sufficiently precise to reject the null hypothesis of no effect. Although we find that the racialized frame treatment generates substantial increases in total risk perception among African-Americans and small decreases among whites (model 1 in Tables 7 and 8), the estimated effects are again not statistically significant. Moreover, the estimated treatment effects for our budget policy questions are neither in the predicted direction nor statistically different from zero. One large and statistically significant treatment effect is identified among African Americans (model 4 of Table 7): those treated with the racialized frame report a willingness to pay a higher insurance premium for diabetes compared with those in the control group (as originally predicted).

With respect to hypothesis 2 (our main hypothesis), we find virtually no empirical support. We had expected that among African-Americans, the combined racialized and stigmatized frame (estimated in row 3 in Tables 5–8) would induce denialism, in turn leading us to observe negative point estimates for our central outcomes of interest, and in particular, estimates more negative than those for the stigma-only frame. And yet, as clearly depicted in Fig 1, we find just the opposite: the estimated effects of the combined frame are more positive than those of the stigma frame alone. With respect to the more disaggregated results among African Americans, we find a positive effect of the combined frame on perceived disease risk (model 1 of Tables 5 and 7), and mixed effects on questions about the budget (models 2 and 3 of Tables 5 and 7), but the only estimate that is statistically significant is in the wrong direction. Moreover, the combined treatment condition had no effect on reported feelings of shame with respect to a friend disclosing their positive HIV status (model 8 of Table 5). And, in all cases, the estimated coefficient in row 3 is more positive than the associated estimated coefficient in row 2, contrary to our stated hypothesis. For example, while the combined frame generated a negative treatment effect for confidence in government data reporting on racial disparities in disease prevalence among African Americans, the treatment effect was also negative but substantially larger and statistically significant in the stigma-only treatment condition (compare rows 2 and 3 of model 6 in Table 5). We do find, as predicted, that the estimated coefficient for the interactive treatment is negative with respect to sympathy in both the AIDS and diabetes conditions.
However, only in the diabetes condition do we find that the estimated coefficient for the combined frame is more negative than that for the stigma frame alone (compare rows 2 and 3 of model 7 in Table 7).

Finally, with respect to hypothesis 3, the “status-confirming hypothesis,” we do find some empirical support, though not exactly as we had predicted in our pre-analysis plan. We had hypothesized specifically that the interactive treatment would lead to a positive effect among White Americans, that is, increased confidence in government data reporting such disparities across race groups; in fact, those effect sizes are relatively small and not statistically different from zero. However, we do find that whites presented with the racialized (only) frame became significantly more confident in race-based data than those in the control group (see the panel in the fourth column of the third row in Fig 1). Those effects were evident with respect to both AIDS (model 6 in Table 6) and diabetes (model 6 in Table 8). By contrast, among African Americans the point estimates of the effects on confidence were negative across all three treatment arms (rows 1–3 of model 6 in both Tables 5 and 7). In short, when White Americans receive information that disease prevalence is worse among African Americans, that information generally boosts their confidence in the data; when African Americans receive the same information, it diminishes their confidence. People seem to place more trust in information that paints their own group in a positive light.

5 Discussion

Quite clearly, these data do not provide strong support for our core hypotheses concerning the interactive effects of racialized and blameworthy/stigma informational frames, and in line with growing social scientific norms to report on “null findings,” we do so here. That said, just as a single experiment with positive results would not conclusively demonstrate the power of a set of claims, this single experiment cannot conclusively rule out the validity of our theory and core hypotheses. We suggest here a few possible factors that may have worked against our ability to identify hypothesized effects.

First and most importantly, we were impressed by the magnitude of the inter-group differences in expressed levels of concern at baseline. This suggests that the American citizens in our survey sample had already been widely exposed to information highlighting racial health disparities. And this appears to be not simply a function of racially endogamous social networks, which would lead to differential exposure to particular diseases: even when controlling for risk factors and health networks, our findings of strong race-based differences suggest the potential power of race-differentiated messaging. But whatever the reason for the baseline disparities, they create a context in which it is exceedingly difficult to influence respondents through a light emphasis frame in an online survey experiment, precisely because the frame we were providing was already familiar and thus may have exerted its effect across all conditions. This provides one potential explanation for the limited effectiveness of the information we provided.

Second, in the context of our diabetes condition, it is not clear that our “blameworthy” treatment text had the intended connotation among respondents, as evidenced by the estimated positive coefficients for the effects of the associated treatment arm across race groups (quite large and statistically significant for whites). By comparison, in our AIDS condition analyses, the “blameworthy” treatment had generally negative effects on risk perception and policy priorities, as we had expected; that emphasis frame likely caused respondents to distance themselves from the possibility of infection. With respect to diabetes, by contrast, it may be that a discussion of poor eating habits and low levels of exercise actually caused many individuals to focus on their own vulnerability, and that of others close to them, due to their engagement in those very behaviors. If this was indeed the case, it would imply that our experimental treatment was not an effective instantiation of “stigma” and so would not serve as a strong test of our prediction concerning stigma’s role in risk perception.

Third, we found significant differences in response outcomes by gender at baseline, and analysis of our results suggests different treatment effects by gender, with somewhat stronger (though still not statistically significant) treatment effects among women. Indeed, the baseline findings resonate with a substantial theoretical literature that highlights a “white male” effect in risk perception, in which White males underestimate their own risk ([25–27]), which would imply important differences across race and gender. Unfortunately, while we expected gender to be a potentially important confounding variable, and we stratified by gender prior to random assignment of treatments, our experiment was not designed, and is under-powered, to analyze heterogeneous treatment effects by gender (though preliminary results are available upon request).

6 Conclusion

In this article, we have provided a theoretical discussion of why citizens, despite all of their individual diversity, often perceive disease and other risks not simply as individuals but as members of social identity groups. From our baseline observational research, we confirm the importance of ascriptive group identities as predictors of risk perceptions. Moreover, additional prompts about racial disparities in disease prevalence delivered experimentally were associated with greater risk perception amongst African-Americans, and widened a gap between African Americans and Whites with respect to trust in official government data. This evidence strongly suggests that “social risk” is an important feature of risk perception and how individuals process information and develop policy preferences.

However, our hypotheses concerning the manifestation of inter-group conflict in the dissemination of public health messages find no solid support in our survey experiment. In particular, we predicted that members of groups known to be at high risk for conditions associated with non-normative or “stigmatized” qualities would be more likely to deny those risks when presented with informational frames that emphasized both the high risk to their group and the stigmatized nature of the condition. The data from our experiment suggest that such emphasis frames did not have that effect.

While we report these findings because we believe it is important to disseminate information from a pre-registered study, we have also highlighted why the findings overall may lead to false inferences of the Type II variety. Future research should address these concerns, potentially with other experimental strategies, with respect to other risk conditions, and perhaps with modalities other than a web-based survey.

Supporting Information

S1 Appendix. Data Overview and Balance Across Treatment Arms.


S2 Appendix. Select Question Wording and Distributions.


S3 Appendix. Analyses of Group Identification and Racial Polarization.


S4 Appendix. Screenshots of Treatment Conditions.



We gratefully acknowledge research assistance from Jessica Grody; helpful comments on the research design from Tali Mendelberg, Betsy Paluck, and participants in the Princeton Research in Experimental Social Science (PRESS) seminar; and financial support from Princeton’s Center for Health and Wellbeing.

Author Contributions

Conceived and designed the experiments: YD EL SS. Performed the experiments: YD EL SS. Analyzed the data: YD EL SS. Contributed reagents/materials/analysis tools: YD EL SS. Wrote the paper: YD EL SS.


  1. Kahneman D, Tversky A. Prospect Theory: An Analysis of Decision under Risk. Econometrica. 1979;47:263–291.
  2. Douglas M. Risk and Blame: Essays in Cultural Theory. London; New York: Routledge; 1992.
  3. Slovic P. Trust, emotion, sex, politics, and science: Surveying the risk-assessment battlefield. Risk Analysis. 1999;19:689–701. pmid:10765431
  4. Slovic P, Finucane ML, Peters E, MacGregor DG. Risk as Analysis and Risk as Feelings: Some Thoughts about Affect, Reason, Risk, and Rationality; 2004.
  5. Tversky A, Kahneman D. The framing of decisions and the psychology of choice. Science. 1981;211:453–458.
  6. Druckman JN. The implications of framing effects for citizen competence. Political Behavior. 2001;23:225–256.
  7. Parker R, Aggleton P. HIV and AIDS-related stigma and discrimination: a conceptual framework and implications for action. Social Science & Medicine. 2003;57(1):13–24.
  8. Ellemers N, Spears R, Doosje B. Self and social identity. Annual Review of Psychology. 2002;53(1):161–186. pmid:11752483
  9. Major B, O’Brien LT. The Social Psychology of Stigma. Annual Review of Psychology. 2005 Feb;56(1):393–421. pmid:15709941
  10. Blumberg SJ. Guarding against threatening HIV prevention messages: an information-processing model. Health Education & Behavior. 2000 Dec;27(6):780–795.
  11. Meyer IH. Prejudice, social stress, and mental health in lesbian, gay, and bisexual populations: Conceptual issues and research evidence. Psychological Bulletin. 2003 Sep;129(5):674–697. pmid:12956539
  12. Tajfel H. Experiments in intergroup discrimination. In: Hogg MA, Abrams D, editors. Psychology Press; 2001. p. 178–187.
  13. Brewer MB. The importance of being we: human nature and intergroup relations. American Psychologist. 2007;62(8):728.
  14. Dunham Y. An angry = outgroup effect. Journal of Experimental Social Psychology. 2011;47(3):668–671.
  15. Tajfel H, Turner JC. The Social Identity Theory of Intergroup Behavior. In: Worchel S, Austin WG, editors. Rowman & Littlefield; 1986. p. 7–24.
  16. Lieberman ES. Ethnic Politics, Risk, and Policy-Making: A Cross-National Statistical Analysis of Government Responses to HIV/AIDS. Comparative Political Studies. 2007;40(12):1407–1432.
  17. Horiuchi Y, Imai K, Taniguchi N. Designing and analyzing randomized experiments: Application to a Japanese election survey experiment. American Journal of Political Science. 2007;51(3):669–687.
  18. Moore RT. Multivariate continuous blocking to improve political science experiments. Political Analysis. 2012;20(4):460–479.
  19. Higgins MJ, Sekhon JS. Improving Experiments by Optimal Blocking: Minimizing the Maximum Within-block Distance. Working Paper; 2013.
  20. Fitzgerald J, Gottschalk P, Moffitt RA. An analysis of sample attrition in panel data: The Michigan Panel Study of Income Dynamics. National Bureau of Economic Research; 1998.
  21. Berinsky AJ, Huber GA, Lenz GS. Evaluating online labor markets for experimental research: Amazon.com’s Mechanical Turk. Political Analysis. 2012;20(3):351–368.
  22. Brandon DM, Long JH, Loraas TM, Mueller-Phillips J, Vansant B. Online instrument delivery and participant recruitment services: Emerging opportunities for behavioral accounting research. Behavioral Research in Accounting. 2013;26(1):1–23.
  23. Rosoff H, John RS, Prager F. Flu, Risks, and Videotape: Escalating Fear and Avoidance. Risk Analysis. 2012;32(4):729–743. pmid:22332702
  24. Berinsky AJ, Margolis MF, Sances MW. Separating the Shirkers from the Workers? Making Sure Respondents Pay Attention on Self-Administered Surveys. American Journal of Political Science. 2014;58(3):739–753.
  25. Flynn J, Slovic P, Mertz CK. Gender, race, and perception of environmental health risks. Risk Analysis. 1994;14:1101–1108. pmid:7846319
  26. Kahan DM, Braman D, Gastil J, Slovic P, Mertz CK. Culture and Identity-Protective Cognition: Explaining the White-Male Effect in Risk Perception. Journal of Empirical Legal Studies. 2007;4:465–505.
  27. Finucane ML, Slovic P, Mertz CK, Flynn J, Satterfield TA. Gender, race, and perceived risk: the ‘white male’ effect. Health, Risk & Society. 2000;2(2):159–172.