Abstract
Guidelines concerning the potentially harmful effects of scientific studies have historically focused on ethical considerations for minimizing risk to participants. However, studies can also indirectly inflict harm on individuals and social groups through how they are designed, reported, and disseminated. As evidenced by recent criticisms and retractions of high-profile studies dealing with a wide variety of social issues, there is a scarcity of resources and guidance on how one can conduct research in a socially responsible manner. As such, even motivated researchers might publish work that has negative social impacts due to a lack of awareness. To address this, we propose 10 simple rules for researchers who wish to conduct socially responsible science. These rules, which cover major considerations throughout the life cycle of a study from inception to dissemination, are not intended as a prescriptive list or a deterministic code of conduct. Rather, they are meant to help motivated scientists reflect on their social responsibility as researchers and actively engage with the potential social impact of their research.
Citation: Zivony A, Kardosh R, Timmins L, Reggev N (2023) Ten simple rules for socially responsible science. PLoS Comput Biol 19(3): e1010954. https://doi.org/10.1371/journal.pcbi.1010954
Editor: Russell Schwartz, Carnegie Mellon University, UNITED STATES
Published: March 23, 2023
Copyright: © 2023 Zivony et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: This research was partially supported by the Israel Science Foundation, grant number 540/20, to N.R. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors report no competing interests.
This is a PLOS Computational Biology Methods paper.
More than ever before, scientists are being called upon to acknowledge and engage with the social impact of their scientific outputs (see [1–3] for reviews). This is perhaps most clearly reflected in the case of a paper about racial disparities in police shootings in the United States [4]. The authors of this study reported that they found “no evidence of anti-Black or anti-Hispanic disparities.” Following heavy criticism [5,6], the study was retracted, not by the journal but by the authors themselves. Although the authors initially rejected the scientific criticism [7], they later justified their retraction by citing “the continued use” of the work as “support for the idea that there are no racial biases in fatal shootings, or policing in general” [8]. In other words, scientists concluded that they should retract their study in the name of social responsibility, as it was written in a way that could have harmful effects (“grimpacts”; [9,10]) on public discourse (and thus potentially harm specific groups).
This study is but a single example out of a string of recent high-profile studies from various fields that drew harsh criticism from the general public and scientists alike for using science to promote ideas that could potentially inflict harm on individuals and social groups (e.g., [4,11–21]). Such concerns are far from new (e.g., [22–26]). However, although scientific papers used to be accessible only to relatively few experts, such outputs can nowadays reach an incredibly wide audience very quickly. A study that captures the public’s attention can reach an audience of many millions via online news outlets, Twitter, podcasts, TV shows, YouTube channels, online forums, and so on. This puts great power over the public sphere in the hands of relatively few scientists. Moreover, the speed and breadth of dissemination expose scientists to a new challenge that may catch many of us off-guard: addressing the myriad ways our published research will be interpreted and evaluated by the general public.
The recent torrent of visible yet contentious studies raises a difficult question: What is the responsibility of scientists for the social impacts of their research? This question has been debated for over a century. Not long ago, many scientists rejected any responsibility for social impacts, as such responsibility was viewed as being in direct contradiction with scientific freedom (see [1] for review). In the last few decades, this attitude has been slowly changing, and a consensus has been growing that scientists should be responsible for at least some consequences of their research. For example, it is now widely agreed that scientists should minimize the potential risks to the physical and mental well-being of people and populations participating in their studies. Indeed, such considerations have been institutionalized and are enforced via education, periodic training, and oversight by ethics boards all over the world [27]. Moreover, growing concerns about the societal ramifications of emerging technologies have already led to substantial policy changes among many actors and stakeholders involved in science and technology development (e.g., [3,28]). This is perhaps most clearly exemplified by the recent push for policies that promote “responsible development” or “Responsible Research and Innovation”—research processes that account for sustainability and potential impacts on society and aim to produce “socially desirable” outcomes [2].
Given these changes, the time seems ripe for scientists to consider their responsibility for the possible impact of their research outputs. However, determining the desired degree of individual responsibility involves significant challenges, as scientists can only be asked to take responsibility for impacts that are “reasonably” foreseeable [1,10]. In many cases, research can have indirect social impacts that are impossible to predict. For example, a study that focuses on a specific social group (see examples below) can help shape the general public’s beliefs about that group. While it is widely agreed that social beliefs can have real effects on the physical well-being, psychological welfare, and livelihood of people (e.g., [29–32]), it is hard to tell what effect, if any, can be attributed to the specific study in question [33]. In such indirect cases, what counts as a “reasonably foreseeable impact” will often be a matter of debate.
It is also arguable that promoting socially responsible science should rely on institutionalization and regulation by scientific organizations rather than thrusting more responsibilities upon individual scientists. Policy and structural factors, rather than individual actions, are viewed as the key to ensuring responsibility and societal desirability of the scientific process and its outputs [2,3,34,35]. Indeed, without relevant education, clearly articulated and regulated standards, and support in their interpretation and implementation, it is unlikely that already overburdened individual scientists will successfully integrate all ethical considerations in their research. Unfortunately, such structures are largely absent in many scientific fields. For example, in many fields, scientists receive some training in the ethical treatment of human participants or animal subjects (e.g., [36]) but little to no training in considering the ethical ramifications of their work on society (with a few notable exceptions, such as several subdisciplines of sociology and anthropology, e.g., [37–39]).
Moreover, current scientific structures actively discourage social responsibility: scientists are incentivized to disregard any aspects of their work that hinder swift publication, such as addressing the limitations of their methods or considering the potential long-term, broad implications and interpretations of their results. The competition for jobs and funding opportunities in academia drives scientists to churn out high-impact publications at an ever-increasing rate [40,41]. Consequently, to maximize impact, scientists are incentivized to publish novel or controversial findings while overstating the veracity of their conclusions [42–44]. In short, scientists are pushed to vie for the public’s attention but to downplay or ignore altogether any negative social impact (“grimpacts”) their research might have [9].
Given these constraints, formally—and justly—characterizing scientists’ obligations to minimize the potential societal harm of their research remains a daunting task, especially when such harm is indirect. It may take a long time before the scientific community can agree on how to balance scientific freedom on the one hand and the principles of benevolence and non-maleficence in the context of broad societal impacts on the other (for a related discussion, see [45]). Until such time comes, we would like to offer a potential path forward.
In this paper, we offer 10 simple rules for socially responsible science. We follow the life cycle of a study, from inception to dissemination, and provide concrete suggestions that can help scientists to reflect, plan, and act to minimize potential societal harms stemming from their work. Because different scientists may consider social impacts at different stages, and because the production of scientific output is far from linear, these rules overlap to some degree. Undoubtedly, these rules cannot replace a broader structural shift in how science is done. However, in the absence of structural support and education, even researchers who wish to be socially responsible might publish work that has negative impacts due to a lack of awareness (for a similar point, see [46,47]). The following list of rules is meant for these scientists. We emphasize that our purpose is not to provide a prescriptive list against which any individual study or scientist can be evaluated, nor do we propose this list as a fixed and deterministic code of conduct. Rather, we aim to highlight straightforward considerations in order to empower individual scientists to actively engage with the potential social impact of their research (even in cases where such impact is indirect). Moreover, we acknowledge that in drafting these rules, we drew on our own experiences and paradigms; therefore, this list cannot be entirely comprehensive. Nevertheless, we hope this list can enrich the conversation about individual and collective social responsibility in science that is sorely missing outside a few select fields. At the very least, we hope that these rules can help scientists to avoid unwittingly causing harm to others and help them to navigate potential criticisms from the general public and other scientists.
Rule 1: Get diverse perspectives early on
Science is inherently collaborative. We pool our expertise to work together on projects and depend on knowledgeable peers to critically evaluate our ideas and research. Peers from other fields or peers with knowledge we do not have are particularly helpful in this regard. Without them, we run the risk of overestimating how well-informed we are [47–49]. Similarly, when studying topics related to a particular marginalized group, we can greatly benefit from reaching out to members of said group, as they may have valuable, highly accurate [50] “insider” knowledge [51,52] based on their experiences. To varying degrees, some fields (e.g., qualitative research, public health) recognize the benefits of “participatory research” [53] and of viewing the community in question as a research partner [54]. Unfortunately, in many other empirical sciences, these insights are almost entirely absent. When we view a social group as a research topic rather than an equal partner and ignore our own limited knowledge, we risk creating flawed designs and introducing easily avoidable errors. For example, a study that aimed to examine the genital arousal patterns of bisexual men in response to erotic stimuli supposedly found no difference between the arousal patterns of self-identified bisexual and gay men [55]. Following this study, news outlets such as the New York Times published articles that called into question the very existence of bisexuality in men, proclaiming that bisexual men are either “Straight, Gay or Lying.” Aside from other criticisms about this line of research and what can be learned from it (e.g., [56,57]), a later study informed by consultation with representatives from the bisexual community found that the original result was merely an artifact of inadequate sampling and screening [58]. Had the researchers consulted with the community of interest early on, the negative impact of doubts cast on bisexual men and portrayals of bisexual men as untrustworthy could have been avoided (see [59] for a similar conclusion regarding flawed research into d/Deaf signing communities).
Recent efforts to adopt an inclusive approach to studying diverse populations span multiple disciplines and topics, such as race [60], autism [61,62], artificial intelligence [63,64], and pedagogy [65]. We follow suit and recommend that scientists try to get inclusive perspectives on their work by identifying the populations impacted by their study and engaging them at the earliest stages [47]. Efforts are being made to do this in multiple disciplines and localities. For example, Community-Based Participatory Research in North America and Patient and Public Involvement in the UK are 2 commendable initiatives that involve lay members of the public as contributors and collaborators on research that affects their lives [66,67]. In addition to taking such approaches, we can invite insider researchers to be our coauthors or to consult on our work with adequate compensation. Importantly, such collaborations should not be cursory. When in the position of being “outsider” researchers working with insiders (whether fellow researchers or members of the public), we should be ready and willing to share power and control over a given project with those who will be most affected by its findings.
Rule 2: Understand the limits of your design with regard to your claims
Our scientific claims are only as good as the methods we use to test them, and our research designs should be appropriate for our research hypotheses, or else they might support the wrong conclusions. This is, of course, true for any and all empirical research. However, inaccurate conclusions are particularly problematic in studies that make socially impactful claims (and especially ones that can affect minoritized social groups). In studies that fall outside the public’s attention, even serious methodological limitations may be acceptable as long as they are clearly addressed. However, when a study reaches public attention, a paragraph summarizing limitations may not be sufficient to curb the study’s potentially negative impact. Often, the general public pays little attention to methodological minutiae. Instead, there is an implicit trust that studies can be interpreted and generalized based solely on their title, abstract, or press release (see [47,68]).
Experts are also not immune from adopting conclusions based on insensitive generalizations, sometimes leading to grievous consequences. For example, autism spectrum disorder (ASD), a complex neurodevelopmental condition often characterized by “persistent deficits in social communication and social interaction across multiple contexts…” [69], was initially thought to affect predominantly males, with an estimated male:female ratio ranging from 4:1 to 10:1 (reviewed in [70]). These conceptions even led scientists to characterize autism as the result of an “extreme male brain,” with females enjoying a “protective” factor ([71]; such notions extend as far back as the 1940s, reviewed in [72]). However, recent research indicates that ASD is underdiagnosed in females and that earlier prevalence estimates relied on samples and diagnostic criteria that were (unbeknownst to researchers at the time) biased [70,73–75]. As a result, the prominence of the male brain theory may have severely disadvantaged autistic girls and women, who were underserved by mental health institutions [76] and mistreated by social environments that maintained the stereotype that autism is a male-only condition [74].
Insensitive generalizations occur across many scientific disciplines, of course, including biology (e.g., almost exclusive reliance on male animal models in inferring population-level effects; [77]), computer science (e.g., face detection algorithms constructed based on almost exclusively white samples; [63,78]), medicine (e.g., treating cisgender men as representative of the human race as far as pathophysiology and treatment of disease go; [79]), psychology (e.g., marking implicit measures of associations as the main target of diversity-training programs; [80]), and others. Therefore, we recommend that researchers whose work touches on social issues (broadly defined) ask themselves earnestly, prior to data collection, what kinds of generalizations can be made based on their available tools, research design, and the kind of data they can collect. These questions can push us to improve our design, focus our energy on improving our methods, and sharpen the level of generalization appropriate for our findings.
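One concrete way to probe the limits of a design before making population-level claims is to disaggregate results by subgroup, since an aggregate metric can mask exactly the kind of disparity documented for face-analysis systems [63,78]. The following is a minimal sketch (in Python, with entirely hypothetical data and column names) of this idea:

```python
# Minimal sketch: disaggregate a model's accuracy by demographic subgroup.
# The data and column names ("group", "label", "prediction") are hypothetical;
# substitute the groups and metrics relevant to your own design.
import pandas as pd

results = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "label":      [1, 0, 1, 0, 1, 0, 1, 0],
    "prediction": [1, 0, 1, 0, 0, 1, 1, 0],
})

overall = (results["label"] == results["prediction"]).mean()
print(f"Overall accuracy: {overall:.2f}")  # 0.75: looks acceptable in aggregate

# Per-group accuracy reveals whether the aggregate hides a disparity.
per_group = (
    results.assign(correct=results["label"] == results["prediction"])
           .groupby("group")["correct"].mean()
)
print(per_group)  # here: group A = 1.00, group B = 0.50

# A large gap suggests the design does not support claims about the
# population as a whole without further qualification.
```

A report built only on the overall number would overgeneralize; the disaggregated view makes the appropriate level of generalization explicit.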
Rule 3: Incorporate underlying social theory and historical contexts
While the laws of nature are oblivious to our current theories in physics or biology, society and human behavior can be shaped by social theories [31,81]. Throughout history, social policies and hierarchies were justified by the scientific understanding of their time, and the resulting social structures then gave the semblance of confirming the social theories that shaped them in the first place. In such a reality, merely reporting empirical information without addressing the social structures underlying these data can lead uninformed readers to the wrong conclusions (e.g., [82–84]). Therefore, we suggest that to be more socially responsible, scientists need to take the social context into account both at the design stage (e.g., by including measures that can illuminate the role of social context) and as an integral part of their communication efforts. This is especially true for studies documenting between-group differences and studies with clear implications for future social policies.
For example, it is well documented that there are average differences in test scores between different racialized groups in the US [85–87]. Some have interpreted these findings as indicators of stable racial differences rooted in biology, a conclusion that fuels pernicious stereotypes and can cause harm to the stereotyped groups. Moreover, proponents of this view have used these findings to promote social policies of diverting funds away from students and families from marginalized backgrounds (e.g., [88]). In contrast, many commentators have noted that “race” is not a meaningful biological category (e.g., [82,89,90]) and that test results should be understood in the context of historical structural differences and systemic racism that created educational and environmental disparities between various marginalized groups ([87]; see [91] for a variety of views). From this, it follows that more (not less) investment is needed to curb the influence of the social context that created these differences in the first place. Note that reporting observed differences between groups is not necessarily problematic [90] and can even be the first step in creating social policies to address these differences. However, to avoid promoting the wrong conclusions, we should not ignore the range of conclusions that these results could be taken to support and should make an effort to contextualize them accordingly [31,52]. In such cases, we should incorporate the context as an integral part of the narrative when communicating the findings and not merely as a paragraph summarizing the limitations of the study that will naturally fall outside of public attention.
Rule 4: Be transparent about your hypothesis and analyses
Every empirical report runs the risk of disseminating findings that eventually turn out to be false. Research shows that motivated reasoning can further increase this risk by leading scientists to conduct and report their analyses in ways that procedurally exacerbate false positives [92,93]. These include, for example, deciding on additional data collection based on obtained results, reporting only results that support a specific narrative, and sequentially conducting multiple analyses until the desired results are obtained. Increased awareness of such risks in recent years has resulted in growing calls for transparency in the scientific process, including calls for preregistering study protocols and analyses. Preregistration involves developing a comprehensive study protocol that details the hypotheses to be tested, the procedures to obtain the relevant data, and the methods and analyses to test the hypotheses. Although these protocols vary by discipline, an important feature is that they are typically time-stamped. Obtaining a time-stamped registration of the study protocol clearly delineates planned versus post hoc decisions. Even though preregistration can come with certain costs and is not a panacea for all potential problems involved in conducting research [94,95], detailing the planned analyses in advance can safeguard against potential biases that might permeate data collection and analyses, especially in studies where researchers have many degrees of freedom.
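To make the time-stamping idea concrete, the minimal sketch below illustrates the underlying principle: committing to the exact content of a protocol document at a known time. It is an illustration only (the file name is hypothetical), not a substitute for registering with an actual registry such as OSF or AsPredicted, which stores a frozen, dated copy of the protocol for you:

```python
# Minimal sketch of the idea behind time-stamped preregistration:
# record a cryptographic fingerprint of the protocol at a known time.
# In practice, use an actual registry (e.g., OSF), which does this for you;
# "protocol.md" is a hypothetical file name.
import hashlib
from datetime import datetime, timezone

with open("protocol.md", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

stamp = datetime.now(timezone.utc).isoformat()
print(f"{stamp}  sha256:{digest}")

# Anyone holding this record can later verify that the protocol file is
# byte-for-byte identical to the version that existed at `stamp`,
# cleanly separating planned analyses from post hoc decisions.
```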
In addition, preregistration can inspire confidence in the veracity of one’s analyses. This may be particularly important in studies with meaningful social implications, which are often fervently and critically debated after the fact. Preregistration can curtail any suggestion that the results were only obtained due to post hoc decisions to conduct specific analyses, include or exclude particular variables, or control for certain variables. A potentially even more beneficial form of registration is available in the registered report format, now offered by more than 300 journals in numerous scientific disciplines, ranging from several Nature-group journals to discipline-specific ones (e.g., Cochrane Reviews, BMC Medicine, Psychological Science, Academy of Management Discoveries) [96]. Registered reports allow scientists to receive peer review on their planned study—before conducting it—and potentially to have it conditionally accepted for full publication regardless of the obtained results. Thus, registered reports offer advantages even over peer-reviewed research proposals in that they enable publication regardless of specific outcomes (for a practical guideline, see the 10 simple rules by [97]). Notably, (pre-)registration offers transparency mostly for confirmatory hypothesis testing; exploratory analyses remain a critical scientific practice that provides valuable contributions. Here, we emphasize the ability of the registration procedure to guard against the selective non-reporting of results, whether positive or negative (the file-drawer problem), a phenomenon that can be particularly problematic in the context of contentious scientific debates that can significantly impact underrepresented groups via public discourse.
Rule 5: Report your results and limitations accurately and transparently
Publishing an article in a prestigious journal can be an important stepping stone in a scientist’s career. However, these journals typically prioritize simple-to-understand articles that tout substantial theoretical innovation and practical contribution [43]. This means that, even if we are cognizant of the study’s limitations during the design stage (Rule 2), we are still incentivized to simplify, overstate, and sensationalize the impact of our results after we obtain the data. Overstating the implications of our studies can result in various undesirable outcomes, from allocating public funds to inefficient interventions (e.g., [98]) to skewing public discourse and reducing trust in science in general. One step we can take to curb such negative impacts is to accurately report the limitations of the methodology and our results, including those incompatible with a simplified narrative. Another way to increase both our own and the scientific community’s certainty about the accuracy of our results is to upload our data and analysis procedure to an online repository. This allows other scientists to double-check and reproduce our work, which can reveal difficult-to-detect errors or incorrect inferences. We also recommend ensuring that the data comply with FAIR practices [99] to increase transparency, reproducibility, and reusability.
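As a small illustration of what FAIR-friendly deposition can involve, the sketch below writes a machine-readable descriptor for a dataset using schema.org’s Dataset vocabulary. All names, variables, and URLs are hypothetical placeholders; general-purpose repositories such as Zenodo or OSF generate comparable metadata, along with a persistent identifier, when files are deposited:

```python
# Minimal sketch: a machine-readable metadata descriptor accompanying a
# deposited dataset, using schema.org's Dataset vocabulary (JSON-LD).
# All values below are hypothetical placeholders; repositories such as
# Zenodo generate comparable metadata, plus a persistent DOI, on deposit.
import json

metadata = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    "name": "Example study data",                       # hypothetical
    "description": "Trial-level data and analysis code for the study.",
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "variableMeasured": ["response_time", "accuracy"],  # document each variable
    "distribution": {
        "@type": "DataDownload",
        "encodingFormat": "text/csv",
        "contentUrl": "https://example.org/data.csv",   # placeholder URL
    },
}

with open("dataset_metadata.json", "w") as f:
    json.dump(metadata, f, indent=2)
```

Such a descriptor helps both humans and indexing services find, interpret, and reuse the data, which is the practical core of the FAIR principles [99].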
Of course, an accurate report of results and limitations is a core tenet of any scientific enterprise. However, the possible negative impact of an oversimplified and overstated finding with socially important implications should encourage us to seriously consider limitations that we did not anticipate at earlier stages of the study. For example, acknowledging possible heterogeneity in samples and results is one way to avoid oversimplification [100,101]. Although a single study can never account for the various ways in which heterogeneity can limit our conclusions, addressing heterogeneity can encourage more incremental scientific progress on the topic and provide a more nuanced understanding to the public and policymakers. More broadly, describing our findings in a manner that closely reflects the obtained results without overselling them can reduce potential misinterpretations and safeguard against problematic usage of one’s findings.
Rule 6: Choose your terminology carefully
Specialized terminology can have much utility in scientific inquiry by condensing specific concepts and constructs into concise verbal units. However, such specialized terminology can also cause problems when used in a way that seems neutral to some but can carry value-laden connotations for others. Using such loaded terms affects what information people take away from our writing. For example, research on medical terminology has shown that referring to “gout” as “urate crystal arthritis” better aligns participants’ understanding of the disease with contemporary scientific understanding [102].
Choice of terminology may be particularly important when we talk about marginalized groups. In such cases, certain terms can carry connotations related to social stereotypes or core aspects of individuals’ identities. By using such terms, we may be perceived as endorsing stereotypical beliefs and negative views about the marginalized group and may cause stress and genuine hurt to its members [103]. For example, some terminology used when referring to transgender people has been criticized for its implied meaning. Notably, a study coined the term “rapid-onset gender dysphoria” [17] to describe parents’ perceptions regarding changes in their children’s gender identity and expression. In addition to using a term that may mislead others into thinking it represents an established diagnosis, the study was heavily criticized [104–106] for using medical-sounding language such as “cluster outbreaks of gender dysphoria” and “social and peer contagion” that implies that transgender status is tantamount to an infectious disease. Such a conclusion has no empirical support [107] but could nevertheless adversely impact how parents treat their transgender children. More subtly, the often-used terms “transgendered,” “male-to-female” (MTF), and “female-to-male” (FTM) have been criticized for implying that a person “changes” their gender (or has their gender changed by others) rather than changing how other people perceive their gender through coming out [108]. Such implications can be avoided by using terms like “transgender” and “assigned female/male at birth,” which focus on social perception rather than implying essential changes.
Importantly, diversity among people from the same group means that some will prefer terms deemed offensive by others (for example, some transgender people use MTF and FTM to describe themselves). As such, it is possible that a single term can never satisfy everyone. This problem is compounded by the ever-changing nature of language and its shared understanding (e.g., [109,110]). Nevertheless, we should strive to understand the connotations that others associate with our chosen terminology so that we can make educated decisions and minimize harm. We should investigate whether, at a given moment, affected communities have existing best practices when referring to relevant concepts. In qualitative research, this is often achieved by the practice of “member checks” [111], whereby participants are given the opportunity to review, comment, and correct transcripts of interviews and even drafts of the research report. In quantitative research, member checks are often impossible due to the anonymization of participant data. Nevertheless, scientists using quantitative methods can draw on the expertise of stakeholders and advocacy organizations to provide feedback on their use of language. This is especially important when we coin a new term, which is ideally done in collaboration with members of affected groups.
Rule 7: Seek rigorous review and editorial processes
A rigorous review—a review that is unbiased, thorough, and follows best reviewing practices [112–114]—is the last line of defense in keeping the scientific literature free from errors and flaws that the authors overlooked. In its ideal form, a rigorous review process involves several knowledgeable peers carefully reviewing the scientific product at hand and providing constructive comments, as well as a careful editor who selects the reviewers, integrates the reviews, and assures the quality of the process. This is especially important for potentially impactful studies, for which the bulk of scrutiny often occurs after publication. Therefore, it is also in our best interest to go through a rigorous peer review. A rigorous review also increases the confidence of the research community and the general public in the credibility of the published study and its results. In contrast, unsound editorial practices can result in detrimental outcomes for the original authors and the public sphere alike [60,115]. Although most review processes remain undisclosed, evidence of a rigorous review can be crucial if an article ever comes under public scrutiny. Therefore, we recommend that authors submit papers to journals known for their rigorous processes and avoid publishing socially impactful studies in any format that jeopardizes the review process, such as non-peer-reviewed publications or journals that overlook critical points from reviewers (for example, see the publicly available reviewer’s comments for [11], raising many of the concerns that indeed arose after publication). These recommendations also extend to suggesting potential reviewers during submission. Although researchers can use this option to nominate reviewers they think will be favorable to their research [116], the socially responsible approach would be to nominate experts who are likely to be reasonably critical of the study and who have a track record of considering these issues.
Finally, if the manuscript covers a potentially impactful topic, we can alert the editor to this in the cover letter and request extra diligence in the review and the editorial process. In such cases, editors may opt to invite commentaries on the accepted manuscript from opposing researchers [117]. However, in our opinion, such commentaries are not a substitute for a rigorous review, as invested parties often ignore commentaries altogether, even if they point out major flaws in the original paper. For example, Spitzer [118] notoriously claimed to show evidence in favor of the efficacy of “conversion therapy” in changing non-heterosexual orientations. Instead of insisting on a rigorous review process, the editor opted to invite numerous critical commentaries to accompany the paper. Unfortunately, the many flaws detailed by these commentaries did nothing to dissuade organizations that promote conversion therapies from using Spitzer’s article as evidence for their pseudoscientific claims and harmful practices. Spitzer later acknowledged that his paper was flawed and apologized to the gay community for the harm it had caused [119]. With the benefit of hindsight, we now know that such harm would have had a higher chance of being avoided altogether if the manuscript had been rigorously reviewed (see also [60] for a discussion of the review and editorial processes that limit racial diversity).
Rule 8: Play an active role in ensuring correct interpretations of your results
A study can substantially impact public discourse if its conclusions are disseminated through news and social media. To appeal to a broad audience, press releases tend to simplify or sensationalize research findings. Traditional and social media outlets may further amplify this tendency, thus undermining the researchers’ efforts to disseminate their findings responsibly and accurately. Case in point: researchers found that men treat women’s orgasms as an achievement that reaffirms their masculinity [120]. In the article, the authors emphasized that this attitude has negative implications for men and (especially) for women. In contrast, some media outlets reported that the study shows that women’s orgasms benefit men, missing the point entirely. Undoubtedly, some studies can lend themselves more easily to inaccurate interpretations and erroneous narratives than others; however, this example goes to show that even a clearly spelled-out message can be widely misinterpreted.
Naturally, we cannot anticipate all the ways in which our findings can be portrayed or misrepresented. However, to mitigate the impact of these issues, we can be active in how our research is disseminated. For example, in response to the inaccurate coverage, the authors wrote a press release that further emphasized the negative implications of their findings, sent it to journalists they felt would report their research more accurately, and succeeded in eliciting more accurate coverage [121]. Notably, most academic institutions house public relations offices that can assist in drafting and disseminating such press releases. Scientists can collaborate in drafting a release that accurately reflects the scientific findings in a manner accessible to the general public and disseminate it after acceptance but before the study is available online. Although the public relations office may also tend to oversimplify the results, it is much easier to influence and sharpen the university’s press release than to influence news outlets’ reporting. Furthermore, we can track the impact of our studies via tools such as Altmetric and follow up with prominent media outlets to ask for corrections. If such requests are refused or ignored, we can report the inaccuracies to independent regulators who can force corrections (e.g., the Independent Press Standards Organisation in the United Kingdom). We can also engage in social and traditional media discussions with the help of media professionals from our institutions. This can take the form of social media posts, replies, quotes, and interviews for traditional media. In sum, although these activities fall outside our typical scientific skill set (and are, as such, more difficult to contend with), researchers interested in the responsible dissemination of their findings can actively engage with the impact of their research in the public sphere and ensure the public is exposed to more accurate accounts of their findings and implications.
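As an illustration of such tracking, Altmetric exposes a free public endpoint for basic attention data, and the sketch below queries it for a given DOI (we use this article’s DOI as an example). The exact response fields are an assumption on our part and may change, so they are read defensively:

```python
# Minimal sketch: query Altmetric's free public API for a paper's online
# attention. The endpoint returns 404 when no attention has been tracked;
# field names are assumptions and are read defensively via .get().
import requests

doi = "10.1371/journal.pcbi.1010954"  # this article's DOI, as an example
resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}", timeout=10)

if resp.status_code == 200:
    data = resp.json()
    print("Title:", data.get("title"))
    print("Attention score:", data.get("score"))
    print("News outlets:", data.get("cited_by_msm_count", 0))
    print("Details:", data.get("details_url"))
elif resp.status_code == 404:
    print("No attention data tracked for this DOI yet.")
else:
    print("Request failed with status:", resp.status_code)
```

Running such a query periodically (within the service’s rate limits) can flag a sudden spike in coverage early, while there is still time to reach out to outlets and correct inaccurate reporting.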
Rule 9: Address criticism from peers and the general public with respect
Studies that touch on socially contentious issues or other identity-related topics will often draw heated responses from peers and from communities affected by the work. Online platforms that incentivize quick responses and engagement, like Twitter, can exacerbate these responses and create self-reinforcing cycles that accentuate polarized interpretations of specific findings. Even if started in good faith, such online discussions might quickly devolve into bitter moral arguments between opposing camps. In such arguments, the most harm is often inflicted on the more vulnerable members of the community, be it early career researchers, individuals from marginalized backgrounds, or otherwise vulnerable individuals.
Despite the very emotional (and often personal) nature of these discussions and their rapid deterioration, it is important that we do not rush to respond. The sheer volume of negative responses can be overwhelming, and treating all commentators as a single group is tempting. However, some adversarial claims will contain substantive criticism that we will be able to refute. Other substantive claims will offer new insights or point to limitations we did not consider in advance. Yet other claims might express genuine hurt, especially in cases where individuals feel that our findings and conclusions affect a core aspect of their identity. Differentiating these points can be very difficult in the heat of the moment. Nevertheless, we suggest that it is best to respond to substantive criticisms with respect and to acknowledge the unintended harm our research might have caused, keeping in mind potential limitations of our perspective and our study.
Rule 10: When all else fails, consider submitting a correction or a self-retraction
Despite the best of intentions, we might realize only after publication that our article has harmful implications or is otherwise flawed. Such realizations are more likely when we remain open to learning new things about the subject matter from critical comments we receive after publication. If we change our minds and become convinced that our publication is flawed, we might consider issuing a correction or retracting the paper altogether. A correction can be issued to alert readers about flaws that do not take away from the main point of the article. In contrast, a retraction may be in order when the flaw relates to a key measure, analysis, or conclusion. For example, a recent study [122] about the potential benefits of hydroxychloroquine for treating Coronavirus Disease 2019 (COVID-19) was retracted by some of the authors because they could no longer stand behind “the veracity of the primary data sources” [123]. Whether hydroxychloroquine helps treat COVID-19 or not, studies that present support for an ineffective treatment can result in catastrophic consequences. Due to media attention, new (and potentially ineffectual or harmful) COVID-19 treatments were broadly (and prematurely) adopted by medical staff in numerous clinics around the world. A retraction signals that the scientific establishment, and in this case the authors themselves, have lost confidence in the study, which can be used to argue against the premature adoption of its conclusions.
There are good reasons why we may consider self-retracting a majorly flawed article. First, retractions are the ultimate tool for correcting the public record, as they alert readers that a study should not be relied upon (but see [124] for potential issues even with retractions). Retractions are important because policymakers, interested parties, and other researchers may still rely on the original flawed article, even if the authors disavow their own conclusions in a subsequent publication. Moreover, retracting a potentially harmful study signals to the public and other scientists that the authors, in particular, and scientists, in general, take the responsibilities given to them seriously. If we decide to retract a paper, the best course of action is to discuss this with the editor and write a detailed notice explaining the reasons that led us to the retraction. Finally, despite the cost that retractions incur for authors, self-retractions can be beneficial, especially when compared to journal-initiated retractions. Journal-initiated retractions are often taken as an indication of wrongdoing, even when no malfeasance took place. In contrast, authors who self-retract may be lauded as “heroic” [125,126] for admitting an error and being willing to sacrifice a publication for the greater good. If we become convinced that our paper promotes harm, it is better to be remembered as the person who courageously admitted a mistake than as the author of a socially harmful paper.
Summary
Communicating one’s scientific findings to peers and the general public is integral to the scientific endeavor. Without informing our discipline about our important results, theories cannot be updated, and knowledge cannot be accumulated. Likewise, disseminating our findings to the public and policymakers can shape public discourse and encourage the implementation of more scientifically accurate policies. However, due to a lack of training and structural support, scientists may be unaware of the potential social impact of their findings. For example, an artificial intelligence expert might build an excellent new generative language model but may unintentionally overlook their model’s bias when it comes to indigenous populations. Unfortunately, once a specific finding with a particular interpretation has gained public traction, updating or correcting the interpretation requires significant efforts that often fail (e.g., the impact of an infamous study on vaccine skepticism; [127,128]).
Should such potential implications dissuade researchers from conducting socially impactful research? As scientists, we believe that scientific and social progress hinges on searching for empirical truths and better theories and that potential misuse of a scientific study should typically not provide sufficient grounds for not publishing or conducting it in the first place. However, we also believe that social responsibility and scientific merit are not diametrically opposed. Therefore, in the spirit of the recent push towards more active engagement with the social impact of scientific research (e.g., [47,52,129–131]), we suggested 10 simple rules to help scientists consider socially responsible aspects of their work. By following these suggestions, we believe that scientists will be better able to foresee and minimize potential harms and, at the very least, be better prepared for post-publication discussions related to their research.
We recognize that these recommendations work, at times, against authors’ incentives and are not a substitute for structural change in how scientific research is conducted and rewarded. This conflict between publishing socially responsible science and authors’ incentives is especially acute for early career researchers, who need publications in prestigious journals to secure a permanent position. Therefore, we call on scientific societies, research institutions, and funding agencies to take active steps to encourage and reward social responsibility. Given the broader societal implications and the unintended harm that has already been caused time and again, we believe there is no better time than the present to start engaging with this important topic.
Acknowledgments
We would like to thank Sara Chadwick, Tal Eyal, Alex Holcombe, Mustafa I. Hussain, Ora Kofman, Yoav Kessler, Tal Yatziv, and Sari van Anders for fruitful discussions and helpful comments on earlier versions of this manuscript.
References
- 1. Douglas H. Scientific freedom and social responsibility. In: Hartl P, Tuboly AT, editors. Science, Freedom, Democracy. Routledge; 2021. p. 68–87.
- 2. Owen R, Macnaghten P, Stilgoe J. Responsible research and innovation: from science in society to science for society, with society. Sci Public Policy. 2012 Dec 1;39(6):751–760. https://doi.org/10.1093/scipol/scs093
- 3. Schuijff M, Dijkstra AM. Practices of responsible research and innovation: a review. Sci Eng Ethics. 2019 Dec 16;26(2):533–574. pmid:31845176
- 4. Johnson DJ, Tress T, Burkel N, Taylor C, Cesario J. RETRACTED: Officer characteristics and racial disparities in fatal officer-involved shootings. Proc Natl Acad Sci U S A. 2019 Jul 22;116(32):15877–82. https://doi.org/10.1073/pnas.1903856116
- 5. Knox D, Mummolo J. Making inferences about racial disparities in police violence. Proc Natl Acad Sci U S A. 2020 Jan 21;117(3):1261–1262. pmid:31964781
- 6. Schimmack U, Carlsson R. Young unarmed nonsuicidal male victims of fatal use of force are 13 times more likely to be Black than White. Proc Natl Acad Sci U S A. 2020 Jan 21;117(3):1263. pmid:31964782
- 7. Johnson DJ, Cesario J. Reply to Knox and Mummolo and Schimmack and Carlsson: controlling for crime and population rates. Proc Natl Acad Sci U S A. 2020 Jan 21;117(3):1264–1265. pmid:31964783
- 8. Retraction for Johnson et al. Officer characteristics and racial disparities in fatal officer-involved shootings. Proc Natl Acad Sci U S A. 2020 Jul 10;117(30):18130. https://doi.org/10.1073/pnas.2014148117
- 9. Derrick GE, Faria R, Benneworth P, Pedersen DB, Sivertsen G. Towards characterizing negative impact: Introducing Grimpact. In Proceedings of the 23rd International Conference on Science and Technology Indicators: Science, Technology and Innovation Indicators in Transition; 2018. p. 1199–1213.
- 10. Frodeman R. The Hidden Life of Science & Technology. Issues Sci Technol. 2019;35(2):31–33.
- 11. AlShebli B, Makovi K, Rahwan T. RETRACTED: the association between early career informal mentorship in academic collaborations and junior author performance. Nat Commun. 2020 Nov 17;11(1). https://doi.org/10.1038/s41467-020-19723-8
- 12. Andersson K. I am not alone–we are all alone: Using masturbation as an ethnographic method in research on shota subculture in Japan. Qual Res. 2022 Apr 26:146879412210966. https://doi.org/10.1177/14687941221096600
- 13. Clark CJ, Winegard BM, Beardslee J, Baumeister RF, Shariff AF. RETRACTED: declines in religiosity predict increases in violent crime—but not among countries with relatively high average IQ. Psychol Sci. 2020 Jan 21;31(2):170–183. https://doi.org/10.1177/0956797619897915
- 14. Hardouin S, Cheng TW, Mitchell EL, Raulli SJ, Jones DW, Siracuse JJ, et al. RETRACTED: prevalence of unprofessional social media content among young vascular surgeons. J Vasc Surg. 2020 Aug;72(2):667–671. pmid:31882313
- 15. Hashemi M, Hall M. RETRACTED ARTICLE: criminal tendency detection from facial images and the gender bias effect. J Big Data. 2020 Jan 7;7(1). https://doi.org/10.1186/s40537-019-0282-4
- 16. Jabbour J, Holmes L, Sylva D, Hsu KJ, Semon TL, Rosenthal AM, et al. Robust evidence for bisexual orientation among men. Proc Natl Acad Sci U S A. 2020 Jul 20;117(31):18369–18377. pmid:32690672
- 17. Littman LL. Rapid onset of gender dysphoria in adolescents and young adults: a descriptive study. J Adolesc Health. 2017 Feb;60(2):S95—S96. https://doi.org/10.1016/j.jadohealth.2016.10.369
- 18. Mead LM. RETRACTED: Poverty and culture. Society. 2020.
- 19. Polizzi di Sorrentino E, Herrmann B, Villeval MC. Dishonesty is more affected by BMI status than by short-term changes in glucose. Sci Rep. 2020 Jul 22;10(1). pmid:32699212
- 20. Safra L, Chevallier C, Grèzes J, Baumard N. Tracking historical changes in trustworthiness using machine learning analyses of facial cues in paintings. Nat Commun. 2020 Sep 22;11(1). https://doi.org/10.1038/s41467-020-18566-7
- 21. Vercellini P, Buggio L, Somigliana E, Barbara G, Viganò P, Fedele L. RETRACTED: attractiveness of women with rectovaginal endometriosis: a case-control study. Fertil Steril. 2013 Jan;99(1):212–8. https://doi.org/10.1016/j.fertnstert.2012.08.039
- 22. Delzell DA, Poliak CD. Karl Pearson and eugenics: personal opinions and scientific rigor. Sci Eng Ethics. 2012 Nov 21;19(3):1057–1070. pmid:23179067
- 23. Harding SG. The science question in feminism. Ithaca, NY: Cornell University Press; 1986.
- 24. Gould P. Letting the data speak for themselves. Ann Assoc Am Geogr. 1981 Jun;71(2):166–176. https://doi.org/10.1111/j.1467-8306.1981.tb01346.x
- 25. MacKenzie D. Interests, positivism and history. Soc Stud Sci. 1981 Nov;11(4):498–504. https://doi.org/10.1177/030631278101100405
- 26. Shapin S. Here and everywhere: sociology of scientific knowledge. Annu Rev Sociol. 1995 Aug;21(1):289–321. https://doi.org/10.1146/annurev.so.21.080195.001445
- 27. Grady C. Institutional review boards: Purpose and challenges. Chest. 2015;148(5):1148–1155. pmid:26042632
- 28. Jobin A, Ienca M, Vayena E. The global landscape of AI ethics guidelines. Nat Mach Intell. 2019 Sep;1(9):389–399. https://doi.org/10.1038/s42256-019-0088-2
- 29. Ellemers N. Gender stereotypes. Annu Rev Psychol. 2018 Jan 4;69(1):275–298. pmid:28961059
- 30. Flentje A, Heck NC, Brennan JM, Meyer IH. The relationship between minority stress and biological outcomes: a systematic review. J Behav Med. 2019 Dec 20;43(5):673–694. pmid:31863268
- 31. Gergen KJ. Social psychology as history. J Pers Soc Psychol. 1973;26(2):309–320. https://doi.org/10.1037/h0034436
- 32. Manstead AS. The psychology of social class: how socioeconomic status impacts thought, feelings, and behaviour. Br J Soc Psychol. 2018 Feb 28;57(2):267–291. pmid:29492984
- 33. Sivertsen G, Meijer I. Normal versus extraordinary societal impact: how to understand, evaluate, and improve research activities in their relations to society? Res Eval. 2019 Dec 10;29(1):66–70. https://doi.org/10.1093/reseval/rvz032
- 34. Daimer S, Berghäuser H, Lindner R. The institutionalisation of a new paradigm at policy level. In: Putting responsible research and innovation into practice. Cham: Springer International Publishing; 2022. p. 35–56. https://doi.org/10.1007/978-3-031-14710-4_3
- 35. Gianni R. Responsibility and freedom: the ethical realm of RRI. Hoboken, NJ: Wiley & Sons; 2016.
- 36. Rosenthal R. Science and ethics in conducting, analyzing, and reporting psychological research. Psychol Sci. 1994 May;5(3):127–134. pmid:11652978
- 37. American Anthropological Association [Internet]. AAA Statement on Ethics. 2012 [cited 2022 Sep 1]. Available from: https://www.americananthro.org/LearnAndTeach/Content.aspx?ItemNumber=22869&navItemNumber=652.
- 38. British Sociological Association [Internet]. Statement of Ethical Practice. 2017 [cited 2022 Sep 1]. Available from: https://www.britsoc.co.uk/media/24310/bsa_statement_of_ethical_practice.pdf.
- 39. Caplan P. The Ethics of Anthropology: Debates and Dilemmas. London and New York: Routledge; 2003.
- 40. Nosek BA, Spies JR, Motyl M. Scientific utopia: II. Restructuring incentives and practices to promote truth over publishability. Perspect Psychol Sci. 2012 Nov;7(6):615–631. pmid:26168121
- 41. Pennycook G, Thompson VA. An analysis of the Canadian cognitive psychology job market (2006–2016). Can J Exp Psychol. 2018 Jun;72(2):71–80. pmid:29902028
- 42. Aguinis H, Cummings C, Ramani RS, Cummings TG. “An A is an A”: the new bottom line for valuing academic research. Acad Manag Perspect. 2020 Feb;34(1):135–154. https://doi.org/10.5465/amp.2017.0193
- 43. Edwards MA, Roy S. Academic research in the 21st century: maintaining scientific integrity in a climate of perverse incentives and hypercompetition. Environ Eng Sci. 2017 Jan;34(1):51–61. pmid:28115824
- 44. Moher D, Naudet F, Cristea IA, Miedema F, Ioannidis JP, Goodman SN. Assessing scientists for hiring, promotion, and tenure. PLoS Biol. 2018 Mar 29;16(3):e2004089. pmid:29596415
- 45. Conry-Murray C, Silverstein P. The Role of Values in Psychological Science: Examining Identity-based Inclusivity [Preprint]. 2022 [cited 2022 Jul 1]. PsyArXiv. https://doi.org/10.31234/osf.io/cskg2
- 46. Lazer DM, Pentland A, Watts DJ, Aral S, Athey S, Contractor N, et al. Computational social science: obstacles and opportunities. Science. 2020 Aug 27;369(6507):1060–1062. pmid:32855329
- 47. Ledgerwood A, Hudson SK, Lewis NA, Maddox KB, Pickett CL, Remedios JD, et al. The pandemic as a portal: reimagining psychological science as truly open and inclusive. Perspect Psychol Sci. 2022 Mar 2:174569162110366. pmid:35235485
- 48. Ledgerwood A, da Silva FA, Kadirvel S, Maitner A, Wang YA, Maddox KB. Methods for advancing an open, replicable, and inclusive science of social cognition. In: Hugenberg K, Johnson K, Carlston DE, editors. Oxford Handbook of Social Cognition. Oxford, UK: Oxford University Press; in press.
- 49. Lewis NA. What counts as good science? How the battle for methodological legitimacy affects public psychology. Am Psychol. 2021 Nov;76(8):1323–1333. pmid:35113596
- 50. Rolin K. Standpoint theory as a methodology for the study of power relations. Hypatia. 2009;24(4):218–226. https://doi.org/10.1111/j.1527-2001.2009.01070.x
- 51. Bonner A, Tolhurst G. Insider-outsider perspectives of participant observation. Nurse Res. 2002 Jul;9(4):7–19. pmid:12149898
- 52. Schwarzlose RF. Superiority and stigma in modern psychology and neuroscience. Trends Cogn Sci. 2022 Oct. pmid:36207259
- 53. Bergold J, Thomas S. Participatory Research Methods: A Methodological Approach in Motion. FQS. 2012 Jan. 30;13(1). Available from: https://www.qualitative-research.net/index.php/fqs/article/view/1801.
- 54. Israel BA, Schulz AJ, Parker EA, Becker AB. Review of community-based research: assessing partnership approaches to improve public health. Annu Rev Public Health. 1998 May;19(1):173–202. pmid:9611617
- 55. Rieger G, Chivers ML, Bailey JM. Sexual arousal patterns of bisexual men. Psychol Sci. 2005 Aug 1;16(8):579–584. pmid:16102058
- 56. Feinstein BA, Galupo MP. Bisexual orientation cannot be reduced to arousal patterns. Proc Natl Acad Sci U S A. 2020 Nov 17;117(50):31575–31576. pmid:33203669
- 57. Zivony A. Bisexuality in men exists but cannot be decoded from men’s genital arousal. Proc Natl Acad Sci U S A. 2020 Nov 17;117(50):31577–31578. pmid:33203668
- 58. Rosenthal AM, Sylva D, Safron A, Bailey JM. Sexual arousal patterns of bisexual men revisited. Biol Psychol. 2011 Sep;88(1):112–115. pmid:21763395
- 59. Hochgesang J. [Internet]. Open Letter to Springer Editors and Their Response. 2021 [cited 2022 Sep 1].
- 60. Roberts SO, Bareket-Shavit C, Dollins FA, Goldie PD, Mortenson E. Racial inequality in psychological research: trends of the past and recommendations for the future. Perspect Psychol Sci. 2020 Jun 24;15(6):1295–1309. pmid:32578504
- 61. Cascio MA, Weiss JA, Racine E. Making autism research inclusive by attending to intersectionality: a review of the research ethics literature. Rev J Autism Dev Disord. 2020 May 14. https://doi.org/10.1007/s40489-020-00204-z
- 62. Chown N, Robinson J, Beardon L, Downing J, Hughes L, Leatherland J, et al. Improving research about us, with us: a draft framework for inclusive autism research. Disabil Soc. 2017 May 5;32(5):720–734. https://doi.org/10.1080/09687599.2017.1320273
- 63. Buolamwini J, Gebru T. Gender shades: Intersectional accuracy disparities in commercial gender classification. In: Friedler SA, Wilson C, editors. Proceedings of the 1st Conference on Fairness, Accountability and Transparency, volume 81 of Proceedings of Machine Learning Research. 2018. p. 77–91.
- 64. Leavy S. Gender bias in artificial intelligence: the need for diversity and gender theory in machine learning. In: Proceedings of the 1st International Workshop on Gender Equality in Software Engineering (GE ’18). New York: Association for Computing Machinery; 2018. p. 14–16. https://doi.org/10.1145/3195570.3195580
- 65. Linder C, Harris JC, Allen EL, Hubain B. Building inclusive pedagogy: recommendations from a national study of students of color in higher education and student affairs graduate programs. Equity Excell Educ. 2015 Apr 3;48(2):178–194. https://doi.org/10.1080/10665684.2014.959270
- 66. Knowles S, Voorhees J, Planner C. Participatory Research vs PPI–What can we learn from each other? NIHR School for Primary Care Research. 2015 [cited 2022 Sep 1]. Retrieved from: https://www.spcr.nihr.ac.uk/news/blog/participatory-research-vs-ppi2013-what-can-we-learn-from-each-other.
- 67. Leung MW. Community based participatory research: a promising approach for increasing epidemiology’s relevance in the 21st century. Int J Epidemiol. 2004 May 27;33(3):499–506. pmid:15155709
- 68. Reggev N, Kardosh R. A brief guide to situating the neuroscience of Black and White civilian arrests in a broader social context. NeuroImage. 2022 Apr;119154. pmid:35381339
- 69. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition. [place unknown]: American Psychiatric Publishing; 2013.
- 70. Happé F, Frith U. Annual Research Review: looking back to look forward–changes in the concept of autism and implications for future research. J Child Psychol Psychiatry. 2020 Jan 28;61(3):218–232. pmid:31994188
- 71. Baron-Cohen S, Knickmeyer RC, Belmonte MK. Sex differences in the brain: implications for explaining autism. Science. 2005 Nov 3;310(5749):819–823. pmid:16272115
- 72. Draaisma D. Stereotypes of autism. Philos Trans R Soc B Biol Sci. 2009 May 27;364(1522):1475–1480. pmid:19528033
- 73. Hull L, Petrides KV, Mandy W. The female autism phenotype and camouflaging: a narrative review. Rev J Autism Dev Disord. 2020 Jan 29;7(4):306–317. https://doi.org/10.1007/s40489-020-00197-9
- 74. Ridley R. Some difficulties behind the concept of the ‘Extreme male brain’ in autism research. A theoretical review. Res Autism Spectr Disord. 2019 Jan;57:19–27. https://doi.org/10.1016/j.rasd.2018.09.007
- 75. Sedgewick F, Kerr-Gaffney J, Leppanen J, Tchanturia K. Anorexia nervosa, autism, and the ADOS: how appropriate is the new algorithm in identifying cases? Front Psychiatry. 2019 Jul 18;10:507. https://doi.org/10.3389/fpsyt.2019.00507
- 76. Krahn TM, Fenton A. The extreme male brain theory of autism and the potential adverse effects for boys and girls with autism. J Bioeth Inq. 2012 Jan 5;9(1):93–103. pmid:23180205
- 77. Wald C, Wu C. Of mice and women: the bias in animal models. Science. 2010 Mar 25;327(5973):1571–1572. https://doi.org/10.1126/science.327.5973.1571
- 78. El Khiyari H, Wechsler H. Face verification subject to varying (age, ethnicity, and gender) demographics using deep learning. J Biometr Biostat. 2016;07(04). https://doi.org/10.4172/2155-6180.1000323
- 79. Mauvais-Jarvis F, Bairey Merz N, Barnes PJ, Brinton RD, Carrero JJ, DeMeo DL, et al. Sex and gender: modifiers of health, disease, and medicine. Lancet. 2020 Aug;396(10250):565–582. pmid:32828189
- 80. Greenwald AG, Dasgupta N, Dovidio JF, Kang J, Moss-Racusin CA, Teachman BA. Implicit-bias remedies: Treating discriminatory bias as a public-health problem. Psychol Sci Public Interest. 2022;23(1):7–40. pmid:35587951
- 81. Schwartz SH. Values and culture. In: Munro D, Schumaker JF, Carr SC, editors. Motivation and culture. New York: Routledge; 1997. p. 69–84.
- 82. Bryant BE, Jordan A, Clark US. Race as a social construct in psychiatry research and practice. JAMA Psychiatry. 2022 Feb 1;79(2):93. pmid:34878501
- 83. Remedios JD. Psychology must grapple with Whiteness. Nat Rev Psychol. 2022 Jan 27;1(3):125–126. https://doi.org/10.1038/s44159-022-00024-4
- 84. Wensley D, King M. Scientific responsibility for the dissemination and interpretation of genetic research: lessons from the “warrior gene” controversy. J Med Ethics. 2008 Jun 1;34(6):507–509. pmid:18511629
- 85. Hung M, Smith WA, Voss MW, Franklin JD, Gu Y, Bounsanga J. Exploring student achievement gaps in school districts across the United States. Educ Urban Soc. 2019 Mar 27;52(2):175–193. https://doi.org/10.1177/0013124519833442
- 86. Jencks C, Phillips M. The Black-White test score gap. Washington, DC: Brookings Institution Press; 1998.
- 87. Reardon SF, Kalogrides D, Shores K. The geography of racial/ethnic test score gaps. Am J Sociol. 2019 Jan 1;124(4):1164–1221.
- 88. Herrnstein RJ, Murray CA. The bell curve: intelligence and class structure in American life. New York: Free Press; 1994.
- 89. Stolley PD. Race in epidemiology. Int J Health Serv. 1999 Oct;29(4):905–909. pmid:10615582
- 90. Krieger N. Refiguring “race”: epidemiology, racialized biology, and biological expressions of race relations. Int J Health Serv. 2000 Jan;30(1):211–216. pmid:10707306
- 91. Fish JM. Race and intelligence: separating science from myth. [place unknown]: Taylor & Francis Group; 2013.
- 92. John LK, Loewenstein G, Prelec D. Measuring the prevalence of questionable research practices with incentives for truth telling. Psychol Sci. 2012;23(5):524–532. pmid:22508865
- 93. Simmons JP, Nelson LD, Simonsohn U. False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychol Sci. 2011;22(11):1359–1366. pmid:22006061
- 94. Logg JM, Dorison CA. Pre-registration: weighing costs and benefits for researchers. Organ Behav Hum Decis Process. 2021 Nov;167:18–27. https://doi.org/10.1016/j.obhdp.2021.05.006
- 95. Pham MT, Oh TT. On not confusing the tree of trustworthy statistics with the greater forest of good science: a comment on Simmons et al.’s perspective on preregistration. J Consum Psychol. 2020 Dec 4. https://doi.org/10.1002/jcpy.1213
- 96. Chambers C. What’s next for registered reports? Nature. 2019 Sep 10;573(7773):187–189. pmid:31506624
- 97. Henderson EL, Chambers CD. Ten simple rules for writing a Registered Report. PLoS Comput Biol. 2022 Oct 27;18(10):e1010571. pmid:36301802
- 98. Paluck EL, Porat R, Clark CS, Green DP. Prejudice reduction: progress and challenges. Annu Rev Psychol. 2021 Jan 4;72(1):533–560. pmid:32928061
- 99. Wilkinson MD, Dumontier M, Aalbersberg IJ, Appleton G, Axton M, Baak A, et al. The FAIR Guiding Principles for scientific data management and stewardship. Sci Data. 2016 Mar 15;3(1):1–9. pmid:26978244
- 100. Martinez JE, Paluck EL. Quantifying idiosyncratic and shared contributions to judgment. Behav Res Methods. 2020;52:1428–1444. pmid:31898288
- 101. Sen M, Wasow O. Race as a bundle of sticks: designs that estimate effects of seemingly immutable characteristics. Annu Rev Polit Sci. 2016 May 11;19(1):499–522. https://doi.org/10.1146/annurev-polisci-032015-010015
- 102. Petrie KJ, MacKrill K, Derksen C, Dalbeth N. An illness by any other name: the effect of renaming gout on illness and treatment perceptions. Health Psychol. 2018 Jan;37(1):37–41. pmid:28836797
- 103. Nadal KL, Whitman CN, Davis LS, Erazo T, Davidoff KC. Microaggressions toward lesbian, gay, bisexual, transgender, queer, and genderqueer people: a review of the literature. J Sex Res. 2016 Mar 11;53(4–5):488–508. pmid:26966779
- 104. Ashley F. A critical commentary on ‘rapid-onset gender dysphoria’. Sociol Rev. 2020 Jul;68(4):779–799. https://doi.org/10.1177/0038026120934693
- 105. Coalition for the Advancement & Application of Psychological Science (CAAPS). CAAPS Position Statement on Rapid Onset Gender Dysphoria (ROGD) [Internet]. 2022 [cited 2022 Sep 1]. Available from: https://www.caaps.co/rogd-statement.
- 106. Restar AJ. Methodological critique of Littman’s (2018) parental-respondents accounts of “rapid-onset gender dysphoria”. Arch Sex Behav. 2019 Apr 22;49(1):61–66. pmid:31011991
- 107. Bauer GR, Lawson ML, Metzger DL. Do clinical data from transgender adolescents support the phenomenon of “rapid onset gender dysphoria”? J Pediatr. 2022 Apr;243:224–227. https://doi.org/10.1016/j.jpeds.2021.11.020
- 108. Vincent BW. Studying trans: recommendations for ethical recruitment and collaboration with transgender participants in academic research. Psychol Sex. 2018 Jan 30;9(2):102–116. https://doi.org/10.1080/19419899.2018.1434558
- 109. Matsick JL, Kruk M, Palmer L, Layland EK, Salomaa AC. Extending the social category label effect to stigmatized groups: lesbian and gay people’s reactions to “homosexual” as a label. J Soc Polit Psychol. 2022 Aug 15;10(1):369–390. https://doi.org/10.5964/jspp.6823
- 110. Merolla J, Ramakrishnan SK, Haynes C. “Illegal,” “undocumented,” or “unauthorized”: equivalency frames, issue frames, and public opinion on immigration. Perspect Politics. 2013 Sep;11(3):789–807. https://doi.org/10.1017/s1537592713002077
- 111. Thomas DR. Feedback from research participants: are member checks useful in qualitative research? Qual Res Psychol. 2016 Aug 2;14(1):23–41. https://doi.org/10.1080/14780887.2016.1219435
- 112. Bornmann L. Scientific peer review. Annu Rev Inf Sci Technol. 2011;45(1):197–245. https://doi.org/10.1002/aris.2011.1440450112
- 113. DiDomenico RJ, Baker WL, Haines ST. Improving peer review: what reviewers can do. Am J Health Syst Pharm. 2017 Dec 15;74(24):2080–2084. pmid:29074482
- 114. D’Arcy A, Salmons J. Peer review in linguistics journals: best practices and emerging standards. Language. 2021;97(4):e383–e407. https://doi.org/10.1353/lan.2021.0076
- 115. Pickler RH, Munro CL, Likis FE. Addressing racism in editorial practices. Nurse Author Ed. 2020 Dec;30(4):38–40. https://doi.org/10.1111/nae2.11
- 116. Teixeira da Silva JA, Al-Khatib A. Should authors be requested to suggest peer reviewers? Sci Eng Ethics. 2017 Feb 2;24(1):275–285. pmid:28155093
- 117. Bauer PJ. Expanding the reach of psychological science. Psychol Sci. 2019 Dec 18;31(1):3–5. https://doi.org/10.1177/0956797619898664
- 118. Spitzer RL. Can some gay men and lesbians change their sexual orientation? 200 participants reporting a change from homosexual to heterosexual orientation. Arch Sex Behav. 2003 Oct;32:403–417. pmid:14567650
- 119. Spitzer RL. Spitzer reassesses his 2003 study of reparative therapy of homosexuality. Arch Sex Behav. 2012 May 24;41(4):757. pmid:22622659
- 120. Chadwick SB, van Anders SM. Do women’s orgasms function as a masculinity achievement for men? J Sex Res. 2017 Feb 23;54(9):1141–1152. pmid:28276934
- 121. Chadwick SB. When the Media Twists Up Your Feminism and Spits it Out: A Reflection on Spitting Back [Internet]. Standpoints. [cited 2022 Sep 1]. Available from: https://feministvoices.com/standpoints/dispatches-from-the-unlikeliest-of-labs-3.
- 122. Mehra MR, Desai SS, Ruschitzka F, Patel AN. RETRACTED: Hydroxychloroquine or chloroquine with or without a macrolide for treatment of COVID-19: a multinational registry analysis. Lancet. 2020 May. pmid:32450107
- 123. Mehra MR, Ruschitzka F, Patel AN. Retraction-Hydroxychloroquine or chloroquine with or without a macrolide for treatment of COVID-19: a multinational registry analysis. Lancet. 2020 Jun 13;395(10240):1820. pmid:32511943
- 124. Berenbaum MR. On zombies, struldbrugs, and other horrors of the scientific literature. Proc Natl Acad Sci U S A. 2021 Jul 30;118(32):e2111924118. pmid:34330868
- 125. Hosseini M, Hilhorst M, de Beaufort I, Fanelli D. Doing the right thing: a qualitative investigation of retractions due to unintentional error. Sci Eng Ethics. 2017 Mar 20;24(1):189–206. pmid:28321689
- 126. Vuong Q. The limitations of retraction notices and the heroic acts of authors who correct the scholarly record: an analysis of retractions of papers published from 1975 to 2019. Learn Publ. 2019 Dec 26;33(2):119–130. https://doi.org/10.1002/leap.1282
- 127. Motta M, Stecula D. Quantifying the effect of Wakefield et al. (1998) on skepticism about MMR vaccine safety in the U.S. PLoS ONE. 2021 Aug 19;16(8):e0256395. pmid:34411172
- 128. De keersmaecker J, Roets A. ‘Fake news’: Incorrect, but hard to correct. The role of cognitive ability on the impact of false information on social impressions. Intelligence. 2017 Nov;65:107–110. https://doi.org/10.1016/j.intell.2017.10.005
- 129. Honig B, Lampel J, Siegel D, Drnevich P. Ethics in the production and dissemination of management research: institutional failure or individual fallibility? J Manag Stud. 2013 Oct 10;51(1):118–142. https://doi.org/10.1111/joms.12056
- 130. Kelly MP, Martin N, Dillenburger K, Kelly AN, Miller MM. Spreading the news: history, successes, challenges and the ethics of effective dissemination. Behav Anal Pract. 2018 Apr 20;12(2):440–451. pmid:31976252
- 131. Milton CL. Ethics of scholarly collaboration. Nurs Sci Q. 2019 Sep 12;32(4):276–277. pmid:31514619