
Invitation appeals and STEM academic scientists research participation: Findings from six survey experiments

  • Tipeng Chen ,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Writing – original draft, Writing – review & editing

    tpchen@asu.edu (TC); ericwelch@asu.edu (EW)

    Affiliation Center for Science, Technology and Environmental Policy Studies, School of Public Affairs, Arizona State University, Phoenix, Arizona, United States of America

  • Timothy P. Johnson,

    Roles Conceptualization, Methodology, Writing – original draft, Writing – review & editing

    Affiliation Department of Public Policy, Management, and Analytics, University of Illinois at Chicago, Chicago, Illinois, United States of America

  • Jinghuan Ma,

    Roles Investigation, Project administration, Writing – review & editing

    Affiliation Center for Science, Technology and Environmental Policy Studies, School of Public Affairs, Arizona State University, Phoenix, Arizona, United States of America

  • Ashlee Frandell,

    Roles Investigation, Project administration, Writing – review & editing

    Affiliation School of Public Policy and Leadership, University of Nevada Las Vegas, Las Vegas, Nevada, United States of America

  • Lesley Michalegko,

    Roles Investigation, Project administration, Writing – review & editing

    Affiliation Center for Science, Technology and Environmental Policy Studies, School of Public Affairs, Arizona State University, Phoenix, Arizona, United States of America

  • Eric W. Welch

    Roles Conceptualization, Funding acquisition, Writing – review & editing

    tpchen@asu.edu (TC); ericwelch@asu.edu (EW)

    Affiliation Center for Science, Technology and Environmental Policy Studies, School of Public Affairs, Arizona State University, Phoenix, Arizona, United States of America

Abstract

Survey research is a primary method used to investigate the opinions, perceptions and behaviors of academic scientists. However, little is known about the most successful appeal strategies for eliciting survey participation from these busy, highly educated professionals. Drawing on leverage-salience theory, this study examines the impacts of two sets of invitation appeals—information and representation appeals—on survey response rates among academic scientists in four STEM fields employed at U.S. R1 universities. Findings from six randomized experiments show that the effectiveness of both sets of invitation appeals is mixed and context-dependent, varying based on the polarization and relevance of survey topics, STEM academic scientists’ career stage, and their prior interactions with survey administrators. Specifically, self-representation appeals are most effective for polarized topics when recipients have low community affiliation. Less detailed information appeals are more successful when asking about low relevance topics, particularly for recipients with greater demands on their time, while more detailed information is effective for highly relevant and polarized topics. Additionally, invitations containing more detailed information are effective for first-time recipients in survey panels. This complexity reinforces the importance of designing effective outreach strategies to account for survey topics and recipient characteristics.

Introduction

Surveys have long been utilized to study the community of academic scientists in science, technology, engineering, and mathematics (STEM) fields, a practice that dates back more than half a century [1–3]. Research in this area has explored a wide range of topics relevant to STEM academic scientists, from their personal lives to their professional activities and expert opinions. Scholars have examined the dynamic motivations and beliefs driving STEM academic scientists’ behaviors and opinions [4–6], the socioeconomic determinants of STEM academic scientists’ career choices, discovery, and innovation activities [7–10], their various professional networks (e.g., collaboration, competition, and mentorship) [11,12], and their university/work environments (e.g., representation, inclusion, and equity) [13–15].

Surveying STEM academic scientists also holds considerable promise for enhancing science communication. The traditional deficit model of science communication assumes that scientific knowledge flows one-way from experts to the public. Under this model, communication failures—especially when the public distrusts science or holds “anti-science” views—are often attributed to media misinterpretation and public ignorance [16,17]. However, by aggregating the opinions of STEM academic scientists—those with the most widely recognized expertise in their fields—surveys provide a revised approach for science communication [18]. Survey administrators publish scientific opinions in a more accessible and understandable format for public authorities and the public through digital platforms [19], bypassing social media’s potential role in misinterpreting science and polarizing audience views on science [20,21]. Institutions such as the Pew Research Center and the National Center for Science and Engineering Statistics (NCSES) routinely conduct surveys to synthesize and publish the views of leading STEM academic scientists [22–25]. These efforts enhance science communication by providing valuable assessments, insights, and consultations that inform policy decisions and influence individual behavioral changes [18,26–28].

Surveys further foster a two-way dialogue by recognizing STEM academic scientists—one of the most educated subgroups of the public—as both contributors to and recipients of scientific knowledge. By capturing diverse and, at times, divided perspectives within the STEM academic community, surveys contribute to constructive public debate—especially when knowledge about reality is uncertain, underdeveloped, and/or the issues are politicized [2,29,30]. Examples include contentious topics such as climate change [31,32], emerging biotechnology [28,33], women’s abortion rights [34], and vaccine safety [35]. By presenting diverse scientific opinions, surveys help structure information environments where the public, policymakers, and the private sector can carefully consider multiple sides of a controversial issue [36,37] and researchers have opportunities to refine their work for a broader social consensus [38,39].

In the face of declining survey response rates and politicization of science communication [4,30,40,41], understanding how to effectively engage STEM academic scientists in survey research and tailor surveys specifically to them becomes important. Through a series of online survey experiments, this study examines the effectiveness of two sets of appeal strategies designed to increase response rates among STEM academic scientists. The study focuses on those employed by Carnegie-designated Research Extensive and Intensive (R1) universities in the United States (U.S.) working in four STEM fields: biology, civil and environmental engineering, geography, and public health. One set of appeals focuses on the level of information provided to STEM academic scientists at the time they are invited to participate. Another set of appeals varies representation, with one appeal condition emphasizing participation as representative of the scientific community and another emphasizing the importance of expressing personal opinions. This study contributes to our understanding of survey appeal strategies that may influence STEM academic scientists’ willingness to participate in online survey research.

Literature, theory, and hypothesis

Initial contact and effective communication are important to elicit cooperation in web surveys [42–44]. Low response rates can reduce effective sample sizes, which limits statistical power and increases the risk of nonresponse bias [45]. For a number of years, researchers have been examining how to better elicit respondent cooperation with online surveys by testing various strategies for developing effective communications [46], such as offering monetary incentives [47], designing persuasive invitation messages [48], and sending optimal notification and follow-up reminders [49]. However, these design strategies may not all be applicable to STEM academic scientists. For example, monetary incentives are known to be less effective for highly educated populations [50]. Due to their unique characteristics, STEM academic scientists may exhibit survey participation behaviors that differ considerably from those of the general population, presenting both opportunities and challenges in survey design and implementation.

In terms of opportunities, response rates of STEM academic scientists tend to be higher than those of the general public due to several factors. First, STEM academic scientists are generally prosocial and sympathetic to improving public knowledge about science and scientific activities [38,51]. There is growing encouragement for STEM academic scientists to share their expertise and opinions beyond their institutions through active engagement in science communication within the public domain [17,52–54]. Second, STEM academic scientists may be more familiar with survey methodologies and their rights as human research subjects and have a clearer understanding of the risks and benefits associated with participating in surveys. Their digital literacy gives them easier access to online surveys, and they may empathize with fellow academic scientists and believe in reciprocity to help other researchers [42]. Third, data from STEM academic scientist surveys can be linked with Big Data relevant to science behaviors (e.g., bibliometric data) to produce research outputs that would be impossible with either data source individually [55,56].

Meanwhile, some work-related characteristics of STEM academic scientists raise challenges for researchers when recruiting them for surveys. For one, the growing demand for STEM academic scientists to share their opinions has potentially led to an increasing number of survey invitations directed toward them [23–25]. This surge in survey requests contributes to increased survey fatigue among STEM academic scientists [57]. Moreover, given their various roles and obligations (e.g., research, teaching, mentoring, and service), STEM academic scientists face email fatigue due to the constant influx of emails from both internal institutions and various external sources [58,59], making them less likely to notice or be interested in online survey invitations. Also, compared with the general public, STEM academic scientists are highly educated professionals with domain-specific expertise and experience, making them relatively small in number, exceptionally busy, and less approachable [44,46]. Finally, STEM academic scientists are a heterogeneous population, differing in academic positions (e.g., tenured, tenure-track, clinical, and teaching), rank (e.g., assistant, associate, and full), professional experience, and disciplinary backgrounds (e.g., biology, chemistry, etc.). Although biographical and demographic data on STEM academic scientists are publicly available, allowing researchers to compare and weight observed respondents against a reliable sampling frame, such a heterogeneous group makes it difficult for researchers to obtain a representative sample of this target population.

Dillman and colleagues emphasize the importance of Tailored Design methods to customize survey procedures and instruments to minimize error and boost participation according to the characteristics of the target population [60]. This approach considers factors such as survey topic, sponsor, available resources, and timelines. Following the ideas of Tailored Design, some research investigates strategies to improve response rates specifically among educated professional groups such as STEM academic scientists, physicians, and politicians, particularly in online surveys [42,44,61,62]. These strategies typically fall into three categories: incentives, notifications, and appeals.

Incentives for survey participation can be monetary or nonmonetary, though their effectiveness varies. Monetary incentives differ in amount (ranging from under $2 to more than $50), format (cash or lottery), and timing (prepaid or post-survey completion) [62,63]. Evidence suggests that prepaid cash incentives are more effective at improving response rates than other formats, and higher amounts generally yield better participation [61–63]. Nonmonetary incentives, including small tokens such as pens, have shown limited effectiveness for increasing response rates among professionals [61,62]. While monetary incentives sometimes enhance response rates for these educated professionals, they also pose challenges [64]. Some public sector institutions prohibit their employees from accepting compensation, and small monetary amounts may be perceived as insulting [44,65]. Use of incentives also obviously increases the cost of collecting survey data. These challenges highlight the tradeoff of using incentives to boost response rates while considering ethical and cultural sensitivities for educated professionals such as STEM academic scientists.

A second common strategy to boost survey participation of educated professionals is sending notifications to sampled individuals, including pre-survey alert letters and follow-up reminders. Alert letters inform participants about the survey’s purpose, rationale, sponsoring institution, and expected delivery date [66]. For example, Frandell and colleagues found that pre-notifications increased response rates by about 3% in experiments conducted across three surveys targeting STEM academic scientists at U.S. R1 universities [42]. Similarly, Hart and colleagues reported a 4% increase in survey participation among clinical professionals following pre-notifications [67]. Follow-up reminders are another effective notification method, involving repeated contacts with non-respondents to encourage their participation. Reminders often include details about the survey’s closing date and solutions for technical issues with electronic surveys. Guise and colleagues, in a non-experimental study among clinical professionals, found that repeated follow-ups could improve response rates [68]. Follow-up reminders are particularly helpful for highly educated professionals who may overlook initial invitations due to high email volumes or who become more available after the initial invitation period. While notifications are effective, they also present challenges [69]. Nonresponse may reflect implicit refusals, and excessive reminders can lead to survey fatigue, annoyance and/or a hostile survey climate, potentially reducing response quality for current and future surveys. Additionally, repeated follow-ups increase survey administration costs, and the marginal gains in response rates diminish with each subsequent reminder.

Compared to incentives and notifications, appeals offer a more cost-effective and labor-efficient strategy. Appeals act as customizable nudges that can be tailored to respondents’ characteristics, the survey’s purpose, and its topic. Research has examined the effects of multiple types of appeals that have been made as part of survey invitations. The appeals examined emphasize the interests of multiple parties, including the respondent, the greater public, and in a few cases, the investigator [70–73]. Appeals highlighting the interests of individual respondents are labeled authority [74,75], egoistic [72], and importance of respondent [71] appeals. In contrast, those appeals representing the interests of the broader public are labeled affiliation [75], altruistic [72], science [76], and social utility [71,77] appeals. The evidence available is mixed, with some experiments suggesting egoistic appeals are most effective [70,71,78], others finding altruistic [76,79] or help-the-researcher appeals [80] to be more effective, and still others documenting no differences across appeal conditions [46,71,72,74,75,77]. Finally, some find no main effects of appeal type but report significant interactions between appeal and other variables, including the type of organization making the appeal [81] and the cultural background of respondents [75].

Although prior studies have explored the use of appeals to increase survey participation among educated professionals, such as teachers [75], physicians [74], and nurses [71], very little research exists that evaluates the relative effectiveness of various appeal types as part of surveys focusing on STEM academic scientists. These scientists possess unique work characteristics, such as time flexibility, familiarity with research ethics, appreciation of research activities, and understanding of survey methodology, which may shape their responses to survey invitations differently. Thus, it is important to understand how different participation appeals influence STEM academic scientists’ cooperation. Recognizing that traditional appeals—egoistic, altruistic, and help-the-sponsor—often fail to yield high response rates, this study investigates two sets of alternative appeals: (1) appeals emphasizing information detail and (2) appeals emphasizing type of representation.

Grounded in leverage-salience theory [43,82], we develop hypotheses about how these two sets of appeals may influence STEM academic scientists’ propensity to participate in surveys. Salience is a construct referring to the extent to which specific attributes of a survey are perceived as motivating by potential survey respondents. Leverage-salience theory posits that individuals are more inclined to participate in surveys when they perceive certain attributes of the survey invitation as important (salient). Salience varies based on the targeted population’s characteristics; for example, lottery incentives may appeal to low-income individuals [83], while altruistic appeals resonate more with prosocial individuals [48]. By emphasizing survey attributes that align with recipients’ specific concerns and preferences, their likelihood of participation in surveys increases.

The first set of appeals operationalizes the salience of information by varying the amount of detail provided about the survey topic as part of the invitation. We hypothesize that invitations with less information will be more salient with STEM academic scientists and effective for increasing response rates for several reasons. First, STEM academic scientists are accustomed to concise, precise communication in academic settings. Even though STEM academic scientists can process detailed information efficiently, excessive details perceived as providing marginal benefit may annoy scientists, decreasing their propensity to respond to the survey. Lengthy invitations with unnecessary details further increase survey complexity and ambiguity, reducing readability and potentially leading to concern about the quality of the survey and the qualifications of the research team. Second, while additional details may elaborate on the survey topic, they do not necessarily enhance its salience. Scientists can understand the purpose and significance of a survey from brief descriptions. Third, scientists are time-sensitive and thus value their time highly. Detailed invitations require more time to read and process, which may deter participation, especially among STEM academic scientists who often experience email fatigue [58,59]. In summary, the salience for STEM academic scientists lies in the clarity and brevity of the invitation, not the volume of information.

Hypothesis 1: STEM academic scientists receiving less detailed survey invitations are more likely to respond than those receiving more detailed invitations.

The second set of experiments uses an integrative conceptualization of appeal design, operationalizing representation salience through the two types of representation appeals most often examined in the literature. STEM academic scientists, on the one hand, might be invited to participate via appeals to self-representation that emphasize the importance of their professional expertise and individual voice. Such appeals align with egoistic appeals that emphasize individual respondent authority, knowledge, and expertise [71,74,75,79–81]. Alternatively, appeals might be made to respondents as being representatives of the greater scientific community, a community-representation appeal. These types of appeals build on social identity theory, which prioritizes community identity as a key motivator of prosocial behaviors, such as survey participation, that benefit the community [84]. These types of messages are similar to the emphasis placed on social and community benefits of participation typically found in altruistic appeals [70,77,79].

As noted earlier, findings from the literature on invitation appeals have been mixed, with some results suggesting that the salience of representation appeals is context-based [75,81], depending on other characteristics of survey design and/or the targeted population. Thus, the characteristics of the STEM academic scientific community also matter for any observed response differences between self-representation and community representation appeals. We hypothesize that self-representation appeals are more salient with STEM academic scientists than community-representation appeals. STEM academic scientists are more responsive to self-representation appeals, as they are generally more comfortable expressing their personal opinions than representing or speaking for the greater scientific community where disagreements are common. The scientific community is inherently heterogeneous, varying by disciplines, career stages, research activities, capacities, and research endowments [85]. This heterogeneity means that the behaviors, research processes, and personal opinions of individual scientists often fail to represent the broader scientific community. Furthermore, scientific debates on contentious issues are both common and openly conducted. For instance, epidemiologists have engaged in transparent and vigorous discussions about COVID-19 social distancing policies and vaccine distribution on social media [86]. Most scientific arguments require further scrutiny, and issues with broad consensus among experts remain rare [36]. Science communication, grounded in critical objectivity, prioritizes evidence derived from rigorous scientific methods—such as replicable experiments and falsifiable hypotheses—over authoritative or popular opinions [87,88]. Because academic training emphasizes critical objectivity, STEM academic scientists tend to approach generalizations of their opinions with caution. 
They may perceive community-representation appeals as potentially introducing bias into survey results, leading to hesitation in responding to such appeals. Thus, STEM academic scientists’ preference for individual expression over representing others likely makes them more inclined to respond to invitations framed with appeals that emphasize their personal perspective.

Hypothesis 2: STEM academic scientists receiving self-representation appeals are more likely to respond than those receiving community-representation appeals.

Method

SciOPS and sampling strategy

This study draws on six survey experiments conducted by the SciOPS (Scientist Opinion Panel Survey) survey panel, a science communication platform developed by the Center for Science, Technology and Environmental Policy Studies at Arizona State University. SciOPS, consisting of a survey panel of randomly selected academic scientists in U.S. R1 universities, connects society with the scientific community by collecting and sharing the broadly representative opinions of U.S.-based STEM academic scientists on timely, critical science and technology issues. Further information on SciOPS surveys is available at: https://www.sci-ops.org/.

The survey experiments were embedded within six SciOPS surveys covering various topics: (1) COVID-19 Survey Wave 2; (2) COVID-19 Survey Wave 4; (3) Public Trust in Science Survey; (4) Survey of Scientists’ Perceptions of Surveys; (5) Women’s Health Survey; and (6) Vaccine Survey. All the above surveys followed a two-stage sampling and administration process. Table 1 summarizes the key features of these surveys.

First stage.

Since 2020, SciOPS has built a pilot sample frame of approximately 12,000 academic scientists in four STEM fields: biology, geography, civil and environmental engineering, and public health. SciOPS used probability sampling to randomly select R1 universities across the U.S. (see S1 Table in Supporting Information). The research team then collected the names and contact information of tenured and tenure-track faculty (assistant, associate, and full professors) and PhD-holding non-tenure-track researchers from publicly available websites of sample departments. In 2021, two groups of random samples from this pilot sample frame were drawn separately for three appeal experiments embedded in two COVID-19-related surveys (Waves 2 and 4) and a Public Trust in Science Survey.

Second stage.

In 2022, SciOPS launched its first panel member recruitment campaign by sending four rounds of invitations to STEM academic scientists in the pilot sample frame. Panel members were informed they would be invited to complete two surveys annually and would receive survey results and an annual certificate of recognition. The campaign resulted in 986 eligible academic scientists joining the panel, yielding a recruitment rate (RECR) of 7.7% [89]. Following recruitment, we conducted three surveys with SciOPS panel members. The Survey of Scientists’ Perceptions of Surveys and Women’s Health Survey each used a random sample of all panel members. The Vaccine Survey targeted all biologists and public health faculty within the SciOPS panel. Two information appeal experiments and one representation appeal experiment were conducted as part of these surveys.

All surveys prior to 2022 were conducted using Sawtooth Software®, while those from 2022 onward used the Nubis® software system. Surveys were administered online in English. Table 1 shows that surveys limited to SciOPS panel members achieved higher response rates (around 35%) than those including non-panel members (below 20%), and survey fatigue was evident in COVID-19 Survey Wave 4, which had a lower response rate than Wave 2 despite using the same sample frame. Fig 1 shows the timeline of SciOPS development and its six embedded survey experiments.

Randomized controlled trials design

Treatment condition.

The information appeal experiment tested the effect of varying the volume of information about survey topics in the invitation to potential respondents. This experiment was included in the Survey of Scientists’ Perceptions of Surveys and the Vaccine Survey. Invitations included one of three conditions: no information about the survey topic, some information (approximately 45 words), or detailed information (approximately 90 words). Fig 2 illustrates the operationalization of these conditions in the Survey of Scientists’ Perceptions of Surveys while Fig 3 presents the corresponding operationalization in the Vaccine Survey. The complete set of the email invitations used in these experiments is provided in Supporting Information as S1 and S2 Appendices.

Fig 2. Information appeal experiment in survey of scientists’ perceptions of surveys.

https://doi.org/10.1371/journal.pone.0326331.g002

The representation appeal experiment tested the effectiveness of self-representation vs. community-representation appeals. This experiment was conducted in the COVID-19 Survey Wave 2, the Public Trust in Science Survey, and the Women’s Health Survey. In the self-representation condition, invitations encouraged STEM academic scientists to participate in surveys by explicitly asking them to “express your personal views,” whereas the community-representation condition explicitly encouraged them to “represent the scientific community.” Figs 4–6 show the operationalization in each survey. Full invitation emails for each survey are available in S3–S5 Appendices in Supporting Information.

Fig 4. Representation appeal experiment in COVID-19 Survey Wave 2.

https://doi.org/10.1371/journal.pone.0326331.g004

Fig 5. Representation appeal experiment in public trust in science survey.

https://doi.org/10.1371/journal.pone.0326331.g005

Fig 6. Representation appeal experiment Women’s Health Survey.

https://doi.org/10.1371/journal.pone.0326331.g006

In the COVID-19 Survey Wave 4, a two-by-two experimental design was implemented, combining the two representation appeal conditions with two levels of information detail (none vs. some information). Fig 7 shows the experimental design, and full invitation emails are included in S6 Appendix (Fig 8).

Randomization.

STEM academic scientists in the sample were randomly assigned to experimental conditions using random numbers generated in MS Excel. For the information appeal experiments, one-third of the sample was assigned to each of the three conditions (no information, some information, detailed information). For the representation appeal experiments, the sample was evenly split between self-representation and community-representation appeals. In the two-by-two design for the COVID-19 Survey Wave 4, the sample was divided equally among the four condition combinations.
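The equal-split assignment described above can be sketched in a few lines. This is an illustrative reimplementation in Python (the study used random numbers generated in MS Excel); the sample size, condition labels, and seed are hypothetical:

```python
import random

def assign_conditions(sample_ids, conditions, seed=42):
    """Shuffle the sample, then deal IDs into equally sized condition groups."""
    rng = random.Random(seed)
    ids = list(sample_ids)
    rng.shuffle(ids)  # random order determines group composition
    return {sid: conditions[i % len(conditions)] for i, sid in enumerate(ids)}

# Example: a three-arm information appeal experiment with 900 sampled scientists
groups = assign_conditions(range(900), ["none", "some", "detailed"])
counts = {c: sum(1 for v in groups.values() if v == c)
          for c in ["none", "some", "detailed"]}
```

Shuffling before dealing guarantees exact one-third splits while keeping group membership random, matching the equal-allocation design reported for the three-condition experiments.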

Measurement.

The outcome variable in each experiment is the survey response rate, defined as the proportion of responses within the eligible experimental sample. Responses included both completed and partial responses (coded as “1”), while nonresponses (coded as “0”) included explicit refusals, surveys opened with no answers, or no reply to the invitation. The eligible experimental sample excluded ineligible STEM academic scientists (e.g., deceased, retired, or out of academia) and those unreachable during the survey period (e.g., on rotations or leave).
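The outcome coding above can be expressed compactly. This is a minimal sketch; the disposition labels are hypothetical stand-ins, not SciOPS’s internal codes:

```python
# Completes and partials count as responses (coded 1); refusals, surveys
# opened with no answers, and no reply count as nonresponse (coded 0);
# ineligible and unreachable cases are dropped from the denominator.
RESPONSE = {"complete", "partial"}
EXCLUDED = {"ineligible", "unreachable"}

def response_rate(dispositions):
    """Proportion of responses within the eligible experimental sample."""
    eligible = [d for d in dispositions if d not in EXCLUDED]
    responses = sum(1 for d in eligible if d in RESPONSE)
    return responses / len(eligible)

rate = response_rate(["complete", "partial", "refusal", "no_reply", "ineligible"])
# the ineligible case is excluded, so the rate is 2 responses / 4 eligible = 0.5
```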

To assess randomization quality, we conducted balance tests for all six experiments using Pearson’s Chi-squared test, accounting for demographic variables and prior SciOPS survey experience. Balance tests showed no significant differences in demographic variables (gender, academic field, academic rank) across treatment groups at a 0.05 significance level, confirming the experimental conditions were well-balanced. Additionally, some sampled scientists in the COVID-19 Survey Wave 4 had consented to become SciOPS panel members in 2022, indicating their greater willingness to represent the scientific community and contribute to science communication, but no significant differences were found in the proportion of SciOPS panel members across experimental groups. Similarly, for the Vaccine Survey, while some respondents had participated in prior SciOPS surveys, no significant differences were found in this subgroup across experimental conditions. Detailed results are available in S2–S4 Tables in the Supporting Information.
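A balance test of this kind cross-tabulates a covariate against assigned condition and applies Pearson’s Chi-squared test. The sketch below uses made-up counts for demonstration, not the study’s data:

```python
from scipy.stats import chi2_contingency

# Illustrative balance check: a binary demographic variable (rows) by the
# three information-appeal conditions (columns). Counts are hypothetical.
table = [[150, 148, 152],
         [150, 152, 148]]

chi2, p_value, dof, expected = chi2_contingency(table)
# dof = (rows - 1) * (cols - 1) = 2; a p-value above 0.05 is consistent
# with successful randomization on this covariate
```

In practice one such test is run per covariate (gender, field, rank, prior panel membership) and per experiment, as summarized in S2–S4 Tables.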

Ethical considerations

A consent form appeared on the first page of each survey instrument, informing participants of their rights to participate voluntarily or decline at any time. Participants were asked to indicate their consent by clicking to begin the survey. We informed, obtained, and documented consent through emails and the survey software systems. Survey participation was recorded as consent, while refusals, ineligibility, and non-contact were also documented through email communications. Given the nature of online surveys, the absence of foreseeable risks to participants, and the characteristics of our sample—busy academic scientists geographically distant from our institution—obtaining written consent was time-consuming and not feasible. Instead, we employed implied consent, a widely accepted and ethically appropriate approach for minimal-risk online survey research [90]. Research has found no substantial differences in how well online versus written consent informs participants [91], and requiring signed consent forms may reduce response rates and deter prospective participants who would otherwise be willing to complete the survey [92,93]. Study procedures were approved by the Arizona State University Institutional Review Board (ASU Study #00012476).

Analytical strategies

Our statistical analysis follows a three-stage approach. First, for surveys with three experimental conditions, we conducted one-way analysis of variance with pairwise comparisons of response rates using the Tukey HSD test. For surveys with two experimental conditions, we used t-tests to compare response rates across conditions. Second, we used logistic regression to assess the effect of experimental conditions on the probability of survey participation, controlling for covariates. Third, we performed subgroup analyses to examine response rate differences by academic rank and prior interactions with SciOPS, including previous survey invitations and panel membership.

These subgroup analyses were motivated by several considerations. First, STEM scientists at different academic ranks may face varying competition, promotion pressures, and time availability, potentially influencing their responses to information and representation appeals. Second, prior SciOPS interactions could shape respondents’ sensitivity to these appeals. Those invited to previous surveys may have pre-existing perceptions of SciOPS, affecting their responses differently from those of first-time participants. Additionally, SciOPS panel members, being more prosocial and committed to science communication, may have different sensitivity to survey appeals.

Results

Information appeal experiment results and discussion

Hypothesis 1 posits that lengthy survey invitation emails with more information discourage STEM academic scientists from participating in surveys compared to invitations with less information. Table 2 presents the response rates for each experimental condition across surveys involving information appeals, revealing mixed results. In the Survey of Scientists’ Perceptions of Surveys, invitation emails without any information significantly increased response rates compared to emails with the greatest amount of information about the survey topic (19.8 percentage points higher, p < 0.01). In contrast, the COVID-19 Survey Wave 4 shows the opposite trend: STEM academic scientists receiving emails with no information responded at lower rates than those receiving emails with some information, albeit at a borderline significance level (2.7 percentage points lower, p < 0.1).

The S5 Table in Supporting Information provides logistic regression results that align with these findings. In the Survey of Scientists’ Perceptions of Surveys (Model 1 in S5 Table), both treatments—some information and much information—significantly reduced the probability of academic scientists responding. Compared to emails without any information about the purpose of the survey, emails with some information were 9.6 percentage points less likely to motivate responses (p < 0.1), while emails with much information were also less likely to motivate responses (17.4 percentage points, p < 0.01). Conversely, in the COVID-19 Survey Wave 4 (Model 3), STEM academic scientists receiving emails with some information were 2.8 percentage points more likely to respond than those receiving emails with no information (p < 0.05). The salience of information details is inconsistent across different surveys, offering mixed support for Hypothesis 1. A brief and concise invitation email with minimal survey details does not consistently guarantee more responses. One possible explanation is that the salience of information depends on several factors, including the polarization of the survey topic, respondents’ career stage, and their prior experience with SciOPS. To explore these contingencies, we conducted subgroup analyses.

Table 3 further illustrates these subgroup discrepancies by rank. In the Survey of Scientists’ Perceptions of Surveys, assistant professors and full professors responded at significantly higher rates to emails without information compared to those with much information (24.4 and 22.7 percentage points higher, respectively, both at borderline p < 0.1). In the Vaccine Survey, invitation emails with much information obtained significantly higher response rates from non-tenure-track researchers (26.9 percentage points higher, p < 0.05), compared to emails providing some information. Additionally, associate professors responded more to emails with some information about the survey than to emails with no information (21.0 percentage points higher, p < 0.1). In the COVID-19 Survey Wave 4, emails providing some information resulted in significantly higher response rates from associate professors (5.4 percentage points higher, p < 0.05), compared to emails with no information.

The elaboration likelihood model posits that individual attitudinal change results from two distinct routes of persuasion (central and peripheral), depending on their motivation and ability to process a message [94]. The central route occurs when individuals are highly involved with the issues and possess the time and cognitive resources to thoughtfully consider the merits of the information presented in support of an advocacy. The peripheral route, by contrast, occurs when individuals have low involvement or limited processing capacity. Attitude shift is a result of a simple cue, such as brief arguments or an attractive information source rather than scrutinizing message content [95].

Table 3. Response rate by information appeals across subgroups of academic rank.

https://doi.org/10.1371/journal.pone.0326331.t003

In terms of personal relevance and involvement, the Survey of Scientists’ Perceptions of Surveys elicits a lower level of relevance from the STEM academic scientific community. It addresses issues that rarely impact STEM academic scientists’ daily work and attracts limited attention from both the STEM academic scientific community and the general public. In contrast, the Vaccine Survey and the COVID-19 Survey Wave 4 both address high-stakes, critical, and polarized issues that greatly affect both societal outcomes and the well-being of the STEM academic scientific profession, thus increasing the level of relevance.

Career stage shapes STEM academic scientists’ information-processing capacity. Assistant professors and full professors are often overwhelmed with time-sensitive research responsibilities and heavier survey burdens compared to associate professors and non-tenure-track researchers [96–98]. Assistant professors, under the pressure of the tenure clock in the early stages of their academic careers, face heavier workloads but have fewer resources for flexibility [96]. Full professors, on the other hand, juggle multiple tasks that demand significant time, including greater mentoring responsibilities, managing multiple research projects and grants, supervising laboratories, and engaging in professional services outside their institutions [97]. Conversely, non-tenure-track researchers and associate professors, with relatively more flexible time management and less survey burden, are more likely to have the capacity to engage with detailed invitation emails.

Taken together, these differences suggest that assistant and full professors are more likely to rely on the peripheral route to process survey requests. When the survey topic is less polarized and personally relevant––the Survey of Scientists’ Perceptions of Surveys––they are more responsive to concise, efficient invitations that reduce cognitive processing load. In contrast, non-tenure-track researchers and associate professors are more likely to perform central route processing. For more polarized and relevant topics such as the Vaccine Survey and the COVID-19 Survey Wave 4, they are more cautious about expressing opinions on contentious issues and utilize their cognitive resources to scrutinize detailed survey information to understand the study’s background and purpose before deciding to respond.

Table 4 shows that response rate differences across experimental conditions are also contingent on respondents’ prior communication experience with SciOPS. In the Vaccine Survey, the sample consisted entirely of SciOPS panel members. Invitation emails with much information induced significantly higher response rates from STEM academic scientists who had not previously participated in SciOPS surveys (28.1 percentage points higher, p < 0.01) compared to emails with some information. According to the central route of the elaboration likelihood model, individuals receiving SciOPS survey invitations for the first time may lack familiarity with SciOPS’s communication style and thus need to mobilize more information-processing capacity to comprehend the messages and assess their credibility. As a result, the effect of more detailed information is significant. Those who have previously responded to SciOPS surveys are likely to experience a familiarity effect: their prior exposure to similar communications enables them to rely on simple cues, such as a recognizable sender and branding, and to respond to the survey through a peripheral route. This result is consistent with previous findings about online survey panel members, which suggest that returning survey participants may overlook aspects of a survey request from a known source and not scrutinize the message in depth [99,100].

Table 4. Response rate by information appeals across subgroups of prior communication experience with SciOPS.

https://doi.org/10.1371/journal.pone.0326331.t004

For the COVID-19 Survey Wave 4, all sampled scientists had been invited to become SciOPS panel members in 2022, before this survey, but only some consented to join. This group of scientists demonstrates a strong commitment to supporting the scientific community and science communication through survey participation. Subgroup analysis shows that emails with some information resulted in significantly higher response rates from SciOPS panel members (15.7 percentage points higher, p < 0.1), compared to emails with no information. According to the central route of the elaboration likelihood model, SciOPS panel members perceive SciOPS surveys as more personally relevant than non-members do. Their stronger commitment and sense of relevance prompt them to consider more carefully the context and background of the surveys they participate in. Invitations with more information are more likely to attract their participation because they provide sufficient detail to evaluate the salience of the survey.

Representation appeal experiment results and discussion

Hypothesis 2 posits that invitation appeals emphasizing self-representation motivate STEM academic scientists to participate in surveys more effectively than emails encouraging them to represent the scientific community. Table 5 compares response rates across experimental conditions for surveys with representation appeals, also revealing mixed results. In the Women’s Health Survey, a self-representation appeal significantly increased response rates compared to a community-representation appeal, with a 9.5 percentage point higher response rate (p < 0.05). The S6 Table in Supporting Information presents logistic regression results, which show consistent findings while controlling for covariates. In the Women’s Health Survey (Model 6), STEM academic scientists receiving community-representation appeals were 9.7 percentage points less likely to respond to the survey compared to those receiving self-representation appeals (p < 0.05). The results from the Women’s Health Survey thus support Hypothesis 2.

Table 6 shows that in the Women’s Health Survey, non-tenure-track researchers responded at significantly higher rates to a self-representation appeal compared to a community-representation appeal (31.1 percentage points, p < 0.01). Table 7 shows that the effect of self-representation appeals on response rates is not contingent on respondents’ prior communication experience with SciOPS.

Table 6. Response rate by representation appeals across subgroups of academic rank.

https://doi.org/10.1371/journal.pone.0326331.t006

Table 7. Response Rate by representation appeals across subgroups of prior communication experience with SciOPS.

https://doi.org/10.1371/journal.pone.0326331.t007

The results of the representation appeal experiments suggest that STEM academic scientists’ inclination to represent their personal views versus those of the scientific community depends on the context in which they are asked to express their opinions. The effect is significant only in the Women’s Health Survey. This survey was conducted several months after the Supreme Court overturned Roe v. Wade, a time when women’s reproductive health was a polarizing and highly debated issue. In such a contentious environment, STEM academic scientists were more likely to express their personal views rather than represent the perspectives of the scientific community. Results from subgroup analysis further show that non-tenure-track researchers were more likely to represent themselves than academic scientists with tenure or on the tenure track.

This pattern is aligned with self-categorization theory [101], which suggests that individuals with weaker group identification are more likely to affirm their individual identity and disidentify from the group when the group is under threat [102]. Non-tenure-track researchers are low group identifiers due to their institutional marginalization and identity insecurity within the academic scientific community, where holding a tenure-track position is a strong marker of group membership. In a polarized environment, the academic scientific community is internally highly divided and externally threatened, particularly under public scrutiny over its predisposition on contentious social issues. For individuals with insecure or marginalized identities, affiliating with the academic scientific community can feel risky. Thus, non-tenure-track researchers are more comfortable with expressing their personal opinions rather than representing a potentially controversial group.

As time passed since the initial COVID-19 outbreak, the declining salience of this survey topic may have influenced the effectiveness of self-representation and community-representation appeals differently in the COVID-19 Wave 2 and Wave 4 surveys. However, a comparison of experimental results from the two waves of the COVID-19 survey (Tables 4–6) shows no significant differences in response rates between these appeals in either survey. This suggests that the effectiveness of self-representation and community-representation appeals was not dependent on changes in the salience of the COVID-19 survey topic. It is possible that by the time the COVID-19 Survey Wave 2 was conducted in the summer of 2021—more than a year after the outbreak—the topic salience may have already diminished for STEM academic scientists. By 2024, when the COVID-19 Survey Wave 4 was administered, any further change in topic salience was likely minimal.

Discussion

Findings from six experiments reveal no simple or uniform conclusion regarding whether the amount of information provided or the type of appeal made might be differentially effective in increasing academic STEM scientists’ survey response rates. Instead, the evidence highlights that both less and more information can lead to higher response rates under some conditions. Meanwhile, egoistic self-representation appeals do not always lead to significantly higher response rates compared to more altruistic community-representation appeals. These experimental conditions interact in complex and dynamic ways with survey response behavior. Table 8 summarizes the key findings across different surveys and subgroup analyses.

Table 8. Results summary for information appeal experiment.

https://doi.org/10.1371/journal.pone.0326331.t008

The heterogeneous effects of information and representation appeals on STEM academic scientists’ participation depend on survey topic, career stage, and prior interactions with survey institutions. For survey topics, appeals with less information tend to increase participation for less polarized or socially discussed topics, while appeals with more information are preferred for highly critical or politicized issues. Self-representation appeals are more effective for polarized and trending topics. Regarding career stages, STEM academic scientists with limited time availability (e.g., assistant professors) or high information burdens (e.g., full professors) are more likely to prefer short, concise invitation email messages. Weak group affiliation leads non-tenure-track researchers to prioritize self-representation when responding to contentious survey topics. Lastly, the relationship between recipients of survey invitations and senders influences the effectiveness of these appeals. For one, a familiarity effect often emerges in panel survey settings where recipients have received repeated survey requests from a known source [99,100]. Over time, as recipients become accustomed to the style and content of these invitations, they may rely less on the specific details within each request when deciding whether to participate. In such cases, the necessity for carefully crafted information and representation appeals may be diminished. For another, respondents committed to carefully expressing their opinions will require clear guidance and more details about the survey to evaluate their capacity and eligibility for participation.

Survey researchers are encouraged to tailor invitation emails on a case-by-case basis, as there is no one-size-fits-all formula for designing effective appeals. Strictly adhering to a specific level of detail or emphasizing a particular type of representation is unlikely to yield optimal results. Table 8 suggests that survey administrators should consider the configuration of survey topics and recipient characteristics when using information and representation appeals. First, for polarized topics, we recommend using self-representation appeals, particularly when recipients have low affiliation with the group and when the group faces external threats. Second, for topics with lower relevance to recipients, we suggest providing minimal information, particularly for survey recipients who lack time and capacity to engage deeply with the survey request. In contrast, for highly relevant and polarized topics, we recommend providing a higher level of detail to recipients with the capacity to process more information. Third, when administering surveys to a panel, we suggest sending a high-level informational invitation to recipients with strong commitment and a middle-level informational invitation to first-time recipients.

This study has several limitations. First, some nonrespondents may have made their decisions based solely on the email subject line. Due to ethical considerations and technical constraints, we could not track whether survey recipients opened the email and were exposed to the experimental treatments embedded in the email body. To address this, we manipulated the representation appeal condition in both the subject line and email content, ensuring exposure to experimental treatment regardless of whether the email was opened. Thus, the estimated effect of the representation appeal is not biased by the email open rate. In contrast, the information appeal condition could not be meaningfully conveyed in a brief subject line. It is possible that the unmanipulated subject line may have influenced survey recipients’ propensity to open email messages compared with a hypothetical subject line manipulating the information appeal. Nevertheless, it does not bias the estimated effect of the information appeal on response propensity, as all experimental groups received the same subject line and the rates at which emails are opened can be assumed to be randomly distributed across experimental groups.

Second, while we tested our hypotheses across multiple randomized experiments, the sample sizes varied significantly, ranging from several hundred to several thousand participants. The lack of statistically significant results in some experiments may be attributed to smaller sample sizes and limited statistical power. Third, STEM academic scientists’ personal interests in different survey topics may have influenced their attention to the content of the invitation email [99,103]. Although we used randomized trials and controlled for academic specialty in logistic regression models, we conducted an additional sensitivity check by examining field-specific response rates. Specifically, we analyzed responses to the Women’s Health Survey and the Vaccine Survey under the assumption that faculty in public health might have greater interest in these two survey topics. We found that public health faculty who received invitations with more information were significantly more likely to respond—by approximately 15 percentage points (p < 0.1), compared to those who received no information. However, we did not observe significant differences across other subgroups (see S7 Table in Supporting Information). These findings suggest that academic field may only partially capture topic interest. We recommend that future research develop more direct measures of personal interest in survey topics and explore how such relevance influences how recipients process and respond to survey invitations.

Despite these limitations, this study contributes to three lines of research. First, it is the first in the science communication literature to examine how invitation emails influence the participation of a unique population—STEM academic scientists—in survey research studies. As science communication becomes increasingly prominent in both society and academia, this work helps advance our understanding of available communication strategies to encourage STEM academic scientists to share their opinions and experiences in survey research. Unlike surveys targeting the general population, conducting surveys with STEM academic scientists is more challenging due to their time constraints and relatively small population size. Our study shows that designing tailored information and representation appeals can, under the right circumstances, provide a cost-effective approach to increasing STEM academic scientists’ willingness to participate in research.

Second, this study provides new evidence on how the content of survey invitation messages impacts response rates. While existing research has explored the effects of egoistic appeals, altruistic appeals, topic salience, and other messaging strategies [48,70,74,79,99], we investigate two novel appeals: the volume of information provided and the framing of representation as self or community. Our findings highlight more possibilities to customize survey invitations to suit targeted populations. Additionally, results show that the effectiveness of these appeals is influenced by factors such as survey topics, recipients’ prior interactions with the survey administrators, and their time availability and cognitive processing capacity. Our findings offer insights for previous research that reports mixed evidence on the effectiveness of specific strategies in influencing response rates. These variations in effectiveness can be attributed to additional factors related to survey design and the characteristics of the targeted populations.

Third, this study expands the survey response rate literature by examining a special, highly educated population: STEM academic scientists. Previous research has focused on highly educated groups such as teachers, health care providers, and politicians [44,63,75]. By introducing the scientific community into this body of work, we contribute to a broader understanding of response behavior among specialized professional populations.

This research opens several agendas for future study. First, we recommend additional research to replicate our experiments and examine how the effects of information and representation appeals vary across broader academic communities and other populations. Our findings and conclusions are limited to STEM academic scientists at U.S. R1 universities and may not generalize to academic scientists in other fields, scientists working in industry and government, international scholars, or the public in other professions. Scientists in non-STEM disciplines or industry settings have fundamentally different work environments, incentives, and communication norms, which may influence their reaction to information and representation appeals in ways that differ from STEM academic scientists. For example, social scientists, who more frequently engage in survey research, may interpret survey appeals differently than STEM faculty. Moreover, the salience of detailed information may vary across industry and government scientists, other professions (e.g., teachers, lawyers, or journalists), or the general public due to distinct information-processing preferences, occupational routines, and time availability. Similarly, the representation appeal experiment could be extended to other populations with strong social identity and connections (e.g., neighborhoods, ethnic groups, immigrants, and LGBTQ communities).

Second, future research could explore how information and representation appeals impact response quality, including breakoff rates, speeding, and straightlining. Providing more detailed information may help respondents consider questions more thoroughly, connect their answers to the survey’s context, and offer higher-quality answers. Meanwhile, refined community-representation appeals could exert greater moral or altruistic pressure, motivating respondents to engage more carefully with surveys.

Third, it is worth examining whether information and representation appeals in survey invitations influence how individuals respond to survey questions. Variations in survey requests, question wording, and item ordering are known to shape respondents’ interpretation of questions and their choices of answers [103–105]. In particular, representation appeals—asking individuals to represent themselves versus their professional association or community—may lead to differing responses to the same questions. Understanding how these appeals frame responses can help survey researchers better tailor invitation wording to align with their research goals, whether focused on capturing individual opinions or community perspectives.

Supporting information

S1 Table. Number of randomly selected institutions for sampling scientists.

https://doi.org/10.1371/journal.pone.0326331.s001

(PDF)

S1 Appendix. Invitation emails template for survey of scientists’ perceptions of surveys (information appeal experiment).

https://doi.org/10.1371/journal.pone.0326331.s002

(PDF)

S2 Appendix. Invitation emails template for vaccine survey (information appeal experiment).

https://doi.org/10.1371/journal.pone.0326331.s003

(PDF)

S3 Appendix. Invitation emails template for COVID-19 Wave 2 Survey (representation appeal experiment).

https://doi.org/10.1371/journal.pone.0326331.s004

(PDF)

S4 Appendix. Invitation emails template for public trust survey (representation appeal experiment).

https://doi.org/10.1371/journal.pone.0326331.s005

(PDF)

S5 Appendix. Invitation emails template for Women’s Health Survey (representation appeal experiment).

https://doi.org/10.1371/journal.pone.0326331.s006

(PDF)

S6 Appendix. Invitation emails template for COVID-19 Wave 4 Survey (two by two – information and representation appeals experiment).

https://doi.org/10.1371/journal.pone.0326331.s007

(PDF)

S2 Table. Balance test results for information appeal experiments.

https://doi.org/10.1371/journal.pone.0326331.s008

(PDF)

S3 Table. Balance test results for representation appeal experiments.

https://doi.org/10.1371/journal.pone.0326331.s009

(PDF)

S4 Table. Balance tests results for COVID-19 Survey Wave 4.

https://doi.org/10.1371/journal.pone.0326331.s010

(PDF)

S5 Table. Logit models results of information appeal experiment.

https://doi.org/10.1371/journal.pone.0326331.s011

(PDF)

S6 Table. Logit models results of representation appeal experiment.

https://doi.org/10.1371/journal.pone.0326331.s012

(PDF)

S7 Table. Field-specific sensitivity check for vaccine survey and Women’s Health Survey.

https://doi.org/10.1371/journal.pone.0326331.s013

(PDF)

References

  1. 1. Herner S. Information gathering habits of workers in pure and applied science. Ind Eng Chem. 1954;46(1):228–36.
  2. 2. Ladd EC Jr, Lipset SM. Politics of academic natural scientists and engineers: a survey of faculty in the United States systematically explores political opinions and commitments. Science. 1972;176(4039):1091–100. pmid:17775128
  3. 3. Ritti R. Work goals of scientists and engineers. Ind Relat. 1968;7(2):118–31.
  4. 4. Fanelli D. How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS One. 2009;4(5):e5738. pmid:19478950
  5. 5. Perkmann M, Salandra R, Tartari V, McKelvey M, Hughes A. Academic engagement: a review of the literature 2011-2019. Res Policy. 2021;50(1):104114.
  6. 6. Ryan JC. The work motivation of research scientists and its effect on research performance. R & D Manag. 2014;44(4):355–69.
  7. 7. Feeney MK, Bernal M, Bowman L. Enabling work? Family-friendly policies and academic productivity for men and women scientists. Sci Public Policy. 2014;41(6):750–64.
  8. 8. Bozeman B, Corley E. Scientists’ collaboration strategies: implications for scientific and technical human capital. Res Policy. 2004;33(4):599–616.
  9. 9. John LK, Loewenstein G, Prelec D. Measuring the prevalence of questionable research practices with incentives for truth telling. Psychol Sci. 2012;23(5):524–32. pmid:22508865
  10. 10. Rappa M, Debackere K. Youth and scientific innovation: the role of young scientists in the development of a new field. Minerva. 1993;31(1):1–20.
  11. 11. Jung H, Chen Y, Frandell A, Welch E. Ties with benefits: relationship between relational multiplexity, gender, and work-life balance. Rev Public Pers Adm. 2024.
  12. 12. Welch EW, Jha Y. Network and perceptual determinants of satisfaction among science and engineering faculty in US research universities. J Technol Transf. 2016;41:290–328.
  13. 13. Blackwell LV, Snyder LA, Mavriplis C. Diverse faculty in STEM fields: attitudes, performance, and fair treatment. J Divers High Educ. 2009;2(4):195.
  14. 14. Jung H, Welch EW. The impact of demographic composition of social networks on perceived inclusion in the workplace. Public Adm Rev. 2022;82(3):522–36.
  15. 15. Sheltzer JM, Smith JC. Elite male faculty in the life sciences employ fewer women. Proc Natl Acad Sci U S A. 2014;111(28):10107–12. pmid:24982167
  16. 16. Corley EA, Kim Y, Scheufele DA. Leading US nano-scientists’ perceptions about media coverage and the public communication of scientific research findings. J Nanopart Res. 2011;13:7041–55.
  17. 17. Nisbet MC, Scheufele DA. What’s next for science communication? Promising directions and lingering distractions. Am J Bot. 2009;96(10):1767–78. pmid:21622297
  18. 18. Javeline D, Shufeldt G. Scientific opinion in policymaking: the case of climate change adaptation. Policy Sci. 2013;47(2):121–39.
  19. 19. Howell EL, Brossard D. (Mis)informed about what? What it means to be a science-literate citizen in a digital world. Proc Natl Acad Sci U S A. 2021;118(15):e1912436117. pmid:33876739
  20. 20. Scheufele DA. Communicating science in social settings. Proc Natl Acad Sci U S A. 2013;120(supplement_3):14040–7.
  21. Scheufele DA, Krause NM, Freiling I. Misinformed about the “infodemic?” Science’s ongoing struggle with misinformation. J Appl Res Mem Cogn. 2021;10(4):522–6.
  22. Johnson TP, Feeney MK, Jung H, Frandell A, Caldarulo M, Michalegko L, et al. COVID-19 and the academy: opinions and experiences of university-based scientists in the U.S. Humanit Soc Sci Commun. 2021;8(1):146. pmid:34806031
  23. Myers KR, Tham WY, Yin Y, Cohodes N, Thursby JG, Thursby MC, et al. Unequal effects of the COVID-19 pandemic on scientists. Nat Hum Behav. 2020;4(9):880–3. pmid:32669671
  24. NCSES (National Center for Science and Engineering Statistics). Doctorate recipients from U.S. universities: 2022 [Internet]. NSF 24-300. Alexandria, VA: U.S. National Science Foundation; 2023. Available from: https://ncses.nsf.gov/pubs/nsf24300.
  25. Pew Research Center. Public and scientists’ views on science and society [Internet]. 2015. Available from: https://www.pewresearch.org/science/2015/01/29/public-and-scientists-views-on-science-and-society/.
  26. Bruine de Bruin W, Bostrom A. Assessing what to address in science communication. Proc Natl Acad Sci U S A. 2013;110 Suppl 3(Suppl 3):14062–8. pmid:23942122
  27. Fischhoff B. The sciences of science communication. Proc Natl Acad Sci U S A. 2013;110 Suppl 3(Suppl 3):14033–9. pmid:23942125
  28. Landrum AR, Hallman WK, Jamieson KH. Examining the impact of expert voices: communicating the scientific consensus on genetically-modified organisms. Environ Commun. 2019;13(1):51–70.
  29. Gustafson A, Rice RE. A review of the effects of uncertainty in public science communication. Public Underst Sci. 2020;29(6):614–33. pmid:32677865
  30. Scheufele DA. Science communication as political communication. Proc Natl Acad Sci U S A. 2014;111 Suppl 4(Suppl 4):13585–92. pmid:25225389
  31. Farnsworth SJ, Lichter SR. The structure of scientific opinion on climate change. Int J Public Opin Res. 2011;24(1):93–103.
  32. Ivanova A, Schäfer MS, Schlichting I, Schmidt A. Is there a medialization of climate science? Results from a survey of German climate scientists. Sci Commun. 2013;35(5):626–53.
  33. Scheufele DA, Corley EA, Dunwoody S, Shih T-J, Hillback E, Guston DH. Scientists worry about some risks more than the public. Nat Nanotechnol. 2007;2(12):732–4. pmid:18654416
  34. Frandell A, Islam S, Chen T, Caldarulo M, Johnson TP, Michalegko L, et al. Abortion rights: perspectives of academic scientists in the United States. Womens Health Rep (New Rochelle). 2024;5(1):602–12. pmid:39473985
  35. Welch EW, Johnson TP, Chen T, Ma J, Islam S, Michalegko LF, et al. How scientists view vaccine hesitancy. Vaccines (Basel). 2023;11(7):1208. pmid:37515024
  36. Morin D. To debate or not debate? Examining the effects of scientists engaging in debates addressing contentious issues. JCOM J Sci Commun. 2018;17(4):A02.
  37. Scheufele DA, Krause NM. Science audiences, misinformation, and fake news. Proc Natl Acad Sci U S A. 2019;116(16):7662–9. pmid:30642953
  38. Alberts B, Cicerone RJ, Fienberg SE, Kamb A, McNutt M, Nerem RM, et al. Self-correction in science at work. Science. 2015;348(6242):1420–2. pmid:26113701
  39. Hilgard J, Jamieson KH. Does a scientific breakthrough increase confidence in science? News of a Zika vaccine and trust in science. Sci Commun. 2017;39(4):548–60.
  40. Kahan DM, Landrum A, Carpenter K, Helft L, Jamieson KH. Science curiosity and political information processing. Polit Psychol. 2017;38(S1):179–99.
  41. Stedman RC, Connelly NA, Heberlein TA, Decker DJ, Allred SB. The end of the (research) world as we know it? Understanding and coping with declining response rates to mail surveys. Soc Nat Resour. 2019;32(10):1139–54.
  42. Frandell A, Feeney MK, Johnson TP, Welch EW, Michalegko L, Jung H. The effects of electronic alert letters for internet surveys of academic scientists. Scientometrics. 2021;126(8):7167–81. pmid:34054159
  43. Groves RM, Singer E, Corning A. Leverage-saliency theory of survey participation: description and an illustration. Public Opin Q. 2000;64(3):299–308. pmid:11114270
  44. Kertzer JD, Renshon J. Experiments and surveys on political elites. Annu Rev Polit Sci. 2022;25(1):529–50.
  45. Groves RM, Peytcheva E. The impact of nonresponse rates on nonresponse bias: a meta-analysis. Public Opin Q. 2008;72(2):167–89.
  46. Safarpour A, Bush SS, Hadden J. Participation incentives in a survey of international non-profit professionals. Res Polit. 2022;9(3).
  47. Deutskens E, De Ruyter K, Wetzels M, Oosterveld P. Response rate and response quality of internet-based surveys: an experimental study. Mark Lett. 2004;15:21–36.
  48. Conn KM, Mo CH, Sellers LM. When less is more in boosting survey response rates. Soc Sci Q. 2019;100(4):1445–58.
  49. Clasing-Manquian P, Gonzalez J. The effect of communication emails on web survey response rate, representativeness, and response bias: results from a factorial randomized control trial in a college student population. Field Methods. 2024.
  50. Singer E, Van Hoewyk J, Maher MP. Experiments with incentives in telephone surveys. Public Opin Q. 2000;64(2):171–88. pmid:10984332
  51. Iorio R, Labory S, Rentocchini F. The importance of pro-social behaviour for the breadth and depth of knowledge transfer activities: an analysis of Italian academic scientists. Res Policy. 2017;46(2):497–509.
  52. Altenmüller MS, Gollwitzer M. Prosociality in science. Curr Opin Psychol. 2022;43:284–8. pmid:34508967
  53. Brossard D, Scheufele DA. Science, new media, and the public. Science. 2013;339(6115):40–1. pmid:23288529
  54. Cologna V, Mede NG, Berger S, Besley J, Brick C, Joubert M. Trust in scientists and their role in society across 68 countries. Nat Hum Behav. 2025:1–18.
  55. Johnson TP, Smith TW. Big data and survey research: supplement or substitute? In: Thakuriah P, Tilahun N, Zellner M, editors. Seeing cities through big data. Switzerland: Springer Geography; 2017. pp. 127–40.
  56. Salganik MJ. Bit by bit: Social research in the digital age. New Jersey: Princeton University Press; 2019.
  57. Brown RF, St John A, Hu Y, Sandhu G. Differential electronic survey response: does survey fatigue affect everyone equally? J Surg Res. 2024;294:191–7. pmid:37913726
  58. Grevet C, Choi D, Kumar D, Gilbert E. Overload is overloaded: email in the age of Gmail. In: Proceedings of the SIGCHI conference on human factors in computing systems (CHI’14). New York, NY, USA: Association for Computing Machinery; 2014. pp. 793–802.
  59. Sheer VC, Fung TK. Can email communication enhance professor-student relationship and student evaluation of professor?: some empirical evidence. J Educ Comput Res. 2007;37(3):289–306.
  60. Dillman DA, Smyth JD, Christian LM. Internet, phone, mail, and mixed‐mode surveys: the tailored design method. 4th ed. New Jersey: John Wiley; 2014.
  61. Cho YI, Johnson TP, Vangeest JB. Enhancing surveys of health care professionals: a meta-analysis of techniques to improve response. Eval Health Prof. 2013;36(3):382–407. pmid:23975761
  62. VanGeest J, Johnson TP. Surveying nurses: identifying strategies to improve participation. Eval Health Prof. 2011;34(4):487–511. pmid:21454329
  63. VanGeest JB, Johnson TP, Kapousouz E. Monetary incentives in clinician surveys: an analysis and systematic review with a focus on establishing best practices. Eval Health Prof. 2025;48(2):256–76. pmid:39450569
  64. Read D. Monetary incentives, what are they good for? J Econ Methodol. 2005;12(2):265–76.
  65. Renshon J. Losing face and sinking costs: Experimental evidence on the judgment of political and military leaders. Int Organ. 2015;69(3):659–95.
  66. Sue VM, Ritter LA. Conducting online surveys. California: Sage; 2007.
  67. Hart AM, Brennan CW, Sym D, Larson E. The impact of personalized prenotification on response rates to an electronic survey. West J Nurs Res. 2009;31(1):17–23. pmid:18515752
  68. Guise V, Chambers M, Välimäki M, Makkonen P. A mixed-mode approach to data collection: combining web and paper questionnaires to examine nurses’ attitudes to mental illness. J Adv Nurs. 2010;66(7):1623–32. pmid:20497273
  69. Cull WL, O’Connor KG, Sharp S, Tang SS. Response rates and response bias for 50 surveys of pediatricians. Health Serv Res. 2005;40(1):213–26. pmid:15663710
  70. Childers TL, Pride WM, Ferrell OC. A reassessment of the effects of appeals on response to mail surveys. J Mark Res. 1980;17(3):365–70.
  71. Linsky AS. A factorial experiment in inducing responses to a mail questionnaire. Sociol Soc Res. 1965;49(2):183–9.
  72. Pedersen MJ, Nielsen CV. Improving survey response rates in online panels: effects of low-cost incentives and cost-free text appeal interventions. Soc Sci Comput Rev. 2016;34(2):229–43.
  73. Singer E, Ye C. The use and effects of incentives in surveys. Ann Am Acad Pol Soc Sci. 2013;645(1):112–41.
  74. Chan RCH, Mak WWS, Pang IHY, Wong SYS, Tang WK, Lau JTF, et al. Utility and cost-effectiveness of motivational messaging to increase survey response in physicians. Field Methods. 2017;30(1):37–55.
  75. Green KE, et al. The effects of two types of appeal on survey response rates. Paper presented at the Annual Meeting of the American Educational Research Association [Internet]. 1993. Available from: https://eric.ed.gov/?id=ED359233.
  76. Jones WH. Multiple criteria effects in a mail survey. J Mark Res. 1978;15(2):280–4.
  77. Roberts RE, McCrory OF, Forthofer RN. Further evidence on using a deadline to stimulate responses to a mail survey. Public Opin Q. 1978;42(3):407–10.
  78. Champion DJ, Sear AM. Questionnaire response rate: a methodological analysis. Soc Forces. 1969;47(3):335–9.
  79. Gendall P, Hoek J, Esslemont D. The effect of appeal, complexity and tone in a mail survey covering letter. Int J Mark Res. 1995;37(3):1–12.
  80. Kerin RA, Harvey MG. Methodological considerations in corporate mail surveys: a research note. J Bus Res. 1976;4(3):277–81.
  81. Houston MJ, Nevin JR. The effects of source and appeal on mail survey response patterns. J Mark Res. 1977;14(3):374–8.
  82. Lavrakas PJ. Encyclopedia of survey research methods. California: Sage; 2008.
  83. Zhang C, Lonn S, Teasley SD. Understanding the impact of lottery incentives on web survey participation and response quality. Field Methods. 2016;29(1):42–60.
  84. Einarsson H, Cernat A, Shlomo N. The effects of framing the survey request and using targeted appeals on participation in cross-sectional surveys. Field Methods. 2023;36(3):187–205.
  85. Rahmandad H, Vakili K. Explaining heterogeneity in the organization of scientific work. Organ Sci. 2019;30(6):1125–45.
  86. Thorp HH. Public debate is good for science. Science. 2021;371(6526):213. pmid:33446530
  87. Howarth C, Parsons L, Thew H. Effectively communicating climate science beyond academia: harnessing the heterogeneity of climate knowledge. One Earth. 2020;2(4):320–4. pmid:33495753
  88. Popper K. The logic of scientific discovery. New York: Routledge; 2005.
  89. AAPOR (The American Association for Public Opinion Research) [Internet]. Standard definitions: final dispositions of case codes and outcome rates for surveys. 10th ed. 2023. Available from: https://aapor.org/wp-content/uploads/2023/05/Standards-Definitions-10th-edition.pdf.
  90. AAPOR (American Association for Public Opinion Research) [Internet]. Institutional review boards. [cited 2025 May 9]. Available from: https://aapor.org/standards-and-ethics/institutional-review-boards/#1668710698131-5d301939-1f4b.
  91. Varnhagen CK, Gushta M, Daniels J, Peters TC, Parmar N, Law D, et al. How informed is online informed consent? Ethics Behav. 2005;15(1):37–48. pmid:16127857
  92. Singer E. Informed consent: consequences for response rate and response quality in social surveys. Am Sociol Rev. 1978;43(2):144–62. pmid:655499
  93. Singer E. Exploring the meaning of consent: participation in research and beliefs about risks and benefits. J Off Stat. 2003;19(3):273.
  94. Petty RE, Cacioppo JT. Communication and persuasion: central and peripheral routes to attitude change. New York: Springer; 1986.
  95. Petty RE, Cacioppo JT. The elaboration likelihood model of persuasion. Adv Exp Soc Psychol. 1986;19:123–205.
  96. Link AN, Swann CA, Bozeman B. A time allocation study of university faculty. Econ Educ Rev. 2008;27(4):363–74.
  97. Crespo M, Bertrand D. Faculty workload in a research intensive university: a case study. Center for Interuniversity Research and Analysis on Organizations. 2013. Available from: https://www.cirano.qc.ca/files/publications/2013RP-11.pdf.
  98. SciOPS (Scientist Opinion Panel Survey). Scientist Opinion Panel Survey report on “Survey of Surveys” [Internet]. 2022. Available from: https://sci-ops.org/survey/survey-of-surveys/.
  99. Keusch F. The role of topic interest and topic salience in online panel web surveys. Int J Mark Res. 2013;55(1):59–80.
  100. Tourangeau R, Groves RM, Kennedy C, Yan T. The presentation of a web survey, nonresponse and measurement error among members of web panel. J Off Stat. 2009;25(3):299–321.
  101. Turner JC, Hogg MA, Oakes PJ, Reicher SD, Wetherell MS. Rediscovering the social group: a self-categorization theory. Oxford: Basil Blackwell; 1987.
  102. Spears R, Doosje B, Ellemers N. Self-stereotyping in the face of threats to group status and distinctiveness: the role of group identification. Pers Soc Psychol Bull. 1997;23(5):538–53.
  103. Galesic M, Tourangeau R. What is sexual harassment? It depends on who asks! Framing effects on survey responses. Appl Cogn Psychol. 2007;21(2):189–202.
  104. MacInnis B, Miller JM, Krosnick JA, Below C, Lindner M. Candidate name order effects in New Hampshire: evidence from primaries and from general elections with party column ballots. PLoS One. 2021;16(3):e0248049. pmid:33725009
  105. Schwarz N. Self-reports: how the questions shape the answers. Am Psychol. 1999;54(2):93–105.