
Testing biasedness of self-reported microbusiness innovation in the annual business survey

  • Luyi Han ,

    Roles Conceptualization, Data curation, Formal analysis, Methodology, Writing – original draft, Writing – review & editing

    lkh5474@psu.edu

    Affiliation Northeast Regional Center for Rural Development, Penn State University, State College, Pennsylvania, United States of America

  • Zheng Tian,

    Roles Conceptualization, Data curation, Methodology, Writing – original draft, Writing – review & editing

    Affiliation Northeast Regional Center for Rural Development, Penn State University, State College, Pennsylvania, United States of America

  • Timothy R. Wojan,

    Roles Conceptualization, Methodology, Supervision, Writing – original draft, Writing – review & editing

    Affiliation Oak Ridge Institute for Science and Education Research Ambassadors Program, National Center for Science and Engineering Statistics, National Science Foundation, Oak Ridge, Tennessee, United States of America

  • Stephan J. Goetz

    Roles Conceptualization, Supervision, Writing – original draft, Writing – review & editing

    Affiliation Northeast Regional Center for Rural Development, Department of Agricultural Economics, Sociology, and Education, Penn State University, State College, Pennsylvania, United States of America

Abstract

This study tests for potential bias in self-reported innovation due to the inclusion of a research and development (R&D) module that only microbusinesses (fewer than 10 employees) receive in the Annual Business Survey (ABS). Previous research found that respondents to combined innovation/R&D surveys reported innovation at lower rates than respondents to innovation-only surveys. A regression discontinuity design is used to test whether microbusinesses, which constitute a significant portion of U.S. firms with employees, are less likely to report innovation than other small businesses. In the vicinity of the 10-employee threshold, the study does not detect statistically significant biases for new-to-market or new-to-business product innovation. Statistical power analysis confirms the absence of bias with high power. Comparing the survey design of the ABS to earlier combined innovation/R&D surveys provides valuable insights for the proposed integration of multiple Federal surveys into a single enterprise platform survey. The findings also have important implications for the accuracy and reliability of innovation data used as an input to policymaking and business development strategies in the United States.

Introduction

Surveys of business innovation started in earnest in the U.S. in 2008 with the redesign of the Survey of Industrial Research and Development (SIRD), which became the Business R&D and Innovation Survey (BRDIS). Unlike the European Union’s Community Innovation Survey administered since 1992, BRDIS combined detailed questions on the R&D activities of firms with self-reported innovation questions regarding the introduction of new or significantly improved products and processes. A significantly lower incidence of innovation in the U.S. relative to most EU countries raised questions as to whether the difference may be due to a downward bias in innovation self-reports, consistent with U.S. respondents being more predisposed to viewing “new or significantly improved” as referring only to science- and engineering-based innovation. The issue became largely moot with the introduction of the Annual Business Survey (ABS) in 2018, in which firms with ten or more employees are asked only innovation questions, while R&D questions are asked separately in the Business Enterprise Research and Development Survey. However, microbusinesses (i.e., firms with fewer than ten employees) in the ABS still receive both the R&D and innovation modules. A combined R&D and innovation survey targeted at microbusinesses may lead to substantial underreporting of innovation activities if these businesses are less likely to pursue science- and engineering-based innovation. Indeed, microbusinesses report the lowest incidence of innovation in the ABS, raising the question of whether this is merely a function of firm size or a disadvantage attributable to survey design. We consider two types of product innovation reported in the ABS innovation module as dependent variables to test the potential bias. New-to-market product innovations are new or significantly improved goods or services introduced into the market before any competitors within the last three years, while new-to-business innovations are new or significantly improved goods or services that are new to the firm but already available in the market.

The administration of the inaugural ABS provides the information necessary to test for the bias attributable to this combined R&D and innovation survey design. A sharp threshold of ten employees was used by ABS administrators to partition the drawn sample into two groups, microbusinesses and all other businesses, thereby determining which survey module(s) each group would receive. In other words, the ten-employee threshold was determined exogenously with respect to survey respondents, and all businesses should be accurately categorized into the two groups, with microbusinesses as the treated group and all other businesses as the control group. These two conditions allow the use of a sharp regression discontinuity design to test the null hypothesis that there is no underreporting of innovation activities for firms on either side of the threshold. If the null hypothesis is not rejected with a statistically powerful test, our study will provide support for using the ABS data in microbusiness innovation studies. If the null is rejected, our study will caution researchers who want to use these data.

Previous literature discusses potential survey design biases that may impact self-reported innovation rates. Studies have found lower innovation rates when R&D questions are combined with innovation questions in the same survey compared to innovation-only surveys. Exact causes are unclear, but factors may include survey length, framing innovation as related to R&D, and title cues about survey topics. Other biases arise from survey mode and question formats [1, 2]. We review the empirical and cognitive testing literature addressing both the possible R&D bias in innovation surveys and the lower incidence of innovation in U.S. surveys relative to the EU Community Innovation Survey. A comparison of the BRDIS and ABS microbusiness surveys shows how the implicit conversation about innovation might be affected differently by how the R&D topic is introduced and organized. Details on the data are provided before specifying the regression discontinuity test. The findings are then discussed with respect to small business innovation research and implications are drawn for the proposed single enterprise platform in terms of the format or sequencing of unique modules to different types of firms.

The findings from this analysis have implications not only for designing innovation surveys but also for general survey planning by emphasizing the psychology of survey responses [3]. The vision for re-engineering Federal business surveys is to integrate multiple surveys into a single survey that businesses would be required to complete, reducing respondents’ burden with the objective of raising response rates [4]. For that purpose, the ABS “replaces the five-year Survey of Business Owners (SBO) for employer businesses, the Annual Survey of Entrepreneurs (ASE), the Business R&D and Innovation for Microbusinesses survey (BRDI-M), and the innovation section of the Business R&D and Innovation Survey (BRDI-S).” It was conceived as an interim step between multiple, independent surveys and the future single platform through which businesses would report administrative records and respond to business surveys. Evidence that responses in one part of the ABS are influenced by other modules would raise serious questions regarding data quality in integrated surveys. Alternatively, evidence that the R&D module in the ABS does not bias innovation results for those respondents who receive it may provide valuable information for the modular design of integrated surveys. Does the inclusion of an R&D module for microbusinesses with fewer than 10 employees bias self-reported innovation in the ABS? We use a regression discontinuity design (RDD) to assess this question.

Literature review

The study most resembling the objectives of this article was conducted by Statistics Norway. Previously, the U.S. and Norway both had self-reported innovation rates considerably below those of their EU peers. Innovation surveys in both countries were mandatory, and both combined R&D and innovation activities. In contrast, most of the innovation surveys from peer EU countries are voluntary and contain few questions pertaining to science- and engineering-based innovation and R&D. The seemingly poor innovation performance of two of the most economically advanced countries in the world (measured with self-reported innovation questions as suggested by the Oslo Manual [1]) provided the motivation for rigorously examining potential bias in combined R&D/innovation surveys. Norway modified the 2011–2012 administration of its national Innovation in Norwegian Business survey by designating two special subsets of firms that were sent innovation-only surveys similar to the EU’s Community Innovation Survey [2]. One set was required to complete the survey, while completion was voluntary for the other set. Product or process innovation rates were highest in the voluntary, innovation-only survey (41%); lower in the mandatory, innovation-only survey (30%); and lowest in the standard mandatory, R&D/innovation survey (19%). To the extent that all three samples were representative of Norwegian businesses, the exercise confirmed the tendency for R&D/innovation surveys to find lower rates of self-reported innovation relative to innovation-only surveys. However, because the respondents in each sample were different, it was not possible to identify respondents’ characteristics or response behaviors that may explain the greater reluctance of R&D/innovation survey respondents to self-identify as innovators.

Tian, Wojan, and Goetz [5] attempted to compare responses from the same firm to an innovation-only and a combined innovation/R&D survey by linking the 2014 BRDIS to the 2014 Annual Survey of Entrepreneurs (ASE), the precursor to the ABS that also included an innovation module but no R&D module. Despite seemingly promising sample sizes of approximately 290,000 for the ASE and 44,000 for BRDIS, the resulting number of matches was limited and could not support a formal analysis of survey response variation. In addition to the small sample size, additional error would be introduced by the possibility that the surveys were completed by different people or different business units within the organization, further reducing the ability to discern meaningful differences between the same firm’s responses to different surveys. Square table analysis used to assess the strength of agreement between matched firms’ responses was largely uninformative. However, logistic regression of the pooled BRDIS-ASE data did indicate that BRDIS respondents were more than twice as likely to report no product innovation as ASE respondents after controlling for industry and firm size, reinforcing the findings of Wilhelmsen [2].

A conference paper by Hoskens et al. [6], presented at the OECD Blue Sky Forum on Science and Innovation Indicators, discussed various method effects that have been associated with differences in the incidence of innovation for surveys conducted in the same country. In addition to the effect of R&D modules lowering innovation rates identified by Wilhelmsen [2], other method-related effects result from the use of long vs. short innovation surveys [6, 7] and differences in the mode of data collection [8]. Shifting from a paper survey to a web survey containing one question per screen seemingly lessened the tendency to report no innovation as a means of avoiding follow-up questions [8].

The Psychology of Survey Response [3] provides a rich framework for assessing how survey design decisions might introduce sources of potential bias. The main insight from this approach is that the intended purpose of surveys, i.e., to collect objective information, is not always the criterion that respondents use in answering questions. A survey also represents a conversation between respondents and enumerators (or survey instruments), so differences in response may be due to survey mode, survey length, and survey titles or headings, each of which may influence aspects of the implicit conversation. In the context of innovation surveys, this may be manifest in the decision to self-report “new or significantly improved” products or processes.

Analyzing the possible impact of method or survey design choice on reported innovation rates within a regression discontinuity framework highlights the most critical aspect of a treatment: its dosage. Within the ABS, the inclusion or omission of the R&D module is binary, depending on whether the firm has fewer than ten employees. However, the effect of the R&D module in the ABS may be quite different from the effect of the R&D module on innovation questions in BRDIS vis-à-vis the innovation-only questions in the ASE. A closer look at how the R&D topic is introduced and covered in the ABS and BRDIS is necessary to inform our priors on the size of the effect we expect to detect.

The most obvious difference is the title of the surveys. Respondents to the Business R&D and Innovation Survey know what “the conversation” is likely to be about. Respondents to the Annual Business Survey are given no such clues and thus are less likely from the outset to conclude that the survey is or is not relevant to their firms. Firms engaged in grassroots innovation may choose to opt out of full participation in BRDIS, and thus may fail to appreciate that the innovation questions are independent of R&D activity. The dosage of the R&D module treatment in BRDIS can be thought of as much larger and more front-loaded than the dosage of the treatment in ABS at the outset.

The innovation questions precede the R&D questions in both surveys to help minimize the bias caused by framing innovation as that directly related to science and engineering. For respondents who only interact with the online survey, their initial response to the innovation questions may be little influenced by the R&D questions. The major difference between BRDIS and the ABS is the number of R&D questions received by R&D performers and nonperformers. In BRDIS, firms that indicate no formal R&D budget skip the financial questions but are still asked questions about R&D human resources, e.g., how many scientists and engineers are employed by the company, and how many have PhDs (Fig 1). In contrast, R&D nonperformers in the ABS skip all questions related to R&D finances and R&D human resources. BRDIS respondents who are not R&D performers may be inclined to revisit the innovation questions after considering the R&D human resource questions. Of course, the inclusion of R&D questions may influence all respondents who view the entire “Information Copy” of the surveys before completing the survey online. But only BRDIS respondents are required to read and answer questions on R&D human resources at their firms through the online survey instrument. This again suggests that the dosage of R&D questions in BRDIS may bias responses relative to those in the ABS.

Fig 1. Screenshot of human resource questions from the 2016 Business R&D and Innovation Survey.

Notes: 2016 BRDIS [survey instrument] available at https://www.nsf.gov/statistics/srvyindustry/about/brdis/surveys/srvybrdis-2016-BRDI-1.pdf.

https://doi.org/10.1371/journal.pone.0296667.g001

A final consideration of the treatment dosage that may affect a firm’s revelation about its innovation activities is the relative centrality of the innovation and R&D topics to the conversation. Because the ABS combines the former Survey of Business Owners (SBO) with the innovation questions from BRDIS and microbusiness R&D questions from the former Business R&D and Innovation Microbusiness survey (BRDI-M), the conversation starts much differently than the conversation in BRDIS. After the standard questions confirming the continuity and ownership structure of the business, the ABS elicits information on the business owners: their age, prior business ownership experience, educational background, and reasons for starting or acquiring the business. In this regard, the ABS is much better at “getting to know the respondent,” particularly for microbusinesses where an owner is likely to be the person completing the innovation and R&D sections that follow. This may produce a higher comfort level than in BRDIS, where the innovation questions are the first substantive questions received. Firms not engaged in R&D may be particularly reluctant to respond affirmatively to these initial innovation questions given the cue from the survey title that R&D questions will follow.

The comparison of responses to innovation questions in BRDIS and the ASE by Tian, Wojan, and Goetz [5] provides an upper bound for the effect size of the downward bias that might be attributable to the inclusion of an R&D module in an innovation survey. The anticipated effect size for ABS respondents on either side of the 10-employee threshold should be significantly lower than that found in the BRDIS-ASE study, given that R&D nonperformers have minimal exposure to the R&D questions and the survey title does not cue up R&D as a questionnaire topic. It is difficult to assess the extent to which the differences between the BRDIS and ABS survey instruments affect the psychology of responding to self-reported innovation questions. However, a lower bound of less than one-quarter of the ostensible BRDIS effect would explain the observed difference in innovation rates between microbusinesses (<10 employees) and the smallest business size class over the threshold (10–19 employees), which is 2.3% [9]. The power of the regression discontinuity test for detecting small effects will thus be essential for assessing the robustness of findings and the extent to which the results may be due to measurement error.

In sum, this examination of innovation surveys, particularly in the context of R&D module inclusion, underscores the complexity of accurately measuring self-reported innovation rates. The research conducted by Statistics Norway, as well as other relevant studies, highlights the potential for survey design choices and methodological differences to affect the reported incidence of innovation within a given country. From mandatory vs. voluntary survey participation to the framing of questions and the dosage of the R&D module, each element may contribute to nuances in survey response. Moreover, the inherent conversational nature of surveys, influenced by survey titles and respondent characteristics, further complicates the interpretation of self-reported innovation data. In our study, we use the ABS data to test the null hypothesis that there is no underreporting of innovation activities due to the survey design in which microbusinesses receive both R&D and innovation modules, while businesses with 10 or more employees receive only an innovation module. Additionally, our study primarily concentrates on businesses with approximately 10 employees, thereby contributing to the literature on small business innovation and entrepreneurship [10–13].

Data

We use two confidential Census datasets, accessed through an approved research project at a Federal Statistical Research Data Center (FSRDC). The 2018 Annual Business Survey, for which 2017 is the reference year, contains self-reported innovation, industry (defined by the North American Industry Classification System, or NAICS), and firm size. We also use the 2016 Longitudinal Business Database (LBD), whose firm records come from the 2016 Business Register (BR) used to draw the 2018 ABS sample. According to the US Census Bureau, “Annual Business Survey data are sourced from a combination of responses to the survey, data from the economic census, and administrative records data. The ABS is primarily collected using an electronic instrument. The survey was mailed to approximately 850,000 employer businesses. Businesses selected to report were sent a letter informing of their requirement to report… Approximately 67.8 percent of the sample responded to the survey.” Employment size in the BR determined which firms received the microbusiness (<10 employees) survey containing the innovation and R&D modules and which received the innovation-only survey. Although the 2018 ABS microdata also contain firms’ employment in 2017, we are concerned that these employment data may not equal the 2016 BR data, as firms may have grown or shrunk over those two years. Therefore, we use the 2016 LBD employment data as the running variable in the regression discontinuity design, while using the 2018 ABS data for all other firm- and innovation-related information.
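To make this data construction step concrete, the following Stata sketch links the two sources; the file and variable names (lbd2016, abs2018, firmid, emp2016) are hypothetical stand-ins, since the confidential FSRDC extracts are not publicly documented.

    * Sketch of the data assembly under hypothetical file/variable names.
    * lbd2016.dta holds 2016 BR-based employment; abs2018.dta holds ABS responses.
    use firmid emp2016 using lbd2016, clear
    merge 1:1 firmid using abs2018, keep(match) nogenerate
    gen byte treat = emp2016 >= 10     // 1 = ABS-2 (innovation-only) group
    gen emp_norm   = emp2016 - 10      // normalized running variable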

Microbusinesses in the 2018 ABS received the ABS-1 survey instrument, while all other firms received the ABS-2 instrument. Table 1 shows the summary statistics for all firms in the two groups (Panel A) as well as firms around the 10-employee threshold with half, one, and two bandwidths (Panels B to D, respectively). The bandwidth selection is explained in the next section. Each panel shows summary statistics separately for firms below or above the 10-employee cutoff. We use the ABS sample weights to compute the summary statistics. We do not report the minimum or maximum values of variables, following FSRDC disclosure avoidance rules. In the full sample reported in Panel A, the average employment size is about eight overall, three for microbusinesses below the cutoff, and 30 for all other firms above the cutoff.

We consider two types of product innovation reported in the ABS innovation module for both treatment and control groups. New-to-market product innovations are new or significantly improved goods or services introduced into the market before any competitors within the last three years, while new-to-business innovations are new or significantly improved goods or services that are new to the firm but already available in the market. As shown in Table 1, for the two innovation variables, the new-to-market and new-to-business innovation dummies, we generally see a higher incidence of innovation above the cutoff. For example, the average new-to-market innovation rate is around 8 percent below the cutoff and over 10 percent above the cutoff. In Panels B, C, and D, we report similar statistics using different RDD bandwidths.

Models and methods

We use a sharp regression discontinuity design (RDD) to assess the effect of distributing different formats of the ABS survey instrument to firms on either side of the 10-employee threshold on self-reporting new-to-market and new-to-business innovations. The running variable ($X_i$) in the sharp RDD is a firm’s employee size, and the cut-off value ($c$) is 10. The outcome variable ($Y_i$) is a binary innovation variable. A sharp RDD is appropriate because microbusinesses with fewer than 10 employees would mandatorily receive the ABS-1 survey instrument containing both the R&D and innovation modules, while firms with 10 or more employees would receive the ABS-2 instrument without the R&D module. Although it is natural to think of microbusinesses as the treatment group, as they received “additional” survey questions, to conform with the RDD literature in which a treated unit is one with the running variable above the cutoff, we consider microbusinesses as the control group and other firms as the treatment group. The assignment of the treatment occurs ex ante, ensuring that it is independent of the outcome variable.

A typical sharp regression discontinuity design considers the expected difference in the potential outcome when the running variable takes a value immediately above or below the cut-off value. That is, the sharp RDD considers the local average treatment effect (LATE) at the cut-off value, which can be expressed as [14]

$$\tau = E[Y_i(1) - Y_i(0) \mid X_i = c] = \lim_{x \downarrow c} E[Y_i \mid X_i = x] - \lim_{x \uparrow c} E[Y_i \mid X_i = x] \qquad (1)$$

The sharp RDD can also be written simply as a linear regression model [15]

$$Y_i = \alpha + \tau D_i + \beta (X_i - c) + \varepsilon_i \qquad (2)$$

where $D_i = 1$ if $X_i \geq c$ and $D_i = 0$ if $X_i < c$. Eq (2) can be estimated with either global or local least squares estimation. The global estimation uses the full sample, and the local estimation uses samples delineated by a bandwidth around the cut-off value. The bandwidth selection is based on the mean-squared-error (MSE) optimal bandwidth choice method as explained in Cattaneo and Titiunik [16].

Regression discontinuity designs are used for causal inference in many fields of economics. Comprehensive reviews of RDD can be found in Cattaneo and Titiunik [16], Imbens and Lemieux [14], and Lee and Lemieux [17]. Applications cover such topics as education [18, 19], health [20], the family [21], and, relevant to our paper, firm innovation [22–24]. Bronzini and Piselli [23] examine the effect of Italian regional government subsidies on R&D output measured by the number and the probability of patent applications. In their sharp RDD strategy, the dependent variables are either discrete count data for the number of patent applications or a binary variable for whether a patent application is filed. They accordingly use a Poisson model in the RDD for the count variable and a logit model for the binary variable, estimating the models on the full sample or on subsamples defined by a bandwidth. Their estimation strategy is similar to ours, since we also use a binary dependent variable and estimate a logit model in the RDD with various sample ranges.

Based on the generic model in Eq (2), we estimate an expanded linear regression model indexed for firm i:

$$\text{Innovation}_i = \alpha + \tau D_i + \beta (\text{Emp}_i - 10) + \gamma\, D_i \times (\text{Emp}_i - 10) + X_i' \delta + \varepsilon_i \qquad (3)$$

The dependent dummy variable $\text{Innovation}_i$ captures new-to-market innovations and new-to-business innovations; the S1 Appendix shows how these two variables are defined. $D_i$ is the treatment variable, with $D_i = 1$ if a firm has ten or more employees. $(\text{Emp}_i - 10)$ is the normalized running variable for employee counts. We also include an interaction term of the treatment and the normalized running variable to allow the slopes to vary on both sides of the cutoff point. $X_i$ contains the firm-level control variables, including the highest education of the firm owner [31] and the NAICS 2-digit industry dummies.

The outcome variable in a sharp RDD is usually continuous, whereas our dependent variable is binary. We therefore estimate the model in three ways. First, we use the binary dependent variable as is and interpret the model as a linear probability model (LPM). Second, we estimate a logit model for Eq (3), transforming the outcome with $\log(u/(1-u))$, where $u = \Pr(Y_i = 1)$. Third, we first estimate a probit model of the binary outcome variable on the education and NAICS dummies and then use the fitted value of the probit model as the outcome variable in Eq (3). The fitted-value regressions do not include the education and NAICS dummies, as these are used in the first-stage probit model.
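A minimal Stata sketch of the three approaches, continuing with the hypothetical variable names above (innov for a binary innovation dummy, educ and naics2 for the owner-education and industry controls); this is illustrative rather than the exact estimation code.

    * (1) Linear probability model for Eq (3), with treatment-slope interaction
    regress innov i.treat##c.emp_norm i.educ i.naics2, vce(robust)

    * (2) Logit model with the same specification
    logit innov i.treat##c.emp_norm i.educ i.naics2, vce(robust)

    * (3) Probit of the outcome on the controls only; the fitted probability
    *     then replaces the outcome in Eq (3), which omits those controls
    probit innov i.educ i.naics2
    predict innov_hat, pr
    regress innov_hat i.treat##c.emp_norm, vce(robust)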

As noted above, the RDD estimates the LATE around the cutoff point. That is, we test the null hypothesis, $H_0: \tau = 0$ and $\gamma = 0$, around the ten-employee cutoff in the proximity determined by the optimal bandwidth. For example, if the optimal bandwidth is four, we estimate the RDD regressions only on firms whose employment sizes are between 6 and 14. We use the mean-squared-error (MSE) optimal bandwidth choice method to determine the bandwidth [25], implemented with the Stata command “rdbwselect” with the “mserd” option. We also report estimates using half the bandwidth and twice the bandwidth as robustness checks.
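Under the same hypothetical names, this step might be sketched as follows; in practice the bandwidth used for the local regressions would be read off the rdbwselect output rather than hard-coded.

    * MSE-optimal bandwidth at the 10-employee cutoff (rdrobust package)
    rdbwselect innov emp2016, c(10) bwselect(mserd)

    * Local estimation within the bandwidth (illustrative h = 4);
    * h/2 and 2h give the half- and double-bandwidth robustness checks
    local h = 4
    regress innov i.treat##c.emp_norm i.educ i.naics2 ///
        if inrange(emp2016, 10 - `h', 10 + `h'), vce(robust)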

An important assumption affecting RDD validity is the absence of manipulation around the RDD cutoff point. In this analysis, it means firms cannot strategically overreport their employment (to ≥ 10 employees) in order to skip the R&D module. Such a time-saving strategy would be most valuable to R&D-performing microbusinesses, firms that tend to have a very high incidence of innovation [26]. This strategic behavior would bias innovation estimates below the threshold downward and estimates above the threshold upward. However, this manipulation is impossible because assignment to the ABS-1 or ABS-2 survey is based on administrative data rather than self-reported employment.

To address statistical power concerns owing to reliance on a small subsample of the data in an attempt to detect a small effect, we also provide RDD power tests to evaluate both the robustness of our findings and the validity of inferences from negative findings [27]. We use the Stata command “rdpow” to conduct the RDD power tests. The approach is based on the continuity and smoothness assumptions and typically uses local polynomial methods.
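A sketch of the power calculation with the same hypothetical variable names, setting the treatment effect under the alternative with the tau() option described in [27]:

    * Power of the RDD test against a hypothesized effect of 0.05
    * (roughly half the BRDIS-ASE effect size, as set in the Results section)
    rdpow innov emp2016, c(10) tau(0.05)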

Results

As the ABS and LBD data are confidential and accessible only inside an FSRDC, our results have undergone a disclosure review process, including rounding of coefficients, standard errors, and observation counts, in accordance with disclosure avoidance rules.

RDD figures

We first present a set of data-driven RDD plots to provide preliminary visual evidence on the existence of a jump in innovation self-reports at the 10-employee cutoff. We use the Stata command “rdplot” to generate the RDD figures. We plot only the fitted values generated from the probit model for the binary outcome, as discussed for Eq (3). In Figs 2–5, the X-axis shows bins by the number of employees, and the Y-axis shows the fitted values for the binary innovation outcome variables. We specify a fourth-order polynomial fit to approximate the population conditional expectation functions for the control and treated units [28]. We show two types of figures for the fitted new-to-market and new-to-business innovation variables: one is a fitted line for the first 50 employment-size bins, and the other is restricted to the RDD bandwidth. Overall, Figs 2–5 demonstrate a smooth transition in innovation self-reports at the 10-employee cutoff.
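A sketch of the corresponding rdplot call, using the probit-fitted outcome (innov_hat) from the earlier sketch:

    * RDD plot of the probit-fitted outcome, fourth-order polynomial fit
    rdplot innov_hat emp2016, c(10) p(4)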

Fig 2. RDD plot for fitted new-to-market innovation.

Notes: ABS 2018 and LBD 2016, authors’ calculations.

https://doi.org/10.1371/journal.pone.0296667.g002

Fig 3. RDD plot for fitted new-to-market innovation restricted to bandwidth.

Notes: ABS 2018 and LBD 2016, authors’ calculations.

https://doi.org/10.1371/journal.pone.0296667.g003

Fig 4. RDD plot for fitted new-to-business innovation.

Notes: ABS 2018 and LBD 2016, authors’ calculations.

https://doi.org/10.1371/journal.pone.0296667.g004

Fig 5. RDD plot for fitted new-to-business innovation restricted to bandwidth.

Notes: ABS 2018 and LBD 2016, authors’ calculations.

https://doi.org/10.1371/journal.pone.0296667.g005

RDD regressions

Tables 2 and 3 show the main RDD regressions for new-to-market innovation and new-to-business innovation, respectively. Each table has three panels: Panel A shows the LPM regression for the binary innovation variable, Panel B the logit regression, and Panel C the regression using probit fitted values. In both tables, column (1) reports the global regression using all firms, column (2) restricts the sample to half the bandwidth, column (3) uses the full bandwidth, and column (4) uses twice the bandwidth. In Table 2 Panel A, column (1) shows a statistically significant point estimate of 0.007 on the RDD treatment variable, but the coefficient estimates in columns (2), (3), and (4) are not statistically significant at the 10% level. Since the RDD method only identifies causal effects around the cutoff (the LATE), the results from the last three columns are the most informative. They all suggest no statistical difference in self-reported innovation below or above the 10-employee cutoff, implying that we are unable to detect substantial survey bias among microbusinesses (fewer than 10 employees) answering R&D and innovation modules relative to all other businesses that did not receive an R&D module. A more definitive statement requires an estimate of the probability of detecting a substantial bias if true, which is provided in the section below. The positive and statistically significant effect in column (1) confirms the common finding that larger firms are more likely to report innovation across the complete sample. Results using the logit and probit fitted outcome variables in Panels B and C are consistent with our findings from Panel A; they all suggest no significant effects at the cutoff. Table 3 reports the new-to-business innovation variable, and we find results similar to Table 2. In Tables 2 and 3, the interaction term in column (2) is dropped from the model automatically, likely because the narrowest bandwidth effectively reduces the normalized running variable to a dummy, making the interaction of the RDD treatment and the normalized running variable a dummy as well. As a sensitivity check, the results remain robust when the interaction terms are omitted.

RDD power tests

RDD estimations yield statistically insignificant coefficients at the 10-employee cutoff, indicating no detectable difference in reported innovation activities around the cutoff. However, statistical inference based on insignificant results requires more elaboration [29], such as estimating the probability of detecting an effect if it were true, given the effect size, other parameters of the test, and the sample size. This is accomplished using RDD power tests [27], with the results presented in Table 4.

Table 4 Panel A shows the power tests for the binary new-to-market innovation variable. The RDD power test specifications are given in the table notes. The full sample contains 334,000 observations (firms) below the cutoff and 130,000 observations above the cutoff, while the effective number of observations narrows to 40,000 and 31,000 using the optimal bandwidth. The bandwidth is 3.844 for Panel A, which means we use firms whose employment sizes are between 6 and 14 in the power calculations.

One of the important parameters we set is τ, which “specifies the treatment effect under the alternative at which the power function is evaluated” [27]. The treatment effect is set at 0.05, roughly half the effect size derived from the analysis of BRDIS and the ASE [5]. We are most interested in the power estimate at 0.5τ, corresponding to an effect size one quarter of that observed in BRDIS. In Table 4 Panels A and C for the binary innovation outcome variables, the power against the alternative of 0.5τ (0.025) is 0.998 and 0.974, respectively, while in Panels B and D, using predicted values for new-to-market and new-to-business innovation, an effect size as small as 0.2τ (0.01) can be detected with near certainty. This statistical result adds credence to the preliminary visual evidence provided in Figs 2–5. Overall, the tests show our RDD results are estimated with a high degree of power against a reasonable range of alternatives, permitting the more definitive conclusion that our nonsignificant results can be interpreted as no bias in self-reported innovation by microbusinesses in the 2018 ABS. If the null result were due to measurement error, this would be reflected in much lower statistical power. Figs 6–9 visually display the power test results shown in Table 4.

Fig 6. RDD power test for new-to-market innovation.

Notes: ABS 2018 and LBD 2016, authors’ calculations. This figure corresponds to the power tests in Table 4 Panel A.

https://doi.org/10.1371/journal.pone.0296667.g006

Fig 7. RDD power test for probit fitted new-to-market innovation.

Notes: ABS 2018 and LBD 2016, authors’ calculations. This figure corresponds to the power tests in Table 4 Panel B.

https://doi.org/10.1371/journal.pone.0296667.g007

Fig 8. RDD power test for new-to-business innovation.

Notes: ABS 2018 and LBD 2016, authors’ calculations. This figure corresponds to the power tests in Table 4 Panel C.

https://doi.org/10.1371/journal.pone.0296667.g008

Fig 9. RDD power test for probit fitted new-to-business innovation.

Notes: ABS 2018 and LBD 2016, authors’ calculations. This figure corresponds to the power tests in Table 4 Panel D.

https://doi.org/10.1371/journal.pone.0296667.g009

Conclusion

Microbusinesses, which make up most of the employer firms in the United States and account for the largest share of the sample included in the ABS, have a large impact on the incidence of innovation reported in the country. Focusing on self-reported innovation activities in surveys, this study examines whether including an R&D module that only microbusinesses receive made these firms less likely to report innovation than firms with 10 or more employees. Past research identifies a significant downward bias in combined innovation-R&D surveys relative to innovation-only surveys [2, 5]. However, differences in survey design between ABS and earlier combined innovation and R&D surveys suggest this bias may have been ameliorated or eliminated. Using RDD tests at the sharp 10-employee threshold, we do not detect a statistically significant bias affecting microbusiness self-reports for either new-to-market or new-to-business product innovation, which is further confirmed with RDD power tests.

The U.S. is unique in fielding a large, nationally representative innovation survey that includes microbusinesses. Most EU countries administering the harmonized Community Innovation Survey limit coverage to firms with 10 employees or more, consistent with guidance from the Oslo Manual [1]. However, this resource has yet to be fully exploited by small business and innovation researchers who are seemingly unaware of R&D and innovation statistics for microbusinesses [30]. Finding that the lower incidence of innovation among microbusinesses is not attributable to survey design should be reassuring to small business researchers. The result gives more credence to analyses investigating alternative explanations for the seeming innovation lag of microbusinesses. Jankowski, Wojan, and Kindlon [31] provide evidence suggesting that the lower innovation rate by microbusinesses is driven by a large number of single-employee firms where the owner-operator does not have a college degree. Among nonperforming R&D microbusinesses that are not characterized in this way, innovation rates were nearly identical to those of small businesses with 10–49 employees. Han, et al. [32] confirm that the effect of cloud computing subscriptions on microbusiness innovation rates is similar to that of other firm size classes.

This analysis can also inform choices within the Federal statistical system in moving to a single enterprise platform for the collection of business information. Plans for the inaugural ABS were released after the NASEM report on reengineering business surveys was drafted but before the report was published. The authors of the report “urge the Census Bureau to use the experience with the ABS to inform the development of our recommended A[nnual]B[usiness]S[urvey]S[ystem]” (National Academies of Sciences, Engineering, and Medicine 2020, p. 159) [9]. Attributes of the 2018 ABS noted in this analysis that may have contributed to data quality in a survey where different respondents receive different modules include: 1) a neutral title; 2) sequencing of question modules from more general to more specific; 3) isolating respondents from potentially irrelevant questions (e.g., employment of PhD scientists and engineers) that may affect responses to relevant questions (e.g., introduction of a new or significantly improved product); and 4) reliance on electronic data collection. Survey mode may be the most important factor, as web-based surveys can easily satisfy the requirement to effectively insulate respondents from potentially irrelevant questions in a manner that is difficult to accomplish with paper surveys. Paper surveys may also encourage strategic responses to skip over modules (e.g., an 8-employee firm reporting 10 employees in the ABS to avoid responding to the R&D module), further reducing data quality and undermining the objectives of an integrated survey [8].

Supporting information

S1 Appendix. Definitions of innovation variables (2018 ABS).

https://doi.org/10.1371/journal.pone.0296667.s001

(DOCX)

References

  1. OECD, Eurostat. Oslo Manual 2018: Guidelines for Collecting, Reporting and Using Data on Innovation, 4th Edition. Paris: OECD Publishing; 2018.
  2. Wilhelmsen L. A question of context: Assessing the impact of a separate innovation survey and of response rate on the measurement of innovation activity in Norway. Oslo, Norway: Statistics Norway; 2012. p. 78. https://www.ssb.no/a/english/publikasjoner/pdf/doc_201251_en/doc_201251_en.pdf
  3. Tourangeau R, Rips LJ, Rasinski K. The Psychology of Survey Response. Cambridge: Cambridge University Press; 2000.
  4. National Academies of Sciences, Engineering, and Medicine. Reengineering the Census Bureau’s Annual Economic Surveys. Abraham KG, Citro CF, White GD, Kirkendall NK, editors. Washington, D.C.: National Academies Press; 2018.
  5. Tian Z, Wojan TR, Goetz SJ. An Examination of the Informational Value of Self-Reported Innovation Questions. Washington, D.C.: The Center for Economic Studies of the U.S. Census Bureau; 2022. p. 25. Report No.: CES 22–46. https://www.census.gov/library/working-papers/2022/adrm/CES-WP-22-46.html
  6. Hoskens M, Delanote J, Debackere K, Verheyden L. State of the art insights in capturing, measuring and reporting firm-level innovation indicators. The OECD Blue Sky Forum on Science and Innovation Indicators; 2016 Sep 21; Ghent, Belgium.
  7. Cirera X, Muzi S. Measuring innovation using firm-level surveys: Evidence from developing countries. Res Policy. 2020;49: 103912.
  8. Statistics Netherlands. ICT, kennis en economie 2012 [ICT, knowledge and the economy 2012]. Den Haag/Heerlen, Netherlands; 2012. https://www.cbs.nl/nl-nl/publicatie/2012/24/ict-kennis-en-economie-2012
  9. National Center for Science and Engineering Statistics (NCSES). Annual Business Survey: Data Year 2017 (NSF 21–303). National Science Foundation; 2020. https://ncses.nsf.gov/pubs/nsf21303/
  10. Dost M, Shah SMM, Saleem I. Mentor expectations and entrepreneurial venture creation: mediating role of the sense of nothing to lose and entrepreneurial resilience. J Entrep Emerg Econ. 2021;14: 1229–1243.
  11. Kumar S, Raj R, Saleem I, Singh EP, Goel K, Bhatia R. The interplay of organisational culture, transformational leadership and organisation innovativeness: Evidence from India. Asian Bus Manag. 2023 [cited 26 Oct 2023].
  12. Triguero A, Córcoles D, Cuerva MC. Persistence of innovation and firm’s growth: evidence from a panel of SME and large Spanish manufacturing firms. Small Bus Econ. 2014;43: 787–804.
  13. Liu S. The Urban–Rural Divide: The Effects of the Small Business Innovation Research Program in Small and Nonmetro Counties. Econ Dev Q. 2022;36: 208–227.
  14. Imbens GW, Lemieux T. Regression discontinuity designs: A guide to practice. J Econom. 2008;142: 615–635.
  15. Cunningham S. Causal Inference: The Mixtape. New Haven; London: Yale University Press; 2021.
  16. Cattaneo MD, Titiunik R. Regression Discontinuity Designs. Annu Rev Econ. 2022;14: 50.
  17. Lee DS, Lemieux T. Regression Discontinuity Designs in Economics. J Econ Lit. 2010;48: 281–355.
  18. Bleemer Z, Mehta A. Will Studying Economics Make You Rich? A Regression Discontinuity Analysis of the Returns to College Major. Am Econ J Appl Econ. 2022;14: 1–22.
  19. Sekhri S. Prestige Matters: Wage Premium and Value Addition in Elite Colleges. Am Econ J Appl Econ. 2020;12: 207–225.
  20. Carpenter C, Dobkin C. The Effect of Alcohol Consumption on Mortality: Regression Discontinuity Evidence from the Minimum Drinking Age. Am Econ J Appl Econ. 2009;1: 164–182. pmid:20351794
  21. Avdic D, Karimi A. Modern Family? Paternity Leave and Marital Stability. Am Econ J Appl Econ. 2018;10: 283–307.
  22. Bronzini R, Iachini E. Are Incentives for R&D Effective? Evidence from a Regression Discontinuity Approach. Am Econ J Econ Policy. 2014;6: 100–134.
  23. Bronzini R, Piselli P. The impact of R&D subsidies on firm innovation. Res Policy. 2016;45: 442–457.
  24. Wang Y, Li J, Furman JL. Firm performance and state innovation funding: Evidence from China’s Innofund program. Res Policy. 2017;46: 1142–1161.
  25. Calonico S, Cattaneo MD, Farrell MH. Optimal bandwidth choice for robust bias-corrected inference in regression discontinuity designs. Econom J. 2020;23: 192–210.
  26. Kindlon A, National Center for Science and Engineering Statistics (NCSES). Microbusinesses Performed $4.5 Billion of R&D in the United States in 2018. Alexandria, VA: National Science Foundation; 2022. Report No.: NSF 22–309. https://ncses.nsf.gov/pubs/nsf22309
  27. Cattaneo MD, Titiunik R, Vazquez-Bare G. Power calculations for regression-discontinuity designs. Stata J. 2019;19: 210–245.
  28. Calonico S, Cattaneo MD, Titiunik R. Optimal Data-Driven Regression Discontinuity Plots. J Am Stat Assoc. 2015;110: 1753–1769.
  29. Abadie A. Statistical Nonsignificance in Empirical Economics. Am Econ Rev Insights. 2020;2: 193–208.
  30. Jones BF, Summers LH. A Calculation of the Social Returns to Innovation. In: Innovation and Public Policy. University of Chicago Press; 2020. pp. 13–59. https://www.nber.org/books-and-chapters/innovation-and-public-policy/calculation-social-returns-innovation
  31. Jankowski JE, Wojan TR, Kindlon A. Microbusiness Innovation in the United States: Making Sense of the Largest and Most Variegated Firm Size Class. In: Gault F, Arundel A, Kraemer-Mbula E, editors. Handbook of Innovation Indicators and Measurement. Second Edition. Cheltenham, UK: Edward Elgar; 2023.
  32. Han L, Wojan TR, Goetz SJ. Experimenting in the cloud: The digital divide’s impact on innovation. Telecommun Policy. 2023;47: 102578.