
Exploring the role of R&D collaborations and non-patent IP policies in government technology transfer performance: Evidence from U.S. federal agencies (1999–2016)

  • Iman Hemmatian,

    Roles Conceptualization, Data curation, Formal analysis, Methodology, Project administration, Software, Visualization, Writing – original draft, Writing – review & editing

    Affiliation College of Business Administration, Cal Poly Pomona, Pomona, California, United States of America

  • Todd A. Ponzio,

    Roles Conceptualization, Investigation, Validation, Writing – original draft, Writing – review & editing

    Affiliation School of Medicine, Wake Forest University, Winston-Salem, North Carolina, United States of America

  • Amol M. Joshi

    Roles Conceptualization, Funding acquisition, Methodology, Resources, Supervision, Writing – original draft, Writing – review & editing

    Affiliation School of Business, Wake Forest University, Winston-Salem, North Carolina, United States of America


Around the world, governments make substantial investments in public sector research and development (R&D) entities and activities to generate major scientific and technical advances that may catalyze long-term economic growth. Institutions ranging from the Chinese Academy of Sciences to the French National Centre for Scientific Research to the Helmholtz Association of German Research Centers conduct basic and applied R&D to create commercially valuable knowledge that supports the innovation goals of their respective government sponsors. Globally, the single largest public sector R&D sponsor is the U.S. federal government. In 2019 alone, the U.S. government allocated over $14.9 billion to federally funded research and development centers (FFRDCs), also known as national labs. However, little is known about how federal agencies’ utilization of FFRDCs, their modes of R&D collaboration, and their adoption of non-patent intellectual property (IP) policies (copyright protection and materials transfer agreements) affect agency-level performance in technology transfer. In particular, the lack of standardized metrics for quantitatively evaluating government entities’ effectiveness in managing innovation is a critical unresolved issue. We address this issue by conducting exploratory empirical analyses of federal agencies’ innovation management activities using both supply-side (filing ratio, transfer rate, and licensing success rate) and demand-side (licensing income and portfolio exclusivity) outcome metrics. We find economically significant effects of external R&D collaborations and non-patent IP policies on the technology transfer performance of 10 major federal executive branch agencies (fiscal years 1999–2016). We discuss the scholarly, managerial, and policy implications for ongoing and future evaluations of technology transfer at federal labs. 
We offer new insights and guidance on how critical differences in federal agencies’ interpretation and implementation of their R&D management practices in pursuit of their respective missions affect their technology transfer performance outcomes. We generalize key findings to address the broader innovation processes of public sector R&D entities worldwide.


As highlighted in a recent study by the World Intellectual Property Organization (WIPO), governments in many countries significantly expanded and accelerated their investments in research and development (R&D) activities as part of their policy responses to the 2009 financial crisis and the 2020 coronavirus outbreak [1]. For example, China is focused on building critical nationwide infrastructure via the construction of advanced data centers, 5G wireless networks, and new energy vehicles [1, 2]. These initiatives are driven primarily by greater public sector investment in the Chinese Academy of Sciences, which in 2018 announced plans to rapidly grow its number of national labs from 200 to 700 by 2020, with a fully operational key lab system expected to be completed by 2025. In another example, specifically for combating the coronavirus, France pledged 5 billion euros in R&D spending, which represents a 25% increase over its original R&D budget for 2020. This effort is led by the 10 research institutes that make up the French National Centre for Scientific Research (CNRS) and receive 80% of all public R&D funds allocated by the French government [3, 4]. In a similar effort within Europe, Germany’s second stimulus package, which targets COVID-19 recovery, features 50 billion euros of R&D investments in a wide array of future-focused technologies. These funds are directed towards R&D projects conducted by three distinct networks of federal- and state-sponsored labs, which include the Helmholtz Association of German Research Centers, the Max Planck Institutes, and the Fraunhofer Institutes [5, 6]. In other countries such as Turkey [7], India [8], and Israel [9], governments initiated similar programs to promote technology commercialization and spark growth. 
Despite the considerable differences in political systems and economic priorities across China, France, Germany, Israel, Turkey, and India, what their respective public sector R&D entities all have in common are clearly defined government mandates to pursue scientific and technical breakthroughs that may fuel long-term growth, prosperity, and security.

In line with its counterparts in the aforementioned countries, but on an even broader scale, the single largest public sector R&D sponsor in the world is the U.S. federal government, which has a similar pro-growth mandate to drive scientific discovery, develop new knowledge, promote technical standards, and generate useful innovations. For instance, in 2019 alone, the U.S. government funded an estimated total of $141.5 billion in R&D expenditures [10], which “plays an irreplaceable role in directing technology toward more general and active domains” [11, 12]. Approximately 27% or $39.6 billion of this total is intramural R&D conducted internally by federal agencies, while the bulk of this funding, 73% or $101.9 billion, is allocated to R&D conducted externally by for-profit corporations and nonprofit organizations. Within the extramural R&D allocation, industry represents $43.6 billion, universities receive $33.4 billion, and contractor-operated federally funded research and development centers (FFRDCs, many of which are called ‘national labs’ or ‘federal labs’) account for $14.9 billion. Although there is an established stream of prior research on university-industry technology transfer [13–20], far less is known about technology transfer at national/federal labs and non-university research institutes. Despite governments’ consistently large and increasing budget allocations to public sector R&D entities and activities within their respective countries, there appears to be inconsistent and limited use of quantitative metrics for measuring performance outcomes related to creating and commercializing new technologies.

The dearth of research in this area is surprising because national labs are an essential component of the core systems of innovation in the U.S. and around the world [21, 22]. We aim to extend prior research in a new direction by exploring how government-industry technology transfer at national/federal facilities may differ from university-industry technology transfer along critical dimensions, especially in terms of the identification, adoption, and usage of appropriate performance metrics. Indeed, an evaluation of institutional policies and practices at these R&D facilities may yield new managerial and theoretical insights for improving the effectiveness of existing government-supported technology transfer processes. Our study investigates the following research question: How do differences in R&D policy implementation across federal agencies affect their technology transfer activities and performance? We believe that obtaining empirical evidence to answer this question is timely, relevant, and strategically important as governments around the world continue to expand the scale and scope of their funding for public sector R&D. The central premise of our study is that two main elements of federal agencies’ varying approaches to innovation management directly influence technology transfer performance: (1) their engagement in external R&D collaborations, through formal or informal partnership agreements; and (2) their adoption of policies for sharing non-patented intellectual property (IP). Although the importance of participating in external R&D collaborations and adopting policies for handling IP are routinely incorporated into existing research on university-industry transfer [23, 24], these factors are not yet systematically integrated into the emerging stream of research on government-industry technology transfer [25]. 
Our study aims to contribute to the nascent literature on federal technology transfer by providing a conceptual framework, expanded metrics to measure successful technology transfer, and fresh empirical evidence to guide scholars, managers, and policymakers in their evaluation of R&D commercialization processes.

Our study differs from previous studies in two critical ways. First, unlike prior research on government and academic technology transfer that focuses primarily on protecting proprietary technologies through patenting activities [26, 27], we examine the importance of external R&D collaborations and sharing proprietary technologies through non-patent IP policies such as copyrights and materials transfer agreements (MTAs). Second, in contrast to the emerging set of studies on federal technology transfer that use only agency-driven supply-side metrics to evaluate agency performance [28], we introduce customer-driven demand-side metrics and integrate both types of measures into our empirical analyses. For example, beyond the traditional supply-side metrics of filing ratio, transfer rate, and licensing success rate [29, 30] that capture a producer’s ability to push technologies into the commercial marketplace, we use the demand-side metrics of licensing income and portfolio exclusivity that capture a customer’s willingness to pull technologies out of government labs [31]. By incorporating external R&D collaborations and non-patent IP policies as predictors and demand-side metrics as outcomes in our models, we seek to provide a more holistic picture of federal technology transfer performance at the agency level.

We organize our study by first explaining the historical context of key legislative acts and proposing a conceptual framework. We then conduct a set of exploratory analyses that offer initial empirical evidence for the effects of external R&D collaborations and non-patent IP policies on the technology transfer performance of 10 major federal executive branch agencies (fiscal years 1999–2016). Overall, when we specifically examine agencies’ use of formal agreements for partnerships, we find a positive and significant relationship between this type of external R&D collaboration and all of our supply-side metrics (filing ratio, transfer rate, and licensing success rate), as well as portfolio exclusivity on the demand-side. In contrast, agencies’ use of other types of customized and informal external R&D collaborations appears to be associated with two main effects on the demand-side: (1) a significant decrease in licensing income; and (2) a simultaneous and somewhat surprising corresponding increase in portfolio exclusivity. We find evidence that agencies with a greater utilization of FFRDCs have lower agency-driven technology transfer performance in terms of supply-side metrics. On the demand-side, we find that greater FFRDC utilization is associated with greater licensing income and lower portfolio exclusivity.

We also find the adoption of non-patent IP policies to be associated with substantial and economically significant shifts in the supply-side and demand-side metrics. While the supply-side effects are similar for copyright agreements and MTAs, the demand-side effects are different. External R&D collaborations appear to further amplify these observed effects. In sum, our findings indicate that federal technology managers must carefully consider the combined effects of external R&D collaboration and non-patent IP policies when formulating their respective agencies’ technology transfer plans and programs. Based on these findings, we discuss the scholarly, managerial, and policy implications for ongoing and future evaluations of federal agencies’ technology transfer performance. Beyond U.S. federal agencies, we also consider how our key findings may inform possible innovation process improvements and policy reforms for public sector R&D entities and activities in other countries and contexts.

Historical context and conceptual framework

Historical context

The Bayh-Dole Act of 1980 (Bayh-Dole), the Stevenson-Wydler Technology Innovation Act of 1980 (Stevenson-Wydler), and the Federal Technology Transfer Act of 1986 (FTTA of 1986) are the cornerstones of the legal foundations for federal agencies’ interpretation and implementation of their R&D management practices and technology transfer activities in pursuit of their respective missions [32]. Bayh-Dole allows non-profits and small businesses to keep title to inventions made using federal government funding. Enacted in December of 1980, Bayh-Dole—which made changes to U.S. Code (USC) title 35, i.e., the “Patents” chapter—also explicitly authorized federal agencies to “grant exclusive or partially-exclusive licenses to patents, patent applications, or other forms of protection obtained” (Codified as amended at 35 USC 200 et seq.). The importance of Bayh-Dole (PL 96–517) on innovation policy is well-documented in the existing literature and is prominently featured in numerous studies of federal technology transfer, academic entrepreneurship, and the commercialization of university-owned patents [33–36].

Prior to Bayh-Dole, the granting of an exclusive license was a lengthy and cumbersome endeavor, with specific requirements differing by agency. In practice, exclusive licenses were almost never granted [37]. The elaborate process for the U.S. Navy, considered among the more forward-thinking agencies at the time [38], involved advertising the patent in three different publications (the U.S. Patent Office, the Federal Register, and at least one other publication of choice) for a period of at least six months, followed by a further 60-day public notice of a prospective exclusive license [39]. Given the lengthy processes and bureaucratic hurdles, agencies would typically grant only non-exclusive licenses, which made a prospective licensee’s business decision of investing in federally-owned patented technologies far riskier. As a result, the government appeared to be hoarding around 30,000 unlicensed patents, and potentially useful and valuable new technologies were not being commercialized. Hence, prior to the enactment of Bayh-Dole, judging by patent licensing activity, federal technology transfer appeared to be at a complete standstill. For example, in 1976 alone, only about 150 patents were licensed by all federal agencies from over 2,000 issued patents [39]. With such a small fraction of patents actually being licensed for commercial use, proposed reforms recommended offering a degree of exclusivity in licensing through a unified and more streamlined process to accelerate the commercialization of unlicensed inventions [40, 41]. By explicitly authorizing federal agencies to exclusively license inventions, Bayh-Dole aimed to address this issue.

Although it is less well-known than Bayh-Dole, an equally important piece of legislation is Stevenson-Wydler (PL 96–480), which was enacted fifty-two days earlier and likewise signed into law by President Carter. This law made changes to title 15, the “Commerce and Trade” chapter, and focused on using the federal labs’ R&D capabilities and resulting technologies to more directly benefit citizens of the U.S. by moving those technologies to the private sector (Codified as amended at 15 USC 3701 et seq.). Of the two laws, Stevenson-Wydler is the only one to explicitly mention “technology transfer” and to make technology transfer a codified mission of the federal labs by requiring that each agency “strive where appropriate to transfer federally owned or originated technology to State and local governments and to the private sector [42].”

Agencies were legally bound to establish an Office of Research and Technology Applications (ORTA) at each lab, staff the office with at least one full-time professional, and devote not less than 0.5% of the agency’s R&D budget to support the technology transfer function. A waiver for the 0.5% budgetary requirement was built into the law, and essentially all of the agencies requested waivers; the requirement was dropped when the law was amended in 1986 [43–45]. From the beginning, the implementation of technology transfer legislation at federal agencies and national labs had its supporters [46, 47] and skeptics [48]. Part of the ongoing debate about the effectiveness of these policies and laws arose from the lack of a uniform, standardized set of metrics for consistently monitoring and measuring technology transfer performance across agencies, labs, and teams:

“There is no single, obvious way to measure the success of tech transfer that everyone has somehow been missing. Metrics themselves should be seen as experimental, and their impact needs to be monitored. At the same time, metrics should not be altered lightly because stability is needed to make comparisons over time.”

     RFI Response, Massachusetts Institute of Technology [49]

The enactment of Stevenson-Wydler required federal agencies to incorporate technology transfer activities directly into their respective missions and to allocate dedicated resources, but it did not adequately define appropriate metrics for measuring these activities across agencies. In the 1980s, several academic studies indicating that the U.S. faced a risky competitive decline in innovation, along with the growing recognition that many national labs had specialized facilities and equipment that could be leveraged to support innovation more broadly, prompted renewed attention to federal technology transfer [50–52]. In response, Congress passed the FTTA of 1986. Signed into law by President Reagan, the legislation amended Stevenson-Wydler in a number of noteworthy ways, including moving licensing from being handled centrally by agency managers to being handled by the inventing lab, establishing a Federal Laboratory Consortium, and, most importantly, authorizing the use of Cooperative Research and Development Agreements (CRADAs) by the labs.

A CRADA is a contractual agreement between one or more federal laboratories and one or more non-federal entities “under which the Government, through its laboratories, provides personnel, services, facilities, equipment, intellectual property, or other resources with or without reimbursement (but not funds to non-Federal parties) and the non-Federal parties provide funds, personnel, services, facilities, equipment, intellectual property, or other resources toward the conduct of specified research or development efforts which are consistent with the missions of the laboratory” (15 USC 3710a). Shortly after the law came into effect, CRADAs experienced explosive growth and rapidly came to dominate the formal channels of federal technology transfer [22, 53–55].

In sum, the enactment of Bayh-Dole, Stevenson-Wydler, and the FTTA of 1986 established some of the guiding principles that remain influential today in shaping how federal agencies’ employees manage R&D activities. All three pieces of legislation address key issues regarding the ownership, transfer, and sharing of knowledge generated by federal labs. Bayh-Dole emphasized the importance of licensing out government-owned inventions. Stevenson-Wydler established the requirement that federal agencies explicitly incorporate technology transfer into their missions. The FTTA of 1986 broadly authorized resource-leveraging agreements in the form of CRADAs to facilitate collaboration with private sector partners. These laws collectively provide broad authorization and guidance to agencies conducting R&D on how to execute their mission with a focus on technology transfer.

Conceptual framework

Prior research argues that an organization’s performance in managing R&D is influenced by: (1) the types of problems that the organization seeks to solve; and (2) the forms of governance that the organization intends to implement [56]. From the perspective of managers and policymakers, improving organizations’ technology transfer performance requires matching the governance form with the problem type [56–58]. Previous findings indicate that misaligned governance forms and problem types may be costly to organizations in terms of lost time and wasted resources and may be harmful in ways that are economically significant [59, 60]. Thus, the use of insufficient or unsuitable governance forms for problem-solving may inhibit or impede the inventive output and commercial activities that organizations ultimately seek to generate through producing new knowledge and engaging in R&D collaborations [61].

In the context of U.S. federal government technology transfer, we contend that evaluating the possible performance consequences of choosing governance forms that are congruent with problem types is especially important for producing useful knowledge [62–64]. The federal government and its individual executive branch agencies play a vital role in the production of scientific knowledge as a public good [65]. The core issue of allocation associated with knowledge [66, 67] as a public good is exacerbated by the fact that elected leaders, agency officials, taxpaying citizens, and private sector firms are all interested stakeholders in public sector R&D programs administered by a representative bureaucracy [68]. These divergent stakeholder groups may fail to reach a policy consensus because they are often driven by competing or conflicting political, social, and economic motivations for either supporting or opposing government R&D investment in creating national capacity for scientific problem-solving [69–71]. We argue that the frequent failure to achieve a broad-based policy consensus is due in part to a lack of standardized and consistent government-wide performance metrics for quantifying the value of technology transfer outcomes at the agency level [29]. Furthermore, key constituencies such as taxpaying citizens, industry associations, advocacy groups, and media watchdogs may express valid concerns about only being able to observe the amount of financial resources allocated to and invested in public sector R&D activities without being able to observe any eventual commercial outcomes or actual economic impacts [72]. In other words, in the absence of observable outputs with quantifiable benefits, these constituencies tend to focus on the known costs of observable inputs.
Hence, we contend that because the value of producing scientific knowledge and sharing it as a public good is usually difficult to ascertain [73], this makes actually introducing, reforming, replacing, or eliminating innovation programs difficult for public sector organizations such as federal agencies to initially justify and eventually operationalize [74].

To specifically address the challenge of formulating and executing evidence-based policies for managing government-funded innovation, our study attempts to develop managerially sound and empirically observable metrics that capture the implementation of certain salient governance forms within the distinct problem types encountered by federal agencies. We believe that these tools may help scholars, managers, and policymakers who are seeking further transparency and greater accountability when examining federal agencies’ management of public resources for R&D undertaken in the national interest.

It is generally easier to assess the potential commercial value of knowledge that is more applied and practical versus knowledge that is more basic and theoretical. For example, an optical sensor technology that can accurately detect faint objects from a great distance might enhance existing commercial products such as satellites. However, a new underlying theory that predicts the quantum effects of gravity on bending light from distant stars may be considerably more difficult to immediately use in commercial applications including satellites. In addition, knowledge that spans multiple scientific domains is more interdependent because it requires synthesizing discoveries across distinct disciplines. Greater knowledge interdependence is often associated with fundamental breakthroughs as opposed to incremental innovations. For example, nuclear physicists, computer scientists, and medical researchers from a range of national labs collaborated to originally prototype and develop medical imaging technologies that use radioisotopes to detect and treat cancerous tumors. Both dimensions–applied knowledge and interdependent knowledge–affect researchers’ abilities to solve scientific problems. Consistent with the prior literature, we posit that the scientific problems that organizations face may be characterized along two critical dimensions: the use of applied knowledge [75, 76] and the interdependence of the knowledge [77] required to solve the problem.

Hence, effectively solving a problem is dependent on knowing how to navigate the interdependence and knowing where to search for the relevant knowledge [78]. Compared to major R&D-intensive multinational corporations in the private sector, major R&D-intensive federal agencies are typically tasked by their leaders to engage in larger-scale and longer-term scientific endeavors that may have no immediate commercial value but that are nonetheless expected to be strategically important for the future of the country as a whole. FFRDCs and external (non-federal) R&D partners may possess unique resources and capabilities that are essential for commercializing scientific knowledge. This implies that utilizing FFRDCs and engaging in formal and informal R&D partnerships [79, 80] are potentially useful means for federal agencies to transfer technologies in fulfillment of their missions [81–83].

Leading R&D-intensive federal agencies–including the U.S. Department of Energy, the Department of Defense (DOD), the Department of Health and Human Services (HHS), the National Aeronautics and Space Administration (NASA), and the U.S. Department of Agriculture (USDA)–independently pursue their own portfolios of scientific research characterized by varying levels of applied versus basic knowledge and varying levels of knowledge interdependence across domains [84]. Analyzing similarities and differences across these agencies provides insight into how to navigate this interdependence and precisely where to search for the relevant knowledge. We argue that some of these similarities and differences are readily apparent in each agency’s decisions regarding the appropriate set of governance forms to apply in solving the types of scientific problems that are central to fulfilling their respective missions. These governance forms typically include a broad array of knowledge sharing processes and structures–from standardizing contracts, to forming alliances, to offering grants and prizes, to leveraging crowdsourced platforms, to building global user communities [85–88].

Unlike prior research on government and academic technology transfer, which focuses primarily on protecting proprietary technologies through patenting activities, here we examine the importance of engaging in external R&D collaborations (such as FFRDC utilization, traditional CRADAs, and other R&D collaborations) and sharing proprietary technologies through non-patent IP policies such as copyrights and MTAs [89, 90], and we evaluate the effects of these agency-specific policies on performance measures. Because solving difficult scientific problems frequently requires the development of technical artifacts such as software, datasets, and materials as complementary resources and tools, we examine this aspect of knowledge sharing. In addition, “increased linkages to and knowledge flows from various external partners, particularly in uncertain environments, lead to improved innovation outcomes” [91].

“Although copyright can protect computer software, under the Government Works exception, the U.S. Government is prohibited from claiming copyright in the United States in any works prepared by officers or employees of the Federal Government in the course of their official duties” [49]. “MTAs may apply to anything from materials that are simply under the control of the originator but have no formal intellectual property rights attached to them to proprietary materials protected by patents and trade secrets” [92]. Defining policies for the use of copyrights and MTAs presents an interesting dilemma for managers of federal agencies. On the one hand, potential external R&D partners and prospective licensees often need access to accompanying software, datasets, materials, and associated know-how to be able to further develop and possibly commercialize technologies transferred from federal labs. On the other hand, if the copyright status or property transfer rights of the software, datasets, materials, or know-how is unclear or infeasible due to the agency’s interpretation and implementation of federal statutes, this may hinder the subsequent use of these essential technical artifacts in the development of commercial applications in the private sector.

In sharp contrast to the emerging set of studies on federal technology transfer that utilize only supply-side performance metrics [28], we propose the use of demand-side metrics as well [93, 94]. For example, previous research recommends measuring the filing ratio, transfer rate, and licensing success rate to evaluate the effectiveness of federal technology transfer efforts [29, 30]. A key limitation of all of these metrics is that they represent the potential market readiness of the technologies solely from the selling producer’s perspective. We attempt to overcome this limitation by extending the relevant metrics to include the licensing income generated and the degree of portfolio exclusivity associated with income-bearing licenses. We contend that licensing income and portfolio exclusivity represent actual market acceptance of the technologies from the paying customer’s perspective. In other words, filing ratio, transfer rate, and licensing success rate reflect an agency’s ability to prudently manage and convert its technologies into accessible artifacts, while licensing income and portfolio exclusivity reflect actual customers’ market adoption of the technical artifacts generated by the agency. In keeping with the emphasis of recent administrations on Lab to Market (Lab2M) initiatives across the entire U.S. federal government, we believe a balanced usage of both supply-side and demand-side performance metrics is warranted. A holistic appraisal of supply- and demand-side metrics may help stakeholders better understand technology transfer performance and drive a policy consensus.
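To make the distinction between the two metric families concrete, the following minimal sketch computes supply-side and demand-side measures from hypothetical agency-year counts. The ratio definitions and field names here are our own illustrative assumptions, not the paper's exact operationalizations:

```python
# Illustrative sketch: supply-side ("push") vs. demand-side ("pull") technology
# transfer metrics from hypothetical agency-year counts. The ratio definitions
# below are assumptions for illustration only.

def supply_side_metrics(disclosures, applications_filed, patents_issued,
                        licenses_executed):
    """Producer-oriented metrics (assumed definitions)."""
    return {
        # Share of invention disclosures that become patent applications.
        "filing_ratio": applications_filed / disclosures,
        # Share of issued patents that are licensed out.
        "transfer_rate": licenses_executed / patents_issued,
        # Share of filed applications that yield an executed license.
        "licensing_success_rate": licenses_executed / applications_filed,
    }

def demand_side_metrics(total_license_income, exclusive_income_licenses,
                        income_bearing_licenses):
    """Customer-oriented metrics (assumed definitions)."""
    return {
        "licensing_income": total_license_income,
        # Share of income-bearing licenses that are exclusive
        # or partially exclusive.
        "portfolio_exclusivity": exclusive_income_licenses / income_bearing_licenses,
    }

# Hypothetical agency-year counts:
supply = supply_side_metrics(disclosures=400, applications_filed=120,
                             patents_issued=80, licenses_executed=60)
demand = demand_side_metrics(total_license_income=2_500_000,
                             exclusive_income_licenses=15,
                             income_bearing_licenses=50)
print(supply)  # {'filing_ratio': 0.3, 'transfer_rate': 0.75, 'licensing_success_rate': 0.5}
print(demand)  # {'licensing_income': 2500000, 'portfolio_exclusivity': 0.3}
```

The split mirrors the argument above: the first function captures how efficiently an agency converts disclosures into licensable artifacts, while the second captures what paying customers actually do with those artifacts.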

Managing federal technology transfer performance

The historical context and conceptual framework we proposed above suggest that there is a set of practical and actionable ideas that should be explored empirically to provide federal technology managers with an initial sense of the potential direction and magnitude of the possible effects of key innovation management decisions under their control. For example, when allocating their annual R&D budgets and aligning these budgets with their missions, federal technology managers are empowered to determine their respective agencies’ utilization of FFRDCs. Instead of using classical contracts or CRADAs, greater reliance on FFRDCs for conducting R&D might plausibly reduce supply-side technology performance while boosting demand-side performance. This is because the generation of key knowledge artifacts becomes more decentralized and more distributed, though still without a commercial sector nexus, when it is shifted from agency-operated sites to contractor-operated FFRDCs. However, this same shift may advance the technologies closer to commercialization by mitigating scientific or technical risk, which may enhance demand-side performance.

In addition, greater use of formal R&D agreements such as portfolios of CRADAs [95] as well as informal R&D partnerships, which feature public sector agency and private sector firm (government-industry) relationships and interactions, may have different effects on supply-side performance. Partnership agreements such as CRADAs may facilitate improved filing ratios, transfer rates, and licensing success rates by clearly specifying partnership terms and conditions and IP rights in a structured and consistent manner that is straightforward for external R&D partners to grasp and implement. In contrast, non-standardized forms of external R&D partnerships such as one-off custom collaborative agreements may reduce supply-side performance by constraining technology managers’ options for IP protection [96] and increasing the complexity of licensing arrangements.

The implementation of non-patent IP policies varies considerably across federal agencies. If an agency’s innovation activities generate substantial amounts of software, data, and expertise that are useful and valuable to the private sector, then non-patent IP policies such as copyright agreements are directly relevant to the agency’s technology transfer performance. Similarly, if an agency’s innovation activities involve unique or difficult-to-obtain proprietary materials such as biological samples or chemical compounds that are of commercial importance, then MTAs are also directly relevant to the agency’s performance. Differences in the adoption of copyright agreements and MTAs may account for observed differences between supply-side and demand-side metrics.

Methodology and data

Data sources and collection procedures

Our sample timeframe is from 1999–2016 and our unit of analysis is at the federal agency level. Under the Technology Transfer Commercialization Act of 2000 (TTCA of 2000, PL 106–404), U.S. federal agencies are required to report certain measurements annually. Under this statute (15 USC 3710(f)(2)(B)), the specific parameters are: (1) the number of patent applications filed; (2) the number of patents received; (3) the number of fully-executed licenses that received royalty income in the preceding fiscal year, categorized by whether they are exclusive, partially-exclusive, or non-exclusive; (4) the time elapsed from the date the license was requested by the licensee in writing to the date the license was executed; (5) the total earned royalty income; (6) what disposition was made of the income described in the clause; (7) the number of licenses terminated for cause; and (8) any other parameters or discussion that the agency deems relevant or unique to its practice of technology transfer (in practice, (4) and (6) are not actually reported by U.S. federal agencies). In addition, agencies also report the number of invention disclosures, as well as the number of CRADAs.

We use variables collected from three online databases managed by the U.S. federal government. The National Science Foundation (NSF) maintains two of these databases: (1) the National Center for Science and Engineering Statistics (NCSES) database; and (2) the FFRDC R&D Survey. The remaining database comprises the National Institute of Standards and Technology (NIST) Annual Technology Transfer Reports, managed by the U.S. Department of Commerce (DOC).

NIST Annual Technology Transfer Reports contain information about CRADAs, invention disclosures and patenting, portfolio profiles of active licenses, and income from licensing. We collected our data from the FY 2003–2019 annual reports, which cover technology transfer activities over the eighteen-year period 1999–2016, inclusive.

The FFRDC R&D Survey is the main source of information on R&D expenditures at FFRDCs in the U.S. These annual surveys categorize R&D spending by: (1) source of funds (federal, state and local, business, nonprofit, or other); (2) federal agency source; (3) type of R&D (basic research, applied research, or development); (4) type of costs (salaries, software, equipment, subcontracts, and other direct or indirect costs); and (5) total operating budget.

The NCSES database provides statistical data on the breakdown of R&D expenditures at FFRDCs (university-administered, nonprofit-administered, and industry-administered) by source of funds. Specifically, NCSES indicates the R&D funds the federal government allocates to state and local governments, businesses, nonprofit organizations, or other federal and non-federal entities.


We use the NIST, NSF-NCSES, and NSF-FFRDC R&D databases to construct and compute the following dependent, explanatory, and control variables, as described below.

Dependent variables.

Our analysis spans five primary dependent variables that represent technology transfer activities managed by federal agencies [97]. The first set of dependent variables serves as supply-side metrics for evaluating technology transfer outcomes. The Filing Ratio is the number of patent applications filed divided by the total number of new invention disclosures in a given fiscal year [29]. Transfer Rate is the number of newly granted patent licenses divided by the total number of filed patent applications in a given fiscal year [29]. Licensing Success Rate is the number of licenses granted divided by the total number of invention disclosures received by an agency’s Technology Transfer Office (TTO) in a given fiscal year [30].

The second set of dependent variables captures demand-side metrics for evaluating the technology transfer performance of federal agencies. License Income (log) is the base 10 logarithm of the annual dollar amount of licensing income compiled and reported by NIST in a given fiscal year [97, 98]. Portfolio Exclusivity is the total annual number of active exclusive licenses divided by the total number of active income-bearing licenses reported by NIST in a given fiscal year [97, 99].
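All five dependent variables are simple ratios or logarithms of reported counts. As a minimal sketch (the function and argument names are ours, and the figures below are made up for illustration, not drawn from NIST data), the computations reduce to:

```python
import math

def tech_transfer_metrics(disclosures, applications, new_licenses,
                          license_income, exclusive_active, income_bearing_active):
    """Agency-year technology transfer metrics as defined above.

    Argument names are ours; real inputs come from the NIST annual
    reports for a given agency and fiscal year.
    """
    return {
        # Supply-side metrics
        "filing_ratio": applications / disclosures,
        "transfer_rate": new_licenses / applications,
        "licensing_success_rate": new_licenses / disclosures,
        # Demand-side metrics
        "license_income_log": math.log10(license_income),
        "portfolio_exclusivity": exclusive_active / income_bearing_active,
    }

# Made-up agency-year for illustration only (not NIST data):
m = tech_transfer_metrics(disclosures=100, applications=61, new_licenses=34,
                          license_income=1_000_000,
                          exclusive_active=12, income_bearing_active=40)
print(m["filing_ratio"], m["licensing_success_rate"])  # 0.61 0.34
```

Note that the three supply-side metrics share counts: Transfer Rate times Filing Ratio equals Licensing Success Rate, which anticipates the high correlations among these measures reported later.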

Explanatory variables.

Our two sets of explanatory variables represent federal agencies’ R&D collaborations and non-patent IP policies. The first set of explanatory variables measures the extent of federal agencies’ external R&D collaborations. FFRDC Utilization (%) is the percentage of FFRDCs utilized by a given agency in a given year. It is important to note that some agencies fund R&D projects conducted at FFRDCs that are primarily aligned with other agencies; for example, the DOD funds research at multiple FFRDCs associated with the DOE. There are many types of CRADAs, and here we focus on those that formalize a substantive collaboration, the traditional CRADAs. Traditional CRADAs (log) is the base 10 logarithm of an agency’s number of traditional CRADAs. Other R&D Collaborations (log) is the base 10 logarithm of the number of collaborative R&D relationships that are not traditional CRADAs. These measures capture different dimensions of the size and scale of an agency’s R&D partner portfolio.

Our second set of explanatory variables defines federal agencies’ adoption of policies for non-patented IP. Copyright Policy is a binary variable that equals 1 if the federal agency adopted a copyright policy as a mechanism for technology transfer, as disclosed by NIST, and 0 otherwise. Materials Transfer Policy is a binary variable that equals 1 if the federal agency adopted MTAs as a mechanism for technology transfer, as disclosed by NIST, and 0 otherwise.

Control variables.

To account for alternative explanations, we include a set of control variables at the U.S. federal agency level that may plausibly influence technology transfer activities. R&D Asset Intensity (%) is the percentage of total annual R&D obligations allocated to R&D plants in the form of fixed assets such as specialized facilities. R&D Applied Knowledge (%) is the dollar-weighted percentage index of R&D expenditure in Applied R&D activities as a percentage of the total R&D budget disclosed to NSF-NCSES in a given fiscal year. The following control variables account for agencies’ allocation of R&D to other entities. R&D to Universities (%) is the percentage of total annual R&D obligations performed by academic institutions (universities and colleges, excluding FFRDCs). R&D to State and Local Govt. (%) is the percentage of total annual R&D obligations performed by state and local governments.

Model specification

Because of the nature of our data (repeated observations across agencies over multiple years), we follow a panel data methodology. To address unobserved heterogeneity, we estimate random effects panel regressions using the xtreg command in Stata 15 statistical software [100]. The large chi-square value from the Breusch-Pagan test reveals heteroskedasticity; we therefore run our models with the vce(robust) option, which computes robust standard errors and accounts for the clustering of observations by agency [101].
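For concreteness, the specification described above can be sketched in Stata syntax. This is illustrative only: the variable names are ours, while the estimator (xtreg with the re option) and the vce(robust) option are those named in the text.

```stata
* Illustrative random-effects panel specification (variable names are ours).
xtset agency_id fiscal_year
xtreg transfer_rate ffrdc_util_pct crada_log other_collab_log ///
    copyright_policy mta_policy rd_asset_pct rd_applied_pct ///
    rd_univ_pct rd_stategov_pct, re vce(robust)
```

With xtset panel data, vce(robust) in xtreg yields standard errors clustered by the panel variable (here, agency), matching the clustering described above.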


Descriptive statistics and correlations

Table 1 below lists the means, standard deviations, minimums, maximums, and correlations among the variables in our models for the full sample (FY 1999–2016). The average Filing Ratio of federal agencies in our sample is 61%, which is consistent with the industry-wide benchmark for universities [102]. The mean Transfer Rate, a measure of effectiveness, over the 18-year period is about 75%. This is significantly higher than the 42% industry-wide norm [29] but is driven by long-running success at the Department of Commerce (see DOC Transfer Rates, 18 yrs and 5 yrs, in Table 3). The average Licensing Success Rate, a measure of efficiency, is 34%, which is higher than the 25% industry-wide benchmark and reflects the efficiency of technology transfer at federal agencies [30]. We note that our sample is a longitudinal panel spanning fiscal years 1999–2016; hence our benchmarks may vary from the results reported in the previous cross-sectional studies cited in the literature. Here, too, the number is driven by historical success, again largely at the DOC. The mean values for FFRDC Utilization (%), Traditional CRADAs (log), and Other R&D Collaborations (log) are 19.34%, 1.96, and 1.25, respectively, indicating the average strength of R&D collaborations at federal agencies between 1999 and 2016. In terms of non-patent IP policies, about 29% of agency-year observations reflect an adopted copyright policy (Copyright Policy) and about 48% reflect adopted MTAs (Materials Transfer Policy).

Table 1. Descriptive statistics and correlations (full sample 1999–2016).

As shown in Table 1, the largest significant correlation is 0.92, between Licensing Success Rate and Transfer Rate. This is expected, since both contain the number of licenses in the numerator and differ only in the denominator: disclosures for Licensing Success Rate and filed patent applications for Transfer Rate. The high correlation reflects the historically patent-centric model of licensing and is not a concern, since the two dependent variables are never used in the same regression model. We also check the variance inflation factor (VIF) for all variables in our models. The average VIF is 1.63 and no variable exceeds 2.08, indicating that multicollinearity is not an issue in any of our models. Table 2 below shows the descriptive statistics for the last 5 years of our sample (FY 2012–2016, inclusive).
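The VIF diagnostic described above can be reproduced from a correlation matrix alone, using the standard identity that the j-th VIF is the j-th diagonal entry of the inverse correlation matrix of the predictors. A minimal stdlib sketch (function names are ours, not from any statistics package):

```python
def invert(mat):
    """Gauss-Jordan inversion of a small square matrix (list of lists)."""
    n = len(mat)
    # Augment with the identity matrix, then reduce to [I | inverse].
    a = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(mat)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[pivot] = a[pivot], a[col]
        p = a[col][col]
        a[col] = [x / p for x in a[col]]
        for r in range(n):
            if r != col and a[r][col]:
                f = a[r][col]
                a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    return [row[n:] for row in a]

def vifs(corr):
    """VIF for each predictor: the diagonal of the inverse correlation matrix."""
    inv = invert(corr)
    return [inv[i][i] for i in range(len(corr))]

# Two predictors correlated at r = 0.6: each VIF = 1 / (1 - 0.6**2) = 1.5625,
# comfortably below the ~2.08 maximum reported above.
print(vifs([[1.0, 0.6], [0.6, 1.0]]))
```

For two predictors this reduces to the familiar 1 / (1 - r²); with more predictors the matrix form accounts for each variable's joint relationship with all the others.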

Table 3 presents summary averages of our variables broken down by the 10 agencies for fiscal years 1999–2016 and 2012–2016. We elaborate on the interpretation of this agency-level summary (Table 3) in the discussion section of the paper.

Summary of results

We present the results of our analyses of federal agency technology transfer performance in Table 4. As shown in Table 4, a one percentage-point increase in FFRDC Utilization (%) is associated with a 4.76% decrease (Model 2: β = -0.0476, p<0.01) in Transfer Rate, a 1.89% decrease (Model 3: β = -0.0189, p<0.01) in Licensing Success Rate, a 3.15% increase (Model 4: β = 0.0315, p<0.01) in License Income (log), and a 0.44% decrease (Model 5: β = -0.0044, p<0.01) in Portfolio Exclusivity. In Table 4 (Model 1), the coefficient for FFRDC Utilization (%) is negative (β = -0.0012) but not significant for Filing Ratio.

Table 4. The effects of R&D collaborations and non-patent IP policies on federal agency technology transfer performance metrics.

As shown in Table 4, we find that a one order-of-magnitude (10-fold) increase in an agency’s Traditional CRADAs is associated with a 23.52% increase (Model 1: β = 0.2353, p<0.01) in Filing Ratio (indicating less active management), a 39.40% increase (Model 2: β = 0.3940, p<0.05) in Transfer Rate, a 20.46% increase (Model 3: β = 0.2046, p<0.01) in Licensing Success Rate, and an 8.14% increase (Model 5: β = 0.0814, p<0.05) in Portfolio Exclusivity. In Table 4 (Model 4), the coefficient for Traditional CRADAs (log) is negative (β = -0.0945) but not significant for License Income (log).

A one order-of-magnitude (10-fold) increase in Other R&D Collaborations is associated with a 5.07% increase (Model 1: β = 0.0507, p<0.01) in Filing Ratio, a 29.65% decrease (Model 2: β = -0.2965, p<0.1) in Transfer Rate, a 12.14% decrease (Model 4: β = -0.1214, p<0.05) in License Income (log), and a 6.39% increase (Model 5: β = 0.0639, p<0.01) in Portfolio Exclusivity. In Table 4 (Model 3), the coefficient for Other R&D Collaborations (log) is negative (β = -0.1645) but not significant for Licensing Success Rate.

In terms of non-patent IP policies, establishing a copyright policy (Copyright Policy) is broadly associated with improved metrics: a 36.51% decrease (Model 1: β = -0.3651, p<0.05) in Filing Ratio (reflecting more prudent management), a 460.61% increase (Model 2: β = 4.6061, p<0.01) in Transfer Rate, a 161.79% increase (Model 3: β = 1.6179, p<0.05) in Licensing Success Rate, and a 21% decrease (Model 5: β = -0.2100, p<0.01) in Portfolio Exclusivity. In Table 4 (Model 4), the coefficient for Copyright Policy is positive (β = 0.3971) but not significant for License Income (log).

As shown in Table 4, adopting MTAs (Materials Transfer Policy) is associated with a 33.12% decrease (Model 1: β = -0.3312, p<0.01) in Filing Ratio, a 203.66% increase (Model 2: β = 2.0366, p<0.01) in Transfer Rate, a 71.56% increase (Model 3: β = 0.7156, p<0.05) in Licensing Success Rate, and a 20% increase (Model 5: β = 0.2036, p<0.01) in Portfolio Exclusivity. In Table 4 (Model 4), we see that the coefficient for the Materials Transfer Policy is negative (β = -0.1190), but not significant for License Income (log).
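Because the CRADA and collaboration variables enter as base 10 logarithms, each reported effect corresponds to a 10-fold increase in the underlying count; smaller multiples scale by the base 10 logarithm of the multiple. A short sketch (the β value is taken from Model 5 above; the function name is ours):

```python
import math

def implied_change(beta, multiple):
    """Predicted change in the outcome when a log10-scaled regressor
    is multiplied by `multiple` (e.g. 10 for one order of magnitude)."""
    return beta * math.log10(multiple)

beta_crada_exclusivity = 0.0814  # Traditional CRADAs (log), Model 5

# A 10-fold increase shifts log10 by exactly 1, so the implied change
# equals the coefficient itself:
print(implied_change(beta_crada_exclusivity, 10))  # 0.0814

# A mere doubling shifts log10 by log10(2) ~ 0.301, implying ~0.0245:
print(implied_change(beta_crada_exclusivity, 2))
```

This is why the text speaks of order-of-magnitude increases: more modest growth in CRADA counts implies proportionally smaller predicted changes.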

In Table 5, we report the results of the interaction effects of non-patent IP policies on federal agency technology transfer performance. As shown in Table 5, for the supply-side metrics, the interaction term between Copyright Policy and Materials Transfer Policy is positive and significant for Filing Ratio (Model 1: β = 0.6381, p<0.05), Transfer Rate (Model 2: β = 4.2867, p<0.01), and Licensing Success Rate (Model 3: β = 1.7917, p<0.01). For the demand-side metrics, the interaction term is negative and significant for License Income (log) (Model 4: β = -3.4228, p<0.01) and positive (β = 0.1994) but not significant for Portfolio Exclusivity.

Table 5. The interaction effects of non-patent IP policies on federal agency technology transfer performance metrics.


With the caveat that we assumed consistent and accurate reporting across the agencies, our analysis underscores the value that a broad view of technology transfer brings to a federal agency. Specifically, our model indicates that agencies’ technology transfer success, as judged by the licensing-focused metrics of transfer rate and licensing success rate, is typically improved when those agencies have considered and implemented diverse approaches to disseminating non-patented proprietary artifacts, including data, software, and materials. In general, those agencies also have reduced filing ratios, indicating a higher level of acumen and rigor in making and supporting technology transfer decisions and activities. They also have a higher number of traditional CRADAs as normalized to their R&D budgets, indicating a more open approach to technology transfer.

The importance of non-patent policies

Technology transfer has historically focused on inventions, and specifically patentable inventions, even as the word ‘invention’ remained undefined in Stevenson-Wydler. It was not until 1986 and PL 99–502 that Congress made clear that its view of ‘inventions’ referred to “any invention or discovery which is or may [emphasis added] be patentable or otherwise protected under title 35, United States Code, or any novel variety of plant which is or may be protectable under the Plant Variety Protection Act (7 U.S.C. 2321 et seq.).” Setting aside the circular nature of the definition, a great many things were patentable under U.S. patent law, at least in 1986. However, the technology areas that lend themselves to patent protection are becoming fewer, and the protection available through patents is becoming narrower [103]. Thus, trends in technology licensing have naturally evolved to include (if not increasingly rely on) other forms of IP, even as scholarly attention remains concentrated on patents because of their ease of access [104]. Unfortunately, despite how easily quantifiable patents are, few sources document trade secret and know-how licenses, and the extent to which federal laboratories participate in those types of transactions is unclear.

However, some agencies have promulgated non-patent IP policies related to technology transfer, such as copyright and MTA policies [49], indicating a broader view of technology transfer. There appears to be a strong association between improved agency performance and the implementation of these non-patent technology transfer mechanisms. Agency performance is ranked in Table 6, and each of these factors is associated with improved transactional count metrics, indicating more opportunities to leverage resources, accelerate R&D, and boost commercialization efforts. For example, the establishment of a copyright policy (at DOC, DOE, and NASA) is associated with a more prudent use of patenting resources as measured by the filing ratio, even though copyright has not historically been available for the works of federal employees (see 17 USC 105; copyright protection is generally not available to Government Owned Government Operated laboratories (GOGOs) but is available to Government Owned Contractor Operated laboratories (GOCOs)). There is also an association between employing a copyright policy and offering more non-exclusive licenses (see Table 4). Here, it is interesting to note that the three agencies mentioned above take different policy approaches to licensing software. DOC, which includes NIST as well as the National Oceanic and Atmospheric Administration (NOAA), does not appear to ‘license’ software and datasets, preferring to place them in the public domain, so the non-exclusive license counts reported in the annual reports presumably do not include software [105]. This may be because DOC labs are mainly government owned and government operated, making copyright protection generally unavailable. NASA applies a similar approach, while DOE licenses software from the contractor-managed national labs [106]. Enacting another non-patent policy, a material transfer policy, is similarly associated with improved performance metrics.
Three of the top five performers for transfer rate have MTA policies. In fact, for all of the performance metrics shown in Table 6, one thing is common among the majority of top agencies: the official adoption of defined non-patent IP policies.

Most agencies tend to use the CRADA authority to execute various agreement types, including MTAs, non-disclosure agreements, and traditional CRADAs [107]. However, HHS and USDA have long had distinct statutory authorities to transfer materials. For example, the original Public Health Service (PHS) Act of 1944 explicitly highlighted open and coordinated research (see PL 78–410). As amended, it broadly enables cooperative research across the full PHS (see 42 USC 241) and explicitly authorizes the transfer of ‘substances’ from its largest intramural component, the National Institutes of Health (NIH) (see 42 USC 282(c)).

This World War II-era law seeded a culture of cooperative research, and in an attempt to streamline the sharing of biomaterials among universities and non-profits, NIH proposed an efficient Universal Biological Material Transfer Agreement (UBMTA) in a notice in the Federal Register (60 FR 12771–12775). However, NIH explicitly recognized that not all agencies may have the same ability to execute a stand-alone UBMTA implementing document and that some agencies may instead need to execute a CRADA due to a perceived lack of legal authority. Therefore, whether through explicit legal authorities or through permissive interpretation of existing ones, the agencies positioned for greater success are those able to expand their interpretations of both (a) which assets are transferable and (b) the statutory mechanisms available.

The importance of open innovation

CRADAs were authorized under the FTTA of 1986. The importance of CRADAs has endured and grown, and there continue to be far more CRADAs (or similar partnership agreements) than licenses executed by the individual laboratories and across all agencies [29]. However, even as their importance (by the numbers) far outweighs that of licenses, the emphasis on licensing lingers, and a recent report by the Government Accountability Office attempted to prescribe actions to increase licensing activity [108].

Several years prior to the enactment of the TTCA of 2000 and its required annual agency reports, a survey of industry leaders was conducted to ascertain what, in fact, industry wanted from the federal labs. The thinking was that rather than asking policymakers and academics for answers, it would make sense to seek real-world perspectives from the very people responsible for the practical, commercial side of technology transfer: industry partners. Industry was found to view the very outputs Congress focuses on as providing minimal value, with more value stemming from contract and cooperative R&D activities, as well as idea transfer rather than technology transfer per se [31]. In this regard, industry leaders saw the federal labs’ contribution to collaborative, multidisciplinary R&D as potentially helpful, articulating the labs’ promise within the context of an ‘open innovation’ system several years before the term took root [109]. While industry prized cooperative activities above licensing or IP, the absence of a CRADA metric in the reporting statute is an early example of how empirically derived information may not appreciably affect policy in this realm [54, 110]. Our model suggests that industry’s input was correct, in that there is a strong association between CRADAs and classical measures of success.

For example, if industry does indeed prize open innovation arrangements with federal labs, it would follow that those agencies that practice more open innovation should have more opportunity to be involved in the development of commercially-relevant licensable IP. Specifically, those agencies that engage in more CRADAs would be expected to develop more licensable IP, including patents, resulting in a higher transfer rate and licensing success rate [29, 30], not to mention licensing income. Broadly speaking, our empirical results suggest that this is indeed the case. If we normalize traditional CRADAs to the R&D budget, agencies can be ranked by normalized cost/CRADA (Table 6). Agencies with a lower cost/CRADA have a higher open innovation or ‘R&D Collaboration’ rank. Three of the top agencies (EPA, USDA, and DOE) are also among the top agencies receiving licensing income normalized to R&D budget, and three (VA, DOC, and EPA) are top performers for transfer rate.

It might be expected that agencies engaged in open innovation would be exposed to more market demand for those same technologies. For example, the VA has engaged in the largest number of CRADAs as normalized to its R&D budget over the 18-year period, yet it had low-mid range normalized license royalties. One explanation can be found in the CRADA statute itself, which provides a notable carrot for industry partners:

“The laboratory shall ensure, through such agreement, that the collaborating party has the option to choose an exclusive license for a pre-negotiated field of use for any such invention under the agreement or, if there is more than one collaborating party, that the collaborating parties are offered the option to hold licensing rights that collectively encompass the rights that would be held under such an exclusive license by one party.” (See 15 USC 3710a(b)(1))

This required provision essentially removes the risk of market competition for inventions developed under a CRADA. While those agencies that execute more CRADAs may have higher performance metrics such as higher transfer rates and higher licensing success rates, the actual licensing income is not significantly affected. The association between a higher number of traditional CRADAs and a higher portfolio exclusivity rate corroborates this explanation. Specifically, for every 10-fold increase in traditional CRADAs, there appears to be an 8% increase in exclusivity (see Table 4). Those are likely to be inventions developed under a CRADA and subject to the above provision.

The special case of the DOD

The DOD alone accounts for over half of the government’s R&D funding. At the other end of the spectrum are the EPA and the VA (see Table 3). Indeed, given that the EPA and the DOD both focus predominantly on applied R&D and exhibit a relatively unrestricted approach to filing patent applications (see filing ratios in Table 3), the EPA may be the most appropriate comparator to the DOD. The model explored here treats these agencies the same in most respects; for example, transfer rate, licensing success rate, and filing ratio are within-agency measures that can be compared across agencies. In essentially every case and by every measure, the EPA outperforms the DOD, and by far (see Table 6). One reason for this may be that, while the DOD reports as a single agency, under the statute that defines a federal agency (see 15 USC 3703), it actually comprises at least four distinct ‘agencies’ under the law:

“‘Federal agency’ means any executive agency as defined in section 105 of title 5 and the military departments as defined in section 102 of such title, as well as any agency of the legislative branch of the Federal Government.”

The above definition explicitly carves out each of the three named ‘military departments’ (i.e., Army, Navy, Air Force) as its own respective agency, in addition to the rest of the DOD, precluding the adoption of a single set of common DOD technology transfer policies and practices. However, according to data plots showing the relative contributions of various DOD components, the Army accounts for the majority of DOD’s technology transfer transactions and does so with an R&D budget smaller than those of its counterparts [111]. If the Army were to report alone, it could well be among the top agencies in Table 6. Still, the DOD Defense Laboratories Office collates and delivers a single annual report of what statutorily should be four distinct agency accounts, while each military department can point to the statute to substantiate its differences. These differences naturally contribute to frustrating delays in industry-DOD collaborations involving more than one DOD ‘agency’ [112, 113]. Amending the statute to remove the clause carving out the military departments (“and the military departments as defined in section 102 of such title”) would enable harmonized policies and practices broadly throughout the DOD.

For good measure

Agency technology transfer reports containing prescribed measures have been required since the TTCA of 2000. One of these is “the time elapsed from the date which the license was requested by licensee in writing to the date the license was executed,” but process metrics such as this are rarely officially reported or publicly disclosed. The concern with lengthy processes is well documented and continues to throttle effective federal technology transfer [31, 114–122]. Recognizing this, President Obama’s 2011 memorandum called to “streamline” processes and increase the “pace” of activities [123]. There are few, if any, agency- and laboratory-level reports analyzing the actual processes, and analyses of the practical time and personnel costs of establishing technology transfer mechanisms are broadly lacking [124]. However, one study did evaluate the effects of collaboration-specific factors on inter-organizational transactional efficiency [125]. Given the overall size and continued growth of government investment in intramural R&D, more empirical studies are needed. With this in mind, we used normalized output metrics as approximations of performance and investigated the associations between a host of variables and those performance measures.

We find that the average transfer rate over the past 5 reported years, approximately 40%, exactly matches the rate calculated for university counterparts. Similarly, the government-wide licensing success rate of 18% and filing ratio of 55% are comparable to university counterparts (see Table 2). However, the extent of the differences between agencies is striking (see Tables 3 and 6), raising the question: what drives success? Through our analyses, we find a strong association between improved measures of technology transfer success and both non-patent IP policies and CRADAs. Based on the performance metrics in Table 6, the top agencies across the full 18-year period are, in order: DOC, HHS, VA, EPA, and DOE. In the most recent 5-year period, the top performers, in order, are EPA, HHS, DOC, and VA and DOE (tied). For the other agencies, taking the following actions may be associated with improved technology transfer performance:

  1. DOI: adopt non-patent IP policies and a more discerning approach to patent filing
  2. USDA: to the extent possible, adopt a copyright policy and employ a more discerning approach to patent filing
  3. DOD: apply a more discerning approach to patent filing; consider adopting a copyright policy minimally for FFRDCs and other contractor-operated labs
  4. DOT: adopt non-patent IP policies to the extent possible and continue to apply rigor to disclosure review
  5. NASA: adopt a material transfer policy

In summary, while most prior performance studies focus on patents and licenses, our analysis uncovers the importance of R&D collaboration and non-patent IP policies to technology transfer success. Specifically, we find agencies that employ an open and comprehensive approach to innovation management have measurably higher successes in traditional technology transfer metrics. We find a significant association between a higher number of traditional CRADAs and higher transfer and licensing success rates. Non-patent IP policies enabled through long-standing agency-specific authorities or a more inclusive interpretation of pan-government statutes are similarly associated with significant improvements in performance as measured by the transfer and licensing success rates, as well as with a more prudent approach to invention review and patent filing. Our research underscores the importance of evaluating how the utilization of external R&D collaboration and coordination of non-patent IP policies affects the supply-side and demand-side of federal technology transfer performance.

Generalizing key findings

As we introduced at the outset of this study, national government leaders and senior policymakers around the world, from Beijing to Tel Aviv to Berlin to Washington, D.C., are all increasingly investing in and leveraging public sector R&D entities and activities to recover from unexpected external shocks and rebuild core components of their respective infrastructures and economies. These clear official mandates and policy directives emphasize the paramount importance of innovation as a chief driver of future growth, greater prosperity, and stronger security. In this context, we believe that our study of U.S. federal agencies’ technology transfer activities and outcomes offers initial empirical evidence and actionable insights that may be more broadly applicable to overcoming similar challenges with managing government-industry innovation processes in other countries as well.

For example, ongoing government efforts to prioritize national labs as strategic resources, incentives for external R&D collaboration, and mechanisms for sharing non-patent IP are all essential parts of the available managerial toolkit for supporting technology transfer. We find evidence that U.S. federal agencies with greater utilization of FFRDCs have lower agency-driven technology transfer performance in terms of supply-side metrics. On the demand side, we find that greater FFRDC utilization is associated with greater licensing income and lower portfolio exclusivity. We now turn to explaining how these findings may potentially inform comparable efforts in other countries.

Recall that China is in the midst of dramatically expanding its system of national labs from 200 to 700 institutes administered by the Chinese Academy of Sciences. As noted earlier, France is already allocating 80% of its public R&D spending to the 10 institutes of the CNRS, while its European counterpart, Germany, is deploying a massive technology-focused economic stimulus package delivered through three distinct federal- and state-sponsored R&D networks consisting of the Helmholtz Association of German Research Centers, the Max Planck Institutes, and the Fraunhofer Institutes. To facilitate stronger technology transfer from national labs, Israel is revising its policy in the governmental R&D sector. Other countries such as India, Turkey, Uruguay, and South Africa are putting greater emphasis on developing national innovation systems to pursue scientific breakthroughs that may yield long-term economic growth [79, 126–128].

Our findings suggest that simply increasing the overall share of public R&D funding allocated to these national labs may not necessarily produce corresponding increases in supply-side metrics. In fact, our findings spanning several years of U.S. federal agencies suggest that greater overall utilization of national labs may actually produce lower supply-side performance measured in terms of filing ratio, transfer rate, and licensing success rate. There may also be a considerable time lag between the initial investment and the observable inventive outputs. The knowledge generated as a public good through R&D activities undertaken by national labs is cumulative and highly interdependent in terms of drawing from multiple disciplines and domains. This further suggests that the pathways to commercialization may require a longer-term and sustained initiative that includes a range of external R&D collaborators in the private sector and new types of arrangements for handling non-patent IP.
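To make the supply-side metrics concrete, they can be sketched as simple ratios computed over an agency's annual reporting data. The definitions below (patent filings per invention disclosure, executed licenses per filing, and income-bearing licenses per executed license) and the input figures are illustrative assumptions for demonstration purposes, not the exact formulas or data used in our analysis.

```python
# Hypothetical supply-side technology transfer ratios for one agency-year.
# Metric definitions here are illustrative assumptions, not the paper's
# exact formulas:
#   filing ratio           = patent filings per invention disclosure
#   transfer rate          = licenses executed per patent filing
#   licensing success rate = income-bearing licenses per executed license

def supply_side_metrics(disclosures, filings, licenses, income_bearing):
    """Return the three illustrative supply-side ratios for an agency-year."""
    return {
        "filing_ratio": filings / disclosures,
        "transfer_rate": licenses / filings,
        "licensing_success_rate": income_bearing / licenses,
    }

# Hypothetical agency-year: 200 disclosures, 120 patent filings,
# 60 executed licenses, of which 45 generated licensing income.
metrics = supply_side_metrics(200, 120, 60, 45)
print(metrics)  # → filing_ratio 0.6, transfer_rate 0.5, licensing_success_rate 0.75
```

Because each ratio's denominator is a different stage of the pipeline, an agency can raise disclosures or filings sharply (for example, through expanded national lab funding) while these downstream ratios fall, which is consistent with the pattern described above.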

Our findings also indicate that the trade-off between the demand-side metrics of licensing income and portfolio exclusivity must be carefully managed. On the one hand, if a government intends to help establish a technical standard based on essential IP, it may actively promote licensing arrangements with a full spectrum of industry participants. In such cases, lower levels of licensing income do not necessarily signal a lack of commercial viability. In fact, if the licensing rights are offered at nominal fees, the associated income might indicate exactly the opposite, namely that widespread adoption of the standard is occurring. This is why the demand-side metric of licensing income should always be evaluated in conjunction with the level of portfolio exclusivity. On the other hand, if a government intends to nurture a particular sector, it may offer exclusive licensing terms to only a select few industry participants. Here, the policy aims may be completely different from promoting standards, and the approach is to provide essential IP as a potential source of competitive advantage to protect certain actors within a promising industry sector. Again, the demand-side metrics of licensing income and portfolio exclusivity must be evaluated and interpreted in the context of the overarching policy objectives and regulatory frameworks of the respective countries that are implementing these metrics.
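The trade-off described above can be illustrated with a toy calculation. Modeling each license as an (income, exclusive) pair and the two stylized portfolios below are hypothetical constructions, but they show why the same pair of demand-side metrics can carry opposite meanings under different policy goals.

```python
# Toy demand-side comparison of two hypothetical licensing strategies.
# Each license is modeled as (annual income in dollars, is_exclusive).

def demand_side_metrics(licenses):
    """Return (total licensing income, share of exclusive licenses)."""
    total_income = sum(income for income, _ in licenses)
    exclusivity = sum(1 for _, exclusive in licenses if exclusive) / len(licenses)
    return total_income, exclusivity

# Standards-promoting portfolio: many nominal-fee, non-exclusive licenses.
standards_portfolio = [(1_000, False)] * 50

# Sector-nurturing portfolio: a few high-value, exclusive licenses.
nurturing_portfolio = [(500_000, True)] * 3

print(demand_side_metrics(standards_portfolio))  # → (50000, 0.0)
print(demand_side_metrics(nurturing_portfolio))  # → (1500000, 1.0)
```

Read in isolation, the standards portfolio's low income looks like commercial failure; read alongside its zero exclusivity and fifty licensees, it instead suggests broad adoption of the underlying standard.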

As a further example of how the key findings of this study may be generalized, consider that by allocating and aligning annual R&D budgets with their missions, federal technology managers are able to determine the level of FFRDC utilization for their respective agencies. Rather than relying on CRADAs or traditional contracts, greater emphasis on FFRDC utilization for managing R&D might potentially reduce supply-side performance while amplifying demand-side technology transfer performance. By decentralizing core knowledge generation and shifting it from agency-operated sites to contractor-operated FFRDCs, this change may advance technologies closer to commercialization and increase demand-side performance. Recalling our previous description of the role of national labs in China, France, and Germany, our findings suggest that the extent to which the day-to-day operations of these labs are administered by internal government agencies or external private contractors may have a meaningful impact on the observed levels of supply-side and demand-side metrics for technology transfer performance.

In addition, our findings regarding the importance of CRADAs and non-patent IP policies as predictors of success align with reports investigating university-industry collaborations from many regions. Companies operating in the same sector can benefit from knowledge spillovers within a geographic location, and university-industry collaborations can solidify such geographic industrial clusters [129]. CRADAs are inherently collaborative, and collaborations based on clear shared goals and honest, open dialogue enhance university-firm performance [130, 131]. Evidence from comparative case studies reported in the literature and our quantitative analyses in this study suggests that the same is likely true for government-firm collaborations [55, 132, 133]. Similarly, the strategic significance of non-patent IP is increasingly being recognized [134, 135], even as the approach to knowledge management and appropriation of non-patent IP requires more nuance than the management of patent IP [136, 137].

For example, proposals premised on the expectation that blockchain technology may facilitate a practical approach to knowledge management for these forms of IP, and thereby enhance their commercialization potential, are being developed and disseminated [138, 139]. In another example that is especially pertinent to the outbreak and aftermath of the coronavirus, there is renewed interest among national governments and transnational actors such as the World Health Organization in facilitating greater pooling of cohort data from clinical trials, fostering more open sharing of biological materials, and encouraging the voluntary waiving of certain IP rights to accelerate medical R&D for combating COVID-19 [140–142]. Hence, new forms of CRADAs and MTAs may emerge in the near future, which only underscores the importance of identifying valid and meaningful ways to measure technology transfer activities and performance over time.

Limitations and future directions

Our study is limited in ways that further research may overcome in the future. Because we relied on publicly available databases managed by the U.S. federal government, our sample timeframe was restricted to 1999–2016 and our sample size to 127 observations. Given this limitation, we interpreted the results carefully and reported effect sizes. Future research may be able to utilize a larger sample once additional federal agency data is made publicly available, providing the opportunity to extend our initial findings.

Our empirical findings provide guidance for formulating evidence-based policy reforms that are tailored to the mandated missions and statutory constraints of each agency. We believe that our findings could be the starting point for new streams of research on this topic that may be generalized and applied to other countries and contexts. We encourage future researchers to continue to investigate this area and help provide policymakers with essential tools that may drive a policy consensus regarding best practices for improving technology transfer performance within and across government agencies around the world.

As other countries, China in particular, continue to rapidly expand their own systems of national labs and make more managerial data available for external use by interested parties, it may become feasible to conduct comparative quantitative analyses of technology transfer across multiple countries during the same timeframe. In addition, there may be opportunities to expand our work by examining the extent to which multiple networks of public sector R&D institutes in countries such as Germany and Israel are complementary or competitive with each other in creating and commercializing scientific and technical breakthroughs. The more decentralized and distributed approach to public sector R&D followed by Germany may provide an interesting contrast to the more centralized and concentrated approach followed by its fellow European Union member, France. We believe that our study highlights a promising avenue for future research and scholarly inquiry. Although countries may differ substantially in their political systems and economic priorities, they may continue to encounter similar challenges when analyzing and formulating future policies to achieve their respective national innovation goals. Our study suggests that understanding the most effective practices and processes for managing government-industry technology transfer on the supply-side as well as the demand-side is a matter of strategic importance that may help leaders and stakeholders in the public and private sectors make progress towards building a policy consensus and reaching their development objectives.


  1. Dutta S, Lanvin B, Wunsch-Vincent S. Global innovation index 2020: Johnson Cornell University; 2020.
  2. Zhou N, Wu Q, Hu X. Research on the policy evolution of China’s new energy vehicles industry. Sustainability. 2020;12(9):3629.
  3. Brischoux F, Angelier F. Academia’s never-ending selection for productivity. Scientometrics. 2015;103(1):333–6.
  4. Mairesse J, Turner L. Measurement and explanation of the intensity of co-publication in scientific research: an analysis at the laboratory level. National Bureau of Economic Research, Cambridge, Mass., USA; 2005.
  5. Goel RK, Göktepe‐Hultén D. Academic leadership and commercial activities at research institutes: German evidence. Managerial and Decision Economics. 2018;39(5):601–9.
  6. Philipps A. Mission statements and self-descriptions of German extra-university research institutes: a qualitative content analysis. Science and Public Policy. 2013;40(5):686–97.
  7. Özçelik E, Taymaz E. R&D support programs in developing countries: the Turkish experience. Research Policy. 2008;37(2):258–75.
  8. Fu X, Zhang J. Technology transfer, indigenous innovation and leapfrogging in green technology: the solar-PV industry in China and India. Journal of Chinese Economic and Business Studies. 2011;9(4):329–47.
  9. Messer-Yaron H. Technology transfer in countries in transition: policy and recommendations. World Intellectual Property Organisation. http://www.wipo.int/export/sites/www/dcea/en/pdf/Technology_Transfer_in_Countries_in_Transition_FINAL-2108. 2012.
  10. Pece C. Federal R&D obligations increase 8.8% in FY 2018; preliminary FY 2019 R&D obligations increase 9.3% over FY 2018. 2020. Available from:
  11. De Rassenfosse G, Jaffe A, Raiteri E. The procurement of innovation by the US government. PLoS ONE. 2019;14(8):e0218927. pmid:31404070
  12. Corredoira RA, Goldfarb BD, Shi Y. Federal funding and the rate and direction of inventive activity. Research Policy. 2018;47(9):1777–800.
  13. Zhou P, Tijssen R, Leydesdorff L. University-industry collaboration in China and the USA: a bibliometric comparison. PLoS ONE. 2016;11(11):e0165277. pmid:27832084
  14. Wu Y, Huang W, Deng L. Does social trust stimulate university technology transfer? Evidence from China. PLoS ONE. 2021;16(8):e0256551. pmid:34432841
  15. Kneller R, Mongeon M, Cope J, Garner C, Ternouth P. Industry-university collaborations in Canada, Japan, the UK and USA–with emphasis on publication freedom and managing the intellectual property lock-up problem. PLoS ONE. 2014;9(3):e90302. pmid:24632805
  16. Weckowska DM. Learning in university technology transfer offices: transactions-focused and relations-focused approaches to commercialization of academic research. Technovation. 2015;41:62–74.
  17. Anderson TR, Daim TU, Lavoie FF. Measuring the efficiency of university technology transfer. Technovation. 2007;27(5):306–18.
  18. Siegel DS, Waldman D, Link A. Assessing the impact of organizational practices on the relative productivity of university technology transfer offices: an exploratory study. Research Policy. 2003;32(1):27–48.
  19. Arvanitis S, Kubli U, Woerter M. University-industry knowledge and technology transfer in Switzerland: what university scientists think about co-operation with private enterprises. Research Policy. 2008;37(10):1865–83.
  20. Lee YS. ‘Technology transfer’ and the research university: a search for the boundaries of university-industry collaboration. Research Policy. 1996;25(6):843–63.
  21. Bozeman B. Technology transfer and public policy: a review of research and theory. Research Policy. 2000;29(4–5):627–55.
  22. Berman EM. Technology transfer and the federal laboratories: a midterm assessment of cooperative research. Policy Studies Journal. 1994;22(2):338–48.
  23. Wright M, Clarysse B, Lockett A, Knockaert M. Mid-range universities’ linkages with industry: knowledge types and the role of intermediaries. Research Policy. 2008;37(8):1205–23.
  24. Siegel DS, Wright M. University technology transfer offices, licensing, and start-ups. The Chicago Handbook of University Technology Transfer and Academic Entrepreneurship. 2015;1(40):84–103.
  25. Chen C, Link AN, Oliver ZT. US federal laboratories and their research partners: a quantitative case study. Scientometrics. 2018;115(1):501–17.
  26. Jaffe AB, Lerner J. Reinventing public R&D: patent policy and the commercialization of national laboratory technologies. RAND Journal of Economics. 2001:167–98.
  27. Jaffe AB, Fogarty MS, Banks BA. Evidence from patents and patent citations on the impact of NASA and other federal labs on commercial innovation. The Journal of Industrial Economics. 1998;46(2):183–205.
  28. Markman GD, Gianiodis PT, Phan PH. Supply‐side innovation and technology commercialization. Journal of Management Studies. 2009;46(4):625–49.
  29. Choudhry V, Ponzio TA. Modernizing federal technology transfer metrics. The Journal of Technology Transfer. 2020;45(2):544–59.
  30. Stevens AJ, Kato K. Technology transfer’s twenty five percent rule. Les Nouvelles. 2013;1:51.
  31. Roessner JD. What companies want from the federal labs. Issues in Science and Technology. 1993;10(1):37–42.
  32. Kerrigan JE, Brasco CJ. The technology transfer revolution: legislative history and future proposals. Pub Cont LJ. 2001;31:277.
  33. Mowery DC, Nelson RR, Sampat BN, Ziedonis AA. Ivory tower and industrial innovation: university-industry technology transfer before and after the Bayh-Dole Act: Stanford University Press; 2015.
  34. Mowery DC, Sampat BN. The Bayh-Dole Act of 1980 and university–industry technology transfer: a model for other OECD governments? The Journal of Technology Transfer. 2004;30(1):115–27.
  35. Mowery DC, Sampat BN, Ziedonis AA. Learning to patent: institutional experience, learning, and the characteristics of US university patents after the Bayh-Dole Act, 1981–1992. Management Science. 2002;48(1):73–89.
  36. Shane S. Encouraging university entrepreneurship? The effect of the Bayh-Dole Act on university patenting in the United States. Journal of Business Venturing. 2004;19(1):127–51.
  37. Van Ravenswaay R. Government patents and the public interest. Idea. 1977;19:331.
  38. Moore RW. Defense systems management review. Defense Systems Management College, Fort Belvoir, VA; Winter 1979.
  39. Timmons DR. Technology transfer: a look at the Federal sector. 1978.
  40. Large AJ. Public money and private gain. Wall Street Journal. 1979.
  41. Reynolds W. Reforming patent law [op-ed]. New York Times. 1980 Sep 15.
  42. Act A. Public Law 96-96th Congress. Public Law. 1980;96:349.
  43. Allison DK. Technology transfer in the navy: the historical background. The Journal of Technology Transfer. 1982;7(1):55–72.
  44. Parker GL. Technology transfer and the national laboratories. 1994.
  45. Schacht WH. Technology transfer: use of federally funded research and development. Washington, District of Columbia: Congressional Research Service; 2012.
  46. Linsteadt GF, DelaBarre D. A positive look at the Stevenson-Wydler technology innovation Act of 1980—PL 96–480. The Journal of Technology Transfer. 1981;5(2):23–8.
  47. Soderstrom EJ, Winchell BM. Patent policy changes stimulating commercial application of federal R&D. Research Management. 1986;29(3):35–8.
  48. Rosen H. Now you see it, now you don’t. The Journal of Technology Transfer. 1981;5(2):29–34.
  49. Copan WG, Shyam-Sunder S, Singerman PA, Zielinski PR, Silverthorn CF, Na CJ, et al. Return on investment initiative for unleashing American innovation. National Institute of Standards and Technology Special Publication, 2018 1234.
  50. Abernathy WJ. Competitive decline in US innovation: the management factor. Research Management. 1982;25(5):34–41.
  51. Denny JE. Cooperative R&D: DOE’s patent policy need not be a barrier. Research Management. 1983;26(5):34–9.
  52. Soderstrom J, Copenhaver E, Brown MA, Sorensen J. Improving technological innovation through laboratory/industry cooperative R and D. Policy Stud Rev (United States). 1985;5(1).
  53. Adams JD, Chiang EP, Jensen JL. The influence of federal laboratory R&D on industrial research. Review of Economics and Statistics. 2003;85(4):1003–20.
  54. Rogers EM, Carayannis EG, Kurihara K, Allbritton MM. Cooperative research and development agreements (CRADAs) as technology transfer mechanisms. R&D Management. 1998;28(2):79–88.
  55. Mowery D. Using Cooperative Research and Development Agreements as S&T Indicators: What do we have and what would we like? Technology Analysis & Strategic Management. 2003;15(2):189–205.
  56. Felin T, Zenger TR. Closed or open innovation? Problem solving and the governance choice. Research Policy. 2014;43(5):914–25.
  57. Siegel DS, Veugelers R, Wright M. Technology transfer offices and commercialization of university intellectual property: performance and policy implications. Oxford Review of Economic Policy. 2007;23(4):640–60.
  58. Bozeman B, Coker K. Assessing the effectiveness of technology transfer from US government R&D laboratories: the impact of market orientation. Technovation. 1992;12(4):239–55.
  59. Li D, Eden L, Hitt MA, Ireland RD, Garrett RP. Governance in multilateral R&D alliances. Organization Science. 2012;23(4):1191–210.
  60. Sampson RC. The cost of misaligned governance in R&D alliances. Journal of Law, Economics, and Organization. 2004;20(2):484–526.
  61. Asakawa K, Nakamura H, Sawada N. Firms’ open innovation policies, laboratories’ external collaborations, and laboratories’ R&D performance. R&D Management. 2010;40(2):109–23.
  62. Amesse F, Cohendet P. Technology transfer revisited from the perspective of the knowledge-based economy. Research Policy. 2001;30(9):1459–78.
  63. Shapira P, Youtie J, Yogeesvaran K, Jaafar Z. Knowledge economy measurement: methods, results and insights from the Malaysian knowledge content study. Research Policy. 2006;35(10):1522–37.
  64. Andrés AR, Asongu SA, Amavilah V. The impact of formal institutions on knowledge economy. Journal of the Knowledge Economy. 2015;6(4):1034–62.
  65. Hourihan M, Parkes D. Federal R&D budget trends: a short summary. American Association for the Advancement of Science, report. 2016.
  66. O’Dell CS, Grayson CJ, Essaides N. If only we knew what we know: the transfer of internal knowledge and best practice: Simon and Schuster; 1998.
  67. Cabrera A, Cabrera EF. Knowledge-sharing dilemmas. Organ Stud. 2002;23(5):687–710.
  68. Riccucci NM, Van Ryzin GG. Representative bureaucracy: a lever to enhance social equity, coproduction, and democracy. Public Administration Review. 2017;77(1):21–30.
  69. Baccini L, Urpelainen J. Legislative fractionalization and partisan shifts to the left increase the volatility of public energy R&D expenditures. Energy Policy. 2012;46:49–57.
  70. Howells J. Intermediation and the role of intermediaries in innovation. Research Policy. 2006;35(5):715–28.
  71. Kivimaa P. Government-affiliated intermediary organisations as actors in system-level transitions. Research Policy. 2014;43(8):1370–80.
  72. Lanahan L, Joshi AM, Johnson E. Do public R&D subsidies produce jobs? Evidence from the SBIR/STTR program. Research Policy. 2021;50(7):104286.
  73. Stiglitz JE. Leaders and followers: perspectives on the Nordic model and the economics of innovation. Journal of Public Economics. 2015;127:3–16.
  74. Dever P Jr. Reforming subsidies in the federal budget. Politics & Policy. 2008;36(5):854–78.
  75. Simon HA. The structure of ill structured problems. Artificial Intelligence. 1973;4(3–4):181–201.
  76. Simon HA. The architecture of complexity. Facets of systems science: Springer; 1991. p. 457–76.
  77. Fey CF, Birkinshaw J. External sources of knowledge, governance mode, and R&D performance. Journal of Management. 2005;31(4):597–621.
  78. Lakhani KR, Tushman ML, Baldwin C, Grandori A, von Hippel E. Open innovation and organizational boundaries: the impact of task decomposition and knowledge distribution on the locus of innovation. Working Paper, 2012.
  79. Adegbesan JA, Higgins MJ. The intra‐alliance division of value created through collaboration. Strategic Management Journal. 2011;32(2):187–211.
  80. Bozeman B, Wilson L. Market-based management of government laboratories: the evolution of the US national laboratories’ government-owned, contractor-operated management system. Public Performance & Management Review. 2004;28(2):167–85.
  81. Anderson GW, Breitzman A. Identifying NIST impacts on patenting: a novel data set and potential uses. Journal of Research of the National Institute of Standards and Technology. 2017;122:1. pmid:34877120
  82. Snyder B, Thomas JW. GOGOs, GOCOs, and FFRDCs… Oh My! Federal Laboratory Consortium for Technology Transfer, 2019.
  83. Gallo ME. Federally Funded Research and Development Centers (FFRDCs): background and issues for Congress: Congressional Research Service; 2017.
  84. Bahar M, Griesbach R. Cultivating and nurturing a culture of innovation in federal agencies. Les Nouvelles-Journal of the Licensing Executives Society. 2020;55(3).
  85. Joshi AM, Inouye TM, Robinson JA. How does agency workforce diversity influence federal R&D funding of minority and women technology entrepreneurs? An analysis of the SBIR and STTR programs, 2001–2011. Small Business Economics. 2018;50(3):499–519.
  86. Mahr D, Lievens A. Virtual lead user communities: drivers of knowledge creation for innovation. Research Policy. 2012;41(1):167–77.
  87. Van de Vrande V, Vanhaverbeke W, Duysters G. External technology sourcing: the effect of uncertainty on governance mode choice. Journal of Business Venturing. 2009;24(1):62–80.
  88. Keil T, Maula M, Schildt H, Zahra SA. The effect of governance modes and relatedness of external business development activities on innovative performance. Strategic Management Journal. 2008;29(8):895–907.
  89. Mowery DC, Ziedonis AA. Academic patents and materials transfer agreements: substitutes or complements? The Journal of Technology Transfer. 2007;32(3):157–72.
  90. Wittig R, Zelenka M, Smith S, Manz P. Government works in an international marketplace: The Copyright Issue. Syracuse L & Tech J. 2002:1.
  91. West J, Bogers M. Leveraging external sources of innovation: a review of research on open innovation. Journal of Product Innovation Management. 2014;31(4):814–31.
  92. Bubela T, Guebert J, Mishra A. Use and misuse of material transfer agreements: lessons in proportionality from research, repositories, and litigation. PLoS Biol. 2015;13(2):e1002060. pmid:25646804
  93. Ceccagnoli M, Jiang L. The cost of integrating external technologies: supply and demand drivers of value creation in the markets for technology. Strategic Management Journal. 2013;34(4):404–25.
  94. Guerzoni M, Raiteri E. Demand-side vs. supply-side technology policies: hidden treatment and new empirical evidence on the policy mix. Research Policy. 2015;44(3):726–47.
  95. Munson JM, Spivey WA. Take a portfolio view of CRADAs. Research-Technology Management. 2006;49(4):39–45.
  96. Joshi AM, Hemmatian I. How do legal surprises drive organizational attention and case resolution? An analysis of false patent marking lawsuits. Research Policy. 2018;47(9):1741–61.
  97. Hughes M, Howieson S, Walejko G, Gupta N, Jonas S, Brenner A, et al. Technology transfer and commercialization landscape of the federal laboratories. Institute for Defense Analyses. 2011;27:2015.
  98. Link AN, Oliver ZT. US federal agency technology transfer mechanisms and metrics. Technology Transfer and US Public Sector Innovation: Edward Elgar Publishing; 2020.
  99. Aulakh PS, Jiang MS, Pan Y. International technology licensing: monopoly rents, transaction costs and exclusive rights. Journal of International Business Studies. 2010;41(4):587–605.
  100. Karim S, Kaul A. Structural recombination and innovation: unlocking intraorganizational knowledge synergy through structural change. Organization Science. 2014;26(2):439–55.
  101. Park HD, Tzabbar D. Venture capital, CEOs’ sources of power, and innovation novelty at different life stages of a new venture. Organization Science. 2016.
  102. Stevens AJ. An emerging model for life sciences commercialization. Nature Biotechnology. 2017;35(7):608–13. pmid:28700555
  103. Brougher J, Linnik KM. Patents or patients: who loses? Nature Biotechnology. 2014;32(9):877–80. pmid:25203034
  104. Schwartz J. Experts urge TTOs: don’t leave know-how royalty dollars on the table. Technology Transfer Tactics. 2019 May.
  105. Copyright, fair use, and licensing statements for SRD, data, software, and technical series publications. 2020. Available from:
  106. Licensing guide and sample license. In: Group DoETTW, editor. 2013.
  107. Charles RL. Expanding use of technology transfer mechanisms within the Army’s medical treatment facilities. US Army Medical Department Journal. 2012:32–7. pmid:22388720
  108. Neumann J. Federal research: additional actions needed to improve licensing of patented laboratory inventions. Highlights of GAO-18-327, a report to the chairman, Committee on the Judiciary, House of Representatives. In: Office USGA, editor. Washington, DC; 2018.
  109. Chesbrough HW. Open innovation: the new imperative for creating and profiting from technology: Harvard Business Press; 2003.
  110. Youtie J, Bozeman B, Jabbehdari S, Kao A. Credibility and use of scientific and technical information in policy making: an analysis of the information bases of the national research council’s committee reports. Research Policy. 2017;46(1):108–20.
  111. Appler D. DoD technology transfer program: defense industrial base seminar and workshops. Director, Defense Research and Engineering; 2010.
  112. Hernandez SHA, Morgan BJ, Hernandez BF, Parshall MB. Building academic-military research collaborations to improve the health of service members. Nurs Outlook. 2017;65(6):718–25. pmid:28601252.
  113. Scherer RW, Sensinger LD, Sierra-Irizarry B, Formby C. Lessons learned conducting a multi-center trial with a military population: the Tinnitus Retraining Therapy Trial. Clinical Trials. 2018;15(5):429–35. pmid:29792074.
  114. Bodde DL. On guns and butter: reflections on technology transfer from federal laboratories. Technology in Society. 1993;15(3):273–80.
  115. Carr RK. Doing technology transfer in federal laboratories (Part 1). The Journal of Technology Transfer. 1992;17(2–3):8–23.
  116. Ferraris GL. Cooperative Research and Development Agreements (CRADA) with industry as a value enhancing asset in the academic/research environment. A case study at the Naval Postgraduate School (NPS). Naval Postgraduate School Monterey CA, 2005.
  117. Technology transfer: several factors have led to a decline in partnerships at DOE’s laboratories. Report to the chairman, Committee on Energy and Natural Resources, U.S. Senate. Washington, DC: United States Government Accountability Office; 2002. p. 1–40.
  118. Harrer B, Cejka C. Agreement execution process study: CRADAs and NF-WFO agreements and the speed of business. Department of Energy; 2011.
  119. Howieson SV, Shipp SS, Walejko GK, Rambow PB, Peña V, Holloman SS, et al. Policy issues for department of defense technology transfer. Alexandria, VA: Institute for Defense Analyses, 2013.
  120. Walejko GK, Hughes ME, Howieson SV, Shipp SS. Federal laboratory–business commercialization partnerships. Science. 2012;337(6100):1297–8. pmid:22984055
  121. Rogers EM, Takegami S, Yin J. Lessons learned about technology transfer. Technovation. 2001;21(4):253–61.
  122. Carayannis EG, Rogers EM, Kurihara K, Allbritton MM. High-technology spin-offs from government R&D laboratories and research universities. Technovation. 1998;18(1):1–11.
  123. Obama B. Presidential memorandum: accelerating technology transfer and commercialization of federal research in support of high growth businesses. U.S. Government Publishing Office, Washington, D.C.; 2011.
  124. Van Egeren TS. Tracking overhead ORTA costs in technology transfer activities. Air Force Inst of Tech Wright-Patterson AFB OH, 1997.
  125. Ravilious GE, Choudhry V, Howieson SV, Ponzio TA. An analysis of factors affecting federal laboratory technology transfer transactional efficiency: practical measures proposed in this article can make the commercialization of federal research and technologies more timely and efficient. Research-Technology Management. 2021;64(3):20–30.
  126. Ray S. Technology transfer and technology policy in a developing country. The Journal of Developing Areas. 2012:371–96.
  127. Papaioannou T, Watkins A, Mugwagwa J, Kale D. To lobby or to partner? Investigating the shifting political strategies of biopharmaceutical industry associations in innovation systems of South Africa and India. World Development. 2016;78:66–79.
  128. Galaso P, Rodríguez Miranda A. The leading role of support organisations in cluster networks of developing countries. Industry and Innovation. 2021;28(7):902–31.
  129. Molina-Morales FX, Martínez-Cháfer L, Capó-Vicedo J, Capó-Vicedo J. The dynamizing role of universities in industrial clusters. The case of a Spanish textile cluster. The Journal of The Textile Institute. 2021:1–10.
  130. Costa J, Neves AR, Reis J. Two sides of the same coin. University-industry collaboration and open innovation as enhancers of firm performance. Sustainability. 2021;13(7):3866.
  131. Rybnicek R, Königsgruber R. What makes industry–university collaboration succeed? A systematic review of the literature. Journal of Business Economics. 2019;89(2):221–50.
  132. Bozeman B, Pandey S. Cooperative R&D in government laboratories: comparing the US and Japan. Technovation. 1994;14(3):145–59.
  133. Carayannis EG, Alexander J. Secrets of success and failure in commercialising US government R&D laboratory technologies: a structured case study approach. Int J Technol Manage. 1999;18(3–4):246–69.
  134. Marr K, Phan P. The valorization of non-patent intellectual property in academic medical centers. The Journal of Technology Transfer. 2020;45(6):1823–41. pmid:33012983
  135. Graham SJ. Beyond patents: the role of copyrights, trademarks, and trade secrets in technology commercialization. Technological innovation: generating economic results: Emerald Group Publishing Limited; 2008.
  136. Mehlman SK, Uribe-Saucedo S, Taylor RP, Slowinski G, Carreras E, Arena C. Better practices for managing intellectual assets in collaborations. Research-Technology Management. 2010;53(1):55–66.
  137. Slowinski G, Hummel E, Kumpf RJ. Protecting know-how and trade secrets in collaborative R&D relationships. Research-Technology Management. 2006;49(4):30–8.
  138. Philsoophian M, Akhavan P, Namvar M. The mediating role of blockchain technology in improvement of knowledge sharing for supply chain management. Management Decision. 2022;60(3):784–805.
  139. Wang J, Wang S, Guo J, Du Y, Cheng S, Li X. A summary of research on blockchain in the field of intellectual property. Procedia Computer Science. 2019;147:191–7.
  140. Zarocostas J. What next for a COVID-19 intellectual property waiver? The Lancet. 2021;397(10288):1871–2. pmid:34022975
  141. Haendel MA, Chute CG, Bennett TD, Eichmann DA, Guinney J, Kibbe WA, et al. The National COVID Cohort Collaborative (N3C): rationale, design, infrastructure, and deployment. Journal of the American Medical Informatics Association. 2021;28(3):427–43. pmid:32805036
  142. Du L, Wang M, Raposo VL. International efforts and next steps to advance COVID-19 vaccines research and production in low-and middle-income countries. Vaccines. 2022;10(1):42.