The impact of research on the world beyond academia has increasingly become an area of focus in research performance assessments internationally. Impact assessment is expected to incentivise researchers to increase engagement with industry, government and the public more broadly. Increased engagement is in turn expected to increase translation of research so decision-makers can use research to inform development of policies, programs, practices, processes, products, and other mechanisms, through which impact can be realised. However, research has shown that various factors affect research use, and evidence on ‘what works’ to increase decision-makers’ use of research is limited. The Conversation is an open access research communication platform, published under Creative Commons licence, which translates research into news articles to engage a general audience, aiming to improve understanding of current issues and complex social problems. To identify factors that predict use of academic research and expertise reported in The Conversation, regression analyses were performed using The Conversation Australia 2016 Annual Survey data. A broad range of factors predicted use, with engagement actions being the most common. Interestingly, different types of engagement actions predicted different types of use. This suggests that to achieve impact through increased engagement, a deeper understanding of how and why different engagement actions elicit different types of use is needed. Findings also indicate The Conversation is overcoming some of the most commonly identified barriers to the use of research: access, relevance, actionable outcomes, and timeliness. As such, The Conversation offers an effective model for providing access to and communicating research in a way that enables use, a necessary precursor to achieving research impact.
Citation: Zardo P, Barnett AG, Suzor N, Cahill T (2018) Does engagement predict research use? An analysis of The Conversation Annual Survey 2016. PLoS ONE 13(2): e0192290. https://doi.org/10.1371/journal.pone.0192290
Editor: James Wilsdon, University of Sheffield, UNITED KINGDOM
Received: July 20, 2017; Accepted: January 22, 2018; Published: February 7, 2018
Copyright: © 2018 Zardo et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: The consent statement included in the survey stated that the data would only be seen by the research team. Consent was therefore given with the understanding that the data would not be made public, and we believe it would be unethical to share or publish the data under these conditions. The ethics approval for this project was provided by Queensland University of Technology University Human Research Ethics Committee (UHREC)—they can be contacted on firstname.lastname@example.org or email@example.com.
Funding: No funding was received for the conduct of this study. Dr Tim Cahill was employed by The Conversation at the time of the survey development and, as his employer, The Conversation paid his salary. No salary was provided specifically for the survey development as this was done as part of Dr Cahill’s normal duties. Dr Cahill invited Dr Zardo to contribute to the survey design. Dr Zardo contributed to the survey design as outlined in the methods section of the paper and was not paid for this work or any other work on this paper, in either funding or salary, from The Conversation. Dr Cahill was employed at KPMG Australia at the time of reviewing the manuscript; this review was done on personal time, and KPMG did not provide funding for this project. Neither The Conversation nor KPMG had any additional role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. Dr Zardo, Associate Professor Barnett and Associate Professor Suzor did not receive any funding or salary from The Conversation to undertake this project; the study design, data collection and analysis, decision to publish, and preparation of the manuscript were undertaken in their roles as academics at Queensland University of Technology.
Competing interests: The lead author has not received any funding from The Conversation or any other grant to complete these analyses. The Conversation has not been involved in any way in the design of the methodology and analysis of the results. Tim Cahill was employed as Chief Data Scientist at The Conversation at the time that the survey was developed. Dr Zardo was invited by Dr Cahill to contribute to the survey but did not receive any funding from The Conversation at the time of survey development or at any other time. The survey was developed as part of usual business for The Conversation; it was not developed for the purposes of this study. Dr Zardo developed the design of the methodology and analysis for the study and produced the manuscript with input from the other authors as described in Author Contributions. Dr Cahill currently works for KPMG Australia. KPMG Australia is not affiliated with The Conversation. KPMG Australia has not been involved in this study in any way. These commercial affiliations do not alter our adherence to PLOS One policies on sharing data and materials. There are no patents, products in development or marketed products to declare.
In 2018, for the first time, Australian universities will be assessed on 1) the extent to which they engage with industry, government and other research end-users, and 2) the impact that their research has had on the world beyond academia. The Australian Engagement and Impact Assessment runs in parallel to the established Excellence in Research Australia (ERA) exercise, focused on research funding, publications, citations and peer review. While impact assessment has been on and off the Australian Government agenda for some time, it was the United Kingdom that implemented the world’s first national impact assessment in 2014 [2,3]. The UK Research Excellence Framework (REF) included an impact assessment based on detailed case studies of the impacts research has had beyond academia that could be traced to an underpinning research base.
The UK’s action on research impact assessment spurred further Australian developments. In 2015 the Academy of Technology, Science and Engineering (ATSE) ran the Research Engagement Australia (REA) pilot. The REA pilot developed and tested income-based metrics to demonstrate engagement between academia and industry. Focusing on engagement metrics addressed two key criticisms of the UK’s case study approach to impact assessment. First, metrics that use existing data sets minimise the burden on the academic sector in collecting evidence of impact [6,7]. Secondly, while impact can take a long time to be achieved and is not in the direct control of academics, engagement metrics based on industry-funded research demonstrate vested relationships with industry, and can indicate potential future impact. In 2015, the Australian Government also commissioned a review of research funding which recommended that assessment of research impact be added to national research performance assessments.
In December 2015 the Australian Government launched the National Innovation and Science Agenda (NISA) that announced the introduction of a research impact assessment. The purpose of the impact assessment, as outlined in the NISA, was to incentivise higher education institutions to increase engagement with industry and government to increase the positive impacts that research has on the economy and society more broadly. The central hypothesis of this approach is that increased engagement drives increased impact. This premise is supported by research that has identified close collaboration, increased interaction as well as access, trust, relevance and communication as key factors that affect research use by decision-makers in industry and government [10–14].
The problem with focusing solely on engagement as an indicator of future impact is the failure to acknowledge the significant challenge of getting decision-makers to use research evidence. To get from engagement to impact, research must be used in some way. Research must become an input to the decision-making, development, production, implementation or use of policies, programs, practices, products, services, etc.: the mechanisms that deliver impact beyond academia. Research use has been defined as the application of research evidence and expertise to inform decision-making within planning, development, implementation and evaluation processes [15–19]. Studies of government decision-makers’ use of research have shown that different types of research evidence are used. Decision-makers directly commission research relevant to an issue they are facing, search for existing individual studies, literature reviews or systematic reviews of research, and also seek out expert evidence-based advice directly from academics who are recognised experts in their field [16,20–24].
There are also different ways in which research evidence is used. Types of research use have commonly been categorised as: instrumental use, which refers to direct application of research-based evidence; conceptual use, referring to use of research evidence to inform understanding, discussion and/or debate without direct application; and tactical, political or symbolic use, which refers to the use of research evidence to inform a decision or position previously established [10,13,15,25–28].
The definitions and types of use outlined above align with the Australian Research Council definition of impact, which built on the UK’s definition of research impact: ‘The contribution that research makes to the economy, society, environment and culture beyond the contribution to academic research’. This definition suggests that research must be used in some way that contributes to the world beyond academia, but does not specify a particular type of use or contribution. Impact case study templates used in both the UK and Australia require researchers to describe how their research has been used outside of academia and how that use represents an important contribution [2, 23].
Engagement has been clearly differentiated from use in the work of Redman et al. [15,29,30], who define engagement as actions ‘that create a bridge between the potential reflected in the capacity to use research and the eventual outcome of research application’. The definition of engagement adopted for the Australian research impact assessment is ‘The interaction between researchers and research end-users outside of academia, for the mutually beneficial transfer of knowledge, technologies, methods or resources’. Here, engagement refers to how research-based outputs are transferred to, accessed by and shared between researchers and users of research outside of academia. This aligns with Redman et al.’s identification of research access, research appraisal and interactions between decision-makers and researchers as engagement actions.
The 2017 Innovation and Science Australia Performance Review found there are limited mechanisms to support research translation and collaboration between researchers and industry. Research translation and implementation science are fields of research that have developed in response to the substantial challenges faced by those who have attempted to increase use of research in industry and government [33,34]. There is an extensive research base on the factors affecting decision-maker research use [10,11,35–40]. There has also been extensive work in using identified factors and relevant theories and frameworks [41–44] to design interventions aimed at increasing use of research [12,14,18,41–45]. Many of these frameworks are focused on: increasing interaction and collaboration between researchers and decision-makers, for example through co-production [41–44] of research and implementation projects; increasing access to and accessibility of research, e.g. through synthesising research and improving research communication; and building capacity for decision-makers to engage with research evidence and researchers and vice versa, often through training and facilitation [12,14].
While it has been established that many factors can affect research use, and many interventions have been developed, high quality empirical research testing interventions that aim to increase decision-makers’ research use is more limited [12,14,29,45]. A recent systematic review of research covering the years 1990 to 2015 identified just 16 studies that specifically sought to test interventions aimed at increasing decision-makers’ use of research. Seven of these interventions effectively increased research use; the other nine failed to show any change in behaviour. The authors concluded that ‘skill or the intention to use evidence, in itself, cannot be regarded as a reliable indicator of behaviour change in practice’. A complex interplay of factors can affect research use, which can make the transition from engagement to research use and subsequent impact extremely difficult. Engagement is only a first, albeit necessary, step on a challenging pathway to impact that has not yet been extensively mapped, and for which there is no definitive means of navigation.
The systematic review by Langer (2016) showed that the interventions with the most reliable evidence of increasing decision-makers’ research use are those that facilitated research access and communication, and built skills for research use. This is supported by the recent review by Sarkies et al. (2017), which identified three studies with experimental designs that tested interventions. Two of the studies focused on access and communication; the other focused on the use of knowledge brokering to support decision-makers to use research. The access mechanisms used in these effective interventions were mainly online evidence portals designed for specific research fields, and systematic or other literature reviews that summarised evidence for decision-makers. Targeted and tailored research messages were the most effective communication approaches. While there are many factors affecting research use that are not directly in the control of academia, research access and communication are factors that academics and research institutions can increase and improve.
There has also been a great deal of work on increasing access to research through the open access publishing movement; however, there are a limited number of studies that have examined the use of open access academic research by those outside of academia [46,47]. A recent review found just 53 studies published between 2001 and mid-2017 on the use of open access research by those outside of academia. The two largest and most comprehensive of these studies were on the use of institutional research repositories. These studies showed that there are people outside of academia who seek out and use research across a range of fields for both work-related and personal reasons [46–48].
It is clear from the Engagement and Impact Assessment in Australia and much research in the field of research translation that direct engagement, interaction, collaboration and co-production of research are needed to support increased use and impact of research [12,14,15,31,41,49,50]. For example, the engagement indicators used in the Engagement and Impact Assessment in Australia are based on establishment of research partnerships, and collaboration and co-production require researchers and decision-makers or end-users to directly work together. The evidence that people are accessing and using research outside of direct engagement and translation efforts [46–48], coupled with the consistent finding that access and accessibility are critical factors enabling research use [11,12], highlights the need to better understand independent research use enabled by open access platforms and identify whether this type of use also has a relationship with indirect engagement actions.
The Conversation is an online research communication platform that provides access to academic research evidence that has been ‘translated’ and communicated in a form more accessible to general audiences than traditional academic journal articles. The Conversation articles include pieces that report on or review research findings as well as articles that offer academic expert opinion and analysis on current news issues. The Conversation was established in 2011, with funding from four Australian universities and one Australian government funded research institution. The Conversation sought to provide better understanding of current and complex issues in the news through high quality, independent journalism based on research evidence and academic expertise. The Conversation has a strong open access mandate and ethic, assuring that their content will remain free to both read and share.
The Conversation has experienced rapid uptake since its launch in 2011, and chapters have since been established in the US, UK, South Africa and France. The Conversation attracts a monthly readership of approximately 3.8 million, with a further 35 million readers reached each month through republication. The Conversation publishes short journalistic-style articles written by academic researchers, who must have doctoral research qualifications or be studying for a doctorate. The Conversation’s editors, who have journalism backgrounds and expertise, select articles for publication and guide development of article content to ensure it is relevant and accessible to a general audience. The Conversation articles are published using a Creative Commons licence, which means other organisations and individuals can access them for free and can also republish them in full at no cost.
Each year The Conversation undertakes an annual survey of their Australian readership. The survey includes a broad range of questions including reader demographics and questions relating to readers’ main purpose for reading The Conversation, what they value about The Conversation, and how frequently they read The Conversation. Importantly, the survey also asks about the actions they take after reading articles on The Conversation and whether and how readers have used The Conversation articles to inform decision-making related to their work and their lives. The actions outlined in the survey are focused on actions that involve engagement with The Conversation content, including: ‘left a comment on the article’, ‘shared an article on social networks’, ‘discussed with friends or colleagues’, ‘contacted a local politician or government official’. The survey also asked about types of use of articles including: to inform development of policy, or to inform personal attitude or behaviour change. The types of engagement action and use of research captured in the survey are much broader than alternative metrics or ‘altmetrics’ often used as proxy measures of engagement and impact [52–54]. The Conversation annual survey therefore provides a unique opportunity to explore how the broad range of factors examined in the survey, including engagement, affect the use of research read in The Conversation.
This paper reports the findings of regression analyses undertaken on the data collected via The Conversation Australia Annual Survey 2016. The analysis aimed to answer the following research questions:
- Do engagement actions predict use of information read in The Conversation articles?
- What other factors predict use of information read in The Conversation articles?
Materials and methods
The Conversation developed The Conversation Annual Survey 2016, with input from Dr Zardo. Dr Zardo led the inclusion of the questions on use of research that form the focus of this study (see S1 File). The survey was made available to readers via The Conversation Australian website and through their online newsletter mailed to subscribers. It was also advertised via The Conversation social media. The survey was open from 7 to 15 April 2017. Survey Monkey was used to distribute the survey and collect responses. The method of survey collection was non-random and we cannot calculate a survey response rate or compare the characteristics of respondents and non-respondents.
A low risk human research ethics application was approved by the Queensland University of Technology University Human Research Ethics Committee to analyse the data. The first survey question asked for consent for the data to be used in subsequent reporting. Eighteen people (0.23% of the total sample) did not give consent, leaving a total of 7,772 consenting participants for analysis.
Data on use of research and academic expertise reported in The Conversation articles were generated from the survey question ‘Have you used articles from The Conversation to do any of the following?’ with the following response options, each asking about a different type of use (tick as many as apply):
- Develop strategy, policy, presentations, decisions and/or directions which have been documented, for example, in policy briefs, papers, projects plans or reports, PowerPoints, etc.
- Further support an existing strategy, policy, program or business decision
- Inform general understanding, discussion and debate on strategy, policy, project or business topics
- Change my own behaviour and/or attitudes in my personal life
These are the four measures of use that we aimed to predict. The first three types of use align to the instrumental, tactical/symbolic and conceptual types of research use discussed in the introduction and are specifically related to use of articles in relation to work-related decision-making. The fourth seeks to identify use of research to inform one’s personal decision-making.
Data on engagement actions were generated from the question ‘What actions have you taken as a result of reading an article on The Conversation?’ with the following response options (tick as many as apply):
- Republished the article
- Left a comment on the article
- Shared an article on social networks (e.g. Facebook, Twitter) or by email
- Discussed with friends or colleagues
- Printed to read or share
- Contacted the author to discuss their ideas
- Contacted the author to work with them
- Contacted the author to ask about studying with them or at their university
- Used the article in a report
- Used the article as a classroom resource or as basis of discussion with students
- Contacted a local politician or government official
- Undertaken further research
The actions listed here represent dissemination, communication and other activities undertaken by the reader to engage others or the author, to engage with The Conversation content, or to engage with other content. Each response to these questions was automatically made into an individual variable with a binary (yes/no) outcome.
The survey also asked about: age, gender, state of residence, place of residence (city, regional, rural), education, income, sector worked within; employment position; business owner and business type.
To determine the factors that predicted the four types of use of The Conversation articles outlined above, two methods of regression analysis were used: logistic regression and classification trees. Regression methods were selected because our main aim was to find which variables predicted use. As there is currently limited published quantitative evidence of factors that predict research use, an inclusive approach was taken: every survey question was treated as a potential predictor of research use, excluding the question on research use (the outcome) and the question on university affiliation. See S2 File for the full list of included predictor variables. We used two regression methods to see whether there was convergence or divergence in the predictors selected, as there is often no single regression method that can be definitively described as the ‘best’ method. See the following references for further examples of studies using a mix of regression methods [55–57].
The logistic regression analysis was undertaken in steps. To ensure that the predictor variables were not highly correlated with each other, variance inflation factors (VIFs) were used and were less than 5 for all variables, and were between 1 and 1.3 for variables that significantly predicted each type of use, indicating that collinearity was not an issue. The first step in the logistic regression was a univariate analysis where each variable was independently tested against research use. Then in the second step, all factors that were significant predictors of research use were entered into a regression model in a single block. Then all significant variables in the first block were entered into a second model in one block. This process was repeated until all factors in the block made a statistically significant contribution to the model. See S3 File for details of logistic regression results and classification tree results.
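As an illustration, the collinearity check described above can be computed directly: the VIF for predictor j is 1/(1 − R²_j), where R²_j comes from regressing predictor j on all other predictors. The sketch below uses synthetic data rather than the survey variables (all names and values are illustrative), assuming only numpy:

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of X.
    VIF_j = 1 / (1 - R^2_j), where R^2_j comes from regressing
    column j on the remaining columns (with an intercept)."""
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])   # design matrix with intercept
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
        out[j] = 1.0 / (1.0 - r2)
    return out

rng = np.random.default_rng(0)
x1 = rng.normal(size=500)
x2 = rng.normal(size=500)              # independent of x1: VIF should be near 1
x3 = x1 + 0.1 * rng.normal(size=500)   # nearly collinear with x1: VIF should be large
X = np.column_stack([x1, x2, x3])
print(np.round(vif(X), 1))
```

Under the paper's rule of thumb, the near-collinear pair would be flagged (VIF well above 5), while the independent predictor would pass.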
A classification tree was generated for each of the four types of research use. Classification trees use predictors to split the data into the two most distinct groups, e.g. in a dataset of cause of death, deaths from lung cancer (yes/no) might be split by smoking status (yes/no). Our dependent variable was research use and we included all other survey question responses as potential predictors. After each split is made the two nodes created are considered for further splitting until no further splits can be made. The stopping rule for splits is based on the improvement in model fit, so each split must improve the model’s predictive ability.
Each time a split is made, every predictor is considered using every possible grouping or cut-off, and the variable that creates the biggest difference in the dependent variable is chosen. This intensive selection makes classification trees vulnerable to over-fitting, and hence 10-fold cross-validation is used to avoid selecting spurious predictors. A key advantage of classification trees is that they are able to consider multiple potential predictors but generally avoid the problem of multicollinearity, because the staged approach makes it unlikely that two highly correlated predictors will be selected. Trees are also useful for uncovering interactions between predictors. Trees are generally inefficient for modelling linear or non-linear associations with continuous predictors; however, most of our predictors were categorical.
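The greedy split selection described above can be sketched in a few lines. This is a minimal illustration, not the software used in the study: each binary predictor is scored by the reduction in Gini impurity its yes/no split achieves, and the best one is chosen, as a tree does at each node. The variable names and probabilities are invented for the example.

```python
import numpy as np

def gini(y):
    """Gini impurity of a 0/1 outcome vector."""
    if len(y) == 0:
        return 0.0
    p = y.mean()
    return 2 * p * (1 - p)

def best_split(X, y):
    """Score every binary predictor and return the column whose yes/no
    split gives the largest drop in Gini impurity (the greedy choice
    a classification tree makes at each node), plus that gain."""
    base = gini(y)
    best_j, best_gain = None, 0.0
    for j in range(X.shape[1]):
        left, right = y[X[:, j] == 1], y[X[:, j] == 0]
        weighted = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        gain = base - weighted
        if gain > best_gain:
            best_j, best_gain = j, gain
    return best_j, best_gain

rng = np.random.default_rng(1)
n = 2000
used_in_report = rng.random(n) < 0.14   # hypothetical engagement action
noise = rng.random(n) < 0.5             # unrelated predictor
# simulated outcome depends strongly on used_in_report
use = np.where(used_in_report, rng.random(n) < 0.45, rng.random(n) < 0.12)
X = np.column_stack([noise, used_in_report]).astype(int)
j, gain = best_split(X, use.astype(int))
print(j, round(gain, 3))  # column 1 (used_in_report) should be selected
```

In a full tree, this selection is repeated within each resulting node, with cross-validation guarding against spurious splits.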
Probability ratios for each node of each tree were calculated in relation to the root node, where the root node is the overall sample before any splits are made. For example, if the probability of research use in the root node was 0.1 and the probability of use in women was 0.2 then the probability ratio for women would be 2.
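The worked example above translates directly into a one-line calculation (the figures are the text's hypothetical example, not survey results):

```python
# Probability ratio of a node relative to the root node.
root_p = 0.10   # probability of research use in the whole sample (root node)
node_p = 0.20   # probability of research use within one node (e.g. women)
ratio = node_p / root_p
print(ratio)  # 2.0
```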
Overall results of the regression analyses are presented in Table 1. The results of the logistic regression are discussed together with the results of the classification tree below. Note that the table only includes predictors that showed a relationship to the type of use outcome.
Positive predictors are in green and negative in orange.
The total sample (N = 7,772) had an overall probability of:
- 15% for using The Conversation articles to inform development of strategy, policy, programs, etc.
- 17% for using The Conversation articles to support work-related decisions previously made.
- 71% for using The Conversation articles to inform work-related discussion and debate.
- 53% for using information read in The Conversation articles to inform personal attitude or behaviour change.
Use of research to inform work-related discussion, debate
‘To assist with work’ was the only factor that predicted this type of use across both regression analyses. The tree showed a 27% increase in probability and the logistic regression showed 1.45 times greater odds of this type of use. The logistic regression identified a greater number of predictors than the tree. Among the engagement predictors identified, those who ‘used the article in a report’, ‘undertook further research’ or ‘republished the article’ had more than twice the odds of this type of use compared to those who did not take these actions.
Use of research to develop strategy, policy, presentations, decisions and/or directions, etc.
‘Used the article in a report’ was the first split in the classification tree for this type of use, showing that the 14% of respondents who had used a The Conversation article in a report had 3.1 times greater probability of using an article in this way. The logistic regression indicated those who used an article in a report had 4.3 times greater odds of this type of use.
Logistic regression also indicated that ‘politician, policy officer and other government worker’ (referred to from here on as policy decision-makers) had 3.2 times higher odds of this type of use compared to participants in other roles. The classification tree showed that the 6% (N = 466) of respondents who had used an article in a report, identified ‘assist with work’ as a main reason for reading The Conversation, and worked as policy decision-makers were 5.5 times more likely to use an article in this way. This combination of factors was the strongest predictor of this type of use in the classification tree.
The logistic regression identified ‘Chairperson, Director, Chief Executive Officer, Chief Financial Officer, Chief Operating Officer, Owner, Partner or Proprietor’ and ‘General Manager, Department Head, Senior Executive, Manager, or Professional’ as predictors of this type of use, however the classification tree did not distinguish between these and other positions (other than policy decision-makers).
The classification tree also showed that, excluding policy decision makers, the 4% of participants who ‘used an article in a report’, indicated ‘assist with work’ as a main reason for reading The Conversation, and indicated they undertook ‘further research’, had a 4.5 times greater probability of this type of use compared with the sample average.
‘Assist with work’ was the basis of the second split in the classification tree for this type of use. The tree showed that the 8% (N = 622) who had ‘used the article in a report’ and indicated ‘to assist with work’ as a main reason for reading The Conversation were 3.7 times more likely to use an article in this way. Both regression analyses indicated similar results for ‘assist with work’, with an odds ratio of 1.45 in the logistic regression and a probability ratio of 1.27 in the tree.
Use of research to support a work-related decision or action that had already been made
‘Used the article in a report’ predicted this type of use in the logistic regression, and was also the first split in the classification tree for this type of use. The tree showed that the 14% of respondents who used a The Conversation article in a report were 2.5 times more likely to use an article in this way, compared to the sample average.
Employment as ‘Chairperson, Director, Chief Executive Officer, Chief Financial Officer, Chief Operating Officer, Owner, Partner or Proprietor’ or ‘General Manager, Department Head, Senior Executive, Manager, or Professional’, as well as employment as a policy decision-maker, predicted this type of use across both regression methods. The tree showed that those who ‘used an article in a report’ and were employed in one of these three employment positions were 3.3 times more likely to use research in this way compared with the sample average.
The logistic regression indicated that the ‘Chairperson, Director, Chief Executive Officer, etc.’ group had nearly twice the odds of using research in this way, compared to those in other employment positions. At 1.91, this was the highest odds ratio of the predictive employment positions. The logistic regression also showed that policy decision-makers and the ‘General Manager, Department Head, Senior Executive, etc.’ group had 1.57 and 1.47 times greater odds, respectively, of using research in this way compared with other employment positions.
The logistic regression also showed that those who ‘discussed research with friends and colleagues’ had 1.62 times greater odds of this type of use compared to those who did not indicate that they discussed articles. The classification tree showed that the 5% (N = 389) who used an article in a report, were employed in senior roles or were policy decision-makers, and discussed articles with friends and colleagues were 3.5 times more likely to use research in this way. Therefore, for those in policy and senior roles, discussing articles with friends and colleagues further increased their likelihood of using articles in this way.
Use of research to inform personal attitude and behaviour change
‘Discussed with friends and colleagues’ was the first split in the tree for this type of use. The tree showed that the 80% of participants who discussed articles with friends and colleagues were 1.13 times more likely to use articles in this way compared to the sample average. The classification tree also showed that the 48% of participants who ‘discussed articles with friends and colleagues’ and read The Conversation ‘to explain the news’ were 1.24 times more likely to use articles in this way.
The logistic regression found that those who ‘discussed an article with friends and colleagues’ had twice the odds of this type of use, compared to those who did not indicate they discussed articles. Participants who indicated they read The Conversation ‘to explain the news’ had 1.48 times greater odds of using an article in this way, compared to those who did not indicate this main reason.
The logistic regression showed that participants who indicated ‘expert opinion and facts’ as a main reason for reading The Conversation had 1.47 times greater odds of using an article in this way. The classification tree showed a more complicated picture. The 22% of participants who discussed The Conversation articles with friends and colleagues, did not indicate ‘explain the news’ as a main reason for reading The Conversation, and instead indicated ‘for expert opinion and facts’ as their main reason were only 1.05 times more likely to use research in this way, compared to the sample average.
Analysis of The Conversation 2016 Annual Survey has yielded novel insights on how information in The Conversation articles is being used to inform readers’ decision-making in their work and personal lives. This study makes an important contribution to the high-quality quantitative literature on factors that predict decision-maker use of academic research evidence and expertise [11,16,58]. In particular, this study draws on data from a broad range of sectors and includes clearly defined types of use of The Conversation articles, addressing issues identified as challenges to the interpretation of outcomes in previous research [11,12]. Further, this study has included a focus on use of The Conversation articles in personal decision-making and shown that the factors that predict individuals’ use of The Conversation articles in personal decision-making differ from the factors that influence work-related decision-making.
This study set out to answer two questions: do engagement factors predict use, and what other factors predict use? The key factors that predicted work-related use of The Conversation articles were related to engagement actions taken after reading an article and to being employed in senior-level and policy-related positions. Work-related use showed much higher growth in predictive capability with additional factors, suggesting it is more dependent on a number of specific factors working in combination. This aligns with previous research focused on health policy contexts, which found that the predictive power of factors found to affect research use improved when factors were ‘manipulated simultaneously’. Of the three key factors that affected personal decision-making regarding attitude and behaviour change, two were related to participants’ main reasons for reading The Conversation, and one was an engagement action. Importantly, this study demonstrates the potential for end-user surveys to form part of the suite of metrics seeking to demonstrate research engagement and impact as part of national research performance assessments.
Engagement actions predict use
Engagement actions were the most common predictors of article use. Engagement actions predicted all four types of article use, across both regression models. Employment position was the only other variable that also predicted all four types of use, but not as frequently as engagement actions did, and only in the logistic regression. Engagement actions more frequently predicted work-related use of The Conversation articles, compared with use of The Conversation to inform personal attitude and behaviour change. Interestingly, different engagement actions predicted different types of use. For example, ‘discussed with friends or colleagues’ predicted use of an article to further support a work-related decision that had been previously made, and also predicted use of an article to inform the reader’s own personal attitude or behaviour change, but did not predict use of an article to directly inform development of strategy, policy, etc. These findings support the hypothesis that increased engagement can support increased research impact, but also highlight that not all engagement actions have the same effect. This suggests that a deeper understanding of how different engagement actions elicit different types of research use is needed to achieve research impact through increased engagement [12,53,60–62].
Key predictors of research use
The advantage of using both the logistic regression and classification tree analyses is that logistic regression identifies factors that independently predict use, while the classification tree highlights relationships between factors that predict use. This is important, as the lack of identification of relationships between variables has been highlighted as a limitation of regression analyses. As Table 1 demonstrates, a broad range of factors significantly predicted each type of article use across the two methods. For the sake of brevity, the remainder of the discussion will focus on factors identified as statistically significant predictors across both logistic regression and classification tree analyses. Table 2 provides an overview of the factors that predicted each type of research use across both regression methods.
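The complementarity of the two methods can be sketched with a toy calculation. The tree's ‘times more likely’ figures are subgroup use rates relative to the sample average, and successive splits expose combinations of predictors. A minimal sketch, using purely illustrative counts (not survey data) for two binary predictors, computes these tree-style ratios and shows how the combined subgroup stands out:

```python
# Illustrative counts only (not survey data): outcome = used research this way,
# partitioned by two binary predictors, as a classification tree would split them.
groups = {
    # (used_in_report, senior_role): (n_used, n_total)
    (True,  True):  (66,  400),    # both factors present
    (True,  False): (60,  500),
    (False, True):  (90,  1500),
    (False, False): (284, 7600),
}

n_used = sum(u for u, n in groups.values())
n_total = sum(n for u, n in groups.values())
baseline = n_used / n_total  # sample-average rate of this type of use

for (report, senior), (u, n) in groups.items():
    ratio = (u / n) / baseline  # the tree's "times more likely" figure
    print(f"report={report!s:5} senior={senior!s:5} rate={u / n:.3f} ratio={ratio:.1f}x")
```

In this sketch the subgroup with both predictors present is 3.3 times more likely than average to show the outcome, while each predictor alone yields a smaller ratio. A logistic regression on the same data would instead report one odds ratio per predictor, holding the other constant, which is why the two methods together give a fuller picture.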
Instrumental use of The Conversation articles.
‘Used the article in a report’ was the strongest predictor of use of The Conversation articles to develop strategy, policy, presentations, decisions and/or directions, etc. across both regression methods. Using research to directly inform development of policy, program, practice and other action has been described as ‘instrumental’ research use [10,13,15,25]. Importantly, this finding indicates that The Conversation articles have been relevant to decision-makers’ needs and actionable within their work-related context. This means that The Conversation is overcoming some of the most commonly identified barriers to decision-maker use of research: access, relevance, actionable outcomes, and timeliness [11,40,63].
The finding that use of an article in a report predicts use of an article to inform development of policy, strategy, project etc. makes intuitive sense, as these are both direct, instrumental uses of research. In testing assumptions for inclusion of predictors, ‘used article in a report’ was included because it was correlated with the outcome variable but was not highly correlated with other predictors, which is a necessary assumption for effective logistic regression analysis. Further, the outcome ‘using article to inform policy, strategy, program, project documents, etc.’ includes a number of different outputs, so it is possible that those who have used a The Conversation article in a report are also independently more likely to be using research instrumentally in multiple outputs. Qualitative follow-up research could provide greater detail and deepen understanding of instrumental use of The Conversation articles.
Working as a politician, policy officer, or government employee also predicted use of The Conversation articles to inform the development of strategy, policy, programs, etc., i.e. instrumental use. This is an important finding that demonstrates that politicians and policy officers are actively seeking out research evidence and academic expertise on The Conversation and using it to inform policy and program development. No other employment type, out of a total of 16 employment types, significantly predicted direct, instrumental use of research. This study shows that publishing in The Conversation offers a unique opportunity to directly inform and influence politicians and policy officers, with real potential for that research to be used in government policy and program decision-making.
‘Discuss with friends and colleagues’ is an engagement action that significantly predicted use of The Conversation articles to support existing strategy, policy and other work-related decisions. ‘To assist with work’ was the only response to the survey question ‘what are the main reasons you read The Conversation’ that predicted instrumental use of a The Conversation article. Together, these results highlight that people are accessing The Conversation for work-related purposes, and further strengthen the assertion that participants find The Conversation article content relevant to their work-related decision-making needs and actionable in their context [58,63,65]. The results strongly suggest that The Conversation model is an effective means for both providing access to research and communicating research in a way that enables use, a necessary precursor to achieving research impact.
Republishing an article significantly predicted instrumental use of The Conversation articles. The Conversation articles are made freely available under a Creative Commons licence, which allows republishing in full at no cost. This finding suggests that participants who republish articles work in roles with a research- or news-related reporting and/or communication and engagement aspect, and are using The Conversation as a key source of content. One of the main venues that republish The Conversation is the Australian Broadcasting Corporation (Australia’s national broadcaster). Article republishing significantly increases the potential for engagement and use beyond The Conversation’s direct readership, thereby also increasing the likelihood that use of The Conversation articles will lead to positive impact.
Undertaking further research was another engagement action that predicted instrumental use of The Conversation articles. This suggests that these participants were seeking out information for the purpose of directly informing understanding or decision-making. It also suggests that The Conversation is just one of a number of resources that participants were consulting to inform development of policy, program, strategy, etc. This supports previous research highlighting that research evidence is only ever one of the many inputs that inform policy, program and practice decision-making [27,66–68]. It may also suggest that participants sought to read the underlying research studies often linked to in The Conversation articles; however, this is not clear from the survey results. The term ‘research’ can be interpreted in myriad ways and is not always used in reference to academic research literature. Deeper understanding of the motivations of these particular participants could illuminate the types of situations and drivers that are motivating their search for research evidence and uncover the other sources of evidence they rely on.
Tactical use of The Conversation articles.
‘Used the article in a report’ also predicted use of articles to support a work-related decision or action that had already been made. ‘Used article in a report’ was again the strongest predictor for this type of use based on the classification tree. ‘Symbolic’ or ‘tactical’ use of research are terms coined to describe when research is used to support a decision or action made prior to research evidence being sought or examined [13,15,25].
Being employed as a ‘politician, policy officer or government employee’ also significantly predicted tactical use of a The Conversation article. This might appear to support the oft-repeated quip that politicians only seek information that confirms a predetermined stance or interest, sometimes referred to as ‘policy based evidence’ rather than ‘evidence based policy’, and certainly there are situations where this has occurred [70,71]. However, it has been effectively argued that conceptual and tactical use of research at any stage of the policy decision-making process is a valid use of research. Further, there are often occasions where political staffers and policy officers are not in control of the decision-making process [16,72]. In most cases policy decision-makers, at all levels, are responsible for implementing the decisions of the government of the day. As such, indicating this type of tactical use may be related to use of The Conversation articles to inform planning for implementation of policy, program or strategy decisions that have already been made and/or announced. Focusing on different types of research use, and recognising that research is only ever one of a multitude of inputs that inform decision-making, reflects the complex reality of policy development, which is not a linear, rational process and which is affected by a broad range of interconnected factors [13,25,69,71,73–76].
That policy decision-makers are using The Conversation articles both instrumentally and tactically suggests articles are being used in relation to particular situational and decision-making needs. Previous studies have also demonstrated that policy decision-makers use research both instrumentally and tactically [10,25,35]. A 2016 systematic review has highlighted the pivotal role that motivation and opportunity play in influencing decision-makers’ use of research. This suggests a more nuanced understanding of the motivations and opportunities that enable research use, potentially through detailed case studies, is urgently needed.
Interestingly, however, being employed in senior leadership and management positions, regardless of sector, also predicted this type of tactical use of The Conversation articles. And whilst employment position was a significant predictor of research use, employment sector was not. Specific positions in a given context were more important than the context itself in predicting use of The Conversation articles for work-related purposes. This finding links to the existing literature regarding the role that context plays in the design of interventions seeking to increase research use by public health decision-makers [30,77]. The findings suggest that regardless of the sector, identifying and strategically including those in senior, authoritative roles in intervention design will be critical. This is supported by systematic review findings that identified authority to make decisions and engaging opinion leaders and champions as key enablers of research use [40,63,69].
The finding that those at the most senior levels of organisations across all industries, as well as politicians and other government decision-makers, are using The Conversation articles to inform and support work-related decision-making is significant. These results highlight that while partner-based research funding, as evidence of engagement with industry and government, is an important indicator of future potential impact, indirect engagement efforts targeting those in senior-level and government positions also have potential for real impact.
Conceptual use of The Conversation articles.
‘To assist with work’ was the only factor that predicted use of The Conversation articles to inform work-related discussion, debate and understanding across both the logistic regression and classification tree analyses. This type of use, where research informs understanding without specific action, is defined as conceptual use. Measuring conceptual use via survey is valuable for various reasons. It is similar to social media analysis, which can identify whether a particular topic or piece of content has been discussed in the public sphere. However, The Conversation survey data can complement social media analytics as it can capture conceptual use that may only take place in personal, face-to-face conversation. Further, this type of use, as asked about in The Conversation survey, is directly focused on whether research has informed work-related discussion and debate.
Whilst there is no guarantee that conceptual use will shift to an applied type of use, such as tactical or instrumental, it can enable decision-makers to use research to support their business activity when the opportunity becomes available. Through his in-depth research on policy decision-making and development processes, Kingdon highlighted that significant policy change and development can occur when the right ‘window of opportunity’ is open. In this way, conceptual use of research and academic expertise as measured in The Conversation survey could be considered an indicator of research moving along the pathway to impact.
Use of The Conversation to inform one’s own personal attitude and behaviour change.
Interestingly, discussing articles with friends and colleagues was the only engagement action that predicted use of The Conversation to inform personal attitude or behaviour change. Together, the findings regarding discussing The Conversation articles with friends and colleagues suggest that this is a very active type of engagement, where one may be seeking to gather opinions to inform their own decision-making, or to use the content to influence the opinions or decision-making of others. The finding supports previous studies that have identified discussion and debate as important in enabling research take-up [59,78]. Indeed, leading research-translation-focused organisations have established forums to enable deliberative dialogue around research. Many systematic reviews have identified increased interaction, communication and face-to-face engagement as key enablers of research use [11,12,63]. However, the small number of interventions that have sought to use this approach shows limited evidence of effectiveness.
Reading The Conversation ‘to explain the news’ significantly predicted use of a The Conversation article to inform personal attitude and behaviour change. This finding suggests that for these participants The Conversation is providing greater detail and information that helps them understand a particular current issue and inform decision-making. In these instances The Conversation is enabling the use of research and academic expertise in ways that can have very direct and significant impact on people and communities. This opportunity for researchers to have a direct impact on the general public is unique. It is estimated that just 25–50% of all academic research is currently free to access on the Internet via open access journals and institutional repositories [79,80]. Even then, much of this research remains inaccessible to the general public, as it is written for an expert academic audience. As such, The Conversation, as a not-for-profit organisation that translates research evidence and academic expertise for non-academic audiences, fills a critical gap, enabling individuals to make evidence-informed decisions that can shape the direction and outcomes of human experience, actions and worldviews through translation and open communication of academic research and expertise.
Indicating ‘expert opinion and facts’ as a main reason for reading The Conversation also predicted use of The Conversation articles to inform personal attitude and behaviour change. As with ‘to explain the news’, the finding that ‘expert opinion and facts’ significantly predicted use of The Conversation articles suggests that The Conversation is providing a reliable source of academic expertise and research evidence that is relevant to readers’ needs and interests [27,69]. In 2015, Alperin completed one of the first comprehensive studies of non-academic use of institutional repositories in South America. Alperin found that those working outside academia represented between 16% and 25% of users. Of those, between 7.9% and 10.5% were using articles from repositories for personal reasons. Together these results indicate that there is significant capacity to engage with and use research amongst the general public, and therefore significant potential for research to have impact through open access research communication platforms.
This study is based on a survey that was designed by marketing and communications staff within The Conversation. Only the question on use of The Conversation was crafted by an academic and based on academic literature. The Conversation articles are based on research evidence as well as academic expertise and opinion, and as the survey responses are not linked to individual outputs, we do not know which articles were actually used. This is an issue shared by much research in the field that has surveyed government decision-makers about their use of research. Despite these limitations, this data set has produced findings reflected in the academic literature, as highlighted throughout the discussion. A strength of this analysis is that the types of use of The Conversation were clearly defined, which has also been an issue in past studies [11,69]. A further limitation is that the analysis is based on an Australian survey, and the responses therefore reflect use of The Conversation by Australians only. However, The Conversation is also available in the UK, which has also introduced research impact assessment in its national research performance frameworks, as well as in the US, France, Canada, Indonesia and Africa. Future research on use of The Conversation in other countries, to provide a comparison to these findings, would be valuable.
The Conversation Annual Survey has provided valuable data demonstrating extensive engagement with and use of research and academic expertise by those working in sectors beyond academia. Overall, the major contribution of the study is the set of findings demonstrating that the majority of factors that predicted the four types of use of The Conversation articles were related to engagement actions undertaken after reading a The Conversation article, participants’ main reasons for reading The Conversation, and senior and policy-related employment positions. Importantly, however, different factors predicted different types of use, highlighting that interventions seeking to achieve research impact through increased engagement must be based on a more detailed and nuanced understanding of engagement and use than is currently depicted in the rhetoric surrounding ‘research impact’. These findings suggest that senior and influential decision-makers can play a critical role in the development of strategies and interventions seeking to support or increase use of research. Further, this study has shown that The Conversation is providing research-informed articles that are being used by industry and government decision-makers to inform work-related decision-making, discussion and debate. This indicates that decision-makers are finding content on The Conversation that is relevant to their decision-making needs and actionable in their context. This highlights that, for academics, publishing in The Conversation provides an excellent opportunity to increase research use, and therefore potential research impact.
We would like to thank The Conversation for the opportunity to contribute to the development of the survey and to analyse and report on survey results.
- 1. Australian Research Council. Engagement and Impact Assessment Consultation Paper [Internet]. 2016 [cited 2017 Jul 12]. Available from: http://www.arc.gov.au/sites/default/files/filedepot/Public/ARC/consultation_papers/ARC_Engagement_and_Impact_Consultation_Paper.pdf
- 2. Greenhalgh T, Fahy N. Research impact in the community-based health sciences: an analysis of 162 case studies from the 2014 UK Research Excellence Framework. BMC Med. 2015;13:232. pmid:26391108
- 3. Penfield T, Baker MJ, Scoble R, Wykes MC. Assessment, evaluations, and definitions of research impact: A review. Res Eval. 2014 Jan 1;23(1):21–32.
- 4. Wilkinson C. Evidencing impact: a case study of UK academic perspectives on evidencing research impact. Stud High Educ. 2017 Jun 16;1–14.
- 5. Cahill T, Wenham M, Australian Academy of Technology and Engineering. Research engagement for Australia: measuring research engagement between universities and end users: a report of a pilot study by the Australian Academy of Technology and Engineering (ATSE). [Internet]. Melbourne, Vic.: Australian Academy of Technology and Engineering; 2016 [cited 2017 Jul 12]. Available from: https://www.atse.org.au/content/publications/reports/industry-innovation/research-engagement-for-australia.aspx
- 6. Watermeyer R. Issues in the articulation of ‘impact’: the responses of UK academics to ‘impact’ as a new measure of research assessment. Stud High Educ. 2014 Feb 7;39(2):359–77.
- 7. Hinrichs-Krapels S, Grant J. Exploring the Effectiveness, Efficiency and Equity (3e’S) of Research and Research Impact Assessment [Internet]. Rochester, NY: Social Science Research Network; 2016 Dec [cited 2017 Jul 12]. Report No.: ID 2881917. Available from: https://papers.ssrn.com/abstract=2881917
- 8. Watt I. Review of Research Policy and Funding Arrangements. Department of Education and Training; 2015.
- 9. Australian Government. Measuring impact and engagement in university research. 2015.
- 10. Lavis JN, Robertson D, Woodside JM, McLeod CB, Abelson J, Group TKTS. How Can Research Organizations More Effectively Transfer Research Knowledge to Decision Makers? Milbank Q. 2003;81(2):221–48. pmid:12841049
- 11. Oliver K, Innvar S, Lorenc T, Woodman J, Thomas J. A systematic review of barriers to and facilitators of the use of evidence by policymakers. BMC Health Serv Res. 2014;14(1):2.
- 12. Langer L, Tripney J, Gough D, University of London, Social Science Research Unit, Evidence for Policy and Practice Information and Co-ordinating Centre. The science of using science: researching the use of research evidence in decision-making. 2016.
- 13. Amara N, Ouimet M, Landry Ré. New Evidence on Instrumental, Conceptual, and Symbolic Utilization of University Research in Government Agencies. Sci Commun. 2004 Sep 1;26(1):75–106.
- 14. Sarkies MN, Bowles K- A, Skinner EH, Haas R, Lane H, Haines TP. The effectiveness of research implementation strategies for promoting evidence-informed policy and management decisions in healthcare: a systematic review. Implement Sci. 2017 Nov 14;12:132. pmid:29137659
- 15. Redman S, Turner T, Davies H, Williamson A, Haynes A, Brennan S, et al. The SPIRIT Action Framework: A structured approach to selecting and testing strategies to increase the use of research in policy. Soc Sci Med. 2015 Jul 1;136:147–55. pmid:26004208
- 16. Haynes AS, Gillespie JA, Derrick GE, Hall WD, Redman S, Chapman S, et al. Galvanizers, Guides, Champions, and Shields: The Many Ways That Policymakers Use Public Health Researchers. Milbank Q. 2011;89(4):564–98. pmid:22188348
- 17. Haynes AS, Derrick GE, Chapman S, Redman S, Hall WD, Gillespie J, et al. From “our world” to the “real world”: Exploring the views and behaviour of policy-influential Australian public health researchers. Soc Sci Med. 2011;72(7):1047–55. pmid:21414704
- 18. Armstrong R, Waters E, Dobbins M, Anderson L, Moore L, Petticrew M, et al. Knowledge translation strategies to improve the use of evidence in public health decision making in local government: intervention design and implementation plan. Implement Sci IS. 2013 Oct 9;8:121. pmid:24107358
- 19. Lavis JN, Oxman AD, Moynihan R, Paulsen EJ. Evidence-informed health policy 1—Synthesis of findings from a multi-method study of organizations that support the use of research evidence. Implement Sci. 2008;3:53. pmid:19091107
- 20. Petticrew M, Whitehead M, Macintyre SJ, Graham H, Egan M. Evidence for public health policy on inequalities: 1: The reality according to policymakers. J Epidemiol Community Health. 2004 Oct 1;58(10):811–6. pmid:15365104
- 21. Rycroft-Malone J, Seers K, Titchen A, Harvey G, Kitson A, McCormack B. What counts as evidence in evidence-based practice? J Adv Nurs. 2004 Jul 1;47(1):81–90. pmid:15186471
- 22. Perrier L, Mrklas K, Lavis JN, Straus SE. Interventions encouraging the use of systematic reviews by health policymakers and managers: A systematic review. Implement Sci [Internet]. 2011;6(1). Available from: http://www.scopus.com/inward/record.url?eid=2-s2.0-79955136527&partnerID=40&md5=b50c64e605b85deea478ad1689ec2a33
- 23. Lavis JN. Moving forward on both systematic reviews and deliberative processes. Heal Policy. 2006 Jan;1(2):59–63.
- 24. Lavis J. How Can We Support the Use of Systematic Reviews in Policymaking? PLoS Med. 2009;6(11).
- 25. Weiss CH. The many meanings of research utilisation. Public Adm Rev [Internet]. 1979 [cited 2017 Jul 18];39. Available from: https://sites.ualberta.ca/~dcl3/KT/Public%20Administration%20Review_Weiss_The%20many%20meanings%20of%20research_1979.pdf
- 26. Weiss CH, Murphy-Graham E, Birkeland S. An Alternate Route to Policy Influence: How Evaluations Affect D.A.R.E. Am J Eval. 2005 Mar 1;26(1):12–30.
- 27. Hanney SR, Gonzalez-Block MA, Buxton MJ, Kogan M. The utilisation of health research in policy-making: concepts, examples and methods of assessment. Health Res Policy Syst. 2003;1(2).
- 28. Davies HT, Powell AE, Nutley SM. Mapping the conceptual literature [Internet]. NIHR Journals Library; 2015 [cited 2017 Nov 27]. Available from: https://www.ncbi.nlm.nih.gov/books/NBK299408/
- 29. Brennan SE, McKenzie JE, Turner T, Redman S, Makkar S, Williamson A, et al. Development and validation of SEER (Seeking, Engaging with and Evaluating Research): a measure of policymakers’ capacity to engage with and use research. Health Res Policy Syst. 2017;15:1. pmid:28095915
- 30. Makkar SR, Turner T, Williamson A, Louviere J, Redman S, Haynes A, et al. The development of ORACLe: a measure of an organisation’s capacity to engage in evidence-informed health policy. Health Res Policy Syst [Internet]. 2016 Jan 14;14. Available from: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4712550/
- 31. Australian Research Council. Engagement and Impact Assessment Pilot 2017 Report [Internet]. Commonwealth of Australia; 2017. Available from: http://www.arc.gov.au/sites/default/files/filedepot/Public/EI/Engagement_and_Impact_Assessment_Pilot_2017_Report.pdf
- 32. Innovation and Science Australia. Performance Review of the Australian Innovation, Science and Research System 2016 | Key findings [Internet]. 2016 [cited 2017 Jul 12]. Available from: https://industry.gov.au/Innovation-and-Science-Australia/Documents/ISA-system-review/index.html
- 33. Rychetnik L, Bauman A, Laws R, King L, Rissel C, Nutbeam D, et al. Translating research for evidence-based public health: key concepts and future directions. J Epidemiol Community Health [Internet]. 2012 May 8; Available from: http://jech.bmj.com/content/early/2012/05/07/jech-2011-200038.abstract
- 34. Fixsen DL, Blase KA, Naoom SF, Wallace F. Core Implementation Components. Res Soc Work Pract. 2009 Sep 1;19(5):531–40.
- 35. Landry R, Lamari M, Amara N. The Extent and Determinants of the Utilization of University Research in Government Agencies. Public Adm Rev. 2003;63(2):192–205.
- 36. Estabrooks CA, Floyd JA, Scott-Findlay S, O’Leary KA, Gushta M. Individual determinants of research utilization: a systematic review. J Adv Nurs. 2003 Sep 1;43(5):506–20. pmid:12919269
- 37. Innvær S, Vist G, Trommald M, Oxman A. Health policy-makers’ perceptions of their use of evidence: a systematic review. J Health Serv Res Policy. 2002 Oct 1;7(4):239–44. pmid:12425783
- 38. Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O. Diffusion of Innovations in Service Organisations: Systematic Review and Recommendations. Milbank Q. 2004;82(4):581–629.
- 39. Mitton C, Adair CE, McKenzie E, Patten SB, Perry BW. Knowledge Transfer and Exchange: Review and Synthesis of the Literature. Milbank Q. 2007 Dec;85(4):729–68. pmid:18070335
- 40. Orton L, Lloyd-Williams F, Taylor-Robinson D, O’Flaherty M, Capewell S. The Use of Research Evidence in Public Health Decision Making Processes: Systematic Review. PLoS ONE. 2011;6(7).
- 41. Powell K, Kitson A, Hoon E, Newbury J, Wilson A, Beilby J. A study protocol for applying the co-creating knowledge translation framework to a population health study. Implement Sci. 2013 Aug 29;8:98. pmid:23984982
- 42. Rowley E, Morriss R, Currie G, Schneider J. Research into practice: Collaboration for Leadership in Applied Health Research and Care (CLAHRC) for Nottinghamshire, Derbyshire, Lincolnshire (NDL). Implement Sci. 2012 May 3;7:40. pmid:22553966
- 43. Oborn E, Barrett M, Prince K, Racko G. Balancing exploration and exploitation in transferring research into practice: a comparison of five knowledge translation entity archetypes. Implement Sci. 2013 Sep 5;8:104. pmid:24007259
- 44. Crowe S, Turner S, Utley M, Fulop NJ. Improving the production of applied health research findings: insights from a qualitative study of operational research. Implement Sci. 2017 Sep 8;12:112. pmid:28886709
- 45. Moore G, Redman S, Haines M, Todd A. What works to increase the use of research in population health policy and programmes: a review. Evid Policy J Res Debate Pract. 2011 Aug 1;7(3):277–305.
- 46. ElSabry E. Who needs access to research? Exploring the societal impact of open access. Rev Fr Sci L’information Commun [Internet]. 2017 Aug 1 [cited 2017 Oct 13];(11). Available from: http://rfsic.revues.org/3271
- 47. Bankier J-G, Chatterji P. 100 Stories: The Impact of Open Access. 2016 [cited 2017 May 16]; Available from: https://works.bepress.com/jean_gabriel_bankier/27/
- 48. Alperin JP. The public impact of Latin America’s approach to open access [Internet]. Stanford University; 2015 [cited 2017 Mar 20]. Available from: https://stacks.stanford.edu/file/druid:jr256tk1194/AlperinDissertationFinalPublicImpact-augmented.pdf
- 49. Powell A, Davies H, Nutley S. Missing in action? The role of the knowledge mobilisation literature in developing knowledge mobilisation practices. Evid Policy J Res Debate Pract. 2017 May 19;13(2):201–23.
- 50. Varda D, Shoup JA, Miller S. A Systematic Review of Collaboration and Network Research in the Public Affairs Literature: Implications for Public Health Practice and Research. Am J Public Health. 2011 Nov 28;102(3):564–71. pmid:22021311
- 51. The Conversation. 2016 Stakeholder Report [Internet]. 2016 [cited 2017 Jul 12]. Available from: https://c15119308.ssl.cf2.rackcdn.com/2016_Stakeholder_Report_The_Conversation.pdf
- 52. Bornmann L. Do altmetrics point to the broader impact of research? An overview of benefits and disadvantages of altmetrics. J Informetr. 2014 Oct;8(4):895–903.
- 53. Ke Q, Ahn Y-Y, Sugimoto CR. A systematic identification and analysis of scientists on Twitter. PLOS ONE. 2017 Apr 11;12(4):e0175368. pmid:28399145
- 54. Dinsmore A, Allen L, Dolby K. Alternative Perspectives on Impact: The Potential of ALMs and Altmetrics to Inform Funders about Research Impact. PLOS Biol. 2014 Nov 25;12(11):e1002003. pmid:25423184
- 55. Mello FC de Q, Bastos LG do V, Soares SLM, Rezende VM, Conde MB, Chaisson RE, et al. Predicting smear negative pulmonary tuberculosis with classification trees and logistic regression: a cross-sectional study. BMC Public Health. 2006 Feb 23;6:43. pmid:16504086
- 56. Lei Y, Nollen N, Ahluwahlia JS, Yu Q, Mayo MS. An application in identifying high-risk populations in alternative tobacco product use utilizing logistic regression and CART: a heuristic comparison. BMC Public Health. 2015 Apr 9;15:341. pmid:25879872
- 57. Felicísimo ÁM, Cuartero A, Remondo J, Quirós E. Mapping landslide susceptibility with logistic regression, multiple adaptive regression splines, classification and regression trees, and maximum entropy methods: a comparative study. Landslides. 2013 Apr 1;10(2):175–89.
- 58. Tricco AC, Cardoso R, Thomas SM, Motiwala S, Sullivan S, Kealey MR, et al. Barriers and facilitators to uptake of systematic reviews by policy makers and health care managers: a scoping review. Implement Sci. 2016 Jan 12;11:4. pmid:26753923
- 59. Ouimet M, Bédard PO, Turgeon J, Lavis JN, Gélineau F, Gagnon F, et al. Correlates of consulting research evidence among policy analysts in government ministries: A cross-sectional survey. Evid Policy. 2010;6(4):433–60.
- 60. Morton S. Progressing research impact assessment: A ‘contributions’ approach. Res Eval. 2015 Oct 1;24(4):405–19.
- 61. Meagher L, Lyall C, Nutley S. Flows of knowledge, expertise and influence: a method for assessing policy and practice impacts from social science research. Res Eval. 2008 Sep 1;17(3):163–73.
- 62. Newson R, King L, Rychetnik L, Bauman AE, Redman S, Milat AJ, et al. A mixed methods study of the factors that influence whether intervention research has policy and practice impacts: perceptions of Australian researchers. BMJ Open. 2015 Jul 1;5(7):e008153. pmid:26198428
- 63. Mitton C, Adair CE, McKenzie E, Patten SB, Waye Perry B. Knowledge transfer and exchange: review and synthesis of the literature. Milbank Q. 2007 Dec;85(4):729–68. pmid:18070335
- 64. Midi H, Sarkar SK, Rana S. Collinearity diagnostics of binary logistic regression model. J Interdiscip Math. 2010 Jun 1;13(3):253–67.
- 65. Ritter A. How do drug policy makers access research evidence? Int J Drug Policy. 2009;20(1):70–5. pmid:18226519
- 66. Zardo P, Collie A. Type, frequency and purpose of information used to inform public health policy and program decision-making. BMC Public Health. 2015 Apr;15(1):381–92.
- 67. Dobbins M, Rosenbaum P, Plews N, Law M, Fysh A. Information transfer: what do decision makers want and need from researchers? Implement Sci. 2007 Jul 3;2:20. pmid:17608940
- 68. Cook CN, Hockings M, Carter R (Bill). Conservation in the dark? The information used to support management decisions. Front Ecol Environ. 2010 May 1;8(4):181–6.
- 69. Oliver K, Lorenc T, Innvær S. New directions in evidence-based policy research: a critical analysis of the literature. Health Res Policy Syst. 2014 Jul 14;12:34. pmid:25023520
- 70. Marmot MG. Evidence based policy or policy based evidence? BMJ. 2004 Apr 17;328(7445):906–7. pmid:15087324
- 71. Strassheim H, Kettunen P. When does evidence-based policy turn into policy-based evidence? Configurations, contexts and mechanisms. Evid Policy J Res Debate Pract. 2014 May 1;10(2):259–77.
- 72. Kingdon JW. Agendas, alternatives, and public policies. 2nd ed. Boston: Longman; 1984.
- 73. Sanderson I. Evaluation, Policy Learning and Evidence-Based Policy Making. Public Adm. 2002 Jan 1;80(1):1–22.
- 74. Zardo P, Collie A, Livingstone C. Organisational factors affecting policy and programme decision making in a public health policy environment. Evid Policy J Res Debate Pract. 2015;11(4):509–527.
- 75. Pawson R. Evidence-Based Policy: A Realist Perspective. SAGE; 2006. 210 p.
- 76. Russell J, Greenhalgh T, Byrne E, McDonnell J. Recognizing rhetoric in health care policy analysis. J Health Serv Res Policy. 2008 Jan;13(1):40–6. pmid:18325155
- 77. Damschroder LJ, Aron DC, Keith RE, Kirsch SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50.
- 78. Armstrong R, Waters E, Crockett B, Keleher H. The nature of evidence resources and knowledge translation for health promotion practitioners. Health Promot Int. 2007;22(3):254–60. pmid:17596543
- 79. Tennant JP, Waldner F, Jacques DC, Masuzzo P, Collister LB, Hartgerink CHJ. The academic, economic and societal impacts of Open Access: an evidence-based review. F1000Research [Internet]. 2016 Sep 21 [cited 2017 Mar 20];5. Available from: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4837983/
- 80. Piwowar H, Priem J, Larivière V, Alperin JP, Matthias L, Norlander B, et al. The State of OA: A large-scale analysis of the prevalence and impact of Open Access articles. PeerJ Prepr [Internet]. 2017 [cited 2017 Sep 19]; Available from: https://theidealis.org/the-state-of-oa-a-large-scale-analysis-of-the-prevalence-and-impact-of-open-access-articles/