Nine quick tips for open meta-analyses

Abstract

Open science principles are revolutionizing the transparency, reproducibility, and accessibility of research. Meta-analysis has become a key technique for synthesizing data across studies in a principled way; however, its impact is contingent on adherence to open science practices. Here, we outline 9 quick tips for open meta-analyses, aimed at guiding researchers to maximize the reach and utility of their findings. We advocate for defining and preregistering clear protocols, opting for open tools and software, and using version control systems to ensure transparency and facilitate collaboration. We further emphasize the importance of reproducibility, for example, by sharing search syntax and analysis scripts, and discuss the benefits of planning for dynamic updating to enable living meta-analyses. We also recommend open data, open code, and open-access publication. We close by encouraging active promotion of research findings to bridge the gap between complex syntheses and public discourse, and provide a detailed submission checklist to equip researchers, reviewers, and journal editors with a structured approach to conducting and reporting open meta-analyses.

Systematic reviews are comprehensive syntheses of evidence that aim to answer a specific research question by systematically identifying, appraising, and summarizing all relevant studies on a topic. Unlike traditional narrative reviews, systematic reviews follow a rigorous and transparent methodology to minimize bias and ensure the reproducibility of their findings.

Meta-analysis—a statistical technique often used within systematic reviews to quantitatively combine and analyze the results from multiple individual studies—has emerged as a cornerstone methodology in scientific research, enabling scholars to synthesize results from multiple studies to draw comprehensive conclusions, with greater statistical power and generalizability than individual studies alone [1]. Pooling data across sources, meta-analyses can uncover trends and insights that might not be apparent in single studies, thereby providing a more reliable foundation for cumulative science, theory building, and evidence-based policy [2].

While systematic reviews and meta-analyses are closely related, not all systematic reviews necessarily include a meta-analysis: some reviews may focus on synthesizing qualitative or descriptive data, while others may not find sufficient homogeneity among the included studies to justify conducting a quantitative meta-analysis. However, when appropriate, meta-analyses can add significant value to systematic reviews by providing a quantitative synthesis of the available evidence.

In parallel with the steady rise of meta-analysis, the open science movement has sought to improve the transparency, reproducibility, and accessibility of scientific research. Open science principles advocate for the sharing of data, materials, and methodologies so that findings can be verified and built upon more easily by other researchers [3]. Studies have shown that the adoption of open science practices can enhance the credibility of scientific findings and foster greater innovation and collaboration within the research community [4]. Despite the clear synergy between the goals of meta-analysis and open science, integrating these practices remains a challenge. As such, clear guidelines might be helpful to navigate the complexities involved [5].

Here, we bridge the gap between meta-analysis methodology and open science principles by proposing 9 quick tips for open, transparent meta-analyses. These tips, summarized in Fig 1, are intended to help researchers design, conduct, and publish meta-analyses that adhere to the highest standards of openness and transparency, ensuring that their findings can be trusted, replicated, and built upon by the scientific community. We also provide a checklist to help researchers, reviewers, and journal editors implement these guidelines in practice (https://osf.io/k8aqx/).

Tip #1: Define and preregister your protocol

Meta-analysis serves as a powerful tool to synthesize data from multiple studies, but its validity and robustness are contingent on establishing a well-defined protocol at the outset of a project [6]. Defining and preregistering a clear protocol before conducting a meta-analysis helps safeguard against potential biases and ensures the transparency and reproducibility of the research process. This includes outlining the scope of the meta-analysis, its rationale, main and secondary hypotheses, and primary and secondary outcomes; specifying the procedures for literature search, study selection, and data extraction; determining the study inclusion and exclusion criteria; and deciding how the quality and risk of bias of included studies will be assessed. Typically, protocols should also outline the analysis plan, including statistical models and approaches for handling data heterogeneity, publication bias, and sensitivity analyses [7–9].

Once a protocol has been defined, preregistering it promotes transparency and accountability and supports reproducibility [8,9]. Preregistration entails the a priori documentation of the research plan before the analysis begins, solidifying the methodological framework and analytic strategies [10]. This process requires detailing the study’s objectives, hypotheses, methodology, and statistical analysis plan in a time-stamped registry that is publicly accessible [11]. Common platforms for preregistering meta-analyses include PROSPERO (https://www.crd.york.ac.uk/prospero/) and the Open Science Framework (OSF; https://osf.io/). It is important to note that PROSPERO is a closed protocol registry that primarily accepts certain types of systematic reviews, such as those related to health and well-being, and prioritizes the registration of reviews conducted by UK-based researchers. In contrast, OSF is an open platform that allows researchers to register their projects, as well as share research data, materials, and analysis tools. While both platforms serve the purpose of preregistration, OSF offers a more flexible and open approach for researchers across various disciplines, for example, via predefined templates such as the Generalized Systematic Review Registration form. Irrespective of the specific platform one chooses, preregistration serves as a declaration of the analytical roadmap, which enhances the study’s credibility and reproducibility [8]. It also helps differentiate between confirmatory and exploratory analyses, which is important for interpreting the findings accurately and for readers to assess the extent to which the results were hypothesized in advance [12].

Although preregistration may seem redundant in the context of meta-analysis, given that data are retrospectively collected from existing studies, it nevertheless serves several critical functions. Preregistration acts as a public commitment to a specific analysis plan, which enhances the credibility of the research by preventing undisclosed, ad hoc changes to the methodology that could be influenced by the data outcomes [12]. This in turn provides a safeguard against the introduction of bias, especially the kind that may arise from selective reporting or outcome switching after interim results are known. While it does not prevent post hoc decision-making per se, preregistration thus makes it easier for readers to detect deviations from the preregistered plan, which can help identify potential sources of bias introduced by such deviations [10–12]. Importantly, deviations from the preregistered protocol should be reported and justified in the final publication to maintain the integrity of the research process [9].

The commitment to preregistration also aligns with the FAIR principles, ensuring that the research plans and protocols are findable, accessible, interoperable, and reusable, thereby contributing to the collective effort of fostering open science [13]. It is a proactive measure that communicates to the scientific community the integrity of the research process and the authenticity of the research intent [14]; as such, preregistration embodies a cornerstone practice in open meta-analysis, setting the stage for studies that are not only methodologically sound but also publicly accountable.

Tip #2: Opt for open tools and software

The credibility and trustworthiness of meta-analyses are greatly enhanced when researchers opt for open tools and software, which facilitate transparent, replicable, and verifiable research practices [15]. Open tools and software are not only free to use but also allow others to examine and validate the underlying code, ensuring that the methodological processes are laid bare for scrutiny [16]. Moreover, open tools and software foster code reuse, allowing researchers to build upon existing work rather than starting from scratch, thereby accelerating progress and avoiding duplication of effort.

Openness can be promoted at every step of a meta-analysis, from data extraction to the final statistical analysis. Using open-source statistical software such as R [17], with packages like meta [18], metafor [19], and revtools [20] for meta-analysis, or Python [21], with general-purpose meta-analysis packages like PythonMeta [22] and PyMARE [23] or specialized packages such as NiMARE [24] for neuroimaging meta-analyses and AutoGDC [25] for DNA methylation and transcription meta-analyses, enables researchers to share their code, thereby providing a transparent audit trail from raw data to results. Other open-source options include JASP [26] and jamovi [27], both full statistical packages that include meta-analysis modules.
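
To make this concrete, the following is a minimal sketch of a random-effects meta-analysis in R with the open-source metafor package; the data frame and all of its values are hypothetical and stand in for effect sizes and sampling variances extracted from included studies.

```r
# Minimal random-effects meta-analysis with the open-source metafor package.
# The data frame 'dat' is hypothetical; in practice it would hold the effect
# sizes and sampling variances extracted for each included study.

# install.packages("metafor")  # uncomment if metafor is not yet installed
library(metafor)

dat <- data.frame(
  study = c("Study A", "Study B", "Study C", "Study D"),
  yi    = c(0.32, 0.15, 0.48, 0.05),    # standardized mean differences
  vi    = c(0.040, 0.025, 0.060, 0.030) # corresponding sampling variances
)

# Fit a random-effects model (restricted maximum likelihood by default)
res <- rma(yi = yi, vi = vi, data = dat, method = "REML")
summary(res)   # pooled estimate and heterogeneity statistics (tau^2, I^2, Q)

forest(res)    # forest plot of study-level and pooled effects
funnel(res)    # funnel plot to inspect small-study effects / publication bias
```

Because the entire analysis lives in a plain script, it can be shared, reviewed, and rerun by anyone with access to R, which is precisely the audit trail described above.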

The recommendation also extends beyond statistical analysis to free tools for systematic review management. The Systematic Review Data Repository (www.srdrplus.ahrq.gov) [28], developed by the Agency for Healthcare Research and Quality (AHRQ), is an online repository and data management platform specifically designed for conducting systematic reviews. It provides structured forms for extracting data, assessing risk of bias, and tracking the review process, while enabling secure collaboration among review teams. Similarly, Rayyan (www.rayyan.ai) [29] is a web-based application that streamlines the screening of literature search results for systematic reviews and meta-analyses. It facilitates collaborative screening, allowing multiple researchers to independently evaluate studies in a blinded manner, while tracking screening decisions and conflicts. These platforms can help researchers transparently manage their review process and share their progress with the community. Paid, subscription-based alternatives exist (e.g., Covidence; Rayyan also offers paid plans), with additional functionalities such as tighter integration with reference management software or more advanced project management capabilities; in most cases, however, open tools are perfectly adequate. Adherence to open tools is not a mere technicality but a principled stand for open science, which often symbolizes a researcher’s commitment to collaborative progress and to the democratization of knowledge [30].

Moreover, open-source practices can extend to version control systems, which allow for meticulous tracking of changes and collaborative input on analytic scripts [31], and to software containers, which further enhance the reproducibility of meta-analyses. We turn to these tools with our next tip.

Tip #3: Use version control or containerization

In the context of open meta-analyses, version control systems help maintain transparency, accountability, and collaborative integrity. Beyond its wide use in software development, version control is an indispensable asset for researchers managing the complexities of meta-analytical workflows [32]. Tools such as Git, an open-source version control system (https://git-scm.com), when integrated with online hosting platforms like GitHub (www.github.com; see also Bitbucket for an alternative: www.bitbucket.org), provide a transparent mechanism to document the evolution of a project, offering snapshots of every stage in the project’s lifecycle [33]. This allows for the identification of who made particular changes, when these alterations were implemented, and why certain methodological adjustments were necessary, which is crucial in multi-contributor projects where coordination and clarity are paramount [34,35]. In addition, platforms like OSF offer the capability to update preregistrations, allowing researchers to document and justify any deviations from the initially preregistered protocol. This feature complements version control systems by providing a centralized location to track and explain changes to the preregistered plan, further enhancing transparency and accountability throughout the research process.

Incorporating the use of containers, such as Docker (www.docker.com) or Singularity (www.sylabs.io), can further enhance the reproducibility and portability of meta-analyses. Containers encapsulate the analysis environment, ensuring that all necessary computational tools, libraries, and dependencies are bundled together [36]. This guarantees that the analysis can be reliably replicated across different computing environments and across software releases, reducing the “it works on my machine” phenomenon that can hinder reproducibility [36]. The adoption of containers aligns seamlessly with version control practices. For example, a Dockerfile—essentially a blueprint for building a container—can be version-controlled alongside analysis scripts [37]. This allows for the entire computational environment to be versioned, shared, and archived, providing a more robust mechanism for replicating and verifying research findings [38].
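
Containers capture the full computational environment; a lighter-weight complement, sketched below in R, is to snapshot package dependencies with the renv package and record session details so that these files can be version-controlled alongside the analysis scripts (and, if desired, a Dockerfile). This is an illustrative sketch that assumes renv is installed, not a prescribed workflow.

```r
# Lighter-weight complement to full containerization (illustrative sketch):
# capture the R package environment with renv and record session details,
# so the resulting files can be committed next to the analysis scripts.
# Assumes the renv package is installed.

# install.packages("renv")
renv::init()       # set up a project-local library and create renv.lock
renv::snapshot()   # record exact package versions in renv.lock (commit this file)

# Record session details (R version, platform, loaded packages) for the archive
writeLines(capture.output(sessionInfo()), "session_info.txt")

# Collaborators, or a container build step, can later restore the same environment:
# renv::restore()
```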

With the implementation of version control and container technology, meta-analyses become more accessible and transparent. Researchers can not only track the iterative progress of their work but also ensure that their computational analyses are reproducible by anyone, anywhere. This extends to managing contributions across various collaborators, enabling the synthesis of insights while preserving the individual contributions of each team member [39], ensuring that intellectual input is accurately credited, and fostering a culture of recognition and respect within research teams. Furthermore, such an approach supports reproducibility and data provenance, allowing future researchers to revisit and build upon past work with confidence in its veracity [40]. As such, it acts as a safeguard against the loss of data and analysis versions, proving indispensable in times of unexpected disruptions or when reverting to previous iterations is necessary [41].

Tip #4: Aim for reproducibility

A natural outcome of version control, especially when combined with containerization, is reproducibility [36]—a hallmark of credible scientific research, particularly critical in the context of meta-analyses. Aiming for reproducibility mandates meticulous documentation of all aspects of the research process to ensure that other investigators can replicate the findings and trust their validity [42]. This extends to providing explicit search strategies, including search syntax and dated search results from databases, which are fundamental for enabling others to reproduce the literature search with precision [6].

The detailed recording of search strategies should include the databases searched, the full electronic search strategy for at least 1 database, the date last searched, and any limits applied, as advocated by the PRISMA guidelines [6]. Ideally, researchers should include the exact search syntax used, tailored for each database, to account for variations in indexing terms and functionalities across different databases [43]. The selection process of studies, including screening, eligibility criteria, and the reasons for excluding particular studies, is typically summarized in a PRISMA flow diagram; this enables others to understand decision-making and evaluate the potential for selection bias [44]. Sharing dated search results from databases enhances transparency, as it accounts for the dynamic nature of databases where the availability of studies may change over time [45].
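
One lightweight way to keep this documentation reproducible is to store it as structured data rather than prose. The R sketch below is purely illustrative: the file name, column names, queries, limits, and record counts are hypothetical placeholders for a real, dated search log that would be shared alongside the review.

```r
# Hypothetical sketch of a shareable search log: one row per database, recording
# the exact syntax, date, limits, and number of records retrieved.
# All names and values are illustrative placeholders.

search_log <- data.frame(
  database  = c("PubMed", "Scopus"),
  date_run  = c("2024-03-01", "2024-03-01"),
  syntax    = c('("meta-analysis"[Title/Abstract]) AND ("open science"[Title/Abstract])',
                'TITLE-ABS-KEY("meta-analysis" AND "open science")'),
  limits    = c("English; 2000-2024", "English; 2000-2024"),
  n_records = c(412, 389)   # hypothetical counts
)

# Write the log to a plain-text file that can be versioned and deposited openly
write.csv(search_log, "search_log.csv", row.names = FALSE)
```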

Furthermore, researchers should extend reproducibility efforts to the data extraction and analysis phases by sharing their extraction forms, code, and any custom algorithms used [30], a process that reinforces the credibility and utility of the findings [46,47]. In this context, transparent reporting involves a detailed account of the search strategy, including search terms, databases, date ranges, and any restrictions used, along with the search syntax for each database searched to enable replication [48]. Reporting should typically also cover the screening process, selection criteria, and the flow of information through the different phases of a meta-analysis, often depicted with a PRISMA flow diagram [7].

Risk of bias assessment is a fundamental step in meta-analyses to evaluate the methodological quality of included studies and detect potential sources of bias that may affect the validity of findings. To promote transparency and reproducibility in this process, researchers should prioritize open tools and instruments for assessing risk of bias. The Cochrane Risk of Bias tools (RoB 2 for randomized trials [49] and ROBINS-I for non-randomized studies [50]), freely available online (https://www.riskofbias.info), provide structured frameworks and clear guidance for bias appraisal. These tools can help streamline the risk of bias assessment, ensure methodological rigor, and enhance the replicability of quality evaluations.
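
Risk-of-bias judgments can likewise be shared as structured, machine-readable data alongside the rest of the extracted dataset. The hypothetical R sketch below codes RoB 2 judgments in a simple table; the study names, labels, and file name are illustrative.

```r
# Hypothetical structure for sharing RoB 2 judgments as open data.
# Columns D1-D5 mirror the five RoB 2 domains; study names and labels are illustrative.
rob <- data.frame(
  study   = c("Study A", "Study B", "Study C"),
  D1      = c("Low", "Some concerns", "Low"),   # randomization process
  D2      = c("Low", "Low", "High"),            # deviations from intended interventions
  D3      = c("Low", "Low", "Low"),             # missing outcome data
  D4      = c("Some concerns", "Low", "Low"),   # measurement of the outcome
  D5      = c("Low", "Low", "Some concerns"),   # selection of the reported result
  overall = c("Some concerns", "Some concerns", "High")
)

write.csv(rob, "risk_of_bias_rob2.csv", row.names = FALSE)
table(rob$overall)   # quick tally of overall judgments
```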

Ideally, researchers should also document all decisions made throughout the study, including the rationale behind the exclusion of certain studies and the methods used for data extraction and risk of bias assessment. This extends to the statistical methods and any sensitivity analyses performed, with justifications for the models and parameters [51], and to any deviations from the preregistered protocol. While journal articles may have limited space for such technical details, researchers should take advantage of supplementary materials or appendices to comprehensively document their decision-making processes, analytical choices, and any deviations from the preregistered plan. These supplementary files can be hosted alongside the main article or in open repositories, ensuring that the complete methodological details are openly accessible and citable. Together, these steps toward reproducibility not only strengthen the reliability of meta-analyses but also contribute to the collective trust in the findings presented within the scientific community.

Tip #5: Post your data

Posting data is an imperative principle in the domain of open meta-analysis, fostering a collaborative scientific environment where data are not only shared but also made accessible for scrutiny and reanalysis [52]. Open data involves making the raw data collected from studies, as well as the extracted data used for meta-analytic computations, available in a public repository [53]. In the context of meta-analyses, this typically includes: effect size estimates (e.g., standardized mean differences, correlation coefficients, odds ratios) and associated statistics (sample sizes, standard errors) extracted from each included study; study-level characteristics or coding for potential moderator variables; and risk of bias assessments or ratings of study quality. Choosing the right repository is crucial; it should guarantee the longevity and accessibility of the data. Repositories like OSF, Dryad (www.datadryad.org), or Figshare (www.figshare.com) provide DOI-linked storage, ensuring that the data can be properly cited and linked back to the original research [54].
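
As an illustration of what such a deposit might contain, the hypothetical R sketch below pairs an extracted effect-size dataset with a companion data dictionary that defines every column; all file names, variable names, and values are placeholders.

```r
# Hypothetical layout for a shared meta-analytic dataset and its data dictionary.
# The point is to deposit the data together with a file that defines every column,
# so others can reuse the data without guesswork. All names and values are illustrative.

extracted <- data.frame(
  study_id    = c("StudyA_2019", "StudyB_2021"),
  yi          = c(0.32, 0.15),    # standardized mean difference (Hedges' g)
  vi          = c(0.040, 0.025),  # sampling variance of yi
  n_total     = c(120, 210),
  design      = c("RCT", "RCT"),
  rob_overall = c("Some concerns", "Low")
)

dictionary <- data.frame(
  variable    = names(extracted),
  description = c("Unique study identifier (first author and year)",
                  "Effect size: standardized mean difference (Hedges' g)",
                  "Sampling variance of the effect size",
                  "Total sample size across groups",
                  "Study design",
                  "Overall RoB 2 judgment")
)

write.csv(extracted,  "meta_data.csv",            row.names = FALSE)
write.csv(dictionary, "meta_data_dictionary.csv", row.names = FALSE)
```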

When choosing data-sharing platforms, researchers should consider long-term sustainability and durability. While popular platforms like GitHub offer convenient collaboration and versioning features, it is important to recognize that they are commercial entities subject to potential changes in business interests or ownership. For long-term preservation and access, researchers may want to prioritize platforms with explicit commitments to data archiving and long-term access plans. For example, OSF has contingency plans in place to ensure that data and materials hosted on its platform are preserved for a minimum of 50 years. Alternatively, researchers could adopt a hybrid approach, using platforms like GitHub for active version control and collaboration during the research process while also archiving snapshots of their repositories and scripts in dedicated, long-term preservation repositories.

The benefits of posting data are multifaceted: it increases trust in the findings, enables other researchers to conduct secondary analyses or meta-analyses, and contributes to the reduction of research waste by avoiding the duplication of efforts [55]. When posting data, researchers must ensure that the data conform to all applicable privacy regulations and ethical standards [56]. Ideally, the data should be accompanied by detailed metadata, data dictionaries, and any relevant scripts or algorithms used to process the data. This ensures that other researchers can understand and replicate the analysis [57], in a commitment to transparent and reproducible science that upholds the integrity of the research and advances the collective knowledge within the field.

In addition to posting the meta-analytic data, researchers can also leverage open data repositories to access and extract data from the primary studies included in their meta-analysis whenever possible. Many journals and funders now require authors to make their raw data publicly available, offering opportunities for meta-analysts to obtain original datasets directly. Repositories such as OpenNeuro (https://openneuro.org) for neuroimaging data or GenBank (https://www.ncbi.nlm.nih.gov/genbank/) for nucleotide sequences can be invaluable resources for accessing primary data directly, which can reduce inaccuracies from manual extraction, enable more comprehensive data synthesis, and facilitate novel exploratory analyses.

Tip #6: Share analysis scripts

Open meta-analyses hinge on the replication and validation of research findings. Sharing analysis scripts is an essential aspect of reproducibility, enabling others to verify results and conduct further analyses [58,59]. When researchers share their analysis scripts, they facilitate a deeper understanding of the methods used in the research, which can help identify potential issues and improve upon the proposed methods [60]. This practice should be standard, with scripts shared via repositories such as GitHub or Zenodo (www.zenodo.org), which provide DOIs for each release to ensure that the exact scripts used can be cited [61].

To follow good practices, scripts should be well commented, detailing the purpose and function of each section of code. This is critical as it provides context to the scripts, making them understandable to others who may not be familiar with the specific project or the coding language used [62]. Furthermore, sharing scripts encourages efficiency and collaboration, as it allows others to build on existing work rather than starting from scratch [41]. Researchers are encouraged to license their scripts in a way that permits reuse and modification, for example, via the permissive MIT license, the copyleft GNU General Public License, or a Creative Commons Attribution 4.0 International (CC BY 4.0) license. The latter permits the reuse and modification of the work while explicitly requiring attribution to the original authors. This ensures that researchers’ intellectual contributions are properly acknowledged while still promoting the open sharing and collaborative development of their work.
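
As an illustration, the hypothetical script header below shows the kind of information (purpose, inputs, outputs, dependencies, and license) that makes a shared analysis script easier to follow and reuse; all project details, file names, and the license shown are placeholders.

```r
# ----------------------------------------------------------------------------
# analysis.R: hypothetical header illustrating how a shared meta-analysis
# script can be documented; project details, file names, and the license
# shown here are placeholders.
# ----------------------------------------------------------------------------
# Project : Open meta-analysis of <topic>
# Authors : <names and ORCID iDs>
# License : MIT (see the LICENSE file in the repository)
# Inputs  : meta_data.csv (extracted effect sizes; see data dictionary)
# Outputs : model summary and forest plot
# Depends : R >= 4.0, metafor (exact versions recorded in renv.lock)
# ----------------------------------------------------------------------------

library(metafor)

dat <- read.csv("meta_data.csv")

# Section 1: preregistered primary analysis (random-effects model)
res <- rma(yi = yi, vi = vi, data = dat, method = "REML")
summary(res)
```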

Tip #7: Enable seamless updating

Traditional meta-analyses can rapidly become outdated as new research accumulates. While methodologically rigorous, static reviews are snapshots that reflect the evidence available up to the point of their completion, and the lack of subsequent integration of new data can lead to periods where the meta-analysis does not reflect the current state of evidence [47].

In response to this limitation, living meta-analyses are a form of systematic review that is regularly updated as new evidence becomes available. This approach ensures that the meta-analysis remains current and continuously reflects the latest data on a topic [63]. The structure of such a living document requires a rigorous initial protocol that specifies not only the methodology for the initial review but also the strategy for ongoing evidence surveillance, criteria for determining the significance of new data, and the process for assimilating these data into the existing meta-analytic framework. Enabling seamless updating, particularly in the form of living meta-analyses, is especially valuable in areas where research evidence is rapidly evolving, as it can more accurately inform timely decision-making in clinical practice, policy, and further research.

The successful implementation of living meta-analyses is contingent on meticulous planning for data management and analysis update. This includes predefined methods for literature search updates, explicit inclusion and exclusion criteria, and robust statistical strategies capable of integrating new data without compromising the validity of the meta-analysis [64]. It also entails setting thresholds for what constitutes significant new evidence that warrants an update, thereby maintaining the balance between the currency of the analysis and the practicality of the update process.

Importantly, while committing to the implementation of living meta-analyses may not be feasible for all research teams, it is still beneficial to organize data and code in a way that enables future updates and maintains the potential for the review to evolve into a living document. One key consideration is the structured organization and documentation of data extraction processes and analytical pipelines. Researchers should strive to create modular and well-documented code that can be easily adapted to incorporate new data as they become available. Version control systems can help track changes and facilitate collaborative updates, ensuring that the review remains a living, evolving entity, while containerization technologies can help encapsulate the entire computational environment for seamless updating (see also Tip #3). Another important aspect is the use of robust data management practices that allow for the efficient retrieval and integration of new study information. This may involve the use of relational databases or other structured data storage solutions, as well as the development of standardized data dictionaries and metadata schemas.
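
To illustrate what such a modular update step might look like, the hypothetical R sketch below appends newly identified studies to a version-controlled dataset and refits the model; file names, column names, and values are placeholders consistent with the earlier sketches.

```r
# Minimal sketch of a modular update step for a living meta-analysis, assuming
# the extracted data live in a version-controlled CSV and the preregistered model
# is refit whenever new eligible studies are added. All names and values are
# illustrative and match the earlier hypothetical sketches.

library(metafor)

existing <- read.csv("meta_data.csv")

# New studies identified in the latest surveillance round (hypothetical values)
new_studies <- data.frame(
  study_id    = "StudyC_2024",
  yi          = 0.22,
  vi          = 0.035,
  n_total     = 150,
  design      = "RCT",
  rob_overall = "Low"
)

updated <- rbind(existing, new_studies)
write.csv(updated, "meta_data.csv", row.names = FALSE)  # commit this change

# Refit the preregistered model on the updated dataset
res_updated <- rma(yi = yi, vi = vi, data = updated, method = "REML")
summary(res_updated)
```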

Tip #8: Publish open access

Open access publication ensures that the results of research are accessible to all, without paywall restrictions, enabling broader dissemination, greater visibility, and increased citation and use of the work [65]. Open access can take various forms, including diamond/platinum open access, where articles are free to both authors and readers; gold open access, where the final published article is immediately open for all to read and use; bronze open access, where the article is freely accessible but without an explicit license; and green open access, which involves self-archiving a version of the article in a repository [66]. Given their relevance to guide practice and policy, the imperative for open access is even stronger in the case of meta-analyses, as it underpins the drive for informed decision-making in various sectors [67].

Researchers are encouraged to consider open access options when selecting a journal for submission, bearing in mind that many funding agencies now mandate open access publication as a condition of their grants [68]. However, caution should be exercised when considering open access options, as the landscape includes predatory publishers who exploit the open access model for profit while lacking robust peer review and editorial processes; resources like Cabell’s Predatory Reports (https://www2.cabells.com/about-predatory) or the Directory of Open Access Journals (https://doaj.org/) can help researchers identify reputable open access journals and publishers.

It is also important to acknowledge that the costs associated with gold open access publication can pose significant challenges for researchers, particularly those in the Global South or from institutions with limited funding resources. The article processing charges (APCs) levied by many open access journals can be prohibitively expensive, creating inequities in the ability to publish and disseminate research findings openly. To address this issue, researchers should explore institutional support, open access publishing funds, and the waivers or discounts that some publishers offer to scholars from low- and middle-income countries. Authors can also leverage institutional or subject repositories to deposit peer-reviewed (postprint) versions of their work [69], or use open preprint servers such as arXiv (https://arxiv.org), bioRxiv (https://www.biorxiv.org), EcoEvoRxiv (https://ecoevorxiv.org), medRxiv (https://www.medrxiv.org), MetaArXiv (https://osf.io/preprints/metaarxiv), or PsyArXiv (https://osf.io/preprints/psyarxiv), in conjunction with formal publication in a peer-reviewed journal. To navigate the self-archiving policies and restrictions of different journals, researchers can consult the Sherpa/RoMEO database (https://v2.sherpa.ac.uk/romeo/), which provides a comprehensive listing of publisher policies regarding the sharing of preprints, postprints, and other versions of published articles.

Of note, open access is not just about removing financial barriers; it is also about enabling the reuse and distribution of content. Thus, researchers should familiarize themselves with the different types of Creative Commons licenses and, to the extent possible, choose one that aligns with how they want their work to be used [70]. Researchers should also consider providing plain language abstracts or summaries of their meta-analyses, which serve as crucial tools for making complex research findings accessible to audiences across disciplines and to the general public. These summaries should be written in clear, jargon-free language, avoiding technical terms or discipline-specific terminology that may hinder comprehension. The focus should be on distilling the key findings, implications, and practical relevance of the meta-analysis in a concise and easy-to-understand manner. Together, these steps can help advance the reach and impact of researchers’ findings within the scientific community and the public at large.

Tip #9: Promote your findings

Often considered secondary or even trivialized, promotion is a critical step to ensure that the synthesized evidence reaches a diverse audience, including other researchers, practitioners, policymakers, and the public [71]. Historically, promoting one’s findings has often taken the form of presentations at conferences, workshops, and webinars to reach academic and professional communities directly.

In the digital age, there are multiple additional avenues for promoting research findings. Social media platforms offer vast networks for sharing results rapidly and engaging with a global community [72], whereas academic networking sites provide forums for researchers to connect and share full-text publications with peers [73]. Blogging and podcasting are effective mediums for explaining the significance of meta-analysis findings in a more accessible language, thus bridging the gap between complex research and public understanding [74]. Infographics and short videos can also be used to convey key messages visually, making the information more digestible and shareable [75].

Engaging with traditional media by issuing press releases or coordinating with university media teams can also amplify the reach of research findings to a broader audience and may lead to coverage by journalists and influencers [76]. Researchers should emphasize the open and transparent nature of their work, highlighting the availability of data, materials, and analysis scripts for public scrutiny and reuse. In this context, plain language summaries as discussed in the previous tip can help broader promotion and dissemination efforts, and amplify the reach and impact of research findings. Together, these steps ensure researchers not only enhance the visibility and application of their work, but also fulfill their responsibility to contribute to evidence-informed decision-making in society.

Integration with existing guidelines

To facilitate implementation of these 9 tips, we have developed an open meta-analysis checklist (https://osf.io/k8aqx/). The proposed checklist is intended to integrate seamlessly with prevailing reporting guidelines and best practices in the field. It complements and extends the widely adopted PRISMA 2020 statement for transparent reporting of systematic reviews and meta-analyses [6]. While PRISMA focuses on essential reporting elements, our checklist provides supplementary guidance on open science practices spanning protocol development, reproducibility, dissemination, and post-publication promotion.

Furthermore, the checklist aligns with the PRIOR statement [77] on making all parts of the research cycle publicly accessible. Its emphasis on preregistration, open data and code, and open-access publishing maps directly to the core tenets outlined by the PRIOR statement.

Importantly, the checklist also upholds the FAIR principles [13] by advocating for practices that enhance the findability, accessibility, interoperability, and reusability of meta-analytic outputs. Recommendations such as use of version control, posting analysis scripts, and clear data documentation all serve to maximize the FAIRness of meta-analytic research products.

In collectively promoting open and transparent workflows, responsible data stewardship, and the development of accessible knowledge resources, the open meta-analysis checklist provides an actionable complement to these foundational guidelines and principles. It offers a tailored operationalization for embracing open science in the domain of meta-analysis.

Conclusions

With these 9 quick tips, researchers can ensure their meta-analyses adhere to open science principles and best practices, promoting transparency, reproducibility, and accessibility. The tips also increase the likelihood that a meta-analysis will stand the test of critical evaluation [78,79] and contribute meaningfully to the collective body of knowledge, though it is important to recognize that open practices alone do not guarantee a high-quality or impactful meta-analysis. The value and contribution of a meta-analysis also depend on the rigor of the methodology, the quality of the included studies, and the relevance of the research question being addressed. Beyond academic rigor, embracing these tips is a commitment to an open science ethos that values the dissemination and democratization of information. As meta-analysis continues to shape our understanding across various fields, adherence to these principles will facilitate a more collaborative, accessible, and innovative research environment, where knowledge can flourish unfettered by traditional barriers and findings can be used to their fullest potential by all members of society.

There remain, however, areas that require further development and research. One key need is the creation of more user-friendly, integrated tools that seamlessly combine various open practices, from protocol development and preregistration to data extraction, analysis, and reporting, within a unified ecosystem. Such tools could lower barriers to entry and facilitate wider adoption of open meta-analytic workflows. Relatedly, there is a need for more comprehensive training resources and educational initiatives to equip researchers with the skills required for conducting open, reproducible meta-analyses [59].

Furthermore, as AI and machine learning capabilities advance, their responsible integration into meta-analytic processes must be carefully explored. New AI-based methods are emerging that could revolutionize and streamline various stages of the meta-analytic process. For example, tools like Abstrackr [80] use natural language processing to assist in the initial screening of literature search results, potentially accelerating study selection. AI-based text mining and data extraction approaches, such as those implemented in tools like RobotReviewer (https://robotreviewer.net), could help automate parts of the data extraction process from included studies. As these methods continue to evolve and become more accessible, developing best practices and guidelines for leveraging them while maintaining human oversight and methodological rigor will be crucial for harnessing their potential efficiency gains without compromising scientific integrity.

More generally, continued research is needed to evaluate the real-world impacts of open meta-analyses on scientific progress, evidence-based decision-making, and public trust in research. Empirical investigations into the adoption rates, challenges, and tangible benefits of these practices can inform further refinements and drive wider acceptance within the research community and beyond.

References

1. Borenstein M, Hedges LV, Higgins JPT, Rothstein HR. Introduction to meta-analysis. John Wiley & Sons; 2009.
2. Higgins JPT, Green S, editors. Cochrane handbook for systematic reviews of interventions. Version 5.1.0. The Cochrane Collaboration; 2011. www.handbook.cochrane.org.
3. Nosek BA, Alter G, Banks GC, Borsboom D, Bowman SD, Breckler SJ, et al. Promoting an open research culture. Science. 2015;348(6242):1422–1425. pmid:26113702
4. Munafò MR, Nosek BA, Bishop DVM, Button KS, Chambers CD, du Sert NP, et al. A manifesto for reproducible science. Nat Hum Behav. 2017;1(1):0021. pmid:33954258
5. McKiernan EC, Bourne PE, Brown CT, Buck S, Kenall A, Lin J, et al. How open science helps researchers succeed. eLife. 2016;5:e16800. pmid:27387362
6. Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ. 2021;372:n71. pmid:33782057
7. Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JPA, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. PLoS Med. 2009;6(7):e1000100. pmid:19621070
8. Booth A, Clarke M, Dooley G, Ghersi D, Moher D. The nuts and bolts of PROSPERO: an international prospective register of systematic reviews. Syst Rev. 2012;1(1):2. pmid:22587842
9. Stewart LA, Clarke M. Practical methodology of meta-analyses (overviews) using updated individual patient data. Stat Med. 2018;14(6):2057–2079.
10. Nosek BA, Lakens D. Registered reports: A method to increase the credibility of published results. Soc Psychol. 2014;45(3):137–141.
11. Chambers CD. Registered reports: A new publishing initiative at Cortex. Cortex. 2013;49(3):609–610. pmid:23347556
12. Wagenmakers EJ, Wetzels R, Borsboom D, van der Maas HL, Kievit RA. An agenda for purely confirmatory research. Perspect Psychol Sci. 2012;7(6):632–638. pmid:26168122
13. Wilkinson MD, Dumontier M, Aalbersberg IJ, Appleton G, Axton M, Baak A, et al. The FAIR Guiding Principles for scientific data management and stewardship. Sci Data. 2016;3:160018. pmid:26978244
14. Miguel E, Camerer C, Casey K, Cohen J, Esterling KM, Gerber A, et al. Promoting transparency in social science research. Science. 2014;343(6166):30–31. pmid:24385620
15. Morin A, Urban J, Adams PD, Foster I, Sali A, Baker PJ, et al. Shining light into black boxes. Science. 2012;336(6078):159–160. pmid:22499926
16. Ince DC, Hatton L, Graham-Cumming J. The case for open computer programs. Nature. 2012;482(7386):485–488. pmid:22358837
17. R Core Team. R: A language and environment for statistical computing. R Foundation for Statistical Computing; 2024. https://www.R-project.org/.
18. Schwarzer G. Meta: An R package for meta-analysis. R News. 2007;7(3):40–45.
19. Viechtbauer W. Conducting meta-analyses in R with the metafor package. J Stat Softw. 2010;36(3):1–48.
20. Zeng X, Huibers MH. revtools: An R package to support article screening for evidence synthesis. Res Synth Methods. 2022;13(6):618–626.
21. Van Rossum G, Drake FL. Python 3 Reference Manual. Scotts Valley, CA: CreateSpace; 2009.
22. Deng H. PythonMeta, Python module of Meta-analysis. 2024.
23. Yarkoni T, Salo T, Nichols T, Peraza J. PyMARE: Python Meta-Analysis & Regression Engine. 2023.
24. Salo T, Yarkoni T, Nichols TE, Poline J-B, Kent JD, Gorgolewski KJ, et al. neurostuff/NiMARE: 0.2.0rc3. Zenodo; 2023.
25. Brown CA, Wren JD. AutoGDC: A Python Package for DNA Methylation and Transcription Meta-Analyses. bioRxiv. 2024.
26. JASP Team. JASP (Version 0.18.3) [Computer software]. 2024.
27. The jamovi project. jamovi (Version 2.5) [Computer software]. 2024.
28. Bak G, Mierzwinski-Urban M, FitzGerald JM, Kosa D, Puscasiu A. The Systematic Review Data Repository (SRDR): descriptive characteristics of a new tool based on initial user experience. Syst Rev. 2019;8(1):334.
29. Ouzzani M, Hammady H, Fedorowicz Z, Elmagarmid A. Rayyan—a web and mobile app for systematic reviews. Syst Rev. 2016;5(1):210. pmid:27919275
30. Peng RD. Reproducible research in computational science. Science. 2011;334(6060):1226–1227. pmid:22144613
31. Perez-Riverol Y, Gatto L, Wang R, Sachsenberg T, Uszkoreit J, Leprevost FV, et al. Ten simple rules for taking advantage of Git and GitHub. PLoS Comput Biol. 2016;12(7):e1004947. pmid:27415786
32. Ram K. Git can facilitate greater reproducibility and increased transparency in science. Source Code Biol Med. 2013;8(1):7. pmid:23448176
33. Blischak JD, Davenport ER, Wilson G. A quick introduction to version control with Git and GitHub. PLoS Comput Biol. 2016;12(1):e1004668. pmid:26785377
34. Taskar B. Introduction to version control. In: Gentle J, Härdle W, Mori Y, editors. Handbook of Data Analysis. Springer; 2014.
35. Moreau D, Wiebels K. Ten simple rules for designing and conducting undergraduate replication projects. PLoS Comput Biol. 2023;19(3):e1010957. pmid:36928436
36. Moreau D, Wiebels K, Boettiger C. Containers for computational reproducibility. Nat Rev Methods Primers. 2023;3(50).
37. Wiebels K, Moreau D. Leveraging containers for reproducible psychological research. Adv Methods Pract Psychol Sci. 2021;4(2):1–18.
38. Nüst D, Eddelbuettel D, Bennett D. Docker for reproducible research. ACM SIGOPS Oper Syst Rev. 2017;51(3):71–79.
39. Bryan J. Excuse me, do you have a moment to talk about version control? Am Stat. 2018;72(1):20–27.
40. Stodden V, Seiler J, Ma Z. An empirical analysis of journal policy effectiveness for computational reproducibility. Proc Natl Acad Sci U S A. 2018;115(11):2584–2589. pmid:29531050
41. Wilson G, Bryan J, Cranston K, Kitzes J, Nederbragt L, Teal TK. Good enough practices in scientific computing. PLoS Comput Biol. 2017;13(6):e1005510. pmid:28640806
42. Iqbal SA, Wallach JD, Khoury MJ, Schully SD, Ioannidis JPA. Reproducible research practices and transparency across the biomedical literature. PLoS Biol. 2016;14(1):e1002333. pmid:26726926
43. Rethlefsen ML, Kirtley S, Waffenschmidt S, Ayala AP, Moher D, Koffel JB. PRISMA-S: an extension to the PRISMA statement for reporting literature searches in systematic reviews. Syst Rev. 2021;10(1):39. pmid:33499930
44. Shamseer L, Moher D, Clarke M, Ghersi D, Liberati A, Petticrew M, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015: elaboration and explanation. BMJ. 2015;349:g7647. pmid:25555855
45. McGowan J, Sampson M. Systematic reviews need systematic searchers. J Med Libr Assoc. 2005;93(1):74–80. pmid:15685278
46. Ioannidis JPA, Greenland S, Hlatky MA, Khoury MJ, Macleod MR, Moher D, et al. Increasing value and reducing waste in research design, conduct, and analysis. Lancet. 2014;383(9912):166–175. pmid:24411645
47. Moher D, Shamseer L, Clarke M, Ghersi D, Liberati A, Petticrew M, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst Rev. 2015;4:1. pmid:25554246
48. Haddaway NR, Collins AM, Coughlin D, Kirk S. The role of Google Scholar in evidence reviews and its applicability to grey literature searching. PLoS ONE. 2015;10(9):e0138237. pmid:26379270
49. Sterne JAC, Savović J, Page MJ, Elbers RG, Blencowe NS, Boutron I, et al. RoB 2: a revised tool for assessing risk of bias in randomised trials. BMJ. 2019;366:l4898. pmid:31462531
50. Sterne JAC, Hernán MA, Reeves BC, Savović J, Berkman ND, Viswanathan M, et al. ROBINS-I: a tool for assessing risk of bias in non-randomized studies of interventions. BMJ. 2016;355:i4919. pmid:27733354
51. Borenstein M, Hedges LV, Higgins JPT, Rothstein HR. A basic introduction to fixed-effect and random-effects models for meta-analysis. Res Synth Methods. 2010;1(2):97–111. pmid:26061376
52. Piwowar HA, Day RS, Fridsma DB. Sharing detailed research data is associated with increased citation rate. PLoS ONE. 2007;2(3):e308. pmid:17375194
53. Vines TH, Albert AY, Andrew RL, Débarre F, Bock DG, Franklin MT, et al. The availability of research data declines rapidly with article age. Curr Biol. 2014;24(1):94–97. pmid:24361065
54. Tenopir C, Allard S, Douglass K, Aydinoglu AU, Wu L, Read E, et al. Data sharing by scientists: Practices and perceptions. PLoS ONE. 2011;6(6):e21101. pmid:21738610
55. Wicherts JM, Borsboom D, Kats J, Molenaar D. The poor availability of psychological research data for reanalysis. Am Psychol. 2006;61(7):726–728. pmid:17032082
56. Rocher L, Hendrickx JM, de Montjoye YA. Estimating the success of re-identifications in incomplete datasets using generative models. Nat Commun. 2019;10(1):3069. pmid:31337762
57. Strasser C, Hampton SE. The fractured lab notebook: undergraduates and ecological data management training in the United States. Ecosphere. 2012;3(12):1–18.
58. Nüst D, Konkol M, Gröbe P, Kray C, Schutzeichel M, Przibytzin H, et al. Opening reproducible research with the research compendium. Commun Comput Inf Sci. 2018;791:1–14.
59. Moreau D, Gamble B. Conducting a meta-analysis in the age of open science: Tools, tips, and practical recommendations. Psychol Methods. 2022;27(3):426–432. pmid:32914999
60. Stodden V, Seiler J, Ma Z. An empirical analysis of journal policy effectiveness for computational reproducibility. Proc Natl Acad Sci U S A. 2016;113(23):6409–6414.
61. Smith AM, Katz DS, Niemeyer KE. Software citation principles. PeerJ Comput Sci. 2016;2:e86.
62. Piccolo SR, Frampton MB. Tools and techniques for computational reproducibility. GigaScience. 2016;5(1):30. pmid:27401684
63. Akl EA, Meerpohl JJ, Elliott J, Kahale LA, Schünemann HJ. Living systematic reviews: 4. Living guideline recommendations. J Clin Epidemiol. 2017;70:47–53. pmid:28911999
64. Elliott JH, Turner T, Clavisi O, Thomas J, Higgins JP, Mavergames C, et al. Living systematic review: 1. Introduction—the why, what, when, and how. J Clin Epidemiol. 2017;91:23–30. pmid:28912002
65. Piwowar H, Priem J, Larivière V, Alperin JP, Matthias L, Norlander B, et al. The state of OA: a large-scale analysis of the prevalence and impact of Open Access articles. PeerJ. 2018;6:e4375. pmid:29456894
66. Suber P. Open access. MIT Press; 2012. ISBN: 978-0262517638.
67. Gates M, Gates A, Pieper D, Fernandes RM, Tricco AC, Moher D, et al. Reporting guideline for overviews of reviews of healthcare interventions: development of the PRIOR statement. BMJ. 2022;378:e070849. pmid:35944924
68. Tennant JP, Waldner F, Jacques DC, Masuzzo P, Collister LB, Hartgerink CH. The academic, economic and societal impacts of Open Access: an evidence-based review. F1000Res. 2016;5:632. pmid:27158456
69. Ross-Hellauer T. What is open peer review? A systematic review. F1000Res. 2017;6:588. pmid:28580134
70. Harnad S, Brody T, Vallières F, Carr L, Hitchcock S, Gingras Y, et al. The Access/Impact Problem and the Green and Gold Roads to Open Access: An Update. Serials Rev. 2008;34(1):36–40.
71. Morrison H. The Dramatic Growth of Open Access. Publications. 2017;5(3):15.
72. Chapman S, Nguyen TQ, White B. Strategies to improve the use of evidence in health policy. Popul Health Manag. 2020;23(1):23–31.
73. Sugimoto CR, Work S, Larivière V, Haustein S. Scholarly use of social media and altmetrics: A review of the literature. J Assoc Inf Sci Technol. 2017;68(9):2037–2062.
74. Thelwall M, Kousha K. ResearchGate: Disseminating, communicating, and measuring scholarship? J Assoc Inf Sci Technol. 2015;66(5):876–889.
75. Bonini T. Science podcasts: Analysis of global production and output from 2004 to 2018. First Monday. 2018;23(2).
76. Guo PJ, Kim J, Rubin R. How video production affects student engagement: An empirical study of MOOC videos. Proceedings of the First ACM Conference on Learning@ Scale Conference. 2016:41–50.
77. Shema H, Bar-Ilan J, Thelwall M. Research blogs and the discussion of scholarly information. PLoS ONE. 2012;7(5):e35869. pmid:22606239
78. Forero DA, Lopez-Leon S, González-Giraldo Y, Bagos PG. Ten simple rules for carrying out and writing meta-analyses. PLoS Comput Biol. 2019;15(5):e1006922. pmid:31095553
79. Carlson RB, Martin JR, Beckett RD. Ten simple rules for interpreting and evaluating a meta-analysis. PLoS Comput Biol. 2023;19(9):e1011461. pmid:37768880
80. Rathbone J, Hoffmann T, Glasziou P. Faster title and abstract screening? Evaluating Abstrackr, a semi-automated online screening program for systematic reviewers. Syst Rev. 2015;4(80). pmid:26073974