Abstract
Introduction
Systems modelling and simulation can improve understanding of complex systems to support decision making and the better management of system challenges. Advances in technology have made modelling more accessible to diverse stakeholders, allowing them to engage with and contribute to the development of systems models (participatory modelling). However, despite its increasing application across a range of disciplines, there is a growing need to improve evaluation efforts to effectively report on the quality, importance, and value of participatory modelling. This paper aims to identify and assess evaluation frameworks, criteria, and/or processes, and to synthesize the findings into a comprehensive multi-scale framework for participatory modelling programs.
Materials and methods
A scoping review approach was utilized, which involved a systematic literature search via Scopus in consultation with experts to identify and appraise records that described an evaluation framework, criteria, and/or process in the context of participatory modelling. This scoping review is registered with the Open Science Framework.
Results
The review identified 11 studies, which varied in evaluation purposes, terminologies, levels of examination, and time points. These studies highlighted areas of overlap and opportunities for further development, which prompted the development of a comprehensive multi-scale evaluation framework to assess participatory modelling programs across disciplines and systems modelling methods. The framework consists of four categories (Feasibility, Value, Change/Action, Sustainability) with 30 evaluation criteria, broken down across project-, individual-, group-, and system-level impacts.
Discussion & conclusion
The presented novel framework brings together a significant knowledge base into a flexible, cross-sectoral evaluation effort that considers the whole participatory modelling process. Developed through the rigorous synthesis of multidisciplinary expertise from existing studies, the framework provides an opportunity to understand practical implications, such as which aspects of participatory modelling are particularly important for policy decisions, community learning, and the ongoing improvement of participatory modelling methods.
Citation: Lee GY, Hickie IB, Occhipinti J-A, Song YJC, Skinner A, Camacho S, et al. (2022) Presenting a comprehensive multi-scale evaluation framework for participatory modelling programs: A scoping review. PLoS ONE 17(4): e0266125. https://doi.org/10.1371/journal.pone.0266125
Editor: Fausto Cavallaro, Universita degli Studi del Molise, ITALY
Received: August 15, 2021; Accepted: March 15, 2022; Published: April 22, 2022
Copyright: © 2022 Lee et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All relevant data are within the paper and its Supporting Information files.
Funding: This research is being conducted under the Brain and Mind Centre’s Right care, first time, where you live Program, enabled by an AUD 12.8 million partnership with BHP Foundation. The Program will develop infrastructure to support decisions relating to advanced mental health, and guide investments and actions to foster the mental health and wellbeing of young people in their communities. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: Dr Louise Freebairn is currently employed part-time by the Brain and Mind Centre, University of Sydney; by ACT Health as Director of Knowledge Translation and Health Outcomes, Epidemiology Section; and by CSART as Director of Policy Applications & Translational Science. A/Professor Jo-An Occhipinti is both Head of Systems Modelling, Simulation & Data Science at the University of Sydney’s Brain and Mind Centre and Managing Director of Computer Simulation & Advanced Research Technologies (CSART). Professor Ian Hickie is the Co-Director, Health and Policy at the Brain and Mind Centre (BMC), University of Sydney. The BMC operates an early-intervention youth service at Camperdown under contract to headspace. He is the Chief Scientific Advisor to, and a 5% equity shareholder in, InnoWell Pty Ltd. InnoWell was formed by the University of Sydney (45% equity) and PwC (Australia; 45% equity) to deliver the $30 M Australian Government-funded Project Synergy (2017-20; a three-year program for the transformation of mental health services) and to lead transformation of mental health services internationally through the use of innovative technologies.
Introduction
Traditional versus participatory modelling
Systems modelling and simulation, also known as dynamic simulation modelling, is a term given to complex systems science analytic methods–such as system dynamics, Bayesian networks, and agent-based models–that are utilized in many countries and across diverse sectors to support evidence-informed decision making and to drive policy reform [1, 2]. By taking a complex systems view, significant challenges in society including population health crises, climate change, poverty, and civil strife can be better understood and managed through computer simulation models that capture the causal structure underlying the dynamics of these systems [1, 3–8]. Various systems modelling and simulation techniques have traditionally been applied across a range of disciplines including engineering, business, and environmental sciences for decades [9], but are now increasingly utilized in other fields, including public health [10–12]. This is largely attributed to the utility of systems modelling and simulation in providing decision makers with both immediate and long-term support in understanding the prospective impacts of alternative strategic actions, where traditional statistical methods may be limited [13–16].
Systems modelling and simulation can provide insights at different levels of scale, including macro, meso, and micro, providing national, state, and local governments with tools that support strategic planning and decision making [4, 17–20]. Because of the interdependencies within complex systems, diverse stakeholder groups, including those who are part of the system being modelled, are viewed as important communication agents. The involvement of stakeholders is necessary not only for their knowledge contributions but also for their key role in coordinating the implementation of strategic system improvements–hence the value in shifting scientists away from working in isolation to develop systems models [21].
Participatory modelling (PM), or stakeholder-based systems modelling, brings together diverse knowledge and interests in a joint learning and planning process to better understand complex systems, as well as the possible implications of decisions made to manage system challenges. Advances in technology and software have made modelling accessible to a broader group of participants, allowing more diverse stakeholders working across a complex system to engage with and contribute to the development of these models [11, 22], as well as to inform decision making and further actions [23]. For example, graphical model interfaces allow stakeholders to visualize and understand the logic and assumptions of a model more readily than earlier software that required articulation of a model using mathematical equations or computer coding. Such accessibility has also facilitated the participation of those most impacted by policy changes (such as consumer representatives)–helping all stakeholders work towards a common understanding of a complex problem or issue, informing and enhancing collective action, assisting collective decision-making processes, enhancing both individual and social learning, and precipitating changes in stakeholder behaviors [9, 21, 23–29].
Evaluating participatory modelling programs—Challenges and opportunities
The inclusion of stakeholders during the PM process can facilitate learning, consensus, and transparent decision making [21]. However, PM evaluation is frequently disregarded or not based on transparent, systematic methodological approaches [30]. A recent review of 60 randomly selected case studies on environmental PM programs reported that most studies (>60%) did not include evaluation [31]. The studies that did include evaluation were poorly reported, lacking detailed descriptions of, and justifications for, the assessment criteria, methods of data collection, and analysis [31].
At the most basic level, evaluations provide systematic comparisons of program objectives and outcomes to understand how well something is working for the purpose of policy, planning, or implementation [32, 33]. According to the Cambridge Dictionary, evaluation is defined as the “process of judging the quality, importance, amount, or value of something” [34]. Applying this definition to the context of this paper, there is an opportunity to better understand the quality, importance, and value of PM [35]. This shifts the focus from solely one aspect of PM to a more holistic consideration of the whole PM process (e.g., knowledge integration and learning, technical systems model development, participatory and integrated planning), providing further knowledge of which aspects of PM are particularly important for policy decisions and community learning, as well as for the ongoing improvement of PM methods [36, 37].
Evaluators are relied upon to address questions on the effectiveness of investments in local, state, and national programs, as well as to better explain whether observed outcomes were (or were not) as planned, and how unintended consequences can be addressed [38–41]. There may be various motivations for conducting an evaluation of PM programs, including the desire to improve and share knowledge on good practice for PM, to quantitatively and qualitatively report on project impacts, and to assess the value of PM for future work [24, 35]. Evaluations also hold the modellers, funders, and other stakeholders of interest accountable for demonstrating outcomes, and provide evidence of the merit of the work being evaluated [24]. Thus, PM program evaluations can also support policy makers to make evidence-informed decisions in determining how much weight to give the program or model outputs [24].
Evaluations that comprehensively capture the complex (e.g., uncertain, dynamic) nature of PM can be difficult [42], as embedding participatory approaches in systems modelling and simulation creates several challenges [35]. For instance, the focus of PM outcomes is often still on the knowledge integration and learning process rather than the multi-value perspectives integrated within the participatory process used to develop the models [31]. This can lead evaluation practices to over- or under-represent certain stakeholder groups’ experiences [31]. Additionally, previous studies that have attempted to evaluate the benefits of PM have reported difficulty in the design of the evaluation process, as complex systems rarely have comparative controls that allow for feasible experimental design (i.e., a ‘with modelling’ intervention vs a ‘without modelling’ control), making the measurement of PM effectiveness challenging [35]. Thus, it is important to distinguish between evaluating the effectiveness of the model development process and evaluating the actual success or failure of the engagement with the model itself.
PM evaluations are also typically constrained by contextual factors, including limited time and program budget. PM programs are often funded only to the point of delivering the final model, rather than evaluating the process and benefits of PM, including the extent to which the final model informed decision making or built consensus. A lack of investment in the evaluation of PM leads to decreased motivation to conduct thorough evaluations and risks oversimplifying evaluation efforts when measuring the impact of PM [24], missing the opportunity to assess the performance of PM in different contexts to inform the adaptation and improvement of processes [43].
Objectives
Therefore, this scoping review aims to:
- Identify published PM evaluation frameworks, criteria, and/or processes irrespective of the modelling method, or the discipline in which they are designed and/or implemented.
- Assess the identified evaluation frameworks, criteria, and/or processes to understand their applicability to different PM program objectives and contexts.
- Synthesize the findings to develop a novel evaluation framework that can be adapted and executed broadly across diverse PM programs, regardless of the discipline or modelling method. A flexible framework is necessary as PM itself requires flexibility to respond to the potentially changing priorities and needs of participants.
A scoping review has been deemed the most appropriate approach, compared to a systematic review, as the purpose of this paper is to focus on the broad collection and discussion of available literature, and to present a comprehensive multi-scale evaluation framework for PM programs [44, 45]. Scoping reviews are better suited than systematic reviews when the aim of evidence synthesis is to provide an overview of literature and identify broad knowledge gaps in a topic that has not been extensively reviewed (as opposed to seeking to answer focused questions as is done in systematic reviews) [44, 45]. Scoping reviews also differ from non-systematic literature searches as they are routinely informed by an a priori protocol, and are conducted via a rigorous and transparent approach to minimize error as well as to ensure reproducibility [44]. The development and application of the presented evaluation framework is supported by a participatory systems modelling program for youth mental health (described below in the Discussion). To our knowledge, this is the first multidisciplinary scoping review of evaluation frameworks for PM programs.
Materials and methods
This scoping review was conducted according to the suggested methodology outlined in the Joanna Briggs Institute (JBI) Reviewers’ Manual for Evidence Synthesis [46], in combination with additional recommendations for conducting scoping reviews [47]. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement was also applied [48], and the PRISMA extension for Scoping Reviews (PRISMA-ScR) Checklist has been provided as Supporting Information (S1 File). This review paper has also been registered with the Open Science Framework [49].
Search strategy
A focused search was conducted via Scopus in May 2021 in consultation with an academic librarian at The University of Sydney, utilizing a combination of Boolean operators, wildcards, and truncations to develop the final search strategy (Table 1). Scopus is a meta-database, and includes records from various databases across disciplines including environmental sciences, engineering, mathematics, social sciences and medicine [36]. Additional searches were conducted through hand searching, co-author recommendations, and citation chaining.
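To illustrate the kind of query such a strategy produces (the actual strategy is reported in Table 1), the sketch below assembles a hypothetical Scopus advanced-search string combining Boolean operators, phrase searching, and truncation; the specific terms and groupings are illustrative assumptions, not the authors' search.

```python
# Hypothetical illustration only: a Scopus-style advanced-search string built from
# Boolean operators, phrase searching, and truncation (*). The terms below are
# assumptions for demonstration and do not reproduce the strategy in Table 1.
concepts = {
    "participatory modelling": ['"participatory model*"', '"stakeholder* model*"', '"group model* building"'],
    "evaluation": ["evaluat*", "assess*"],
    "framework": ["framework*", "criteri*", "process*"],
}

# The Scopus TITLE-ABS-KEY field code searches titles, abstracts, and keywords.
blocks = [f'TITLE-ABS-KEY ( {" OR ".join(terms)} )' for terms in concepts.values()]
query = " AND ".join(blocks)
print(query)
```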
Inclusion and exclusion criteria
The criteria for inclusion were defined a priori by the authors (GYL, LF) in a Population, Concept, Context format [46], and applied to all yielded records. As detailed in Table 2, this scoping review included sources that described an evaluation framework, criteria, and/or process for PM. Though varying definitions exist, for the purpose of this review we have defined an evaluation framework as a tool that presents an overview of the evaluation theory, topics or themes, questions, and/or data sources; evaluation criteria as performance metrics or indicators that further break down the evaluation framework; and an evaluation process as a theory-guided procedure describing how the authors recommend PM evaluation be conducted [50].
Records that presented a standalone theoretical framework (or applied via a case study example) were also included. In contrast, records that only described the methodological tools (e.g., interviews, etc) used to evaluate the implementation of PM programs without describing an evaluation framework, criteria, and/or process were excluded. Records that only described the evaluation of a technical model (i.e., not PM) were excluded, as were records that described PM implementation programs without any consideration of evaluations. Date limits were not set, but studies not published in English were excluded from the review.
Data extraction and synthesis
Using a pro forma approach, the first author (GYL) independently reviewed the titles and abstracts of all yielded records. Uncertainty regarding whether records met the inclusion criteria was resolved through fortnightly discussions with the senior author (LF). To verify the data extraction, a random sample of 10 records was independently checked by LF. Following this verification process, full-text review and data extraction were conducted independently by GYL.
To address the first and second objectives, a data extraction template was developed by the authors (GYL, LF). Data extraction templates are used in scoping reviews to provide a structured and detailed summary of each record [46], and were used to collate information on yielded records that underwent full-text review. The four-dimensional framework (4P) developed by Gray et al. formed the basis of the data extraction template. The 4P framework was selected as it was developed specifically to standardize the reporting of PM programs and therefore provided a useful structure to guide data extraction [37]. This framework has since been adapted by Freebairn to include two additional dimensions–imPact and Prioritizing [13]. The definitions of the resulting six dimensions (6P) were slightly adapted to fit the evaluation objectives of this scoping review. The revised definitions are: Purpose (why PM approaches should be evaluated); Process (the method utilized to execute the evaluation framework/criteria/process); Partnerships (which stakeholders were involved in the development of the evaluation framework/criteria/process); Products (the evaluation approach–e.g., theoretical, conceptual, and/or implementation); imPact (the outcomes/strengths of the evaluation framework/criteria/process); and Prioritizing (the barriers/future opportunities of the evaluation framework/criteria/process) [13]. To ensure an all-inclusive synthesis of records, the JBI template for data extraction [46], as well as additional elements included by the authors, was also incorporated into the final data extraction template (Table 3). Once GYL completed the full-text review, the senior author (LF) reviewed and verified the final list of records to include for synthesis.
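As an illustration of how such a template could be operationalized, the sketch below encodes the adapted 6P dimensions as a structured record per study; the class name, bibliographic fields, and example values are illustrative assumptions rather than the authors' actual template (Table 3).

```python
# A minimal sketch, assuming one record per included study capturing the adapted
# 6P dimensions plus basic bibliographic detail. All names and example values are
# illustrative assumptions, not the published data extraction template.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ExtractionRecord:
    citation: str                      # author, year, journal
    discipline: str                    # field in which the PM program sits
    purpose: str                       # why PM approaches should be evaluated
    process: str                       # method used to execute the framework/criteria/process
    partnerships: List[str] = field(default_factory=list)  # stakeholders involved in its development
    products: str = ""                 # evaluation approach: theoretical, conceptual, implementation
    impact: str = ""                   # outcomes/strengths of the framework/criteria/process
    prioritizing: str = ""             # barriers/future opportunities identified

# Example with illustrative values only.
record = ExtractionRecord(
    citation="Jones NA, et al. (2009) Environmental Management",
    discipline="Environmental sciences",
    purpose="Cross-case evaluation of participatory modelling",
    process="Framework applied to comparative case analysis",
    partnerships=["modellers", "case-study stakeholders"],
    products="Theoretical framework applied to case studies",
)
```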
To address the third objective, a narrative synthesis of the findings was conducted and utilized to develop an evaluation framework that can be applied across diverse disciplines and modelling methods. A narrative synthesis allows for the in-depth exploration beyond the description of the included records to understand relationships (e.g., differences, similarities) between the studies [51].
As part of the narrative synthesis, a word cloud was generated to analyze the heterogeneity in terminology identified during the data extraction process for the studies included in the scoping review [52]. Word clouds visually display the most frequently used words in a body of text, with larger font sizes indicating more frequently used words [53]. To ensure that a focused word cloud was generated specific to evaluation, the full texts of all 11 studies included for synthesis were uploaded, followed by a process of elimination whereby words that were not related to evaluation–such as university, platform, and various stop words–were deleted. Following this process, the authors went through the remaining list of words and merged synonyms, as well as the same words presented in singular or plural form or with tense variations.
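A minimal sketch of this step is given below, assuming the full texts were available as plain-text files; the file names, stop-word additions, and synonym map are illustrative assumptions, not the authors' actual curation list.

```python
# Illustrative word-cloud generation using the open-source wordcloud library.
# File names, extra stop words, and the synonym map are assumed for demonstration.
from wordcloud import WordCloud, STOPWORDS

# Hypothetical plain-text exports of the 11 included studies.
texts = [open(f"study_{i:02d}.txt", encoding="utf-8").read() for i in range(1, 12)]
corpus = " ".join(texts).lower()

# Words judged unrelated to evaluation (a small illustrative subset).
custom_stopwords = STOPWORDS | {"university", "platform", "figure", "table"}

# Merge plural/tense variants and synonyms into a single canonical term.
synonyms = {"evaluations": "evaluation", "evaluating": "evaluation",
            "stakeholders": "stakeholder", "learnings": "learning"}
for variant, canonical in synonyms.items():
    corpus = corpus.replace(variant, canonical)

wc = WordCloud(width=1200, height=800, background_color="white",
               stopwords=custom_stopwords, collocations=False).generate(corpus)
wc.to_file("pm_evaluation_wordcloud.png")
```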
Though word clouds can support the identification of commonly utilized terms, there are limitations. For example, it is not clear from the word cloud alone how many terms appeared in each individual study. Additionally, some words may have different meanings depending on the field in which a paper was published (e.g., the word ‘sustainability’ may have a different meaning when used in an environmental PM program vs a public health PM program). Thus, the whole author group engaged in an iterative process whereby the word cloud was utilized as a discussion tool to provide feedback, refine, and finalize the presented evaluation framework. Discussions with the authors were facilitated by GYL and LF from April to July 2021 via informal and formal meetings.
Results
Part I: Scoping review
The initial Scopus search yielded 465 results, and an additional 10 records were identified through hand searching, co-author recommendations, and citation chaining. Most articles were excluded from review based on their titles and abstracts (n = 451), as the majority described the evaluation methods or outcomes of the implementation of a PM program without any reference to an evaluation framework, criteria, and/or process. After screening 24 full-text records, 11 studies were included for synthesis. Though it was not intentional, all included records were from academic journals, as opposed to grey literature and conference papers. The PRISMA flow diagram is presented in Fig 1.
Characteristics of studies
Table 4 summarizes the characteristics of the studies. All but one (10/11, 90.9%) were published in an environmental sciences journal; the remaining study was published in the International Journal of Environmental Research and Public Health. Nearly two-thirds (7/11, 63.6%) of the included studies were published in the past decade (2011–2021). Overall, nearly three-quarters (346/475, 72.8%) of the 475 total yielded records reviewed by title and abstract were published in the last decade.
Characteristics of evaluation frameworks, criteria, and/or processes.
The papers included for synthesis either described a theoretical evaluation framework and/or criteria with no application to a case study [24, 54]; described a theoretical evaluation framework and/or criteria applied to a case study [25, 35, 55, 58, 59]; or described an evaluation process applied to a case study [56, 57, 60, 61]. The majority of the evaluation frameworks, criteria, and/or processes were developed by building upon existing work [24, 25, 35, 55–61]. For example, Cash et al.’s paper–which described how policy makers were more likely to use scientific evidence if three criteria are met (i.e., credible, salient, legitimate) [62]–was utilized in Hamilton et al.’s and Falconi et al.’s frameworks [24, 35]. Rowe and Frewer’s work [63] was cited by two different studies, with Maskrey et al. referencing it as an evaluation criterion to assess accessibility (e.g., language) during the participatory process, whereas Zorilla et al. referenced it as an evaluation criterion to assess how the participatory process enabled stakeholder values, assumptions, and preferences to be incorporated into decision making [55, 58]. Two of the evaluation frameworks described an empirical process of how the frameworks were developed, supplemented with literature reviews [24, 54].
Key benefits and areas of future research were identified for each paper through the data extraction process (i.e., the imPact and Prioritizing categories of Freebairn’s adapted 6P communications framework for reporting PM outcomes). For example, Lynam et al.’s paper was one of the first academic papers to address the gap in research evidence needed to support improved evaluation practices in PM [54]. It was also one of the first to identify the need to address power relations when working with communities, and to treat the PM process as distinct from the technical model (e.g., encouraging co-learning/communication vs level of accuracy/precision) [54]. As such, Lynam et al. was referenced by various other papers [25, 35, 55, 58, 59], and was used by Zorilla et al. to inform the development of their own evaluation framework [55]. While Lynam et al.’s work was ground-breaking, being one of the first in this field, it provided only a limited description of the theoretical underpinnings of their framework.
The evaluation frameworks, criteria, and/or processes focused on different design features and levels of examination (e.g., project vs system level, short vs long-term observation). For example, Jones et al., Hamilton et al., Zare et al., and Smajgl & Ward noted the importance of evaluation both ex ante and ex post to ensure all involved–including modellers and stakeholders–have a better understanding of what the PM process is aiming to achieve, and to keep everyone accountable to the defined objectives [24, 25, 57, 61]. Jones et al. and Hamilton et al. also differentiated between the levels of stakeholder participation [24, 25]; and although this was not embedded into the evaluation criteria presented by Zorilla et al., there was recognition that future work should distinguish evaluation amongst stakeholder groups, from policy makers to farmers [55]. Two papers did not explicitly consider the multi-value perspectives integrated within PM, such as the multiple levels of examination (e.g., diverse stakeholder perspectives, project vs system level, short vs long-term observation) [54, 56]. Maskrey et al., Falconi et al., and Hamilton et al. recognized that evaluations should also consider both immediate and long-term outcomes [24, 35, 58]. Hedelin et al. and Waterlander et al. focused on the organizational level–for example, examining the various elements of the system to understand organizational learning, change, and action [59, 60]. This information is summarized in Table 5.
A strength of some of the identified evaluation frameworks, criteria, and/or processes was that they were not only applicable to the specific PM program it was designed for, but adaptable to other PM programs [24, 25, 59]. However, for some this came at the cost of oversimplifying the evaluation framework [25, 55, 56, 61]. The strengths and limitations of each individual study are presented in Table 5. Recurring themes were synthesized from across the papers utilizing Freebairn’s adapted 6P communications framework for reporting PM programs [13]. These themes are presented in Table 6.
Part II: Development of a comprehensive multi-scale evaluation framework
The strengths and limitations of each individual study presented in Table 5 as well as the recurring themes synthesized in Table 6 informed the development of a comprehensive multi-scale evaluation framework. This information was supplemented through the development of a word cloud, which allowed all authors to engage in an iterative process to refine and finalize the presented framework (Fig 2).
The synthesis process of the scoping review revealed opportunities to develop an evaluation framework that builds on the empirical evidence of existing literature: for example, a framework that can be flexibly adapted for diverse PM programs regardless of the discipline or modelling method [24, 54, 59, 61], that considers the whole participatory process [24, 25, 35, 54, 55], and that evaluates the engagement, learning, and integration of stakeholders in the PM process over both the short and long term [24, 35, 55–59]. A flexible framework allows for a mixed-methods approach to support real-world implementation [24, 25, 35, 55–58, 60, 93], which is described in further detail below in the Discussion.
It was evident from the synthesis process that differing terminologies, approaches, and assumptions were used, which led to challenges, including PM evaluation efforts being siloed from one another. This poses a risk that evaluation processes will not reach their full potential, with associated implications for funders, participating stakeholders, and modellers [94]. Additionally, it was evident during the synthesis process that studies either described a comprehensive evaluation framework, criteria, and/or process, or they focused on the actual evaluation methodologies; the two rarely coincided [95, 96]. Therefore, a comprehensive evaluation framework and criteria are needed for PM programs that have theoretical and empirical underpinnings but are also accompanied by practical evaluation tools and methods.
A total of 40 unique evaluation terms were identified as the most utilized across the studies included for synthesis. Process appeared the most frequently (623 times), whereas inclusive appeared the least (six times). There are limitations to word clouds; for example, they arguably provide only a superficial snapshot of themes across studies. Therefore, all authors engaged in an iterative discussion process, utilizing the word cloud presented in Fig 2 as a discussion tool to enable a more in-depth exploration of the final list of terms to further analyze and incorporate into the presented PM evaluation framework. Four broad evaluation framework categories were identified, based on the key themes and limitations described above as part of the scoping review synthesis process: feasibility, value, change & action (impact), and sustainability (highlighted in yellow, Table 7). As an evaluation concept, the feasibility or plausibility of PM allows for questions to be asked regarding whether it was possible for all participants to engage and contribute throughout the PM process. Consideration of the value of the PM process allows for the exploration of questions regarding what was gained from engaging participants in PM (e.g., learning, confidence, trust). Change & action facilitates observations of impact, including ex ante and ex post comparisons of stakeholder relationships, knowledge, and behaviors as a result of the PM process; sustainability allows for the observation of PM outcomes over time (Fig 3). For clarity, it is acknowledged that PM is conducted in a dynamic (changing) environment, and sustainability of impacts may not always be desired. Thus, by sustainability, we refer to the observation of longer-term (prolonged) outcomes of the feasibility, value, and impacts, and not necessarily that these outcomes must remain static.
The remaining 35 terms (highlighted in grey, Table 7) identified during the development of the word cloud have been incorporated as evaluation criteria, presented as evaluation questions in Fig 3. The word level was incorporated neither as an evaluation framework category nor as a criterion; however, as the various levels of evaluation (e.g., multi-value perspectives) were noteworthy across studies, this term was included as a separate component in our evaluation framework–specifically, consideration of project-, individual-, group-, and system-level impacts (Fig 3).
As recognized by Jones et al. [25], Hamilton et al. [24], and Zorilla et al. [55], the consideration of multiple evaluation perspectives, or multiple levels of impact from the project and individual levels through to the group and system levels, is important as PM processes are becoming more inclusive, involving stakeholders from diverse backgrounds (e.g., clients vs decision makers). Our framework further explores the consideration of multiple evaluation perspectives by recognizing that sublevels can exist within the individual and group levels (Fig 3). Consideration of these sublevels of participation enables, for example, potential power relations and dynamics amongst the stakeholders to be recognized, so that PM design can be improved and outcomes appropriately measured [24, 25, 54–56, 59]. This has been reflected in our evaluation framework, presented in Fig 3, with the individual and group levels further stratified to include community participants (e.g., consumer representatives) and professional participants (e.g., policy makers). As the differentiation of community and professional stakeholders is not always possible, the evaluation framework and criteria have been developed to be adaptable to different PM program contexts.
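To make the structure of the framework concrete, the sketch below encodes the four categories, question-style criteria, and multi-level lens as simple data; the example questions are paraphrased from the category descriptions above, and the layout is an illustrative assumption rather than the 30 published criteria in Fig 3.

```python
# A minimal sketch, assuming the framework were encoded as data for a survey or
# reporting tool: categories, criteria phrased as questions, and the
# project-/individual-/group-/system-level lens applied to each criterion.
LEVELS = ["project", "individual", "group", "system"]
# Sublevels mirror the stratification described above (not expanded further here).
SUBLEVELS = {"individual": ["community participant", "professional participant"],
             "group": ["community participant", "professional participant"]}

framework = {
    "Feasibility": ["Was it possible for all participants to engage and contribute throughout the PM process?"],
    "Value": ["What was gained from engaging participants (e.g., learning, confidence, trust)?"],
    "Change & Action (Impact)": ["How did stakeholder relationships, knowledge, and behaviours change ex ante vs ex post?"],
    "Sustainability": ["Which outcomes of feasibility, value, and impact persisted over the longer term?"],
}

# Expand into one assessable item per (category, criterion, level) combination.
items = [(category, question, level)
         for category, questions in framework.items()
         for question in questions
         for level in LEVELS]
print(f"{len(items)} assessable items from {sum(map(len, framework.values()))} criteria")
```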
Discussion
This scoping review identified 11 studies that described an evaluation framework, criteria, and/or process developed for PM programs. From these papers, the strengths and limitations, as well as overlapping concepts and themes, were synthesized and analyzed to inform the development of a comprehensive multi-scale evaluation framework (Fig 3) that is designed to be adaptive, flexible, and iterative for PM programs, regardless of the discipline of study or modelling method. Such a framework is desirable as PM evaluation practices are currently limited across all fields. Our framework consists of four categories: (i) Feasibility; (ii) Value; (iii) Change & Action (Impact); and (iv) Sustainability. Comprehensive evaluation processes need clear criteria to set appropriate benchmarks [55, 73]; therefore, the authors developed 30 criteria–presented as questions–which also include all key words identified from the word cloud (Fig 3). Though the word cloud was useful for identifying commonly used terminology and themes, word clouds are limited in only providing a superficial overview. This prompted all authors to engage in an iterative process over three months to refine and finalize the presented evaluation framework.
There is recognition that many evaluation practices have been inadequate in both depth and scope, which has limited the ability to improve PM practices [24, 31, 35]. Developing the presented evaluation framework through the synthesis of the 11 studies on PM evaluation provided the opportunity to draw on the expertise of other authors–ensuring the presented framework is guided by the identified strengths, challenges, and opportunities of existing work.
The presented evaluation framework was also developed with consideration of the various aims of PM evaluation, as identified in the 11 studies included for synthesis. Specifically, our evaluation framework explicitly includes criteria that enable the observation of changes in stakeholder behaviors such as learning and decision making (Criteria 9–10, 19–22) [57, 59]; evaluation of the success of PM in the context of the participatory process (Criteria 1, 7–8, 16) [24, 25, 35, 58, 61]; assessment of the changes in systems behavior to address complex challenges (Criteria 9, 16, 23–24, 28–30) [60]; evaluation of PM tools (Criteria 3–6) [54, 55]; and consideration of differences between modelling outcomes and outputs (Criteria 23, 28–29) [56]. Our evaluation framework builds on the empirical work of others by taking into consideration the participatory process as well as the influence of the actual technical model [24, 35, 55]; providing a flexible framework that can be applied to other disciplines [24, 54, 59, 61]; enabling the observation of short- and long-term outcomes [24, 58]; and prompting action and reflection in evaluation design to cater to the dynamic nature of complex systems [60, 61].
Enabling action and reflection is of particular importance to ensure improvement throughout the PM process. As such, our proposed evaluation framework (which includes all 30 evaluation criteria) is underpinned by the principles of Participatory Action Research (PAR) (Fig 4). PAR embeds reflection during all phases of the PM program and can lead to shared learning and joint action for change to improve PM processes. PAR is a bottom-up approach that is appropriate in the context of PM, as it challenges the traditional roles of modellers as the experts and stakeholders as the study participants [97, 98]. By working with the people whom the modelling most affects (such as consumer representatives), PM outcomes can be improved through a more equitable process [99].
Application of the presented evaluation framework
The studies included for synthesis (Tables 5 and 6) used a variety of methods to collect evaluation data. It is recommended that the presented evaluation framework be applied using a mixed-methods approach to align with the PM process. Examples of potential methods include semi-structured interviews, surveys, journey maps, and social network analysis (Fig 5).
As the presented evaluation framework was developed to have broader international applications across disciplines and diverse participatory modelling programs, a more thorough description of how the authors plan to deploy the framework through a mixed-methods approach is provided elsewhere. This description includes the tools developed as well as the suggested evaluation time points (ex ante and ex post) in the context of a national multi-site youth mental health participatory systems modelling program (the Right care, first time, where you live research Program). Further details of the research Program, including our participatory modelling approach, are also described elsewhere [100–102].
In summary, this PM program will develop system dynamics models for youth mental health across eight diverse regions of Australia. The evaluation framework has been translated into online survey and semi-structured interview questions, and will be underpinned by PAR principles, as well as formative, summative, process, and impact evaluation techniques. Novel research methods, such as the gamification of online surveys, will enable unique data analyses (e.g., social network analysis), supporting the exploration of diverse stakeholder experiences to improve the PM process.
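As one example of how such analyses might look, the sketch below computes degree centrality over a small hypothetical stakeholder-interaction network using networkx; the participants, edges, and choice of metric are illustrative assumptions, not data from the research Program.

```python
# A minimal sketch of a social network analysis of the kind mentioned above,
# assuming stakeholder interaction data (e.g., "who did you collaborate with
# during the PM workshops?") collected via the evaluation surveys.
import networkx as nx

# Hypothetical reported interactions between participants across two workshops.
edges = [("consumer_rep_1", "policy_maker_1"),
         ("consumer_rep_1", "clinician_1"),
         ("policy_maker_1", "modeller_1"),
         ("clinician_1", "modeller_1"),
         ("consumer_rep_2", "clinician_1")]

G = nx.Graph()
G.add_edges_from(edges)

# Degree centrality as one indicator of how central each participant is to the
# participatory process; ex ante vs ex post comparisons would use two such graphs.
centrality = nx.degree_centrality(G)
for node, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{node}: {score:.2f}")
```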
Strengths, limitations and opportunities for future research
Evaluations have the potential to measure change at the project, individual, group, and system (policy) levels to improve understanding of the PM process and the elements that facilitate certain outcomes (e.g., supporting decision making) [24, 103, 104]. In other words, knowledge acquired from evaluation outcomes can be applied prospectively throughout the PM process, rather than retrospectively reflecting on before-after measurements. Through principles of PAR, the proposed evaluation framework embeds continuous cycles of reflection to facilitate shared learning and iterative refinement of processes throughout PM implementation. Careful thought on design aspects is needed to ensure that evaluations are worthwhile, as they require additional time, resources, and funding [105]. The presented evaluation framework considers the contributions of all participants involved in the PM process, not only the perspectives of the modellers or funder [105, 106].
The presented evaluation framework is also designed to be adaptive, flexible, and iterative, to ensure that the framework remains relevant in the evolving fields and contexts in which PM is being applied [9]. PM is by nature a subjective process that is largely dependent on social interactions, human beliefs, biases, and values. To ensure rigor throughout the PM evaluation process, the proposed evaluation framework builds on principles of PAR to empower stakeholders from various backgrounds (e.g., community participants to professional participants), and embeds ongoing reflection and learning so that the PM process can respond to the changing needs of complex systems, ensuring that the aims of PM–collaboration, learning, and communication–are met across disciplines and diverse modelling methods [21]. The presented evaluation framework supports the application of a mixed-methods approach with an emphasis on approaching PM evaluations holistically [107].
There are limitations to this scoping review that should be acknowledged. The heterogeneity in terminology was a challenge during the screening, data extraction, and full-text review process. However, as described above, the first and senior authors worked closely to resolve any ambiguity, following a robust method to ensure that the included studies were the most relevant for the purposes of this scoping review. Additionally, though the described search strategy was broad in that it did not set any limits on the field of study, it was narrow in that only studies that disclosed an evaluation framework, criteria, and/or process in a PM context were included. The choice of database and search terms also has limitations. For instance, as our initial search was conducted in May 2021, it is possible that additional literature meeting the inclusion criteria has been published since then. However, the presented evaluation framework provides a comprehensive synthesis of, and builds on the expertise of, existing work, adding valuable contributions to the field of PM evaluation.
Additionally, the authors acknowledge that implementation of this framework in different contexts may mean that some aspects are emphasized while others are de-emphasized. Our framework provides a uniquely comprehensive lens and a necessary contribution to the PM evaluation literature, encouraging researchers to consider evaluations in PM programs as a standard part of scientific inquiry.
Conclusions
Evaluations are an integral component of the PM process that should be carefully considered throughout, not viewed as a separate component or afterthought. With the ability to inform policy change by demonstrating the measured effectiveness of PM, such processes should be adequately supported with an appropriate evaluation design. The presented framework describes a multi-scale and comprehensive, yet flexible, evaluation approach built on the rigorous synthesis of strengths and opportunities for further development identified from existing studies. This framework enables holistic evaluation practices by considering the project-, individual-, group-, and system-level impacts to understand the feasibility, value, impact, and sustainability of the PM process. Outputs from adopting such an evaluation approach, underpinned by principles of PAR, can be used to guide ongoing improvements to the PM process, empower stakeholders and users of systems models to be more confident in model outcomes, and improve understanding of which aspects of PM are particularly important for policy decisions.
Acknowledgments
The authors would like to thank Glen Smyth, Academic Liaison Librarian at The University of Sydney, for his guidance and support in developing the search strategy. The authors would also like to thank Chloe Wilson for her support in assisting with the initial literature search.
References
- 1. Bala BK, Arshad FM, Noh KM. System Dynamics: Modelling and Simulation. Singapore: Springer Nature; 2017.
- 2. Kelly RA, Jakeman AJ, Barreteau O, Borsuk ME, ElSawah S, Hamilton SH, et al. Selecting among five common modelling approaches for integrated environmental assessment and management. Environmental Modelling & Software. 2013;47:159–81. https://doi.org/10.1016/j.envsoft.2013.05.005.
- 3. Borshchev A. The Big Book of Simulation Modeling: Multimethod Modeling with AnyLogic 6. Chicago: AnyLogic North America; 2013.
- 4. Currie DJ, Smith C, Jagals P. The application of system dynamics modelling to environmental health decision-making and policy—a scoping review. BMC Public Health. 2018;18(1):402. pmid:29587701
- 5. Freebairn L, Atkinson J-A, Kelly PM, McDonnell G, Rychetnik L. Decision makers’ experience of participatory dynamic simulation modelling: methods for public health policy. BMC Medical Informatics and Decision Making. 2018;18(1):131. pmid:30541523
- 6. Hieronymi A. Understanding Systems Science: A Visual and Integrative Approach. Systems Research and Behavioral Science. 2013;30(5):580–95. https://doi.org/10.1002/sres.2215.
- 7. Long KM, Meadows GN. Simulation modelling in mental health: A systematic review. Journal of Simulation. 2018;12(1):76–85.
- 8. Luke DA, Stamatakis KA. Systems science methods in public health: dynamics, networks, and agents. Annual review of public health. 2012;33:357–76. Epub 2012/01/03. pmid:22224885.
- 9. Prell C, Hubacek K, Reed M, Quinn C, Jin N, Holden J, et al. If you have a hammer everything looks like a nail: traditional versus participatory model building. Interdisciplinary Science Reviews. 2007;32(3):263–82.
- 10. Pitt M, Monks T, Crowe S, Vasilakis C. Systems modelling and simulation in health service design, delivery and decision making. BMJ Quality & Safety. 2016;25(1):38. pmid:26115667
- 11. Osgood N. Computational Simulation Modeling in Population Health Research and Policy. In: Apostolopoulos Y, Hassmiller Lich K, Lemke MK, editors. Complex Systems and Population Health. New York: Oxford University Press; 2020.
- 12. Silva PCL, Batista PVC, Lima HS, Alves MA, Guimarães FG, Silva RCP. COVID-ABS: An agent-based model of COVID-19 epidemic to simulate health and economic effects of social distancing interventions. Chaos, solitons, and fractals. 2020;139:110088–. Epub 2020/07/07. pmid:32834624.
- 13. Freebairn L. “Turning mirrors into windows”: A study of participatory dynamic simulation modelling to inform health policy decisions. The University of Notre Dame Australia; 2019.
- 14. Rutter H, Savona N, Glonti K, Bibby J, Cummins S, Finegood DT, et al. The need for a complex systems model of evidence for public health. The Lancet. 2017;390(10112):2602–4. pmid:28622953
- 15. Kendall JM. Designing a research project: randomised controlled trials and their principles. Emergency Medicine Journal. 2003;20(2):164. pmid:12642531
- 16. Page A, Atkinson JA, Heffernan M, McDonnell G, Prodan A, Osgood N, et al. Static metrics of impact for a dynamic problem: The need for smarter tools to guide suicide prevention planning and investment. Aust N Z J Psychiatry. 2018;52(7):660–7. Epub 2018/01/24. pmid:29359569.
- 17. Atkinson J-A, Skinner A, Lawson K, Rosenberg S, Hickie IB. Bringing new tools, a regional focus, resource-sensitivity, local engagement and necessary discipline to mental health policy and planning. BMC Public Health. 2020;20(1):814. pmid:32498676
- 18. Occhipinti J-A, Skinner A, Carter S, Heath J, Lawson K, McGill K, et al. Federal and state cooperation necessary but not sufficient for effective regional mental health systems: insights from systems modelling and simulation. Scientific Reports. 2021;11(1):11209. pmid:34045644
- 19. Page A, Atkinson JA, Campos W, Heffernan M, Ferdousi S, Power A, et al. A decision support tool to inform local suicide prevention activity in Greater Western Sydney (Australia). Aust N Z J Psychiatry. 2018;52(10):983–93. Epub 2018/04/20. pmid:29671335.
- 20. Senge PM, Sterman JD. Systems thinking and organizational learning: Acting locally and thinking globally in the organization of the future. European Journal of Operational Research. 1992;59(1):137–50. https://doi.org/10.1016/0377-2217(92)90011-W.
- 21. Voinov A, Bousquet F. Modelling with stakeholders. Environmental Modelling & Software. 2010;25(11):1268–81. https://doi.org/10.1016/j.envsoft.2010.03.007.
- 22. Gilbert N, Ahrweiler P, Barbrook-Johnson P, Narasimhan KP, Wilkinson H. Computational Modelling of Public Policy: Reflections on Practice. Journal of Artificial Societies and Social Simulation. 2018;21(1):14.
- 23. Voinov A, Jenni K, Gray S, Kolagani N, Glynn PD, Bommel P, et al. Tools and methods in participatory modeling: Selecting the right tool for the job. Environmental Modelling & Software. 2018;109:232–55. https://doi.org/10.1016/j.envsoft.2018.08.028.
- 24. Hamilton SH, Fu B, Guillaume JHA, Badham J, Elsawah S, Gober P, et al. A framework for characterising and evaluating the effectiveness of environmental modelling. Environmental Modelling & Software. 2019;118:83–98. https://doi.org/10.1016/j.envsoft.2019.04.008.
- 25. Jones NA, Perez P, Measham TG, Kelly GJ, d’Aquino P, Daniell KA, et al. Evaluating Participatory Modeling: Developing a Framework for Cross-Case Analysis. Environmental Management. 2009;44(6):1180. pmid:19847478
- 26. Király G, Miskolczi P. Dynamics of participation: System dynamics and participation—An empirical review. Systems Research and Behavioral Science. 2019;36(2):199–210. https://doi.org/10.1002/sres.2580.
- 27. Liguori A, McEwen L, Blake J, Wilson M. Towards ‘Creative Participatory Science’: Exploring Future Scenarios Through Specialist Drought Science and Community Storytelling. Frontiers in Environmental Science. 2021;8(293).
- 28. Maru YT, AK, Perez P. Taking ‘participatory’ in participatory modelling seriously. 18th World IMACS/MODSIM Congress; Cairns; 2009.
- 29. Hovmand PS. Community Based System Dynamics. New York: Springer; 2013.
- 30. Jordan R, Gray S, Zellner M, Glynn PD, Voinov A, Hedelin B, et al. Twelve Questions for the Participatory Modeling Community. Earth’s Future. 2018;6(8):1046–57. https://doi.org/10.1029/2018EF000841.
- 31. Hedelin B, Gray S, Woehlke S, BenDor TK, Singer A, Jordan R, et al. What’s left before participatory modeling can fully support real-world environmental planning processes: A case study review. Environmental Modelling & Software. 2021;143:105073. https://doi.org/10.1016/j.envsoft.2021.105073.
- 32. Australian Government Department of Health. Why evaluate? Canberra: Australian Government; 2012. Available from: https://www1.health.gov.au/internet/publications/publishing.nsf/Content/evaluation-tkit-breastfeeding-prgms-projects~evaluate.
- 33. Gertler PJ, Martinez S, Premand P, Rawlings LB, Vermeersch CMJ. Impact Evaluation in Practice, Second Edition. Washington, D.C., United States: World Bank Publications; 2016.
- 34. Cambridge Dictionary. Meaning of "evaluation" in English. n.d. Available from: https://dictionary.cambridge.org/dictionary/english/evaluation.
- 35. Falconi SM, Palmer RN. An interdisciplinary framework for participatory modeling design and evaluation—What makes models effective participatory decision tools? Water Resources Research. 2017;53(2):1625–45. https://doi.org/10.1002/2016WR019373.
- 36. Fynn JF, Hardeman W, Milton K, Murphy J, Jones A. A systematic review of the use and reporting of evaluation frameworks within evaluations of physical activity interventions. International Journal of Behavioral Nutrition and Physical Activity. 2020;17(1):107. pmid:32831111
- 37. Gray S, Voinov A, Paolisso M, Jordan R, BenDor T, Bommel P, et al. Purpose, processes, partnerships, and products: four Ps to advance participatory socio-environmental modeling. Ecol Appl. 2018;28(1):46–61. Epub 2017/09/19. pmid:28922513.
- 38. Forss K, Marra M, Schwartz R. Evaluating the Complex: Attribution, Contribution, and Beyond. New Brunswick (USA) and London (UK): Transaction Publishers; 2011.
- 39. Preskill H, Gopal S, Mack K, Cook J. Evaluating Complexity: Propositions for Improving Practice. Boston, MA; 2014.
- 40. Gopalakrishnan S, Preskill H, Lu SJ. Next Generation Evaluation: Embracing Complexity, Connectivity, and Change: A Learning Brief. Boston, MA; 2013.
- 41. Gates EF. Making sense of the emerging conversation in evaluation about systems thinking and complexity science. Evaluation and Program Planning. 2016;59:62–73. pmid:27591941
- 42. Hassenforder E, Pittock J, Barreteau O, Daniell KA, Ferrand N. The MEPPP Framework: A Framework for Monitoring and Evaluating Participatory Planning Processes. Environmental Management. 2016;57(1):79–96. pmid:26294097
- 43. Rychetnik L, Frommer M, Hawe P, Shiell A. Criteria for evaluating evidence on public health interventions. J Epidemiol Community Health. 2002;56(2):119–27. Epub 2002/01/29. pmid:11812811; PubMed Central PMCID: PMC1732065.
- 44. Munn Z, Peters MDJ, Stern C, Tufanaru C, McArthur A, Aromataris E. Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC Medical Research Methodology. 2018;18(1):143. pmid:30453902
- 45. Pham MT, Rajić A, Greig JD, Sargeant JM, Papadopoulos A, McEwen SA. A scoping review of scoping reviews: advancing the approach and enhancing the consistency. Research synthesis methods. 2014;5(4):371–85. Epub 2014/07/24. pmid:26052958.
- 46. Peters MDJ, Godfrey C, McInerney P, Munn Z, Tricco AC, Khalil H. Chapter 11: Scoping Reviews. JBI; 2020. Available from: https://synthesismanual.jbi.global. https://doi.org/10.46658/JBIMES-20-12.
- 47. Arksey H, O’Malley L. Scoping studies: towards a methodological framework. International Journal of Social Research Methodology. 2005;8(1):19–32.
- 48. Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. BMJ. 2009;339:b2535. pmid:19622551
- 49. Registration of scoping review. Open Science Framework; 2021. Available from: https://doi.org/10.17605/OSF.IO/2V38E.
- 50. Alberta Addiction and Mental Health Research Partnership Program. Evaluation Plan and Evaluation Framework. Alberta; n.d.
- 51. Lisy K, Porritt K. Narrative Synthesis: Considerations and challenges. JBI Evidence Implementation. 2016;14(4).
- 52. Hearst MA, Pedersen E, Patil L, Lee E, Laskowski P, Franconeri S. An Evaluation of Semantically Grouped Word Cloud Designs. IEEE Transactions on Visualization and Computer Graphics. 2020;26(9):2748–61. pmid:30872231
- 53. Effective Use of Word Clouds. Arlington, VA: American Institutes for Research; n.d.
- 54. Lynam T, de Jong W, Sheil D, Kusumanto T, Evans K. A Review of Tools for Incorporating Community Knowledge, Preferences, and Values into Decision Making in Natural Resources Management. Ecology and Society. 2007;12(1):5.
- 55. Zorilla-Miras P, Carmona G, De la Hera A, Varela-Ortega C, Martinez-Santos P, Bromley J, Henriksen HJ. Evaluation of Bayesian Networks in Participatory Water Resources Management, Upper Guadiana Basin, Spain. Ecology and Society. 2010;15(3).
- 56. Matthews KB, Rivington M, Blackstock K, McCrum G, Buchan K, Miller DG. Raising the bar?–The challenges of evaluating the outcomes of environmental modelling and software. Environmental Modelling & Software. 2011;26(3):247–57. https://doi.org/10.1016/j.envsoft.2010.03.031.
- 57. Smajgl A, Ward J. Evaluating participatory research: Framework, methods and implementation results. J Environ Manage. 2015;157:311–9. Epub 2015/05/02. pmid:25929196.
- 58. Maskrey SA, Mount NJ, Thorne CR, Dryden I. Participatory modelling for stakeholder involvement in the development of flood risk management intervention options. Environmental Modelling & Software. 2016;82:275–94. https://doi.org/10.1016/j.envsoft.2016.04.027.
- 59. Hedelin B, Evers M, Alkan-Olsson J, Jonsson A. Participatory modelling for sustainable development: Key issues derived from five cases of natural resource and disaster risk management. Environmental Science & Policy. 2017;76:185–96. https://doi.org/10.1016/j.envsci.2017.07.001.
- 60. Waterlander WE, Luna Pinzon A, Verhoeff A, den Hertog K, Altenburg T, Dijkstra C, et al. A System Dynamics and Participatory Action Research Approach to Promote Healthy Living and a Healthy Weight among 10-14-Year-Old Adolescents in Amsterdam: The LIKE Programme. International journal of environmental research and public health. 2020;17(14):4928. pmid:32650571.
- 61. Zare F, Guillaume JHA, ElSawah S, Croke B, Fu B, Iwanaga T, et al. A formative and self-reflective approach to monitoring and evaluation of interdisciplinary team research: An integrated water resource modelling application in Australia. Journal of Hydrology. 2021;596:126070. https://doi.org/10.1016/j.jhydrol.2021.126070.
- 62. Cash DW, Clark WC, Alcock F, Dickson NM, Eckley N, Guston DH, et al. Knowledge systems for sustainable development. Proceedings of the National Academy of Sciences. 2003;100(14):8086. pmid:12777623
- 63. Rowe G, Frewer LJ. Public Participation Methods: A Framework for Evaluation. Science, Technology, & Human Values. 2000;25(1):3–29. pmid:25309997
- 64. Argyris C. On Organizational Learning. 2nd ed. New Jersey: Wiley-Blackwell; 1999.
- 65. Patton M. Qualitative evaluation and research methods. Newbury Park, California: Sage Publications; 1992.
- 66. Curnan S, LaCava L, Sharpsteen D, Lelle M, Reece M. W.K. Kellogg Foundation evaluation handbook. Battle Creek, Michigan, United States; 1998.
- 67. Abelson J, Forest P-G, Eyles J, Smith P, Martin E, Gauvin F-P. Deliberations about deliberative methods: issues in the design and evaluation of public participation processes. Social Science & Medicine. 2003;57(2):239–51. pmid:12765705
- 68. Rowe G, Frewer LJ. Evaluating Public-Participation Exercises: A Research Agenda. Science, Technology, & Human Values. 2004;29(4):512–56. pmid:25309997
- 69. von Korff Y. Towards an Evaluation Method for Public Participation Processes in AquaStress and NeWater: A proposal for both projects. Montpellier, France; 2005.
- 70. Stewart TR, Dennis RL, Ely DW. Citizen participation and judgment in policy analysis: A case study of urban air quality policy. Policy Sciences. 1984;17(1):67–87.
- 71. Einsiedel EF, Jelsøe E, Breck T. Publics at the technology table: The consensus conference in Denmark, Canada, and Australia. Public Understanding of Science. 2001;10(1):83–98.
- 72. Henriksen HJ, Rasmussen P, Brandt G, von Bülow D, Jensen FV. Public participation modelling using Bayesian networks in management of groundwater contamination. Environmental Modelling & Software. 2007;22(8):1101–13. https://doi.org/10.1016/j.envsoft.2006.01.008.
- 73. Blackstock KL, Kelly GJ, Horsey BL. Developing and applying a framework to evaluate participatory research for sustainability. Ecological Economics. 2007;60(4):726–42. https://doi.org/10.1016/j.ecolecon.2006.05.014.
- 74. Patton M. Utilization-Focused Evaluation. In: Kellaghan T, Stufflebeam DL, editors. International Handbook of Educational Evaluation. Vol. 9. Dordrecht: Springer; 2003.
- 75. McCown RL. Changing systems for supporting farmers’ decisions: problems, paradigms, and prospects. Agricultural Systems. 2002;74(1):179–220. https://doi.org/10.1016/S0308-521X(02)00026-4.
- 76. Smajgl A, Ward J. A framework to bridge science and policy in complex decision making arenas. Futures. 2013;52:52–8. https://doi.org/10.1016/j.futures.2013.07.002.
- 77. Beierle TC. Using social goals to evaluate public participation in environmental decisions. Review of Policy Research. 1999;16(3–4):75–103. https://doi.org/10.1111/j.1541-1338.1999.tb00879.x.
- 78. Webler T, Tuler S. Unlocking the Puzzle of Public Participation. Bulletin of Science, Technology & Society. 2002;22(3):179–89.
- 79. National Research Council. Public Participation in Environmental Assessment and Decision Making. Dietz T, Stern PC, editors. Washington, DC: The National Academies Press; 2008. 322 p.
- 80. Carr G, Blöschl G, Loucks DP. Evaluating participation in water resource management: A review. Water Resources Research. 2012;48(11). https://doi.org/10.1029/2011WR011662.
- 81. Hedelin B. Criteria for the assessment of sustainable water management. Environ Manage. 2007;39(2):151–63. Epub 2006/12/13. pmid:17160512.
- 82. Hedelin B. Further development of a sustainable procedure framework for strategic natural resources and disaster risk management. Journal of Natural Resources Policy Research. 2015;7(4):247–66.
- 83. Hedelin B. The Sustainable Procedure Framework for Disaster Risk Management: Illustrated by the Case of the EU Floods Directive in Sweden. International Journal of Disaster Risk Science. 2016;7(2):151–62.
- 84. Goeller B. A framework for evaluating success in systems analysis. In: Miser H, Quade ES, editors. Handbook of systems analysis: craft issues and procedural choices. John Wiley & Sons; 1988. p. 568–618.
- 85. Roughley A. Developing and using program logic in natural resource management: user guide. 2009.
- 86. Moore GF, Audrey S, Barker M, Bond L, Bonell C, Hardeman W, et al. Process evaluation of complex interventions: Medical Research Council guidance. BMJ. 2015;350:h1258. Epub 2015/03/21. pmid:25791983; PubMed Central PMCID: PMC4366184.
- 87. Walton M. Applying complexity theory: A review to inform evaluation design. Evaluation and Program Planning. 2014;45:119–26. pmid:24780280
- 88. Egan M, McGill E, Penney T, Anderson de Cuevas R, Er V, Orton L, et al. Guidance on Systems Approaches to Local Public Health Evaluation. London, UK; 2019.
- 89. Gibbs G. Learning by Doing: A Guide to Teaching and Learning Methods. Oxford, UK: Oxford Brookes University; 1988.
- 90. Holzer JM, Carmon N, Orenstein DE. A methodology for evaluating transdisciplinary research on coupled socio-ecological systems. Ecological Indicators. 2018;85:808–19. https://doi.org/10.1016/j.ecolind.2017.10.074.
- 91. Kunseler E-M, Tuinstra W, Vasileiadou E, Petersen AC. The reflective futures practitioner: Balancing salience, credibility and legitimacy in generating foresight knowledge with stakeholders. Futures. 2015;66:1–12. https://doi.org/10.1016/j.futures.2014.10.006.
- 92. van Mierlo B, Arkesteijn M, Leeuwis C. Enhancing the Reflexivity of System Innovation Projects With System Analyses. American Journal of Evaluation. 2010;31(2):143–61.
- 93. Zare F, Guillaume JHA, Jakeman AJ, Torabi O. Reflective communication to improve problem-solving pathways: Key issues illustrated for an integrated environmental modelling case study. Environmental Modelling & Software. 2020;126:104645. https://doi.org/10.1016/j.envsoft.2020.104645.
- 94. O’Sullivan RG. Collaborative Evaluation within a framework of stakeholder-oriented evaluation approaches. Evaluation and Program Planning. 2012;35(4):518–22. pmid:22364849
- 95. Louder E, Wyborn C, Cvitanovic C, Bednarek AT. A synthesis of the frameworks available to guide evaluations of research impact at the interface of environmental science, policy and practice. Environmental Science & Policy. 2021;116:258–65. https://doi.org/10.1016/j.envsci.2020.12.006.
- 96. Cabrera D. Systems evaluation and evaluation systems whitepaper series. Ithaca, NY; 2006.
- 97. Pain R, Whitman G, Milledge D, Lune Rivers Trust. Participatory Action Research Toolkit. Durham, UK: Durham University; 2011.
- 98. Rodríguez LF, Brown TM. From voice to agency: guiding principles for participatory action research with youth. New Dir Youth Dev. 2009;2009(123):19–34, 11. Epub 2009/10/16. pmid:19830799.
- 99. Baum F, MacDougall C, Smith D. Participatory action research. Journal of Epidemiology and Community Health. 2006;60(10):854. pmid:16973531
- 100. Freebairn L, Occhipinti J-A, Song YJC, Skinner A, Lawson K, Lee GY, et al. Participatory Methods for Systems Modeling of Youth Mental Health: Implementation Protocol. JMIR Res Protoc. 2022;11(2):e32988. pmid:35129446
- 101. Occhipinti J-A, Skinner A, Freebairn L, Song YJC, Ho N, Lawson K, et al. Which Social, Economic, and Health Sector Strategies Will Deliver the Greatest Impacts for Youth Mental Health and Suicide Prevention? Protocol for an Advanced, Systems Modelling Approach. Frontiers in Psychiatry. 2021;12. pmid:34721120
- 102. Occhipinti JA, Skinner A, Doraiswamy PM, Fox C, Herrman H, Saxena S, et al. Mental health: build predictive models to steer policy. Nature. 2021;597(7878):633–6. pmid:34565800.
- 103. Kearney S, Leung L, Joyce A, Ollis D, Green C. Applying systems theory to the evaluation of a whole school approach to violence prevention. Health Promot J Austr. 2016;27(3):230–5. Epub 2016/10/11. pmid:27719735.
- 104. Atkinson J-A, Wells R, Page A, Dominello A, Haines M, Wilson A. Applications of system dynamics modelling to support health policy. Public Health Research & Practice. 2015;25(3). pmid:26243490
- 105. Harvey J, editor. Evaluation Cookbook. Edinburgh, UK: Heriot-Watt University (Institute for Computer Based Learning); 1998.
- 106. Hargreaves MB. Evaluating System Change: A Planning Guide. Princeton, NJ: Mathematica Policy Research; 2010.
- 107. Sterling EJ, Zellner M, Jenni KE, Leong K, Glynn PD, BenDor TK, et al. Try, try again: Lessons learned from success and failure in participatory modeling. Elementa: Science of the Anthropocene. 2019;7.