Community partners’ responses to items assessing stakeholder engagement: Cognitive response testing in measure development

  • Vetta L. Sanders Thompson ,

    Contributed equally to this work with: Vetta L. Sanders Thompson, Nora Leahy, Nicole Ackermann

    Roles Conceptualization, Data curation, Formal analysis, Methodology, Writing – original draft, Writing – review & editing

    vthompson22@wustl.edu

    Affiliation Washington University in St. Louis, Brown School, St. Louis, MO, United States of America

  • Nora Leahy ,

    Contributed equally to this work with: Vetta L. Sanders Thompson, Nora Leahy, Nicole Ackermann

    Roles Data curation, Formal analysis, Writing – review & editing

    Affiliation Washington University in St. Louis, School of Medicine, St. Louis, MO, United States of America

  • Nicole Ackermann ,

    Contributed equally to this work with: Vetta L. Sanders Thompson, Nora Leahy, Nicole Ackermann

    Roles Data curation, Project administration, Writing – original draft, Writing – review & editing

    Affiliation Washington University in St. Louis, School of Medicine, St. Louis, MO, United States of America

  • Deborah J. Bowen ,

    Roles Conceptualization, Writing – review & editing

    ‡ These authors also contributed equally to this work.

    Affiliation University of Washington, Department of Bioethics and Humanities, Seattle, WA, United States of America

  • Melody S. Goodman

    Roles Conceptualization, Methodology, Resources, Writing – review & editing

    ‡ These authors also contributed equally to this work.

    Affiliation New York University, School of Global Public Health, New York, NY, United States of America

Abstract

Background

Despite recognition of the importance of stakeholder input into research, there is a lack of validated measures to assess how well constituencies are engaged and their input integrated into research design. Measurement theory suggests that a community engagement measure should use clear and simple language and capture important components of underlying constructs, resulting in a valid measure that is accessible to a broad audience.

Objective

The primary objective of this study was to evaluate how community members understood and responded to a measure of community engagement developed to be reliable, valid, easily administered, and broadly usable.

Method

Cognitive response interviews were completed, during which participants described their reactions to items and how they processed them. Participants were asked to interpret item meaning, paraphrase items, and identify difficult or problematic terms and phrases, as well as provide any concerns with response options while responding to 16 of 32 survey items.

Results

The results of the cognitive response interviews of participants (N = 16) suggest concerns about plain language and literacy, clarity of question focus, and the lack of context clues to facilitate processing in response to items querying research experience. Minimal concerns were related to response options. Participants suggested changes in words and terms, as well as item structure.

Conclusion

Qualitative research can improve the validity and accessibility of measures that assess stakeholder experience of community-engaged research. The findings suggest wording and sentence structure changes that improve the ability to assess the implementation of community engagement and its impact on research outcomes.

Introduction

Policy makers, funders, patients, and providers are increasingly interested in community/patient-engaged research, which is broadly understood as stakeholder-engaged research. Interest in this work is based on the belief that this strategy contributes to more acceptable research designs and methods and to culturally sensitive and ethical proposals that reduce participant burden and enhance recruitment and retention of participants [1–3]. In addition, stakeholder-engaged research may facilitate implementation research. These studies require planning for sustainability after funding, building trust among collaborating stakeholders, and ensuring the research capacity of stakeholder partners in order to leverage stakeholder resources and develop reciprocal relationships between researchers and stakeholders [1]. To assure that engagement meets the goals articulated, there is a need to rigorously evaluate the impact of stakeholder engagement on the development, implementation, and outcomes of research studies, in order to move from lessons learned to evidence-based practices for stakeholder engagement [2].

It is important to understand how the level of engagement in a partnership is developing, and to what extent the level of engagement is a predictor of outcomes in the study. Rigorous measurement of engagement requires the development and validation of tools to assess stakeholder engagement, using items that respondents understand and that measure what they are intended to measure. A systematic review of measures of stakeholder engagement in research showed that this area of research is not very strong methodologically [3]. However, we present in this paper the results of one step in the process of developing a clearly defined, reliable, and valid measure of stakeholder engagement. Cognitive response interviews were employed to explore participants’ reactions to item wording and response options in addition to their ability to comprehend items and determine a response. A grounded theory approach guided data analysis and interpretation.

In addition to item development, item refinement must be considered in measure development and validation. Measurement items that work well are defined as those that use clear and simple language, are not rated as difficult by interviewers or research participants, do not result in significant missing data, do not yield unexpected frequencies or patterns of association, and capture an important component of the underlying construct [4]. Studies addressing item and measurement quality note variations in item interpretation based on participation in the activity under study and variations in recall based on the frequency of participation in the activity/activities. Community engagement studies are subject to the types of variations in participation that may impact item interpretation and recall [5, 6]. Researchers recognize the importance of the wording of survey questions. One approach to identifying and resolving variations in item interpretation related to experience, language, and construction is to use cognitive response interviewing.

Cognitive interview methods have become an important strategy to ensure the quality and accuracy of survey instruments and are used to identify and analyze sources of response error in survey questionnaires [7, 8]. Because it helps to explain how people arrive at answers to survey questions, cognitive response interviewing is used to evaluate and improve survey items and design [8]. Willis and Artino note that cognitive response interviewing is part of a systematic and rigorous survey design process [9]. They note that the process allows researchers to answer the question: “Will my respondents interpret my items in the manner that I intended?” (p. 353) [9]. Cognitive response interviewing is an evidence-based option for examining participant comprehension and interpretation of survey items, as well as respondents’ understanding of intended differences in survey items [10, 11].

Two common cognitive interviewing techniques are verbal probing and “think aloud” [8–10]. In verbal probing, participants answer questions about their interpretations of a survey item, paraphrase the survey item, and identify words, phrases, or item components that are problematic. In “think aloud,” participants verbalize ideas that come to mind as they answer a question and thereby shed light on the reactions, inferences, and beliefs that helped them arrive at their answer. Both techniques are useful in identifying problems with survey item wording and design, but verbal probing is more likely to address issues with “literacy and plain language (i.e., jargon-free and carefully worded language)” consistent with the Federal Plain Language Guidelines [12].

This article adds to the literature on the measurement of stakeholder engagement by describing literacy concerns, attitudes about the information needed to judge engagement, and response preferences for items used in the public health literature to assess the level, type, and impact of community-engaged interventions and research. In addition, the findings may improve the way that researchers communicate with stakeholders about community-engaged research and the assessment of this type of research.

Methods

Participant sample

A purposive sample of 16 participants was recruited to complete one-on-one cognitive response interviews. Eligibility criteria for the cognitive response interviews included being an adult (18 years or older) with experience partnering with researchers on patient- or community-engaged research. Participants were recruited by email from a database of Community Research Fellows Training (CRFT) alumni who completed the CRFT program in St. Louis, MO [13], and through referral by CRFT alumni. CRFT was established in 2013 and maintains a voluntary database of graduates of four cohorts (n = 125), 94 (75%) of whom are active alumni with updated contact information.

Item selection

In 2011, some of the authors of this paper [14] participated in a review of the community-based participatory research (CBPR) and community engagement literature to determine best practices in evaluating adherence to, effectiveness of, and implementation of CBPR, as well as to identify relevant items. Based on this review, the evaluation team for the Program for the Elimination of Cancer Disparities (PECAD; including some authors) identified and adapted questions for the survey from published measures on group dynamics, characteristics of effective partnerships, intermediate measures of partnership effectiveness, facilitation of partner involvement, and member satisfaction [15–17]. The original survey (initiated in 2011) contained 60 items and included both closed- and open-ended questions [14].

In 2013, a PECAD evaluation committee was formed and initiated a revision of the original 2011 survey. The goal was to create a measure that was comprehensive and adequately addressed CBPR principles. The developers of the new measure (including some authors) [2] created and pilot tested a 96-item measure of community engagement in research. The new measure included some items from the original, was fully quantitative, and focused on 11 engagement principles. The 96-item measure comprised two scales: 48 questions measuring quality (how well) and 48 questions measuring quantity (how often), with three to five quality and three to five quantity items corresponding to each engagement principle [2]. The measure used 5-point Likert scale response options. Further details on the original measure are published elsewhere [2].
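
To make the layout of the measure concrete, the sketch below shows one way to represent it in code: each engagement principle maps to a small set of item stems, and each stem is rated twice, once on the quality (how well) scale and once on the quantity (how often) scale. This is a minimal illustration; the principle names, item wording, and scale labels are hypothetical placeholders, not the published items.

```python
# Minimal sketch of the measure's structure: each engagement principle carries
# three to five item stems, and each stem is rated on two 5-point Likert scales,
# quality (how well) and quantity (how often). Names and wording below are
# hypothetical placeholders, not the published item text.

LIKERT_POINTS = 5  # 5-point Likert response options on both scales

measure = {
    "Trust": [
        "Partners act on commitments made to the partnership",
        "Partners share information openly",
        "Partners follow through on agreed next steps",
    ],
    "Dissemination": [
        "Partners are involved in sharing study results",
        "Partners are credited in publications and presentations",
        "Partners help decide where results are shared",
    ],
    # ... the remaining engagement principles (11 in total)
}

def expand_items(measure):
    """Yield (principle, stem, scale) for every rated item: each stem appears
    once on the quality scale and once on the quantity scale."""
    for principle, stems in measure.items():
        for stem in stems:
            yield principle, stem, "quality (how well)"
            yield principle, stem, "quantity (how often)"

if __name__ == "__main__":
    items = list(expand_items(measure))
    print(f"{len(items)} rated items across {len(measure)} principles shown here")
```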

Content validation and item reduction of the quantitative measure of stakeholder engagement were completed in 2017–2018 [18], using a Delphi process. A Delphi process involves administration of multiple rounds of individual online and/or in-person surveys, with participant feedback on aggregated group responses for each round until majority agreement is reached on the issues at hand [18]. A five-round, modified Delphi process was used to reach consensus on engagement principles and items for inclusion, elimination, and revision [19, 20]. The number of survey items on each scale (quantity and quality) was reduced from 48 to 32. Three to five quality items (how well the engagement activity/strategy was implemented or completed) and three to five quantity items (how often the engagement activity/strategy was implemented) corresponded with each engagement principle (Tables 1 and 2). The items that emerged after content validation (the Delphi process) were subjected to cognitive response interviews.
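
As a rough illustration of the round-by-round logic, the sketch below tallies panelist ratings for each item and flags items that reach majority agreement for retention, returning aggregated counts to the panel for the next round. The vote categories and the simple-majority threshold are assumptions made for illustration; they are not the study's actual decision rules.

```python
# Illustrative Delphi-round tally: panelists rate each item "keep", "revise",
# or "drop"; items reaching majority agreement on "keep" are retained, and the
# rest go back to the panel with aggregated counts for the next round.
# The categories and 50% threshold are assumptions for illustration only.
from collections import Counter

def delphi_round(ratings, threshold=0.5):
    """ratings: dict mapping item text -> list of panelist votes.
    Returns retained items and unresolved items with aggregated feedback."""
    retained, unresolved = [], []
    for item, votes in ratings.items():
        counts = Counter(votes)
        if counts["keep"] / len(votes) > threshold:
            retained.append(item)
        else:
            unresolved.append((item, dict(counts)))  # feedback for the next round
    return retained, unresolved

# Example round with hypothetical items and five panelists
ratings = {
    "Partners share information openly": ["keep", "keep", "keep", "keep", "revise"],
    "Partners agree on data ownership and intellectual property": ["revise", "revise", "keep", "drop", "keep"],
    "Partners attend quarterly meetings": ["drop", "drop", "revise", "keep", "drop"],
}
kept, pending = delphi_round(ratings)
print("Retained:", kept)
print("Returned to panel:", pending)
```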

Table 1. Summary of items removed throughout measure development & validation process–Delphi rounds 1 & 2.

https://doi.org/10.1371/journal.pone.0241839.t001

Table 2. Summary of items removed throughout measure development & validation process–Delphi rounds 3–4.

https://doi.org/10.1371/journal.pone.0241839.t002

Procedures

The institutional review boards at Washington University in St. Louis and at New York University approved this study and the consent procedures used. Interviewers (n = 4) attended a training to ensure consistent interview and data collection procedures. All interviewers were female: a PhD psychologist (VLST), who led the team, an MPH-trained project manager (NA), and two MPH student research assistants (including NL). The training provided in-depth instruction in cognitive interviewing, as well as an orientation to the interview guide and protocol. The interviewers received instruction on the use of tablets to administer the cognitive response interview to assure consistency and ease of administration. Although tablets were used during the interview to capture survey item and quantitative question responses, computer-assisted personal interview software was not used, and participant qualitative responses were captured using a digital recorder.

Sixteen eligible participants completed the in-person, one-on-one interviews in study interview rooms. Verbal consent for participation was obtained from all participants after an information sheet was provided. In order to assure that respondents understood their role in cognitive interviews, we explained that the purpose of the interview was to identify problems with item wording and to help us modify the items to improve their use in community-engaged research. We emphasized their role in helping clarify the questions before administering the final survey to 500 participants.

The first author (VLST) and three research assistants (including NA, NL) conducted interviews. Interviewees were greeted by a project staff member, directed to the interview room, and introduced to the interviewer, if different. At the beginning of the session, the interviewer introduced herself, explained the study and its procedures, briefly described the use of the tablet, and answered any of the participant’s questions. The interviewer read each question aloud and highlighted the availability of a paper version of the survey to ease the participant’s review and consideration. After the participant answered an item, the interviewer completed the verbal probing.

We first administered the draft of the survey items in a standard fashion, followed by scripted open-ended probes and spontaneous probes as needed for clarification. The interview probes were developed systematically before the interview to search for potential problems with survey items and response options (i.e., proactive verbal probes) [9]. The interview probes included questions and statements addressing the following:

  • What do you think the statement is discussing/ describing?
  • How would you rephrase the statement in your own words?
  • Identify all the words in the statement, if any, that you do not understand.
  • Rate how difficult it was to choose a response option for this statement.
  • What, if anything, made this item difficult to answer?
  • Rate the importance of this item for measuring community engagement.
  • In your own words, describe what, if anything, makes this item important for measuring community engagement.
  • Rate how satisfied you are with the response options. How would you revise the response options?

To minimize the impact that the order of questions had on the overall results, we used four different versions of the questionnaire, with each version containing 16 items from each scale. Participants were assigned an identification number and interview version by NA before the interview was conducted. All 90- to 120-minute interviews were digitally recorded, and each session’s recording was professionally transcribed. Each individual received a $50 gift card for participation.
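
A counterbalancing step like this can be expressed compactly. The sketch below rotates participants across the four questionnaire versions as they are scheduled; the identifiers and rotation rule are illustrative assumptions, not the study's documented assignment procedure.

```python
# Illustrative assignment of participants to the four questionnaire versions.
# Rotating through versions in scheduling order spreads item order across
# participants; IDs and the rotation rule are assumptions for illustration.
from itertools import cycle

versions = ["A", "B", "C", "D"]                       # four orderings of the tested items
participants = [f"P{i:02d}" for i in range(1, 17)]    # 16 participant identification numbers

assignments = dict(zip(participants, cycle(versions)))
for pid, version in assignments.items():
    print(pid, "-> version", version)
```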

Data coding and analysis were completed based on both the digitally recorded and professionally transcribed interviews and the field notes from interviewers compiled by NA. Transcripts were reviewed by the lead author (VLST) and project manager (NA) but were not returned to participants for review. After reviewing the project goals, the content of the interviews, and the existing literature, the first author (VLST) developed a defined coding guide that prescribed rules and categories for identifying and recording content.

Because the quality (how well) and quantity (how often) items parallel one another, these probes were asked only about the quality items. The remaining interview focused on participant reaction to changes in the response options as the survey moved from assessing the quality of community-engaged research activities to assessing the frequency of community-engaged research activities (quantity). For example, the interviewers asked:

  • How did the change of scale, from a quality to quantity scale, affect your understanding of the item?
  • Did it make the statement more difficult or easier to understand?

Two queries were repeated to specifically address the quantity response option.

  • “Tell me about your thought process when choosing a response option.”
  • “Tell me what you thought about to come up with your responses to the statements.”

During the analysis phase, codes and themes were developed based on the elements deemed important in the cognitive response literature on item and questionnaire development [7–10]. The segments of text containing codes were identified, and the codes were extracted, categorized, and classified. All transcripts were coded. The senior investigator (VLST) and a research assistant (NL) read and coded the interview transcripts independently until saturation was achieved, identifying text units that addressed item clarity, literacy concerns, contextual issues, item difficulty, difficulty of item response, and relevance and necessity for the measurement of stakeholder engagement, in addition to clarity and appropriateness of response options. The coders met to reach consensus on the definitions and examples used to code interview text from each transcript as the process proceeded. In cases of disagreement, the coders discussed discrepancies to reach consensus. On completion of coding, the coders reconvened to formulate core ideas and general themes that emerged from each interview. An interview summary, with examples, was developed for each theme. The full research team reviewed and discussed the themes identified in an effort to develop connections among themes and to clarify the relevance and importance of the findings for the measure and the field. Participants did not provide feedback on the findings.
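
To make the dual-coding step concrete, the sketch below compares the code sets two coders assign to the same text units and lists the discrepancies to be resolved in the consensus discussion described above. The code labels and data layout are illustrative assumptions, not the project's actual codebook.

```python
# Illustrative comparison of two coders' work on the same transcript text units.
# Codes applied by only one coder are flagged for consensus discussion.
# Code labels and units are hypothetical placeholders.

coder_a = {
    "unit_01": {"literacy concern", "item clarity"},
    "unit_02": {"response option"},
    "unit_03": {"contextual issue", "relevance"},
}
coder_b = {
    "unit_01": {"literacy concern"},
    "unit_02": {"response option"},
    "unit_03": {"contextual issue"},
}

for unit in sorted(coder_a):
    only_a = coder_a[unit] - coder_b.get(unit, set())
    only_b = coder_b.get(unit, set()) - coder_a[unit]
    if only_a or only_b:
        print(f"{unit}: discuss -> coder A only {sorted(only_a)}, coder B only {sorted(only_b)}")
    else:
        print(f"{unit}: agreement on {sorted(coder_a[unit])}")
```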

Results

The majority of the cognitive interview participants were female (n = 13; 81%), were African American (n = 11; 69%), and had a college degree or higher level of education (n = 9; 56%). Participants ranged in age from 24 to 73 years with a mean age of 47.3 years (Table 3). All of the participants had previous experience with community-engaged research.

Table 3. Demographic characteristics of interview participants.

https://doi.org/10.1371/journal.pone.0241839.t003

Item comprehension

Most participants did not readily report difficulties with the comprehension or definition of words or phrases; however, there were a few exceptions. Although not a concern among most participants, several reported that the wording of items addressing publication of research products was difficult, including wording related to dissemination, dissemination activities, and intellectual property. One participant stated, “Okay, now you put a big word in there. Okay, involve interested partners in dissemination activities.” Terminology addressing procedures to assure adherence to CBPR principles, such as memorandum of understanding, governance, management responsibility, and mutually agreed upon, was also identified as a literacy concern, as one participant stated: “I’ve not heard that one. Memorandum of understanding.” Stakeholder was identified as jargon that presented a literacy issue. Participants’ recommendations resulted in the replacement of the term stakeholder with partners. Several other words were identified that may also relate to the use of disciplinary jargon. For example, participants noted that food access could be simplified to “places to get food.” Other words were unfamiliar and presented problems, such as fosters, equitable, and inclusiveness (see Table 4). Participants suggested plain language alternatives, such as sharing results or sharing data versus dissemination, roles and responsibilities versus memorandum of understanding, and articles and presentations versus intellectual property.

Table 4. Summary of cognitive interview analyses: factors related to comprehension, response and suggestions for change.

https://doi.org/10.1371/journal.pone.0241839.t004

Definitional issues also emerged. The terms cultural factors, problem solving, and leadership responsibilities were viewed as too general or ambiguous.

“That can mean a lot of different things to a lot of different people. You can call a lot of different things a culture. I guess, to make it more clear, explaining what they mean by cultural.” (Participant 1: Female, 54)

“I don’t know what you mean by problem solving. I don’t know what you mean by ongoing. I don’t know if there’s a time limit on that or boundaries.” (Participant 15: Female, 62)

Even when words such as resources, environment, and partners were understood, community participants wanted specifics and context, including what resources and which partners. Participants recommended the use of all partners to assure that both academic and community partners were considered, while noting that alternative wording was sometimes difficult to generate, particularly without context.

“I would say I’m not completely clear what they mean by the environment.” (Participant 1: Female, 54)

Item response

Participants were generally satisfied with the response options (81.25% for quality and 87.5% for quantity) and used the full range of response options for both scales (Table 5). Most participants indicated that it was “extremely easy” or “somewhat easy” (average 74.2% per item) to respond to the items tested. Participants noted that it was easy to transition between quality and quantity items and easier to respond to the quantity items than to the quality items.
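
These satisfaction figures are simple proportions of the 16 interview participants; the short sketch below reproduces the arithmetic, with the underlying counts inferred from the reported percentages purely to make the calculation explicit.

```python
# Satisfaction with response options as proportions of the 16 participants.
# The counts below are inferred from the reported percentages (13/16 = 81.25%,
# 14/16 = 87.5%) and are shown only to make the arithmetic explicit.
n_participants = 16
satisfied_quality = 13
satisfied_quantity = 14

print(f"Quality scale:  {satisfied_quality / n_participants:.2%}")   # 81.25%
print(f"Quantity scale: {satisfied_quantity / n_participants:.2%}")  # 87.50%

# The "average 74.2% per item" figure is computed analogously: for each item,
# the share of participants rating it "extremely easy" or "somewhat easy" to
# answer, averaged across the items tested.
```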

Table 5. Average of participants (N = 16) choosing each response option across all items.

https://doi.org/10.1371/journal.pone.0241839.t005

Some participants requested the inclusion of “unsure, undecided” as a response option and the inclusion of numbers to ground the quantity scale. The larger concerns raised by participants related to the following: 1) the question stem that preceded items and 2) the ability to respond to items viewed as complex. The stem, “Please rate the quality/quantity or how well/how often academic partners do each of the following,” created confusion about whom subsequent items referenced when terms such as partners, partnership, or stakeholders were used. The stem is, therefore, implicated in the comprehension concerns noted. Participants recommended that the terms quality and quantity be removed from the stem and that the term all partners be used instead of the term academic partners.

Two items were identified as creating difficulty in responding because they were compound or “double-barreled” questions: “All partners assist in establishing roles and responsibilities for the collaboration” and “Partners agree on ownership and management responsibility of data and intellectual property.” These items were changed to “All partners assist in establishing roles and related responsibilities for the partnership” and “All partners agree on ownership of data for publications and presentations,” respectively. Other items were identified as problematic because of strong beliefs about how health research should be conducted and partnerships managed.

Relevance for community engagement

Two items were identified by some participants as unimportant to the assessment of community engagement. Both were associated with the CBPR principle that addresses dissemination; thus, they were characterized as research focused. A participant’s thoughts are illustrated below:

“I don’t think it’s a significant indicator of how engaged investigators are if they give authorship to the community partners. I think as long as they give credit to the community partners, that’s what is important.” (Participant 12: Male, 43)

The items seen as having the greatest relevance for the assessment of community engagement addressed trust, community benefit, respect, power/control, mutual decision making, and valuing the community. Sample participant explanations of principles and items appear below:

“Well, potentially the most important thing is identifying the issues that matter because if the issue itself doesn’t matter then why would the community want to be engaged in the research if that’s not important to them. Also, the result is going to be unimportant.” (Participant 2: Female, 31)

“—that should be equal, the responsibilities. I’m thinking in terms of the community—well, as an equal relationship, so it’s important that both are empowered to do what is necessary to better their circumstances.” (Participant 5: Male, 73)

“So, yes, basically everything that both sides bring are being considered important because it gives mutual respect.” (Participant 11: Female, 38)

“I think having trust among community levels—or amongst community members is important, because then you’re going to get the most accurate answers and you’re going to get—you’re going to get even more than what you asked for.” (Participant 7: Female, 24)

Discussion

In order to understand the role that stakeholder-engaged research plays in the development, implementation, and outcomes of research studies, development and validation of measurement tools that can reliably and validly assess stakeholder engagement are required. This paper presents the results of one component in the measurement development process that also has implications for the way that we communicate with community partners about community-engaged research and the assessment of this work.

Results of cognitive response interviewing were consistent with concerns raised by Willis and Artino [9], who suggest that abstract terms are most problematic for participants. In this study, several terms commonly encountered in the community engagement literature and in measures were perceived as barriers and affected how community members responded to items. Academic partners and researchers should guard against assuming common understanding, as participants considered some terms to be vague and in need of examples or context. Although it is appropriate to discuss culture, problem solving, plans, and environment, we must clarify what is referenced at specific times and with specific stakeholders. Even academics involved in community-engaged research may fail to realize when a common vocabulary has ceased to exist. In addition, plain language should be used to assure comprehension of discussions of publication and shared findings, the role of social determinants (such as the ability to get food), and efforts to assure that all partners are treated fairly and included in decisions and access to resources.

The findings suggest that item construction and comprehension issues were of greater concern in this measure development effort than response options. Most participants were satisfied with the response options, found it easy to respond, and used the range of response options. Items that were excessively wordy or that asked participants to respond to two issues at once were identified as obstacles to participant response. Notably, the effort to develop consensus on items during the Delphi process described previously produced some of the items identified as complex. Efforts to address diverse community input during item development may result in the need for additional review and editing to avoid item construction errors. In addition, the findings suggest that strong opinions and attitudes about an issue generated some concerns about the language used in survey items. This does not mean that an item should be reworded, but it does suggest that communication in partnerships should consider how messaging may affect dialogue and responses.

Few tested items were perceived as inappropriate or unimportant to the assessment of community-engaged research, although some participants questioned the importance of dissemination issues for community members versus academics. It is possible that the engagement principle guiding dissemination and the relevant items are sensitive to the research phase, i.e., more relevant to participants who are engaged in projects that are in or near the dissemination phase of the collaborative effort. The general acceptability of items suggests that the principles used to guide item selection are acceptable to the community members likely to be encountered or to participate in stakeholder-engaged research and assessment [19], although participants suggested changes in words and terms, as well as item structure. Minimal concerns were related to response options. These findings should be interpreted cautiously because of the small sample size. However, cognitive response interviewing [7, 10] provides in-depth insight into how participants think about and interpret items, the factors that affect their interpretation and responses, and how comfortable they feel with the language, options, and coverage of topics important to an issue.

Conclusions

Understanding how the level of engagement in a partnership develops, and to what extent that level predicts outcomes in stakeholder-engaged research, is important to making progress in community-engaged research. Because researchers have noted that research on measures of stakeholder engagement is methodologically weak [3] and rigorous measurement of engagement is required, the results of the current study contribute to an effort to develop and validate a broadly applicable measure of stakeholder engagement. In the cognitive response interviews, which were used to refine the questionnaire under development, participants raised concerns about plain language and literacy, clarity of question focus, and the lack of context clues to facilitate responses to items querying research experience. The findings are consistent with the literature on stakeholder engagement [2, 16], although communication concerns were highlighted in the current study, and should be of use both to those assessing community-engaged research and to those engaging the community in the research process. Researchers should remain cognizant of plain language, literacy levels, and contextual cues as partnerships are discussed and as agreements are developed.

Acknowledgments

The authors thank Sharese Willis for her help editing the manuscript.

References

  1. Patient-Centered Outcomes Research Institute. Patient-Centered Outcomes Research. 2012. Available from: https://www.pcori.org/research-results/patient-centered-outcomes-research.
  2. Goodman MS, Thompson VLS, Arroyo Johnson C, Gennarelli R, Drake BF, Bajwa P, et al. Evaluating community engagement in research: quantitative measure development. J Community Psychol. 2017;45:17–32. pmid:29302128
  3. Bowen D, Hyams T, Goodman M, West KM, Harris-Wai J, Yu J-H. Systematic review of quantitative measures of stakeholder engagement. Clin Transl Sci. 2017. pmid:28556620
  4. Pasick RJ, Stewart SL, Bird JA, D’Onofrio SN. Quality of data in multiethnic health surveys. Public Health Rep. 2001;116 Suppl 1:223. pmid:11889288
  5. Warnecke RB, Johnson TP, Chavez N, Sudman S, O'Rourke D, Lacey L, et al. Improving question wording in surveys of culturally diverse populations. Ann Epidemiol. 1997;7:334–342. pmid:9250628
  6. Warnecke RB, Sudman S, Johnson TP, O'Rourke D, Davis AM, Jobe JB. Cognitive aspects of recalling and reporting health-related events: Papanicolaou smears, clinical breast examinations, and mammograms. Am J Epidemiol. 1997;146:982–992.
  7. Alaimo K, Olson CM, Frongillo EA. Importance of cognitive testing for survey items: an example from food security questionnaires. J Nutr Educ. 1999;31:269–75.
  8. Willis GB. Cognitive interviewing: a tool for improving questionnaire design. Thousand Oaks (CA): Sage Publications; 2005.
  9. Willis GB, Artino AR. What do our respondents think we're asking? Using cognitive interviewing to improve medical education surveys. J Grad Med Educ. 2013;5:353–356. pmid:24404294
  10. Drennan J. Cognitive interviewing: verbal data in the design and pretesting of questionnaires. J Adv Nurs. 2003;42:57–63. pmid:12641812
  11. Knafl K, Deatrick J, Gallo A, Holcombe G, Bakitas M, Dixon J, et al. The analysis and interpretation of cognitive interviews for instrument development. Res Nurs Health. 2007;30:224–234. pmid:17380524
  12. Federal Plain Language Guidelines (revised May 2011). Available from: https://www.plainlanguage.gov/media/FederalPLGuidelines.pdf
  13. Coats JV, Stafford JD, Sanders Thompson V, Johnson Javois B, Goodman MS. Increasing research literacy: the community research fellows training program. J Empir Res Hum Res Ethics. 2015;10:3–12.
  14. Arroyo-Johnson C, Allen ML, Colditz GA, Hurtado GA, Davey CS, Thompson VLS, et al. A tale of two community networks program centers: operationalizing and assessing CBPR principles and evaluating partnership outcomes. Prog Community Health Partnersh. 2015;9 Suppl:61–69. pmid:26213405
  15. Mainous AG, Smith DW, Geesey ME, Tilley BC. Development of a measure to assess patient trust in medical researchers. Ann Fam Med. 2006;4:247–252. pmid:16735527
  16. Schulz AJ, Israel BA, Lantz P. Instrument for evaluating dimensions of group dynamics within community-based participatory research partnerships. Eval Program Plann. 2003;26:249–62.
  17. Weiss ES, Taber SK, Breslau ES, Lillie SE, Li Y. The role of leadership and management in six southern public health partnerships: a study of member involvement and satisfaction. Health Educ Behav. 2010;37:737–752. pmid:20930135
  18. Brady SR. Utilizing and adapting the Delphi method for use in qualitative research. Int J Qual Methods. 2015;14(5):1–6.
  19. Goodman MS, Ackermann N, Bowen DJ, Thompson V. Content validation of a quantitative stakeholder engagement measure. J Community Psychol. 2019;47:1937–1951. pmid:31475370
  20. Goodman MS, Ackermann N, Bowen D, members of the Delphi Panel, Sanders Thompson V. Reaching consensus on principles of stakeholder engagement in research. Prog Community Health Partnersh. Forthcoming 2020. pmid:32280129