
A multi-stage process to develop quality indicators for community-based palliative care using interRAI data

  • Dawn M. Guthrie ,

    Contributed equally to this work with: Dawn M. Guthrie, Nicole Williams

    Roles Conceptualization, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing

    Affiliations Department of Kinesiology and Physical Education, Wilfrid Laurier University, Waterloo, Ontario, Canada, Department of Health Sciences, Wilfrid Laurier University, Waterloo, Ontario, Canada

  • Nicole Williams ,

    Contributed equally to this work with: Dawn M. Guthrie, Nicole Williams

    Roles Data curation, Formal analysis, Methodology, Project administration, Software, Validation, Writing – review & editing

    Affiliation Department of Kinesiology and Physical Education, Wilfrid Laurier University, Waterloo, Ontario, Canada

  • Cheryl Beach ,

    Roles Writing – review & editing

    ‡ These authors also contributed equally to this work.

    Affiliation Integrated Community Services, Fraser Health, Surrey, British Columbia, Canada

  • Emma Buzath ,

    Roles Writing – review & editing

    ‡ These authors also contributed equally to this work.

    Affiliation Provincial Palliative and End-of-Life Care, Seniors Health and Continuing Care, Alberta Health Services, Calgary, Alberta, Canada

  • Joachim Cohen ,

    Roles Writing – review & editing

    ‡ These authors also contributed equally to this work.

    Affiliation End-of-Life Care Research Group, Vrije Universiteit Brussel, Brussels, Belgium

  • Anja Declercq ,

    Roles Writing – review & editing

    ‡ These authors also contributed equally to this work.

    Affiliations LUCAS – Center for Care Research and Consultancy, KU Leuven, Leuven, Belgium, CESO – Center for Sociological Research, KU Leuven, Leuven, Belgium

  • Kathryn Fisher ,

    Roles Writing – review & editing

    ‡ These authors also contributed equally to this work.

    Affiliation School of Nursing, McMaster University, Hamilton, Ontario, Canada

  • Brant E. Fries ,

    Roles Writing – review & editing

    ‡ These authors also contributed equally to this work.

    Affiliation Department of Health Management and Policy and Department of Geriatric and Palliative Medicine, University of Michigan, Ann Arbor, Michigan, United States of America

  • Donna Goodridge ,

    Roles Writing – review & editing

    ‡ These authors also contributed equally to this work.

    Affiliation College of Medicine, University of Saskatchewan, Saskatoon, Saskatchewan, Canada

  • Kirsten Hermans ,

    Roles Writing – review & editing

    ‡ These authors also contributed equally to this work.

    Affiliation LUCAS – Center for Care Research and Consultancy, KU Leuven, Leuven, Belgium

  • John P. Hirdes ,

    Roles Writing – review & editing

    ‡ These authors also contributed equally to this work.

    Affiliation School of Public Health Sciences, University of Waterloo, Waterloo, Ontario, Canada

  • Hsien Seow ,

    Roles Writing – review & editing

    ‡ These authors also contributed equally to this work.

    Affiliation Department of Oncology, McMaster University, Hamilton, Ontario, Canada

  • Maria Silveira ,

    Roles Writing – review & editing

    ‡ These authors also contributed equally to this work.

    Affiliation Division of Geriatric and Palliative Medicine, Department of Internal Medicine, University of Michigan, Ann Arbor, Michigan, United States of America

  • Aynharan Sinnarajah ,

    Roles Writing – review & editing

    ‡ These authors also contributed equally to this work.

    Affiliation Department of Medicine, Queen’s University, Kingston, Ontario, Canada

  • Susan Stevens ,

    Roles Writing – review & editing

    ‡ These authors also contributed equally to this work.

    Affiliation Nova Scotia Health, Halifax, Nova Scotia, Canada

  • Peter Tanuseputro ,

    Roles Writing – review & editing

    ‡ These authors also contributed equally to this work.

    Affiliation Department of Medicine, University of Ottawa, Ottawa, Ontario, Canada

  • Deanne Taylor ,

    Roles Writing – review & editing

    ‡ These authors also contributed equally to this work.

    Affiliations Research Department, Interior Health Authority, Kelowna, British Columbia, Canada, Rural Coordination Centre of BC, Penticton, British Columbia, Canada

  • Christina Vadeboncoeur ,

    Roles Writing – review & editing

    ‡ These authors also contributed equally to this work.

    Affiliations Department of Pediatrics, University of Ottawa, Ottawa, Ontario, Canada, Children’s Hospital of Eastern Ontario, Ottawa, Ontario, Canada, Roger Neilson House, Ottawa, Ontario, Canada

  •  [ ... ],
  • Tracy Lyn Wityk Martin

    Roles Writing – review & editing

    ‡ These authors also contributed equally to this work.

    Affiliation Provincial Palliative and End-of-Life Care, Seniors Health and Continuing Care, Alberta Health Services, Calgary, Alberta, Canada




Abstract

Background

Individuals receiving palliative care (PC) are generally thought to prefer to receive care and die in their homes, yet little research has assessed the quality of home- and community-based PC. This project developed a set of valid and reliable quality indicators (QIs) that can be generated using data that are already gathered with interRAI assessments—an internationally validated set of tools commonly used in North America for home care clients. The QIs can serve as decision-support measures to assist providers and decision makers in delivering optimal care to individuals and their families.


Methods

The development efforts took place in multiple stages between 2017 and 2021, including a workshop with clinicians and decision makers working in PC; qualitative interviews with individuals receiving PC, their families, and decision makers; and a modified Delphi panel based on the RAND/UCLA appropriateness method.


Results

Based on the workshop results and qualitative interviews, a set of 27 candidate QIs was defined. They capture issues such as caregiver burden, pain, breathlessness, falls, constipation, nausea/vomiting, and loneliness. These QIs were further evaluated by clinicians and decision makers working in PC through the modified Delphi panel; five were removed from further consideration, resulting in 22 QIs.


Conclusions

Through in-depth consultations with multiple stakeholders, we developed a set of QIs generated with data already collected through interRAI assessments. These indicators provide a feasible basis for quality benchmarking and improvement systems for care providers aiming to optimize PC for individuals and their families.


Introduction

The goal of palliative care (PC) is to improve quality of life for individuals and their families facing the problems associated with a life-limiting illness, and to provide care that promotes dignity, respect, and comfort [1, 2]. Palliative and end-of-life care spans the disease process from early diagnosis to end of life, inclusive of bereavement. A majority of Canadians surveyed in 2016 supported resources being available for PC at home [3]. Recent data from Ontario—one of the sites of this study and the largest province in Canada, with more than 14 million residents—found that among the approximately 50% of decedents who received PC, 43% (roughly 54,000 in one year) received palliative home care services [4]. Despite this, little research has assessed the quality of home-based palliative services [5–7], focusing instead on the utilization, organization, and cost-effectiveness of these services [8, 9]. Existing QIs in the literature tend to focus on hospital use [5, 10–12]. Currently, Canada does not have a standard set of quality indicators (QIs) for community-based PC. Having valid and reliable QIs, grounded in a community-based perspective, is essential to identify and monitor areas for improvement [13] and, ultimately, to contribute to the delivery of optimal care to individuals receiving PC. QIs have enormous potential for improving care by providing important information to a variety of users, such as care providers, consumers, accreditation organizations, and researchers [14].

QI rates are often used to compare providers, at a systems level, a process known as benchmarking [15]. Benchmarking can also be thought of as a starting point to understand which factors contribute to quality improvement, to promote discussions among health care providers to encourage organizational change within the organizations being compared, and to learn from each other’s improvement strategies [16].

Although QIs have been proposed for those receiving PC, or for individuals with life-limiting illnesses [17–23], they have several limitations. Most existing QIs rely on administrative data to capture hospital admissions/emergency department (ED) visits and focus on the process or structure of care rather than on the outcomes that matter most to individuals receiving PC [20, 24–26]. They also tend to focus predominantly on individuals with cancer [17, 27, 28], despite the fact that cancer is the cause of death for less than a third of decedents, and despite PC being increasingly available to those with other life-limiting conditions such as organ failure and dementia [29]. Therefore, QIs are needed that encompass a wide range of individuals who are receiving PC or who could benefit from such care, including those for whom prognosis may be less predictable [30, 31].

We outline the development of a set of proposed QIs that were explicitly defined based on data elements within the interRAI assessments. interRAI is a not-for-profit network of researchers, clinicians, and policy experts from just over 30 countries who develop and test standardized clinical assessments for use in a variety of health and social services settings (e.g., home care, nursing homes/long-term care, inpatient mental health) [32–34]. These assessments are used extensively in multiple parts of Canada and around the world, and the data are available to interRAI researchers and their trainees. For example, the interRAI Home Care assessment (interRAI HC) is currently used in 21 countries [35, 36] and for all long-term home care clients in Ontario. The assessments are performed by trained assessors (typically registered nurses) using all sources of information, including the person receiving care, informal caregivers (natural support persons) and families, clinical care providers, and the medical record. In Ontario, the assessment is completed for all clients who require home care services for a minimum of 60 days.

Utilizing the existing, routinely collected, population-based interRAI data to generate QIs is a very efficient use of this information and avoids the tremendous time and energy required by traditional approaches to quality improvement, such as detailed chart reviews [37]. In fact, the interRAI data represent one of the few databases in Canada with a sufficient number of assessments and unique individuals to allow for the creation and testing of QIs. There is also a long history of QI development and testing by interRAI researchers [14, 38–40], with the earliest QIs proposed in the mid-1990s [41]. Earlier work by our team, using the interRAI assessment for home care, identified a set of preliminary QIs for community-based PC that could be generated with these assessments [42], and also explored the rates of these preliminary QIs by province/territory [43].

The main goal of this paper is to report on the first steps of a larger study to develop, test, and validate a set of QIs for community-based PC. In this paper, we describe the creation of the QIs, which were developed through in-depth consultations with multiple stakeholder groups and evaluated by a panel of experts.

Materials and methods

QI development took place over approximately four years (2017–2021) and is still ongoing (Fig 1). The research team included 18 researchers, clinicians, and decision makers from Canada, the US, and Belgium with expertise in palliative medicine, health services research, epidemiology, and knowledge translation. The team was actively involved in all stages of the development of the QIs, which included three sequential phases. The study protocol was reviewed and approved by the Wilfrid Laurier University Research Ethics Board (REB #5683).

Fig 1. Summary of the steps in the QI development process.

Phase I: Qualitative input from key stakeholders

The first activity, in this phase, involved input from decision makers from across Ontario, Canada’s largest province. A group of 30 PC experts participated in a one-day workshop, the results of which have been published previously [44]. Participants included clinical leaders, researchers, front-line care providers as well as health and information system administrators. At the time of the workshop, the province was divided geographically into 14 Local Health Integration Networks (LHINs) and 12 of the 14 LHINs were represented. Other participants represented key organizations such as Hospice Palliative Care Ontario, the Ontario Ministry of Health and Long-Term Care, Ontario Palliative Care Network and the Canadian Institute for Health Information.

In summary, the qualitative analysis of the information provided by the workshop participants identified six key themes important for measuring quality in community-based PC: access to care, patient care, caregiver support, symptom management, spiritual care, and home as the preferred place of death. Where possible, QIs were created that directly related to these six themes (as described below in “Phase II”). For example, since participants discussed symptom management as an important area to assess, we created QIs to measure pain, shortness of breath, fatigue, and other troublesome physical symptoms. The indicator concepts, structure, and definitions were derived from the validated interRAI data elements.

Following the workshop, project Knowledge Users (KUs) helped the research team to recruit individuals receiving PC, their family members, and decision makers from across Canada. It was considered inappropriate, and potentially a violation of research ethics, for the research team to directly contact individuals receiving PC and their families; the KUs therefore assisted the team in recruiting potential study participants. KUs are individuals who are likely to be able to use research results to make informed decisions about health policies, programs, and/or practices. The eight KUs on our team came from five different provinces/territories. A series of interviews and focus groups were held with families (n = 9) and decision makers (n = 11), including one individual who was actively receiving PC. The primary goal of these interviews and focus groups was to understand participants’ experiences within the health care system and to link these experiences to measurable QIs that could be developed with existing interRAI data. For example, it was apparent from the interviews that caregivers often felt overwhelmed in their caregiving role and expressed that there was a clear disconnect between what the system could provide and what caregivers expected from it. Palliative care typically strives to provide access to on-call practitioners during and after office hours; despite this, some caregivers felt that during a crisis they had to rely on ambulances and the emergency department (ED). Caregivers also cited emotional and psychological needs, as well as loneliness, among those receiving PC [45]. These rich qualitative data were used to guide QI development in the next phase of the project.

Phase II: Defining potential QIs

The research team used the feedback from Phase I, along with current literature [46, 47], to define potential QIs that captured the important domains related to PC quality. For example, based on the workshop results, a QI was drafted to capture the rate of caregiver distress, and other QIs were created to capture issues related to symptom management (e.g., QIs related to pain, breathlessness, falls, constipation, nausea/vomiting). Since the family caregivers and decision makers also discussed issues of accessing the ED, QIs were also developed related to ED visits and hospital admissions. QIs were defined to capture issues such as negative mood, anxiety and loneliness since these were discussed by caregivers and are supported as important for quality assessment in the literature [48, 49].

To generate these QIs, we focused on two interRAI assessment systems, the interRAI Home Care instrument (interRAI HC) and the interRAI Palliative Care (interRAI PC) tool, since these are currently used in care planning for home care clients, and palliative clients, in various parts of Canada. The interRAI HC, for example, is completed fully across Ontario, Newfoundland and Labrador, and Yukon Territory, with partial coverage in British Columbia and Alberta [50], resulting in roughly 250,000 assessments annually. Research on the interRAI instruments supports the validity and reliability of these data and concludes that their overall quality can be trusted to support decision-making [51].

Both of these assessments provide several of the same validated health index scales, which are generated automatically once the assessment is completed. These scales include the Depression Rating Scale (DRS) [52], the Pain Scale [53], the Cognitive Performance Scale [54], the Caregiver Risk Evaluation [55], and the Pressure Ulcer Risk Scale [56]. Since these scales have established validity, their scores were used in the QI definitions whenever possible. For example, the QI on the prevalence of negative mood is defined using a DRS score of 3+, a cut-point with established predictive validity [52]. Other QIs are based on individual assessment items. Each QI has a distinct numerator and denominator (S1 Table). The QIs fall into two broad categories: “follow-up prevalence” QIs and “failure to improve” QIs (Table 1). The first group captures the prevalence of the issue on re-assessment. For these QIs, the rate was based on re-assessments, and admission assessments were excluded from the calculation, since admission assessments would not truly reflect the quality of PC at the time of the assessment. The “failure to improve” QIs assess the lack of improvement on the issue between two points in time. This is important to capture since individuals who come into contact with PC generally have high health needs (an indication bias). While the baseline function of these individuals is important to reflect those needs, subsequent change in those needs (i.e., as addressed by PC) is also important to measure. In total, 27 preliminary QIs were created for further evaluation in Phase III. Of this list, 20 (74.1%) can be generated with both interRAI instruments, five (18.5%) can be generated only with the interRAI PC data, and the remaining two can only be calculated with the interRAI HC data.
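As a rough illustration of the two QI categories, the sketch below computes a “follow-up prevalence” rate (using the negative-mood example with its DRS 3+ cut-point) and a “failure to improve” rate. The record layout and field names are hypothetical; the actual numerator/denominator definitions are given in S1 Table.

```python
# Hypothetical record layout: one dict per assessment, with the fields
# client_id, assessment_type, and drs_score (real interRAI item codes differ).

def follow_up_prevalence(assessments, flag):
    """Share of RE-ASSESSMENTS where `flag` is true. Admission
    assessments are excluded, since they reflect status at intake,
    not the quality of palliative care received."""
    pool = [a for a in assessments if a["assessment_type"] == "reassessment"]
    if not pool:
        return None
    return sum(flag(a) for a in pool) / len(pool)

def failure_to_improve(pairs, present):
    """Share of (baseline, follow-up) assessment pairs where the issue
    was present at baseline and did NOT resolve at follow-up."""
    at_risk = [(b, f) for b, f in pairs if present(b)]
    if not at_risk:
        return None
    return sum(present(f) for _, f in at_risk) / len(at_risk)

# Example: prevalence of negative mood on re-assessment,
# defined as a Depression Rating Scale (DRS) score of 3+.
records = [
    {"client_id": 1, "assessment_type": "admission",    "drs_score": 4},
    {"client_id": 1, "assessment_type": "reassessment", "drs_score": 3},
    {"client_id": 2, "assessment_type": "reassessment", "drs_score": 1},
]
rate = follow_up_prevalence(records, flag=lambda a: a["drs_score"] >= 3)
print(rate)  # 0.5 — one of the two re-assessments has DRS >= 3
```

The same two functions cover all 27 preliminary QIs in principle; only the flag (numerator condition) and the pool of eligible assessments (denominator) change per indicator.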

Table 1. List of the 27 preliminary QIs reviewed by the expert panel, how they relate to the 6 themes identified in Phase I and which interRAI assessment can be used to generate the QI.

Phase III: Modified Delphi panel to evaluate preliminary QIs

The third phase utilized a modified Delphi panel based on the RAND/UCLA appropriateness method [57]. The main goal of the Delphi panel was to assess the level of agreement among a group of PC clinicians, researchers, and decision makers with respect to keeping or dropping any of the preliminary QIs. The Delphi method has been widely used in PC research [58]. Before the Delphi panel received the QI definitions and evaluation criteria, an Excel spreadsheet containing each of the QIs and the evaluation criteria (along with an information letter/consent form) was piloted for feedback with two researchers on our team, who had expertise in the Delphi process and a strong understanding of the goals of our study, and with a small group of graduate students (roughly 15–20) representing “non-experts,” to ensure that the materials were clear and the instructions easy to follow. The students were a mix of master’s and PhD students, all completing degrees within the School of Public Health Sciences at the University of Waterloo.

Decision makers who took part in an interview or focus group during Phase I were eligible to participate in the Delphi panel. Panel participants provided written informed consent prior to their participation. They were asked to evaluate each QI on four criteria: 1) Importance: the extent to which the indicator reflects an important outcome or issue for those receiving PC or their caregivers; 2) Validity: the degree to which the indicator truly captures some aspect of the quality of care (at a population level, not for an individual); 3) Evidence of improved outcomes: evidence that improvement in the indicator can have a positive effect on the individual; and 4) Usability: the extent to which the QI can be readily interpreted and used to improve care delivery. These criteria were based on previous research [59] and were rated on a scale of 1 to 9 (1 = low; 9 = high), as per the RAND/UCLA method. The evaluation spreadsheet also asked for input on the definition of each numerator/denominator. Finally, participants were given the opportunity to provide open-ended feedback on each QI, as well as space to suggest any additional QIs they felt were missing from the list, regardless of whether they could be measured with existing interRAI data.

Participants were given six weeks to complete the documents. If documents were not returned two weeks after the initial deadline, an individual reminder e-mail was sent, asking them to be returned within two weeks. After that time, if they still were not returned, a final reminder phone call was made. Any documents not returned then were considered a non-response and no further contact was made with the participant.

Determining consensus among raters

As outlined in the RAND/UCLA manual [57], the process to determine the level of agreement among the panel members involves multiple steps.

  1. Step 1 determined whether there was “disagreement” or “agreement” for each of the four criteria. This involved several calculations to arrive at the value of the interpercentile range adjusted for asymmetry (IPRAS). This method is suited to panel sizes larger than 15 and was therefore appropriate for our panel (n = 21) [57]. An example has been provided (S1 File) that outlines how agreement/disagreement was determined for each of the four criteria.
  2. Step 2 assessed the median rating for each of the four criteria in conjunction with the agreement result from Step 1. Each criterion was assigned to one of three mutually exclusive groups, “discard,” “retain,” or “review,” based on the work of Nakazawa et al. [60]. A criterion was “discarded” when its median was between 1 and 3 and there was agreement in Step 1; “retained” when its median was between 7 and 9 and there was agreement; and marked for “review” when its median was between 4 and 6, or when there was disagreement in Step 1 regardless of the median.
  3. Step 3 assigned each QI to one of the same three groups, “discard,” “retain,” or “review,” based on its four criterion-level ratings from Step 2. If any criterion was rated “discard,” the QI was discarded. If three or four of the criteria were rated “retain,” the QI was kept. A QI was flagged for review in two scenarios: when exactly two criteria were rated “retain,” or when two or more criteria were rated “review.” It should be noted that the research team used the Delphi results as a guide to support decision-making, but also exercised discretion when making the final decisions about whether to keep a QI for further consideration.
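The three steps above can be sketched as follows. The constants 2.35 (the IPR limit) and 1.5 (the asymmetry correction factor) are taken from the RAND/UCLA manual [57]; the percentile interpolation and all function names are assumptions of this sketch, not the study’s actual code.

```python
from statistics import median

def percentile(sorted_vals, p):
    """Simple linear-interpolation percentile (an assumption of this sketch;
    the RAND/UCLA manual specifies its own percentile convention)."""
    k = (len(sorted_vals) - 1) * p
    lo, hi = int(k), min(int(k) + 1, len(sorted_vals) - 1)
    return sorted_vals[lo] + (sorted_vals[hi] - sorted_vals[lo]) * (k - lo)

def has_agreement(ratings):
    """Step 1: agreement unless the 30th-70th interpercentile range (IPR)
    exceeds the IPR adjusted for asymmetry (IPRAS)."""
    vals = sorted(ratings)
    p30, p70 = percentile(vals, 0.30), percentile(vals, 0.70)
    ipr = p70 - p30
    iprcp = (p30 + p70) / 2              # central point of the IPR
    ipras = 2.35 + 1.5 * abs(5 - iprcp)  # wider tolerance for skewed panels
    return ipr <= ipras

def classify_criterion(ratings):
    """Step 2: map one criterion's panel ratings to discard/retain/review."""
    med, agree = median(ratings), has_agreement(ratings)
    if agree and med <= 3:
        return "discard"
    if agree and med >= 7:
        return "retain"
    return "review"                      # median 4-6, or disagreement

def classify_qi(criteria_ratings):
    """Step 3: combine the four criterion-level labels for one QI."""
    labels = [classify_criterion(r) for r in criteria_ratings]
    if "discard" in labels:
        return "discard"
    if labels.count("retain") >= 3:
        return "retain"
    return "review"                      # exactly 2 retains, or 2+ reviews

# A QI rated highly and consistently on all four criteria is retained:
unanimous = [[8, 9, 8, 7, 9, 8, 8, 9, 7, 8, 9, 8, 8, 9, 8, 9, 8, 8, 9, 8, 8]] * 4
print(classify_qi(unanimous))  # retain
```

Note that a high median alone is not enough: a criterion rated 9 by half the panel and 1 by the other half has a wide IPR, fails the IPRAS test, and falls into “review” despite its median.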


Results

A total of 33 individuals were invited to take part in the Delphi panel. They represented members of our research team (n = 12) and other experts whom team members suggested we approach (n = 21). Of the 33 evaluation spreadsheets sent to the Delphi panel members, 21 were completed, for a response rate of 63.6%. This group of 21 participants included three individuals who had also provided input during Phase I. Among those who did not respond, one person felt that they did not have the necessary expertise to complete the evaluations, and the remainder (n = 11) did not respond after repeated emails. Six individuals consented and completed the demographic questionnaire but ultimately did not return an evaluation. They were very similar to respondents in terms of age (mean = 53.8; sd = 5.9), gender (female = 66.7%), and years of experience working in PC (>10 years = 66.7%). These individuals came from a variety of backgrounds, including research (n = 3), nursing (n = 2), and medicine (n = 1).

The Delphi participants were mostly female (71.4%), with a mean age of 46.3 years (sd = 7.3) and the majority had been working in the area of PC for more than 10 years (66.7%; Table 2). The largest proportion (38.1%) came from Ontario, but there were also representatives from British Columbia, Alberta, Nova Scotia and Yukon Territory and two experts from outside of Canada. The majority had a clinical background in nursing or medicine (61.9%), with the remainder involved in PC research or working in the field in the role of a health care administrator or policy maker.

Table 2. Demographic characteristics of individuals who participated in the expert panel.

Of the 27 preliminary QIs that were evaluated, 20 (74.1%) were classified as “retain” and the remainder as “review” (Table 3). None of the proposed QIs had scores that would put them into the “discard” category. The team decided to keep all QIs that the Delphi panel suggested be retained. Since only seven QIs were classified as “review,” a second Delphi panel was deemed unnecessary. Instead, an online meeting was held with the research team, who provided feedback on these QIs. The final decision was to drop five of them from further consideration, mainly due to concerns about whether they truly captured the quality of PC services. Panel members also suggested new QIs for the team to consider, on topics such as timely access to PC services, satisfaction with care, place of death/preferred place of death, and details of the treatment for certain issues (e.g., depression, anxiety). Since there were no interRAI data elements to capture these suggestions, no additional QIs were developed.

Table 3. Summary of scores from the expert panel and final decision for each of the proposed QIs.

The modified Delphi panel resulted in 22 QIs kept for further testing and validation. Among the QIs capturing clinical issues, those with the highest importance scores related to pain, shortness of breath, and delirium. Those with the highest importance scores in the “psychosocial” area captured caregiver distress, mood, and loneliness. Three other QIs were kept relating to hospital or ED use and advance directives.


Discussion

To our knowledge, this is the first project to recommend a set of standardized QIs for community-based PC using existing interRAI data. The proposed set of 22 QIs was developed through a rigorous, multi-year process involving many stakeholders and researchers from across Canada and two other countries. The QIs explicitly capture the issues cited as important by those receiving PC, their families, and those working in the field. The proposed QIs can be measured with existing interRAI instruments, currently used in more than 30 countries around the world. This allows for cross-country comparisons, which have previously been completed using the QIs for nursing homes [61]. Using the existing interRAI data is an efficient and cost-effective use of this information and avoids the additional effort that would be required if quality were assessed through detailed chart reviews [37] or additional surveying of staff, individuals receiving PC, and families.

While it is important to understand the validity of individual indicators, it is also important to evaluate the content validity of the set of QIs as a whole. Several criteria have been proposed with which to carry out this evaluation [62]. For example, it is important that the QIs adequately cover the depth and breadth of the content of interest. One way to assess this criterion is to compare our QI set to the six key themes discussed during Phase I. For five of these themes, we were able to develop at least one QI. However, we could not create any QIs related to the sixth theme, home as the preferred place of death, since the interRAI tools lack the data elements to capture it.

A second criterion of content validity is proportional representation: whether the number of QIs in each domain matches the importance of that domain in the assessment of PC quality. We treated each of the six themes as equally relevant since we had no other information with which to judge their importance. However, given the type of clinical data captured within the interRAI assessments, it was clearly easier to create QIs addressing theme four, symptom management, than the other themes. Another criterion is the cost of measurement, which relates to the burden of data collection on providers. In this regard, these preliminary QIs have a very low cost since they are calculated using existing data and no additional data collection is required. During the Delphi panel, the QIs were rated in terms of importance, providing insight into the criterion that captures the priority or ranking of the QIs. We feel the list does not include redundant QIs, another important criterion, since none of the QIs were suggested to be dropped from the original list during the Delphi process; all 27 preliminary indicators were rated highly enough to warrant further consideration. Finally, the development process addressed stakeholder involvement well. The QIs were developed with input from multiple stakeholders, which strengthens confidence in their content validity. We were, however, limited to input from only one individual with lived experience who was receiving PC, despite nearly a year of effort to recruit other care recipients from across Canada.

Although the interRAI data represent a very rich source of information, the study team was limited to creating QIs that could be measured using existing items within the two interRAI assessments and was unable to create QIs reflecting some salient issues mentioned during the qualitative interviews, many of which have also been cited in the literature as important aspects of assessing the quality of PC. For example, we were unable to create QIs to assess issues related to communication between the person, their family, and members of the health care team [63], the use of hospice/PC services [64], the extent to which the person’s wishes were met [64], and access to resources and services. We were also unable to determine what specific treatments were received for certain clinical issues, such as depression and anxiety [18], or how satisfied people were with the PC services they received [65]. This information is important to assess as part of ongoing quality improvement efforts, although it is not directly germane to the clinical rationale for the interRAI assessments, which is care planning. As a result, this type of information would have to be captured through alternative means.

The proposed QIs provide an efficient means to capture key quality issues using existing interRAI data. Since the interRAI assessments are used widely in Canada, and in multiple other countries, these data provide a cost-effective source of information for testing and validating QIs. Although their main function is to assess the overall care needs of the individual in order to drive care planning, interRAI assessments are also useful for case-mix measurement [66–69] and quality assessment [39, 70–72]. In Canada, public reporting on select QIs already exists for long-term care homes [73], and those receiving PC and their families deserve a similar level of transparency. The proposed QIs can contribute to improvements in quality by providing detailed information to individual care providers (e.g., home care agencies) to drive internal continuous quality improvement efforts. The QIs can also provide a solid basis for quality benchmarking and learning when organizations are compared at a systems level. Finally, the QI data can be used to educate consumers and to guide health care policy.

The next steps in this project will involve analyzing the existing interRAI data to understand the properties of these QIs (e.g., the magnitude of the issue, the level of variation between geographic regions) as part of the ongoing validation process to decide which ones should be kept in the final list. Our team has access to approximately 3.7 million home care assessments from five provinces and one territory [50]. In addition, the study team has access to approximately 110,000 interRAI PC assessments, which are completed with palliative home care clients in Ontario only. We also plan to create client-level risk adjusters for each of the proposed QIs to account for differences in risk factors across patient populations [74–76]. This step is very important since complex illnesses, multiple co-existing conditions and case-mix differences can influence the QI measures, irrespective of quality. Organizations that provide care to more impaired individuals will tend to have higher unadjusted rates, regardless of the quality of care they provide [74]. Risk adjustment methods are therefore needed to maximize the ability to make fair comparisons between providers [77].
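The indirect-standardization logic behind this kind of risk adjustment [74, 75] can be sketched in a few lines. This is a minimal illustration only, not the project’s actual method: the function name, the client records and all probabilities below are invented, and in practice the expected values would come from a statistical model fitted on the interRAI risk adjusters.

```python
# Minimal sketch of indirect standardization for a QI rate.
# All data are hypothetical; real expected probabilities would come
# from a risk-adjustment model built on interRAI assessment items.

def adjusted_rate(clients, overall_rate):
    """Adjusted rate = (observed / expected) * overall population rate.

    Each client record holds:
      'triggered': 1 if the client triggered the QI, else 0
      'expected':  modeled probability of triggering, given risk factors
    """
    observed = sum(c["triggered"] for c in clients) / len(clients)
    expected = sum(c["expected"] for c in clients) / len(clients)
    return (observed / expected) * overall_rate

# A provider serving frailer-than-average clients (high expected
# probabilities) has a high raw rate that falls after adjustment.
clients = [
    {"triggered": 1, "expected": 0.60},
    {"triggered": 0, "expected": 0.50},
    {"triggered": 1, "expected": 0.70},
    {"triggered": 1, "expected": 0.60},
]
raw = sum(c["triggered"] for c in clients) / len(clients)  # 0.75
adj = adjusted_rate(clients, overall_rate=0.40)            # ~0.50
```

Here the provider’s raw rate of 0.75 drops to roughly 0.50 once the frailty of its caseload is taken into account, which is the kind of fair comparison that risk adjustment is meant to enable.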

As a result of this work, we have identified a set of 22 validated palliative care QIs capturing multiple issues that are important to individuals receiving PC, families and decision makers. This work fills an important gap: many other sectors of the health care system in Canada have access to interRAI-based QIs to assist in decision support and quality improvement [70, 71, 78, 79], but this has been lacking in the PC sector. Once the QIs are finalized, they can be readily embedded into existing software systems for use by the Canadian provinces and health authorities that use the interRAI assessments. They can also be calculated in other countries that use these interRAI tools (e.g., the 21 countries using the home care instrument). The final set of QIs will be useful for benchmarking performance across different subpopulations of interest, such as health planning/funding regions. The QIs will also provide community-based PC providers, and health system and policy decision makers, with real-time data to support them in targeting their quality improvement efforts and evaluating client outcomes.

Supporting information

S1 Table. Operational definitions for each of the 27 quality indicators (QIs).


S1 File. An example of applying the interpercentile range adjusted for asymmetry (IPRAS).



Acknowledgments

The authors gratefully acknowledge Kate Fillmore, Laurel Gillespie, Christina Lawand and Michelle Peterson Fraser for their contributions to this project.


References

1. Meier DE. Increased access to palliative care and hospice services: opportunities to improve value in health care. The Milbank Quarterly. 2011;89(3):343–80. pmid:21933272
2. Alberta Health Services. Palliative and End-of-life Care: Alberta Provincial Framework. Calgary, AB: 2014.
3. Roulston E. Canadians’ View of Palliative Care: National Online Survey. Toronto, ON: Canadian Partnership Against Cancer/Ipsos Public Affairs, 2016.
4. Health Quality Ontario. Palliative care at the end of life. Toronto: Health Quality Ontario, 2016.
5. Gagnon B, Nadeau L, Scott S, Dumont S, MacDonald N, Aubin M, et al. The association between home palliative care services and quality of end-of-life care indicators in the province of Quebec. Journal of Pain and Symptom Management. 2015;50(1):48–58. pmid:25656325
6. Seow H, Sutradhar R, McGrail K, Fassbender K, Pataky R, Lawson B, et al. End-of-life cancer care: Temporal association between homecare nursing and hospitalizations. Journal of Palliative Medicine. 2016;19(3):263–70. pmid:26673031
7. Cohen J, Hermans K, Dupont C, Van den Block L, Deliens L, Leemans K. Nationwide evaluation of palliative care (Q-PAC study) provided by specialized palliative care teams using quality indicators: Large variations in quality of care. Palliative Medicine. 2021;35(8):1525–41. pmid:34053348
8. Walker H, Anderson M, Farahati F, Howell D, Librach SL, Husain A, et al. Resource use and cost of end-of-life/palliative care: Ontario adult cancer patients dying during 2002 and 2003. Journal of Palliative Care. 2011;27(2):79–88. pmid:21805942
9. Sussman J, Barbera L, Bainbridge D, Howell D, Yang J, Husain A, et al. Health system characteristics of quality care delivery: a comparative case study examination of palliative care for cancer patients in four regions in Ontario, Canada. Palliative Medicine. 2012;26(4):322–35. pmid:21831915
10. Barbera L, Paszat L, Chartier C. Indicators of poor quality end-of-life cancer care in Ontario. Journal of Palliative Care. 2006;22(1):12–7. pmid:16689410
11. Sussman J, Barbera L, Bainbridge D, Howell D, Yang J, Husain A, et al. Health system characteristics of quality care delivery: a comparative case study examination of palliative care for cancer patients in four regions in Ontario, Canada. Palliative Medicine. 2011;0(00):1–14. pmid:21831915
12. Grunfeld E, Lethbridge L, Dewar R, Lawson B, Paszat L, Johnston G, et al. Towards using administrative databases to measure population-based indicators of quality of end-of-life care: Testing the methodology. Palliative Medicine. 2006;20:769–77. pmid:17148531
13. Cohen J, Leemans K. How can you prove that you are delivering good care? Monitoring the quality of palliative care using quality indicators. European Journal of Palliative Care. 2014;21(5):228–31.
14. Zimmerman DR. Improving nursing home quality of care through outcomes data: the MDS quality indicators. International Journal of Geriatric Psychiatry. 2003;18(3):250–7. pmid:12642895
15. Currow DC, Allingham S, Yates P, Johnson C, Clark K, Eagar K. Improving national hospice/palliative care service symptom outcomes systematically through point-of-care data collection, structured feedback and benchmarking. Supportive Care in Cancer. 2015;23(2):307–15. pmid:25063272
16. Ettorchi-Tardy A, Levif M, Michel P. Benchmarking: A method for continuous quality improvement in health. Healthcare Policy. 2012;7:e101–e19. pmid:23634166
17. Raijmakers N, Galushko M, Domeisen F, Beccaro M, Hagelin CL, Lindqvist O, et al. Quality indicators for care of cancer patients in their last days of life: literature update and experts’ evaluation. Journal of Palliative Medicine. 2012;15(3):308–16. pmid:22324541
18. Claessen SJJ, Francke AL, Belarbi HE, Pasman RW, van der Putten MJA, Deliens L. A new set of quality indicators for palliative care: process and results of the development trajectory. Journal of Pain and Symptom Management. 2011;42(2):169–82. pmid:21429703
19. Leemans K, Cohen J, Francke AL, Stichele RV, Claessen SJJ, Van den Block L, et al. Towards a standardized method of developing quality indicators for palliative care: protocol of the Quality indicators for Palliative Care (Q-PAC) study. BMC Palliative Care. 2013;12(6). pmid:23394401
20. Lorenz KA, Rosenfeld K, Wenger N. Quality indicators for palliative and end-of-life care in vulnerable elders. Journal of the American Geriatrics Society. 2007;55(S2):S318–S26. pmid:17910553
21. Dy SM, Kiley K, Ast K, Lupu D, Norton S, Mcmillan SC, et al. Measuring what matters: top-ranked quality indicators for hospice and palliative care from the American Academy of Hospice and Palliative Medicine and Palliative Nurses Association. Journal of Pain and Symptom Management. 2015;49(4):773–81. pmid:25697097
22. De Schreye R, Houttekier D, Deliens L, Cohen J. Developing indicators of appropriate and inappropriate end-of-life care in people with Alzheimer’s disease, cancer or chronic obstructive pulmonary disease for population-level administrative databases: A RAND/UCLA appropriateness study. Palliative Medicine. 2017;31(10):932–45. pmid:28429629
23. van Riet Paap J, Vernooij-Dassen M, Droes R, Radbruch L, Vissers K, Engels Y. Consensus on quality indicators to assess the organisation of palliative cancer and dementia care applicable across national healthcare systems and selected by international experts. BMC Health Services Research. 2014;14:396. pmid:25228087
24. D’Angelo D, Mastroianni C, Vellone E, Alvaro R, Casale G, Latina R, et al. Palliative care quality indicators in Italy. What do we evaluate? Supportive Care in Cancer. 2012;20:1983–9. pmid:22105162
25. Woitha K, Van Beek K, Ahmed N, Hasselaar J, Mollard JM, Colombet I, et al. Development of a set of process and structure indicators for palliative care: the Europall project. BMC Health Services Research. 2012;12(1):381. pmid:23122255
26. Bone AE, Evans CJ, Etkind SN, Sleeman KE, Gomes B, Aldridge M, et al. Factors associated with older people’s emergency department attendance towards the end of life: a systematic review. European Journal of Public Health. 2019;29(1):67–74. pmid:30481305
27. Earle CC, Park ER, Lai B, Weeks JC, Ayanian JZ, Block S. Identifying potential indicators of the quality of end-of-life cancer care from administrative data. Journal of Clinical Oncology. 2003;21(6):1133–8. pmid:12637481
28. Barbera L, Seow H, Sutradhar R, Chu A, Burge F, Fassbender K, et al. Quality of end-of-life cancer care in Canada: a retrospective four-province study using administrative health care data. Current Oncology. 2015;22(5):341–55. pmid:26628867
29. World Health Organization. Global Atlas of Palliative Care at the End of Life. Geneva, Switzerland: World Health Organization, 2014.
30. Lynn J. Living long in fragile health: the new demographics shape end of life care. Hastings Center Report. 2005;35(7):S14–S8. pmid:16468250
31. Rocker G, Downar J, Morrison S. Palliative care for chronic illness: driving change. Canadian Medical Association Journal. 2016;188:e493–e8. pmid:27551031
32. Carpenter GI, Hirdes JP. Using interRAI assessment systems to measure and maintain quality in long term care. A Good Life in Old Age? Monitoring and Improving Quality in Long-term Care. Paris, France: OECD/European Commission; 2013. p. 93–139.
33. Steel K, Jonsson P, DuPasquier JN, Gilgen R, Hirdes JP, Schroll M, et al. Systems of care for frail older persons. Transactions of the American Clinical and Climatological Association. 1999;110:30–7. pmid:10344004
34. Heckman G, Gray LC, Hirdes JP. Addressing health care needs for frail seniors in Canada: The role of interRAI instruments. Canadian Geriatrics Society Journal of CME. 2013;3.
35. van der Roest HG, van Eenoo L, van Lier LI, Onder G, Garms-Homolova V, Smit JH, et al. Development of a novel benchmark method to identify and characterize best practices in home care across six European countries: design, baseline, and rationale of the IBenC project. BMC Health Services Research. 2019;19(1):310. pmid:31092244
36. Foebel AD, van Hout HP, van der Roest HG, Topinkova E, Garms-Homolova V, Frijters D, et al. Quality of care in European home care programs using the second generation interRAI Home Care Quality Indicators (HCQIs). BMC Geriatrics. 2015;15:148. pmid:26572734
37. Chartier LB, Ovens H, Hayes E, Davis B, Calder L, Schull M, et al. Improving quality of care through a mandatory provincial audit program: Ontario’s emergency department return visit quality program. Annals of Emergency Medicine. 2021;77(2):193–202. pmid:33199045
38. Mor V, Angelelli J, Jones R, Roy J, Moore T, Morris JN. Inter-rater reliability of nursing home quality indicators in the US. BMC Health Services Research. 2003;3:20. pmid:14596684
39. Berg K, Mor V, Morris J, Murphy K, Moore T, Harris Y. Identification and evaluation of existing nursing homes quality indicators. Health Care Financing Review. 2002;23(4):19–36. pmid:12500468
40. Karon SL, Sainfort F, Zimmerman D. Stability of nursing home quality indicators over time. Medical Care. 1999;37(6):570–9.
41. Zimmerman DR, Karon SL, Arling G, Ryther Clark B, Collins T, Ross R, et al. Development and testing of nursing home quality indicators. Health Care Financing Review. 1995;16(4):104–27. pmid:10151883
42. Harman LE, Guthrie DM, Cohen J, Declercq A, Fisher K, Goodridge D, et al. Potential quality indicators for seriously ill home care clients: a cross-sectional analysis using Resident Assessment Instrument for Home Care (RAI-HC) data for Ontario. BMC Palliative Care. 2019;18:3. pmid:30626374
43. Guthrie DM, Harman LE, Barbera L, Burge F, Lawson B, McGrail K, et al. Quality indicator rates for seriously ill home care clients: Analysis of Resident Assessment Instrument for Home Care Data in six Canadian provinces. Journal of Palliative Medicine. 2019;22(11):1346–56. pmid:31094608
44. Williams N, Boumans N, Luymes N, White NE, Lemonde M, Guthrie DM. What should be measured to assess the quality of community-based palliative care? Results from a collaborative expert workshop. Palliative and Supportive Care. 2021;1–7. pmid:34154690
45. Luymes N, Williams N, Garrison L, Goodridge D, Silveira M, Guthrie DM. “The system is well intentioned, but complicated and fallible”: Interviews with caregivers and decision makers about palliative care in Canada. BMC Palliative Care. 2021;20:149. pmid:34551748
46. Ahluwalia SC, Chen C, Raaen L, Motala A, Walling AM, Chamberlin M, et al. A Systematic Review in Support of the National Consensus Project Clinical Practice Guidelines for Quality Palliative Care, Fourth Edition. Journal of Pain and Symptom Management. 2018;56(6):831–70. pmid:30391049
47. Pasman HRW, Brandt HE, Deliens L, Francke AL. Quality indicators for palliative care: a systematic review. Journal of Pain and Symptom Management. 2009;38(1):145–56. pmid:19615636
48. Cohen SR, Leis R. What determines the quality of life for terminally-ill cancer patients from their own perspective? Journal of Palliative Care. 2002;18(1):48–58. pmid:12001403
49. National Consensus Project for Quality Palliative Care. Clinical practice guidelines for quality palliative care, third edition. Pittsburgh, PA: 2013.
50. Canadian Institute for Health Information. Profile of Clients in Home Care, 2019–2020. 2021.
51. Hogeveen SE, Chen J, Hirdes JP. Evaluation of data quality of interRAI assessments in home and community care. BMC Medical Informatics and Decision Making. 2017;17(1):150. pmid:29084534
52. Martin L, Poss JW, Hirdes JP, Jones RN, Stones MJ, Fries BE. Predictors of a new depression diagnosis among older adults admitted to complex continuing care: implications for the Depression Rating Scale (DRS). Age and Ageing. 2008;37(1):51–6. pmid:18033777
53. Fries BE, Simon SE, Morris JN, Flodstrom C, Bookstein FL. Pain in US nursing homes: validating a pain scale for the Minimum Data Set. The Gerontologist. 2001;41(2):173–9. pmid:11327482
54. Morris JN, Fries BE, Mehr DR, Hawes C, Mor V, Lipsitz L. MDS Cognitive Performance Scale. Journals of Gerontology. Series A, Biological Sciences and Medical Sciences. 1994;49(4):M174–M82. pmid:8014392
55. Guthrie DM, Williams N, Beach C, Maxwell CJ, Mills D, Mitchell L, et al. Development and validation of Caregiver Risk Evaluation (CaRE): A new algorithm to screen for caregiver burden. Journal of Applied Gerontology. 2021;40:731–41. pmid:32456510
56. Poss J, Murphy KM, Woodbury MG, Orsted H, Stevenson K, Williams G, et al. Development of the interRAI Pressure Ulcer Risk Scale (PURS) for use in long-term care and home care settings. BMC Geriatrics. 2010;10(1):67. pmid:20854670
57. Fitch K, Bernstein SJ, Aguilar MD, Burnand B, LaCalle JR, Lazaro P, et al. The RAND/UCLA appropriateness method user’s manual. Santa Monica, CA: 2001.
58. Junger S, Payne SA, Brine J, Radbruch L, Brearley SG. Guidance on Conducting and REporting DElphi Studies (CREDES) in palliative care: Recommendations based on a methodological systematic review. Palliative Medicine. 2017;31(8):684–706. pmid:28190381
59. Seow H, Snyder CF, Mularski RA, Shugarman LR, Kutner JS, Lorenz KA, et al. A framework for assessing quality indicators for cancer care at the end of life. Journal of Pain and Symptom Management. 2009;38(6):903–12. pmid:19775860
60. Nakazawa Y, Kato M, Yoshida S, Miyashita M, Morita T, Kizawa Y. Population-based quality indicators for palliative care programs for cancer patients in Japan: A Delphi study. Journal of Pain and Symptom Management. 2016;51(4):652–61. pmid:26674609
61. Jensdottir AB, Rants M, Hjaltadottir I, Guomundsdottir H, Rook M, Grando V. International comparison of quality indicators in United States, Icelandic and Canadian nursing facilities. International Nursing Review. 2003;50:79–84. pmid:12752906
62. Schang L, Blotenberg I, Boywitt D. What makes a good quality indicator set? A systematic review of criteria. International Journal for Quality in Health Care. 2021;33(3). pmid:34282841
63. Yabroff KR, Mandelblatt JS, Ingham J. The quality of medical care at the end-of-life in the USA: Existing barriers and examples of process and outcome measures. Palliative Medicine. 2004;18:202–16. pmid:15198133
64. Schenck AP, Rokoske FS, Durham D, Cagle JG, Hanson LC. Quality measures for hospice and palliative care: Piloting the PEACE measures. Journal of Palliative Medicine. 2014;17:769–75. pmid:24921162
65. Seow H, Bainbridge D, Bryant D, Guthrie D, Urowitz S, Zwicker V, et al. The CaregiverVoice survey: A pilot study surveying bereaved caregivers to measure the caregiver and patient experience at end of life. Journal of Palliative Medicine. 2016;19(7):712–9. pmid:27254096
66. Bjorkgren MA, Fries BE, Shugarman LR. Testing a RUG-III based case-mix system for home care. Canadian Journal on Aging. 2000;19(Suppl 2):106–25.
67. Fries BE, James ML, Martin L, Head MJ, Park PS. A Case-Mix System for Adults with Developmental Disabilities. Health Services Insights. 2019;12:1178632919856011. pmid:31263374
68. Fries BE, Schneider D, Foley WJ, Gavazzi M, Burke R, Cornelius E. Refining a case-mix measure for nursing homes: Resource Utilization Groups (RUG-III). Medical Care. 1994;32(7):668–85.
69. Guthrie DM, Poss JW. Development of a case-mix funding system for adults with combined vision and hearing loss. BMC Health Services Research. 2013;13:137. pmid:23587314
70. Hirdes JP, Fries BE, Morris JN, Ikegami N, Zimmerman D, Dalby DM, et al. Home care quality indicators (HCQIs) based on the MDS-HC. The Gerontologist. 2004;44(5):665–79. pmid:15498842
71. Perlman C, Hirdes JP, Barbaree H, Fries BE, McKillop I, Morris JN, et al. Development of mental health quality indicators (MHQIs) for inpatient psychiatry based on the interRAI mental health assessment. BMC Health Services Research. 2013;13(15):1–12. pmid:23305286
72. Morris JN, Berg K, Topinkova E, Gray LC, Schachter E. Developing quality indicators for in-patient post-acute care. BMC Geriatrics. 2018;18(1):161. pmid:29996767
73. Canadian Institute for Health Information. Your Health System: Potentially Inappropriate Use of Antipsychotics in Long-Term Care. Canadian Institute for Health Information; 2019 [cited Sept. 29, 2021].
74. Jones RN, Hirdes JP, Poss JW, Kelly M, Berg K, Fries BE, et al. Adjustment of nursing home quality indicators. BMC Health Services Research. 2010;10(96):1–8. pmid:20398304
75. Dalby DM, Hirdes JP, Fries BE. Risk adjustment methods for home care quality indicators (HCQIs) based on the minimum data set for home care. BMC Health Services Research. 2005;5:7. pmid:15656901
76. Dalby DM, Hirdes JP. The relationship between agency characteristics and quality of home care. Home Health Care Services Quarterly. 2008;27(1):59–74. pmid:18510199
77. Iezzoni LI. Risk adjusting rehabilitation outcomes: an overview of methodologic issues. American Journal of Physical Medicine and Rehabilitation. 2004;83(4):316–26. pmid:15024335
78. Hirdes JP, van Everdingen C, Ferris J, Franco-Martin M, Fries BE, Heikkilä J, et al. The interRAI suite of mental health assessment instruments: An integrated system for the continuum of care. Frontiers in Psychiatry. 2020;10. pmid:32076412
79. Norton PG, Murray M, Doupe MB, Cummings GG, Poss JW, Squires JE, et al. Facility versus unit level reporting of quality indicators in nursing homes when performance monitoring is the goal. BMJ Open. 2014;4(2):e004488. pmid:24523428