Abstract
Introduction
In many low-income countries, the delivery of quality health services is hampered by health system-wide barriers that are often interlinked; however, empirical evidence on how to assess the level and scope of these barriers is scarce. A balanced scorecard is a tool that allows for a wider analysis of domains that are deemed important in achieving the overall vision of the health system. We present the quantitative results of the 12-month follow-up study applying the balanced scorecard approach in the BHOMA intervention, with the aim of demonstrating the utility of the balanced scorecard in evaluating multiple building blocks in a trial setting.
Methods
The BHOMA study is a cluster randomised trial that aims to strengthen the health system in three rural districts in Zambia. The intervention aims to improve the quality of clinical care by implementing practical tools that establish clear clinical care standards through intensive clinic implementations. This paper reports the findings of the follow-up health facility survey conducted after 12 months of intervention implementation. Comparisons were made between facilities in the intervention and control sites. Stata version 12 was used for analysis.
Results
The study found significant mean differences between intervention (I) and control (C) sites in the following domains: training (mean I:C; 87.5 vs. 61.1, mean difference 23.3, p = 0.031), adult clinical observation (mean I:C; 73.3 vs. 58.0, mean difference 10.9, p = 0.02) and health information (mean I:C; 63.6 vs. 56.1, mean difference 6.8, p = 0.01). There were no gender differences in adult service satisfaction. Governance and motivation scores did not differ between control and intervention sites.
Citation: Mutale W, Stringer J, Chintu N, Chilengi R, Mwanamwenge MT, Kasese N, et al. (2014) Application of Balanced Scorecard in the Evaluation of a Complex Health System Intervention: 12 Months Post Intervention Findings from the BHOMA Intervention: A Cluster Randomised Trial in Zambia. PLoS ONE 9(4): e93977. https://doi.org/10.1371/journal.pone.0093977
Editor: Waldemar A. Carlo, University of Alabama at Birmingham, United States of America
Received: June 20, 2013; Accepted: February 27, 2014; Published: April 21, 2014
Copyright: © 2014 Mutale et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: The study was funded by the Doris Duke Charitable Foundation, http://www.ddcf.org/. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Introduction
In many low-income countries, delivery of quality health services is hampered by system-wide barriers which are often interlinked and whose contribution to outcomes is difficult to establish [1], [2]. It is therefore important that health managers and researchers recognise this and use methods and approaches that take into account the complexity and connectedness across health system building blocks [1], [3], [4]. Some researchers have argued that part of the problem with the health systems debate and research is that it tends to adopt a reductionist perspective that ignores the complexity of the health system [5]. There are now calls for a paradigm shift in the way interventions are designed and evaluated [1]. Emphasis should be paid not only to outcomes but also to the processes leading to the observed outcomes [1], [6]. It has been recognised that taking a more comprehensive view that expands and challenges the status quo is more likely to provide lessons on what works and why [2], [7]–[9]. However, despite these recent advances in thinking around health systems, very few studies have empirically addressed these complexities in their design and interpretation of findings. A recent systematic review showed that many evaluations of complex interventions are too narrow and lack a system-wide approach [10].
An approach such as the balanced scorecard allows for a comprehensive analysis of domains that are deemed important in achieving the overall vision of the health system [11], [12]. The balanced scorecard is a strategic management tool first proposed by Robert Kaplan and David Norton in 1992 [13]. It provides information on areas of strategic importance to guide future planning, and also serves as a snapshot of how well an organisation or system is performing [14]. It is made up of domains and indicators, derived from the strategic vision of an organisation, that are used to measure its performance [15], [16].
Although the use of the balanced scorecard in health care is being advocated, its application has been mostly limited to high-income countries [13], [16]–[20]. The World Health Organisation has recently recommended the use of the balanced scorecard in monitoring and evaluating the health system building blocks [21]. Studies that have applied the balanced scorecard have argued for adopting the approach in evaluating health system interventions, demonstrating that the methodology has the potential to guide investments aimed at improving health systems, especially in low-income countries [20]–[24]. The advantage of a balanced scorecard is that it keeps the focus on the overall vision while examining the processes deemed important in achieving the overall goal [13], [15]. Crucially, the balanced scorecard approach provides a means for researchers and health system managers to evaluate complex interventions [11].
Edward et al. (2011) modified the original balanced scorecard to make it more applicable to low-income-country health care settings, highlighting six important domains for measuring health system strengthening [22]. Work done in Bangladesh by Khan et al. (2012) has highlighted the central role that balanced scorecard approaches could play in identifying barriers and facilitators of health system interventions, and how data collection guided by the balanced scorecard at health facility level could improve decision making [18]. In our recent publication, we applied the balanced scorecard approach to describe the baseline status of the three BHOMA intervention districts in Zambia [25]. We reported on the applicability of the balanced scorecard in Zambian health care settings and the implications for evaluating health system interventions targeting the Millennium Development Goals. In this paper we extend this work by presenting preliminary findings after 12 months of implementation of the BHOMA intervention. The BHOMA study is a cluster randomised stepped wedge study of interventions aiming to strengthen the health system in three rural districts of Zambia. The evaluation of the BHOMA intervention utilises both qualitative and quantitative approaches. In this paper, we present the quantitative results of the follow-up study applying the balanced scorecard approach as described at baseline [25]. Qualitative results are presented elsewhere [26].
This study seeks to contribute to the generation of empirical evidence in health system research by utilising an innovative approach that offers an opportunity to assess the multiple domains that exist in complex health systems.
Methodology
The BHOMA study is a cluster randomised community intervention that aims to strengthen the health system in three rural districts covering 42 health facilities in Zambia with a total population of 306,000.
The study has a stepped wedge design in which the intervention is being rolled out gradually until all 42 health facilities receive it. The unit of randomisation is the health facility and its catchment population. The study has an integrated package of interventions at both health facility and community level. The impact of the interventions is being measured using selected endpoints, including the standardised mortality rate in the population under 60 years and under-five mortality. The evaluation data are being collected through community and health facility surveys. This paper focuses on the results of the health facility survey conducted in 2012, when 24 clusters were in the intervention phase and 18 in the control phase.
The protocol for this trial and supporting CONSORT checklist are available as supporting information; see Protocol S1 and Checklist S1.
Intervention Design
The BHOMA intervention is part of the African Health Initiative, which aims to improve population health in five sub-Saharan African countries [27]. The intervention commenced in April 2011, when the first set of health facilities received the intervention. All the health facilities are expected to receive the intervention by mid-2013, and the final evaluation of the BHOMA intervention will take place in 2014. To ensure objective evaluation, the BHOMA study is made up of two independent teams: the implementation is being done by the Centre for Infectious Disease Research in Zambia (CIDRZ), while the Zambia AIDS Related Tuberculosis project (ZAMBART) is evaluating the project. The teams work closely with each other and with the Ministry of Health at national and district level.
The BHOMA intervention is made up of three primary strategies designed to work at different levels of the health system: the district, health facility and community strategies. The full methodology is described elsewhere [28], [29]. A summary of the three BHOMA strategies follows:
The District Intervention
In each of the three districts, one Quality Improvement (QI) team is introduced to implement the intervention in target health facilities. The order of implementation was determined at randomisation, and the QI teams follow this order when introducing the intervention in target health facilities. Each QI team consists of two nurses and one clinical officer. The teams work closely with the Ministry of Health.
The Health Facility Intervention
The health facility-based intervention aims to improve the quality of clinical care by implementing practical tools that establish clear clinical care standards, providing essential resources to meet these standards, and communicating standards through intensive clinic implementations. Each clinic generates self-assessment reports that help identify areas of weakness for further improvement with support from the quality improvement team. Leadership training is provided to health workers, targeting governance, finance, supply chain and human resource management. Staffing support consists of lay workers trained as “Clinic Supporters”. These lay workers are trained to assume as many non-clinical duties as possible, including registration of patients, filing, triaging, recording vital signs, fast-tracking urgent cases and routing patients through services.
The Community Intervention
The BHOMA project has engaged community health workers on a part-time basis. They are trained to provide preventive services and track missed clinic appointments. They work in collaboration with community health units known as Neighbourhood Health Committees (NHCs) and with Traditional Birth Attendants (TBAs). The community health workers are also being trained to capture and record local health data and send them to health facilities via mobile phone or in person.
Figure 1 gives a summary of the BHOMA intervention. The community strategy is expected to drive demand for health services, while the health facility strategy is expected to improve health worker skills, service quality and other health system building blocks. The intended overall effect of the intervention is improved health outcomes.
Sampling and Sample Size
There were 48 eligible health facilities in the three BHOMA districts. Six were used for piloting the intervention, and the remaining 42 health facilities were all included in the study (Figure 2). Sample size calculations for the community survey are reported elsewhere [28]. This paper focuses on the health facility surveys.
Randomisation and Rollout Plans
The 42 health facilities were randomised to the order in which they would receive the intervention in a stepped wedge fashion, until all receive the intervention. Six facilities were randomised to start the intervention at each step, and each step took three months to implement (Figure 3).
Randomisation was done by a statistician from London School of Hygiene and Tropical Medicine who had no prior knowledge of the study sites. Randomisation was stratified by district.
Evaluation Design
Baseline survey.
A baseline survey was conducted at the beginning of the intervention in 2011. A balanced scorecard was applied to rank the performance of the 42 target health facilities. The results of the baseline study have been reported elsewhere [25].
Follow-up study.
A 12-month follow-up health facility survey was conducted in the 42 health facilities between May and September 2012. Appointments were made with managers before the research team visited each health facility. At each health facility a number of questionnaires were administered, targeting health facility managers, health workers and patients. All the study tools were interviewer-administered except for the governance tool, which was self-administered. At each health facility, the officer in-charge and two other health workers were interviewed. Five observations of adult clinical encounters were done, irrespective of the presenting complaint. Five observations of child clinical encounters were done, with children eligible if they were under five years old and presenting with fever, cough or diarrhoea. Similarly, five exit interviews for adults and five for under-five child/guardian pairs were done, following the same approach described at baseline [25]. For the specific tools and the calculation of domain scores, refer to Tools S1, S2, S3, S4, S5, S6, S7.
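The exact items and scoring rules for each domain are defined in Tools S1. Purely as an illustration of the form such a score takes, a domain score can be computed in Stata as the mean of a facility's indicator items rescaled to 0–100; the variable names below (item1–item10, domain_score) are hypothetical and not taken from the study tools.

* Illustrative sketch only; the actual scoring rules are in Tools S1.
* Assumes hypothetical indicator items item1-item10, each coded 0/1 per facility.
egen domain_raw = rowmean(item1-item10)    // mean of available items, 0-1 scale
generate domain_score = 100 * domain_raw   // rescale to a 0-100 domain score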
Data collection was conducted by the evaluation team, composed of a team leader (a medical doctor and epidemiologist) and 18 research assistants with medical backgrounds. Data collectors were trained for five days on how to administer the study tools.
Data Analysis
Data were double-entered into an Access database and exported to Stata version 12 for analysis. Simple frequencies were used to explore the data. Comparisons were made between intervention and control facilities, stratified by district and by time in the intervention. We examined the effect of the intervention by time in the intervention to determine whether there was a dose-response relationship. Linear regression was done to determine the correlations between measures of quality for children and adults and the health system domains in the balanced scorecard [25]. We adjusted for the cluster design using Stata's estimation commands with the vce(cluster clustvar) option to obtain a robust variance estimate that adjusts for within-cluster correlation [30]. We also adjusted for baseline scores, district and catchment population. Time in the intervention was left out of the model because of collinearity.
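As a minimal sketch of this adjustment, using hypothetical variable names (score for a domain score, arm for intervention status, base_score for the baseline score, pop for catchment population, district, and facility_id as the cluster identifier), the adjusted comparison described above corresponds to a Stata call of the form:

* Hypothetical sketch: adjusted intervention-control comparison of a domain
* score, with cluster-robust standard errors at the health facility level.
regress score i.arm base_score pop i.district, vce(cluster facility_id)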
Ethical Considerations
The study was approved by the University of Zambia Bioethics Committee and the London School of Hygiene and Tropical Medicine Ethics Committee. All participants were informed about the purpose of the survey and were asked to sign a consent form before taking part in the study. Parents/guardians signed consent forms on behalf of their children. Those who could not write were asked to thumb-print the consent form in the presence of an independent observer. Confidentiality was ensured during data collection and in the subsequent publication of the results.
Results
Health Facility Demographic Characteristics
In total, 42 health facilities were randomly allocated to the intervention or control. At the time of follow-up, four steps of the intervention had been implemented: 24 health facilities were in the intervention phase (I), while 18 had not yet received the intervention and so were in the control phase (C). Of the health facilities that had received the intervention, 12 had been in the intervention phase for 3–6 months and 12 for 9–12 months (see Figure 3).
The majority of the health facilities were classified as rural (81% in Chongwe, 71% in Kafue and 57% in Luangwa). Two health facilities were part of mission hospitals (one in Chongwe and one in Luangwa), neither of which had received the intervention (see Table 1).
Comparisons of Intervention and Control Health Facilities
Mean scores were calculated for each domain in the balanced scorecard and are shown in Table 2. The major differences in mean scores between intervention (I) and control (C) health facilities were in the following domains: training (mean I:C; 87.5 vs. 61.1, mean difference 23.3, p = 0.031), adult clinical observation (mean I:C; 73.3 vs. 58.0, mean difference 10.9, p = 0.02) and health information (mean I:C; 63.6 vs. 56.1, mean difference 6.8, p = 0.003). These differences were statistically significant both before and after adjusting for baseline score, catchment population and district. In addition to the above domains, infection control and tracer drugs showed statistically significant differences after adjusting for baseline score, catchment population and district: infection control (mean I:C; 86 vs. 78, mean difference 9.1, p = 0.03) and tracer drugs (mean I:C; 80 vs. 77, mean difference 3.0, p = 0.05). Overall, there were no gender differences in adult service satisfaction between control and intervention sites. In addition, governance and motivation scores did not differ between control and intervention sites.
District Comparison of Intervention and Control Health Facilities
In Chongwe district, significant mean differences between intervention and control sites were reported in the training domain (mean I:C; 100 vs. 66.0) and the health information domain (mean I:C; 66.2 vs. 58). Higher mean scores in the intervention sites were also noted in the basic infrastructure domain (mean I:C; 81.0 vs. 73), the infection control domain (mean I:C; 89.3 vs. 84.1) and the adult clinical observation domain (mean I:C; 64.0 vs. 53.0); however, these differences were not statistically significant.
In Kafue district, higher mean scores in the intervention sites were reported in the infection control domain (mean I:C; 82.2 vs. 76.2), the health information domain (mean I:C; 60.8 vs. 55.7) and the adult clinical observation domain (mean I:C; 80.3 vs. 68.5). However, the differences were not statistically significant.
In Luangwa district, significant differences between intervention and control sites were reported in the training domain (mean I:C; 100 vs. 33.3), the infection control domain (mean I:C; 82.1 vs. 61.4) and the adult clinical observation domain (mean I:C; 68.3 vs. 51.1).
Dose Dependence Effect of the Intervention
We compared the effect of the intervention by time in the intervention phase. A possible dose effect of the intervention was noted in the training domain, which showed a mean increase from 61.1 in the control phase to 87.5 when the intervention had been in place for 3–6 months, and remained stable after the intervention had been in place for 9–12 months. The adult clinical observation domain showed a similar trend, rising from 58 in the control phase to 68 at 3–6 months and 72 at 9–12 months of intervention time. These differences were statistically significant (p<0.05). The basic equipment domain showed improvement soon after the intervention but deteriorated with time (mean 78 at 3–6 months, falling to 74 at 9–12 months). (See Table 3).
Linear Regression Model
Linear regression was done with the following dependent variables: adult clinical observation and service satisfaction scores, and children's clinical observation and service satisfaction scores. In addition to all the health system domains applied at baseline [25], an intervention variable was added to the model. The model was adjusted for baseline scores, catchment population and district. There was no difference in children's clinical observation score between the intervention and control sites. However, children's clinical observation score was correlated with service readiness (coef 1.2, p = 0.01) and, more weakly, with health worker motivation (coef 0.44, p = 0.09).
There was a statistically significant difference in adult clinical observation score between intervention and control sites (coef 23.29, p = 0.01). Other domains that correlated with adult clinical observation were laboratory capacity (coef 0.25, p = 0.04), training (coef 0.16, p = 0.07) and health information (coef 0.87, p = 0.01). There was no difference in adult satisfaction score between the intervention and control sites. However, adult satisfaction score was correlated with health information (coef 0.29, p = 0.02), service readiness (coef 0.34, p = 0.04), children's clinical observation (coef 0.14, p = 0.08) and children's satisfaction score (coef 0.23, p = 0.07). (See Table 4).
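As a minimal sketch of this model, again with hypothetical variable names (adult_obs for the adult clinical observation score, arm for intervention status, and one variable per domain score), the regression described above could take the following form in Stata:

* Hypothetical sketch: adult clinical observation score regressed on the
* intervention indicator and domain scores, adjusted for baseline score,
* catchment population and district, with facility-level clustering.
regress adult_obs i.arm lab_capacity training health_info base_score pop i.district, vce(cluster facility_id)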
Discussion
This study aimed to apply innovative approaches to evaluating a complex health system intervention and hence contribute to the generation of empirical evidence to guide health system strengthening investments [31]. Most of the current discussion in this area is at the level of framework or theory, and there is a lack of empirical data, especially from low-income countries [3], [32], [33]. Applying a system-wide approach in the form of the balanced scorecard allowed for a comprehensive analysis of the different domains of the health system and of how each was affected by the intervention [10], [34].
The results showed that the BHOMA intervention led to improvements in some domains of the balanced scorecard while other domains remained unaffected. Significant differences between intervention and control sites were seen only in the adult clinical observation, training and health information domains. These differences remained significant when the analysis was stratified by district. We acknowledge that these results are still interim, as our follow-up time ranged between 3 and 12 months only, with the last step of health facilities having had the intervention for just 3 months. Nonetheless, the results point to some positive effect of the BHOMA intervention regardless of study district, time in the intervention or baseline scores. We will be able to assess the full effect of the BHOMA intervention when the final assessment is made in 2014.
Interestingly, some domains such as health worker motivation, service satisfaction for children and adults and governance did not show differences between intervention and control sites despite the presence of the BHOMA intervention. This remained true even after adjusting for baseline scores and showed no evidence of dose dependence. It remains unclear why these domains did not respond to the intervention but the short observation time could partly explain this. Complex system theory acknowledges delays between cause and effect [35]. It will be interesting to see how these domains respond with longer intervention time. Other possible explanations have been explored in the qualitative component of the BHOMA evaluation reported elsewhere [26].
Linear regression analysis showed that the adult clinical observation score was one measure of service quality that showed statistically significant differences between control and intervention sites. This may be a more sensitive marker of the effect of the intervention, which could be useful when evaluating similar interventions aimed at strengthening complex health systems. The children's measures of quality did not show any significant difference between intervention and control sites, even after adjusting for catchment population and baseline scores. We reported at baseline that the children's measures of quality had lower scores compared with the adults' [25]. The current results suggest that child services might still be lagging behind adult services in the BHOMA intervention. This was attributed to the low number of health workers trained in the integrated management of childhood illnesses (IMCI) in most study sites. However, we also acknowledge the limitation reported by other studies done in low-income settings, which have shown that in-service training may not necessarily translate into behaviour change that supports quality improvement [36]. This might be the case in some of the domains that failed to show differences in the presence of the intervention, although further follow-up is required to confirm this.
Another lesson being learnt from the evaluation is that the effect of the intervention needs to be considered alongside contextual factors [37], [38], which were noted to affect the intervention positively or negatively. In our study, we noted that health facilities located in peri-urban areas, with larger catchment populations and high patient volumes, seemed to perform poorly in most domains despite the presence of the intervention. Their poor performance generally affected the scores across most domains in the intervention sites, as all the bigger health facilities had received the intervention. This observation was important: the effect of the intervention could not be guaranteed by simple randomisation; rather, context was a critical determinant of how well a site performed in the presence of the intervention. Detailed analysis of individual health facilities revealed that hospital-based health facilities strongly confounded the mean scores in the control sites: none had received the intervention, yet they still scored very highly in most domains at baseline [25] and at follow-up. In recent times, context has been recognised as an important factor that can affect even well-designed clinical trials, and there are currently efforts to standardise the collection of contextual information in clinical trials. Our findings agree with these observations and support efforts to have contextual data considered in understanding the mechanism of change in trial settings [39], [40].
The study had a number of limitations that must be considered when interpreting our findings. Firstly, the study was not powered to detect differences between sites or between different types of facilities, and we will therefore need to wait for the final evaluation before any further interpretation of these findings. Secondly, the time from implementation to this interim analysis was relatively short: facilities in the earliest intervention step had received the intervention for 12 months, while those in the last step had received it for only 3 months. This makes the comparison between control and intervention sites more complex, requiring cautious interpretation. We have tried to explore the effect of the intervention by time, or step, in the intervention; however, the results remained inconclusive. Concrete conclusions about the effect of the intervention must therefore await the endline evaluation, which includes a community survey.
Some study results were based on observation of health workers and how they performed their duties in a clinical setting. The fact that they were under observation could have altered their usual behaviour, positively or negatively, depending on what they perceived to be desirable [41], hence biasing the results of our study. Similarly, exit interviews with clients could be influenced by this form of information bias [42].
The study was done mainly in rural districts of Zambia, where health system challenges might differ from those in urban settings. Our findings are therefore most applicable to similar rural settings and may not be generalisable to urban settings. In addition, the study sites were fixed and limited to 42 health facilities, based on what was available in the selected districts. This resulted in small sample sizes, especially when performing analyses stratified by district; this was most pronounced in Luangwa district, which had only 7 health facilities.
Conclusion
These preliminary results show that the balanced scorecard approach can be useful in assessing the effects of complex public health interventions. In the evaluation of complex interventions such as the BHOMA, attention should be paid to context. Using system-wide approaches and triangulating data collection methods appear to be key to the successful evaluation of such complex interventions.
Supporting Information
Tools S1.
Calculation of health facility scores.
https://doi.org/10.1371/journal.pone.0093977.s001
(DOC)
Tools S2.
Adult Clinical observation checklist.
https://doi.org/10.1371/journal.pone.0093977.s002
(DOC)
Tools S3.
Children clinical observation checklist.
https://doi.org/10.1371/journal.pone.0093977.s003
(DOC)
Tools S7.
Service satisfaction for adults and children.
https://doi.org/10.1371/journal.pone.0093977.s007
(DOC)
Checklist S1.
Consort for cluster randomised Trial.
https://doi.org/10.1371/journal.pone.0093977.s008
(DOCX)
Author Contributions
Conceived and designed the experiments: WM JS NC RC MTM HA. Analyzed the data: WM JS NC RC MTM NK JL HA. Contributed reagents/materials/analysis tools: WM JS NC RC MTM NK JL HA DB NS. Wrote the paper: WM JS NC RC MTM NK JL HA DB NS.
References
- 1. Swanson RC, Cattaneo A, Bradley E, Chunharas S, Atun R, et al. (2012) Rethinking health systems strengthening: key systems thinking tools and strategies for transformational change. Health Policy Plan 27 (Suppl 4): iv54–61.
- 2. English M, Nzinga J, Mbindyo P, Ayieko P, Irimu G, et al. (2011) Explaining the effects of a multifaceted intervention to improve inpatient care in rural Kenyan hospitals–interpretation based on retrospective examination of data from participant observation, quantitative and qualitative studies. Implement Sci 6: 124.
- 3. Adam T, De Savigny D (2012) Systems thinking for strengthening health systems in LMICs: Need for a paradigm shift. Health Policy and Planning 27: iv1–iv3.
- 4. Agyepong IA, Ko A, Adjei S, Adam T (2012) When “solutions of yesterday become problems of today”: crisis-ridden decision making in a complex adaptive system (CAS), the Additional Duty Hours Allowance in Ghana. Health Policy and Planning 27.
- 5. Frenk J (2010) The global health system: strengthening national health systems as the next step for global progress. PLoS Medicine 7.
- 6. English M, Schellenberg J, Todd J (2011) Assessing health system interventions: key points when considering the value of randomization. Bull World Health Organ 89: 907–912.
- 7. Paina L, Peters DH (2012) Understanding pathways for scaling up health services through the lens of complex adaptive systems. Health Policy Plan 27: 365–373.
- 8. Adam T, Hsu J, de Savigny D, Lavis JN, Rottingen JA, Bennett S (2012) Evaluating health systems strengthening interventions in low-income countries: Are we asking the right questions? Health Policy and Planning 27.
- 9. Adam T, Hsu J, de Savigny D (2012) Systems thinking for strengthening health systems in LMICs: need for a paradigm shift. Health Policy and Planning 27.
- 10. Adam T, Hsu J, de Savigny D, Lavis JN, Rottingen JA, Bennett S (2012) Evaluating health systems strengthening interventions in low-income countries: Are we asking the right questions? Health Policy and Planning 27.
- 11. Yap C, Siu E, Baker GR, Brown AD (2005) A comparison of systemwide and hospital-specific performance measurement tools. Journal of Health Care Management 50: 4.
- 12. Kunz H, Schaaf T (2011) General and specific formalization approach for a Balanced Scorecard: An expert system with application in health care. Expert Systems with Applications 38: 1947–1955.
- 13. Kaplan RS, Norton DP (1992) The balanced scorecard–measures that drive performance. Harv Bus Rev 70: 71–79.
- 14. Gauld R, Al-wahaibi S, Chisholm J, Crabbe R, Kwon B, et al. (2011) Scorecards for health system performance assessment: the New Zealand example. Health Policy 103: 200–208.
- 15. Lupi S, Verzola A, Carandina G, Salani M, Antonioli P, et al. (2011) Multidimensional evaluation of performance with experimental application of balanced scorecard: a two year experience. Cost Eff Resour Alloc 9: 7.
- 16. Inamdar SN, Kaplan RS, Jones ML, Menitoff R (2000) The Balanced Scorecard: a strategic management system for multi-sector collaboration and strategy implementation. Qual Manag Health Care 8: 21–39.
- 17. Bisbe J, Barrubes J (2012) The Balanced Scorecard as a Management Tool for Assessing and Monitoring Strategy Implementation in Health Care Organizations. Rev Esp Cardiol. doi: 10.1016/j.recesp.2012.05.014.
- 19. Khan MM, Hotchkiss D, Dmytraczenko T, Zunaid Ahsan K (2012) Use of a Balanced Scorecard in strengthening health systems in developing countries: an analysis based on nationally representative Bangladesh Health Facility Survey. Int J Health Plann Manage. doi: 10.1002/hpm.2136.
- 20. Inamdar N, Kaplan RS, Bower M (2002) Applying the balanced scorecard in healthcare provider organizations. J Healthc Manag 47: 179–195 discussion 195–176.
- 21. Jeffs L, Merkley J, Richardson S, Eli J, McAllister M (2011) Using a nursing balanced scorecard approach to measure and optimize nursing performance. Nurs Leadersh (Tor Ont) 24: 47–58.
- 22. WHO (2010) Monitoring the building blocks of health systems: A handbook of indicators and their measurement strategies. Geneva: World Health Organization.
- 23. Edward A, Kumar B, Kakar F, Salehi AS, Burnham G, et al. (2011) Configuring balanced scorecards for measuring health system performance: evidence from 5 years’ evaluation in Afghanistan. PLoS Med 8: e1001066.
- 24. El-Jardali F, Saleh S, Ataya N, Jamal D (2011) Design, implementation and scaling up of the balanced scorecard for hospitals in Lebanon: policy coherence and application lessons for low and middle income countries. Health Policy 103: 305–314.
- 25. Bouland DL, Fink E, Fontanesi J (2011) Introduction of the Balanced Scorecard into an academic department of medicine: creating a road map to success. J Med Pract Manage 26: 331–335.
- 26. Mutale W, Godfrey-Fausset P, Mwanamwenge MT, Kasese N, Chintu N, et al. (2013) Measuring health system strengthening: application of the balanced scorecard approach to rank the baseline performance of three rural districts in Zambia. PLoS One 8: e58650.
- 27. Mutale W, Chintu N, Stringer JS, Chilengi R, Mwanamwenge MT, et al. (2013) Application of system thinking: Evaluation of the BHOMA health system strengthening intervention in Zambia.
- 28. Sherr K, Requejo J, Basinga P (2013) Implementation research to catalyze advances in health systems strengthening in sub-Saharan Africa: the African Health Initiative. BMC Health Services Research 13: S1.
- 29. Stringer J, Chisembele-Taylor A, Chibwesha C, Chi H, Ayles H, et al. (2013) Protocol-driven primary care and community linkages to improve population health in rural Zambia: the Better Health Outcomes through Mentoring and Assessment (BHOMA) project. BMC Health Services Research 13: S7.
- 30. Mutale W, Mwanamwenge TM, Chintu N, Stringer J, Balabanova D, et al. (2013) Application of system thinking concepts in health system strengthening in low income settings: A proposed conceptual framework for the evaluation of a complex health system intervention: The case of the BHOMA intervention in Zambia. Lusaka: ZAMBART.
- 32. Wooldridge JM (2010) Econometric Analysis of Cross Section and Panel Data, 2nd edition. Cambridge, MA: MIT Press.
- 33. Adam T, Hsu J, De Savigny D, Lavis JN, Rottingen JA, et al. (2012) Evaluating health systems strengthening interventions in low-income and middle-income countries: Are we asking the right questions? Health Policy and Planning 27: iv9–iv19.
- 34. WHO (2009) Systems Thinking for Health Systems Strengthening. Geneva: World Health Organization.
- 35. Atun R (2012) Health systems, systems thinking and innovation. Health Policy and Planning 27: iv4–iv8.
- 36. Adam T, Hsu J, de Savigny D (2012) Systems thinking for strengthening health systems in LMICs: need for a paradigm shift. Health Policy and Planning 27.
- 37. BeLue R, Carmack C, Myers KR, Weinreb-Welch L, Lengerich EJ (2012) Systems thinking tools as applied to community-based participatory research: a case study. Health Educ Behav 39: 745–751.
- 38. Wasunna B, Zurovac D, Bruce J, Jones C, Webster J, et al. (2010) Health worker performance in the management of paediatric fevers following in-service training and exposure to job aids in Kenya. Malar J 9: 261.
- 39. Byrne A, Morgan A, Soto EJ, Dettrick Z (2012) Context-specific, evidence-based planning for scale-up of family planning services to increase progress to MDG 5: health systems research. Reprod Health 9: 27.
- 40. de Savigny D, Webster J, Agyepong IA, Mwita A, Bart-Plange C, et al. (2012) Introducing vouchers for malaria prevention in Ghana and Tanzania: context and adoption of innovation in health systems. Health Policy Plan 27 (Suppl 4): iv32–43.
- 41. Grant A, Treweek S, Dreischulte T, Foy R, Guthrie B (2013) Process evaluations for cluster-randomised trials of complex interventions: a proposed framework for design and reporting. Trials 14: 15.
- 42. English M (2013) Designing a theory-informed, contextually appropriate intervention strategy to improve delivery of paediatric services in Kenyan hospitals. Implementation Science 8.
- 43. Horwitz O, Lysgaard-Hansen B (1975) Medical observations and bias. Am J Epidemiol 101: 391–399.
- 44. Van Ryckeghem DM, Crombez G, Goubert L, De Houwer J, Onraedt T, et al. (2013) The predictive value of attentional bias towards pain-related information in chronic pain patients: a diary study. Pain 154: 468–475.