
Risk-adjusted monitoring of surgical performance

  • Jianbo Li,

    Roles Conceptualization, Methodology, Software

    Affiliation School of Mathematics and Statistics, Jiangsu Normal University, Xuzhou, Jiangsu 221116, China

  • Jiancheng Jiang ,

    Roles Data curation, Formal analysis, Methodology, Software, Supervision, Writing – original draft

    jjiang1@uncc.edu (JJ); jiangxj@sustc.edu.cn (XJ)

    Affiliation Department of Mathematics and Statistics, University of North Carolina at Charlotte, Charlotte, NC 28223, United States of America

  • Xuejun Jiang ,

    Roles Conceptualization, Software, Writing – original draft

    jjiang1@uncc.edu (JJ); jiangxj@sustc.edu.cn (XJ)

    Affiliation Department of Mathematics, Southern University of Science and Technology, Shenzhen, Guangdong 518055, China

  • Lin Liu

    Roles Data curation, Software

    Affiliation The Research Center of Higher Education, Jiangsu Normal University, Xuzhou, Jiangsu 221116, China

Abstract

We propose a nonparametric risk-adjusted cumulative sum chart to monitor surgical outcomes for patients with different risks of post-operative mortality due to risk factors that exist before surgery. The risk adjustment is accomplished using varying-coefficient logistic regression models. Unknown coefficient functions are estimated by global polynomial spline approximation based on the maximum likelihood principle. We suggest a bisection minimization approach and a bootstrap method to determine the testing limit value of the chart. Compared with the previous (parametric) risk-adjusted cumulative sum chart, a major advantage of our method is that the mortality rate can be modeled more flexibly by related covariates, which significantly enhances the monitoring efficiency. Simulations demonstrate the good performance of our proposed procedure, and an application to a UK cardiac surgery dataset illustrates the use of our methodology.

Introduction

Monitoring surgical outcomes of clinical trials is a critical task for doctors to detect patient deterioration in a timely manner. However, this task becomes complicated when assessing and online monitoring surgical performance. It is known that surgical performance changes as patient characteristics, such as age, weight, blood pressure, and pulmonary status, vary. Therefore, the assessment and monitoring of surgical performance should be adjusted according to the patient’s characteristics existing prior to the surgery. This process is the so-called risk adjustment.

There are several methods for monitoring outcomes of surgery. Examples include the Shewhart chart, the sequential probability ratio test, the exponentially weighted moving average (EWMA), and the cumulative sum control chart (CUSUM). Recent studies have suggested using such schemes to monitor the performance of clinical practitioners, including surgical and general practitioners. Among all the control charts, CUSUM has received much attention since it was originally proposed [1], because of its simple formulation, intuitive representation, and capability to detect small persistent changes. CUSUM was first applied to surgical performance monitoring in [2]. It was then used for monitoring pediatric cardiac surgeries [3, 4], among others. For an overview, see references [5–7].

Consider a cardiac surgery example. During 1992–1998, a UK center for cardiac surgery performed 6994 cardiac operations, 5212 of them on male patients, and recorded information such as date, surgeon, and the Parsonnet score formed by age, gender, hypertension and diabetic status, renal function, and left ventricular mass [8]. Age varies between 11 and 99, with mean 62.5, median 64, and standard deviation 11. According to the records, 461 patients died within 30 days of surgery, corresponding to a mortality rate of 6.6%. Hence, there is a demand to assess and monitor the surgical performance in order to help doctors reduce the mortality rate and adjust the surgical plan in subsequent operations. One can use the CUSUM charts in [2–4] for this task. However, patients’ preoperative risk is not considered in these works. Such straightforward applications might lead to a biased assessment of surgical performance because of heterogeneity among patients. As indicated in [9], without risk adjustment for heterogeneity among patients, the control chart shows outcomes confounded with the preexisting risk factors. Hence, it is necessary to develop a risk-adjusted (RA) CUSUM for this example.

There are various works on the RA CUSUM in the literature using two classes of models, for discrete and continuous outcomes respectively. For discrete outcomes, examples include the RA CUSUM charts for Down’s syndrome which adjusted for the risk of the age of the mother using logistic regression [10], for shoulder surgery which adjusted for patients’ rehabilitation conditions [11], and for binary cardiac surgical outcomes which adjusted the risk using a likelihood-based scoring method [12]. In a discussion paper [13], advantages and disadvantages of various control charts including the RA CUSUM were discussed for monitoring health-care and public-health surveillance. The incremental advantage of the RA CUSUM was further assessed for coronary bypass outcomes [14] using the procedure in [8]. The RA CUSUM was also investigated in [15] for binary outcomes using logistic regression and the Bayesian method. For continuous outcomes, examples include the RA CUSUMs for survival times using the Cox model [16] and the accelerated failure time model [9], among others. However, all of these CUSUM charts perform risk adjustment based on parametric models.

To adjust for the risk factors in the UK cardiac surgery example, a linear logistic regression was used to model the relationship between the surgical outcome and the Parsonnet score [8]. Let t = 1, 2, … index the patients undergoing surgery in time order. The RA CUSUM chart is used to monitor their outcomes. Let Yt = 1 if patient t dies and Yt = 0 otherwise. Given the t-th patient’s outcome Yt and Parsonnet score Pt, the model takes the following form:

(1) logit(pt) = θ0 + θ1 Pt,

where pt is the mortality rate defined as P(Yt = 1|Pt), and logit(p) = log(p/(1 − p)) is the logit function. Then the conditional probability mass function of Yt given Pt is P(Yt = yt|Pt) = pt^yt (1 − pt)^(1−yt), yt ∈ {0, 1}, and the mortality odds of failure for patient t are pt/(1 − pt). Since different patients have different baseline risk levels, one needs to monitor the change of the odds ratio of patients. Let Rt be the mortality odds ratio of patient t. It is of interest to use the RA CUSUM to sequentially test [8]

H0: Rt = R0 versus HA: Rt = RA,

where R0 is typically the mortality odds ratio determined by the current process performance, and RA corresponds to an inferior performance. Mathematically, it can be verified that the probability of failure P(Yt = 1|Pt) equals R0 pt/(1 − pt + R0 pt) under H0 and RA pt/(1 − pt + RA pt) under HA. Hence, the log-likelihood ratio for patient t is [8]

(2) Wt = Yt log(RA/R0) + log((1 − pt + R0 pt)/(1 − pt + RA pt)).

Then the RA CUSUM statistics are defined as

(3) Zt = max(0, Zt−1 + Wt),

where Z0 = 0. When the value of Zt exceeds a certain threshold value h, a change has been detected and an alarm is signaled.
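The scores in Eq (2) and the chart in Eq (3) are straightforward to compute once estimates of the mortality rates pt are available. Below is a minimal sketch; the function name and the toy inputs are our own illustrations, not part of the paper.

```python
import numpy as np

def ra_cusum(y, p, r0=1.0, ra=2.0):
    """Risk-adjusted CUSUM for binary outcomes (upper-sided chart).

    y  : 0/1 outcomes in time order
    p  : estimated baseline mortality rates p_t for each patient
    r0 : in-control odds ratio (H0); ra : out-of-control odds ratio (HA)
    Returns the log-likelihood-ratio scores W_t and chart values Z_t.
    """
    y = np.asarray(y, float)
    p = np.asarray(p, float)
    # W_t = Y_t log(R_A/R_0) + log[(1 - p_t + R_0 p_t)/(1 - p_t + R_A p_t)]
    w = y * np.log(ra / r0) + np.log((1 - p + r0 * p) / (1 - p + ra * p))
    z = np.zeros(len(y))
    prev = 0.0
    for t in range(len(y)):
        prev = max(0.0, prev + w[t])  # Z_t = max(0, Z_{t-1} + W_t), Z_0 = 0
        z[t] = prev
    return w, z

# toy sequence: outcomes and estimated baseline risks for five patients
w, z = ra_cusum([0, 1, 0, 1, 1], [0.05, 0.10, 0.05, 0.20, 0.10], r0=1.0, ra=2.0)
```

For a surviving patient (Yt = 0) the score Wt is negative, so the chart drifts toward zero while performance is in control and climbs when excess deaths accumulate.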

The model parameter θ = (θ0, θ1)′, and hence pt, is estimated from the in-control dataset. Specifically, θ is estimated by maximizing the likelihood

L(θ) = ∏t pt^Yt (1 − pt)^(1−Yt),

where the product runs over the in-control sample. The threshold h is usually determined by the average run length (ARL); for details see the monitoring algorithm introduced later. Since {Zt} is driven by the underlying process of pt in model (1), the success of the RA CUSUM depends on whether the model for pt is appropriate.
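The maximization has no closed form but is a standard logistic regression fit. A self-contained sketch using Newton-Raphson on simulated in-control data; the data-generating values −3 and 0.06 are made-up illustrations, not estimates from the paper.

```python
import numpy as np

def fit_logistic(X, y, iters=25):
    """Maximum likelihood for logit(p) = theta' x via Newton-Raphson.
    X includes an intercept column; y holds 0/1 outcomes."""
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    theta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ theta))        # current fitted rates
        grad = X.T @ (y - p)                        # score vector
        hess = X.T @ (X * (p * (1 - p))[:, None])   # observed information
        theta += np.linalg.solve(hess, grad)        # Newton step
    return theta

# toy in-control data: mortality odds rise with a Parsonnet-like score
rng = np.random.default_rng(0)
score = rng.uniform(0, 50, 2000)
p_true = 1.0 / (1.0 + np.exp(-(-3.0 + 0.06 * score)))
y = rng.binomial(1, p_true)
theta_hat = fit_logistic(np.column_stack([np.ones(2000), score]), y)
```

In practice one would simply call `glm` in R (as the paper notes later) or an equivalent routine; the sketch only makes the likelihood maximization concrete.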

If the null hypothesis is rejected, it indicates a significant increase in the mortality rate. By using this method, the Parsonnet score was found to significantly affect the mortality rate, and it was also claimed [8] that this procedure could detect changes in surgical performance earlier than the non-adjusted CUSUM. However, this approach is based on the assumption in model (1) that the log odds ratio of mortality rate is a linear function of the Parsonnet score. This may create modeling bias if the underlying relationship is nonlinear. In particular, the modeling bias becomes more serious when there are interaction effects among the patient characteristics.

To alleviate the modeling-bias problem and to cope with possible interaction effects among the patient characteristics, we propose the following varying-coefficient logistic regression (VCLR) model:

(4) logit(pt) = μ + β(Ut)′Xt,

where Xt = (X1t, X2t, ⋯, Xqt)′, pt = P(Yt = 1|Xt, Ut), μ is the intercept term, Ut is a random variable, for example, any entry of Xt, and β(u) = (β1(u), β2(u), ⋯, βq(u))′ with the βi(u) being unknown functions. When β(u) is a constant vector, model (4) reduces to linear logistic regression, which includes model (1). Model (4) allows us to model a nonlinear relationship between pt and Xt. If Ut is one entry of Xt, it captures nonlinear interactions among the entries of Xt. For the UK cardiac surgery example, we take the t-th patient’s age as Ut and the Parsonnet score Pt as Xt. It is believed that the effect of a patient’s Parsonnet score depends on age, and it is interesting to investigate whether there is a nonlinear interaction effect between the Parsonnet score and age. Therefore, model (4) can be used to fit the UK cardiac surgery dataset.

We estimate β(⋅) by the maximum likelihood principle with a polynomial spline approximation in the next section. We then adjust the CUSUM statistics Zt in Eq (3) for monitoring the change of the odds ratio of patients. Since we use the nonparametric model (4) to adjust the risk, our proposed RA CUSUM is a nonparametric RA method. We propose a bisection search algorithm and a bootstrap method to determine the limit value of the nonparametric risk-adjusted CUSUM chart. Through simulations and a real data example, we illustrate the good performance and practical use of the proposed methodology.

Monitoring procedure

Polynomial spline approximation

Global polynomial spline approximation has become a popular tool in nonparametric smoothing. It has the advantages of good finite-sample performance and fast implementation. The function β(⋅) can be approximated by a linear combination of the basis splines (B-splines).

Assume that the random variable Ut has finite support [a, b]. Let r be the degree of the B-spline polynomial, and let ξ1 = ξ2 = ⋯ = ξr = a < ξr+1 < ξr+2 < ⋯ < ξr+N < b = ξr+N+1 = ξr+N+2 = ⋯ = ξ2r+N be the knots for the B-spline approximation, where the inner knots ξr+1, …, ξr+N are placed so that no subinterval is too short relative to the others. With d = r + N, we call ξr+1, …, ξr+N the inner knot points. The number of inner knots, N, is a tuning parameter and can be chosen by cross-validation [17] or generalized cross-validation (GCV) [18, 19]. We denote by B1(u), …, Bd(u) the B-spline basis functions based on this knot set. The B-spline basis functions enjoy the following properties [20]:

  1. Bj(u) = 0 for u ∉ [ξj, ξj+r];
  2. Bj(u) > 0 for u ∈ (ξj, ξj+r);
  3. Σj=1,…,d Bj(u) = 1 for any u ∈ [a, b], and the sum is 0 otherwise.

Consequently, for any 1 ≤ j ≤ d and any real u, we have Bj(u) ∈ [0, 1]. Given the knots, βi(u) can be approximated by

(5) βi(u) ≈ B(u)′θi,

where B(u) = (B1(u), B2(u), ⋯, Bd(u))′ and θi = (θi1, θi2, ⋯, θid)′. Let Vt = (B(Ut)′X1t, …, B(Ut)′Xqt)′ and θ = (θ1′, …, θq′)′. Then model (4) reduces to

(6) logit(pt) = μ + Vt′θ.

Therefore, the maximum likelihood estimation method can be directly used to fit model (6) with the in-control dataset. This estimation method is standard in nonparametric smoothing [19] and can be implemented via existing programs, for example, glm and bs in the R software.
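The basis functions B1(u), …, Bd(u) and their properties above can be checked numerically with the Cox-de Boor recursion. The knot vector below (cubic splines on [0, 1] with one inner knot) is an illustrative choice, not the one used in the paper.

```python
import numpy as np

def bspline_basis(u, knots, r):
    """Evaluate all B-spline basis functions of degree r at the point u,
    given the full clamped knot vector, via the Cox-de Boor recursion."""
    knots = np.asarray(knots, float)
    # degree-0 basis: indicators of the half-open knot intervals
    b = np.where((knots[:-1] <= u) & (u < knots[1:]), 1.0, 0.0)
    if u >= knots[-1]:  # include the right endpoint in the last interval
        last = np.nonzero(knots[:-1] < knots[1:])[0][-1]
        b[last] = 1.0
    for k in range(1, r + 1):
        nb = np.zeros(len(b) - 1)
        for j in range(len(nb)):
            v = 0.0
            if knots[j + k] > knots[j]:            # guard repeated knots
                v += (u - knots[j]) / (knots[j + k] - knots[j]) * b[j]
            if knots[j + k + 1] > knots[j + 1]:
                v += (knots[j + k + 1] - u) / (knots[j + k + 1] - knots[j + 1]) * b[j + 1]
            nb[j] = v
        b = nb
    return b

# cubic splines (r = 3) on [0, 1] with one inner knot at 0.5
knots = [0, 0, 0, 0, 0.5, 1, 1, 1, 1]
basis = bspline_basis(0.25, knots, 3)
```

Property 3 (partition of unity) can be verified by summing the returned vector at any u in [0, 1]; in practice one would use a packaged implementation such as bs in R.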

CUSUM monitoring

Based on the in-control observations, we obtain μ̂ and θ̂, the maximum likelihood estimates of the general mean μ and the B-spline coefficient vector θ in model (6). Then the ith functional coefficient βi(u) can be estimated by β̂i(u) = B(u)′θ̂i. This leads to the estimate of the mortality rate pt, p̂t = exp(μ̂ + Vt′θ̂)/(1 + exp(μ̂ + Vt′θ̂)). Using Eq (2) we obtain the estimate of the log-likelihood ratio Wt, denoted by Ŵt, and by Eq (3) we calculate the estimate of Zt, denoted by Ẑt. Let h be the limit value of this testing procedure. If Ẑt > h, our RA CUSUM chart triggers a signal, which indicates that the mortality rate has increased.

The limit value h plays a critical role in CUSUM chart monitoring. It is usually determined by virtue of the in-control average run length (ARL), denoted by ARL0, which is the expected number of steps from the start of the process to the time a signal is triggered. Optimality of CUSUM in terms of the ARL was studied in [21, 22]. A better chart has a longer ARL0 and a shorter ARL1 (the ARL when the process is out of control). Hence, given a large enough ARL0, the optimal limit value h can be determined. Usually, the optimal limit value h has no closed form, so we use a numerical search method, bisection, to determine it. This approach needs predetermined upper and lower bounds for h.

Let L be the run length of the monitoring process from the start until a signal is triggered. Then L is a random variable, and the ARL is its expectation, namely ARL = E(L). The calculation of the ARL is critical in determining h. The ARL is a theoretical value, so it must be estimated in practice, and it is usually impractical to obtain a large in-control sample for this purpose. In the next section we propose a bootstrap resampling method to achieve this goal.
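One way to estimate the ARL by resampling is sketched below. For simplicity it resamples in-control scores Wt directly rather than whole bootstrap samples of patients, which is a simplifying assumption relative to the paper's procedure; all names are our own.

```python
import numpy as np

def bootstrap_arl(w_pool, h, K=200, horizon=5000, rng=None):
    """Estimate the in-control ARL at limit h by resampling CUSUM scores
    with replacement from an in-control pool and averaging run lengths."""
    if rng is None:
        rng = np.random.default_rng(0)
    runs = []
    for _ in range(K):
        z, t = 0.0, 0
        while t < horizon:          # cap each resampled run at `horizon`
            t += 1
            z = max(0.0, z + rng.choice(w_pool))
            if z > h:               # signal: record the run length
                break
        runs.append(t)
    return float(np.mean(runs))
```

Averaging over K resampled run lengths gives a Monte Carlo estimate of ARL = E(L) at any candidate limit h, which the bisection search below the algorithm can then query.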

For a given ARL0, we can obtain the limit value h through the above procedure. Thus, the CUSUM chart in Eq (3) can be used to monitor the surgical process in the follow up. That is, for the t-th patient, Zt > h indicates an abnormal observation, i.e., the process is out of control. In such a case, surgeons should check where and why the process becomes out of control. In addition, the monitoring efficiency can be assessed by calculating ARL1, similar to that of ARL0.

Monitoring algorithm

Suppose h ∈ [a, b] with a > 0. Given ARL0, we propose the following chart-monitoring algorithm:

  • Phase I. Determination of the optimal limit value hopt.
    1. Draw K bootstrap samples from the in-control sample. For each bootstrap sample k, use the procedure in the previous section to calculate the RA CUSUM chart statistics and, for a candidate limit value h, the realized run length L(k)(h). The ARL at h is then estimated by the average of the K realized run lengths; denote this estimate by ARL(h).
    2. Use the bisection method to decide the value of h:
      (i). Set h = a and calculate ARL(a). It is required that ARL(a) < ARL0; otherwise, this can be achieved by choosing a smaller value of a.
      (ii). Set h = b and calculate ARL(b). It is required that ARL(b) > ARL0; otherwise, this can be achieved by choosing a larger value of b.
      (iii). Set h = (a + b)/2 and calculate ARL(h).
      (iv). Given a positive integer M, if ARL(h) > ARL0 and |ARL(h) − ARL0| > M, set b = (a + b)/2; if ARL(h) < ARL0 and |ARL(h) − ARL0| > M, set a = (a + b)/2. In practice, the pre-assigned positive integer M ranges from 2 to 5, so that the ARL at hopt quickly approaches the nominal ARL0 within this error tolerance.
      (v). Repeat steps (i)-(iv) until |ARL(h) − ARL0| ≤ M. Then the optimal value can be taken as hopt = h.
  • Phase II. Monitoring Phase.
    With the optimal value hopt, we calculate the RA CUSUM in Eq (3) based on the estimated varying-coefficient logistic regression model (4). Let Ẑt be the estimate of the CUSUM statistic. When Ẑt > hopt, a signal is triggered and monitoring is stopped.
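The Phase I bisection can be sketched generically: given any monotone estimator of the ARL as a function of the limit h (in the paper, the bootstrap estimate), halve the bracket until the estimated ARL is within the tolerance M of ARL0. The analytic stand-in passed to `bisect_limit` in the usage line replaces the bootstrap estimate purely for illustration.

```python
def bisect_limit(arl_hat, a, b, arl0, tol_m=3, max_iter=60):
    """Bisection search for the limit h with arl_hat(h) close to ARL0.
    arl_hat must be nondecreasing in h (higher limits give longer runs)."""
    assert arl_hat(a) < arl0 < arl_hat(b)   # steps (i) and (ii)
    for _ in range(max_iter):
        h = 0.5 * (a + b)                   # step (iii)
        arl = arl_hat(h)
        if abs(arl - arl0) <= tol_m:        # step (v): within tolerance M
            return h
        if arl > arl0:                      # step (iv): target lies below
            b = h
        else:
            a = h
    return 0.5 * (a + b)

# illustrative monotone stand-in for a bootstrap ARL estimate
h_opt = bisect_limit(lambda x: 200.0 * x, a=1.0, b=10.0, arl0=900.0, tol_m=3)
```

With the stand-in, the search settles near h = 4.5, where the surrogate ARL matches the nominal 900 within the tolerance.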

In our experience, the bootstrap-based bisection method for determining h performs well and stably. Other numerical approaches can be used; for example, one can search for a proper value on a given grid of points, or use a theoretical distribution of the run length to determine h. However, the theoretical method cannot be expected to perform stably, since it depends on the assumed distribution. Hence, we use the above method in our numerical study.

The proposed monitoring method can be modified to detect a decrease in the mortality odds ratio by the RA CUSUM chart Zt = min(0, Zt−1 + Wt) with Z0 = 0 and RA < 1 in Eq (2). The above algorithm can be updated for this decrease monitoring, with a signal triggered when Ẑt < hopt (where hopt < 0).
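The lower-sided chart can be sketched the same way, accumulating with min instead of max; the function name and toy scores are our own illustrations.

```python
import numpy as np

def ra_cusum_decrease(w, h):
    """Lower-sided RA CUSUM: Z_t = min(0, Z_{t-1} + W_t), Z_0 = 0.
    Signals at the first t with Z_t < h, where the limit h is negative."""
    z, prev = [], 0.0
    for wt in w:
        prev = min(0.0, prev + wt)
        z.append(prev)
    # index of the first chart value below the (negative) limit, if any
    signal = next((t for t, zt in enumerate(z) if zt < h), None)
    return np.array(z), signal

z_dec, sig = ra_cusum_decrease([-1.0, -2.0, 1.0, -3.0], h=-4.0)  # toy scores
```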

Numerical studies

In this section, we conduct simulations to demonstrate the good performance of the proposed approach and use the UK cardiac surgery data to illustrate the use of our RA CUSUM chart.

Simulation

The objective of our simulations is to compare our approach with that of Steiner et al. [8]. We conduct 500 simulations. For each simulation, we set the sample size to n = 3000, where the first third of the observations are used as the in-control process with n1 = 1000 and ARL0 = 900, and the remaining two-thirds are used for the out-of-control process with sample size n2 = 2000.

We use cubic B-splines with N inner knots to approximate β(u). The inner knots are set as equally spaced sample quantiles of the Ut. We regard N as a tuning parameter, chosen by minimizing the value of ARL1. We evaluate the estimator of β(⋅) by its root mean squared error (RMSE), RMSE = sqrt{(1/ngrid) Σk (β̂(uk) − β(uk))^2}, over a grid of points u1, …, ungrid.
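The RMSE criterion can be computed directly from an estimated and a true coefficient function on a grid; this is a small sketch with illustrative names.

```python
import numpy as np

def rmse(beta_hat, beta_true, grid):
    """Root mean squared error of an estimated coefficient function,
    evaluated against the truth over a grid of points."""
    diff = beta_hat(grid) - beta_true(grid)
    return float(np.sqrt(np.mean(diff ** 2)))

grid = np.linspace(-0.5, 0.5, 101)
# a perfect estimate of beta(u) = 0.5 cos(pi u) has zero error
err = rmse(lambda u: 0.5 * np.cos(np.pi * u),
           lambda u: 0.5 * np.cos(np.pi * u), grid)
```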

Example 1 (Varying-coefficient models) We generate data from the following varying-coefficient logistic regression model: logit(pt) = μ + β(Ut)Xt, where Ut ∼ U(−0.5, 0.5), Xt is uniformly distributed on the set {0, 1, ⋯, 20}, and β(u) = 0.5 cos(πu). For the in-control process, μ = −3, and for the out-of-control process, μ = −3 + log RA, where RA equals 0.3, 0.5, 0.8, 1.5, 2, 2.5, 3, 3.5, or 4. These values of RA are used to monitor the decrease (RA < 1) and increase (RA > 1) in the mortality odds ratio.
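Data from the Example 1 model can be generated as follows; the function and variable names are our own.

```python
import numpy as np

def simulate_vc(n, mu, rng):
    """Draw (Y_t, X_t, U_t) from the Example 1 varying-coefficient model:
    logit(p_t) = mu + beta(U_t) X_t with beta(u) = 0.5 cos(pi u)."""
    u = rng.uniform(-0.5, 0.5, n)
    x = rng.integers(0, 21, n).astype(float)      # uniform on {0, ..., 20}
    eta = mu + 0.5 * np.cos(np.pi * u) * x
    p = 1.0 / (1.0 + np.exp(-eta))
    y = rng.binomial(1, p)
    return y, x, u

rng = np.random.default_rng(1)
y_ic, x_ic, u_ic = simulate_vc(1000, mu=-3.0, rng=rng)                # in control
y_oc, x_oc, u_oc = simulate_vc(2000, mu=-3.0 + np.log(2.0), rng=rng)  # RA = 2
```

Shifting the intercept by log RA doubles the mortality odds here, so the out-of-control stream exhibits a visibly higher death rate.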

Fig 1 shows the boxplot of the RMSEs and a typical estimate of β(u), where the typical estimate corresponds to the one with median performance in terms of RMSE over the 500 simulations. It indicates that our estimate is quite close to the true curve. Table 1 summarizes the monitoring results of [8] and ours, where “V-C logistic” stands for the results from the RA CUSUM chart based on the varying-coefficient logistic regression model (4), and “Linear Logistic” represents the results from the standard logistic model (1). Both the mean and variance of ARL1 based on the varying-coefficient logistic model (4) are significantly smaller than those based on the linear logistic model (1). This indicates that our proposed RA CUSUM chart outperforms the RA CUSUM chart of [8].

Fig 1. Left panel: The boxplot of RMSEs; right panel: A typical estimate of β(u), solid—True, dashed—Typical estimate.

https://doi.org/10.1371/journal.pone.0200915.g001

Figs 2 and 3 display the RA CUSUM charts with RA = 2.0 (increase) and RA = 0.3 (decrease), respectively. For our RA CUSUM, using the monitoring algorithm, we obtain the values of hopt as h1 = 4.2969 for increase monitoring and h2 = −4.9805 for decrease monitoring. For the RA CUSUM of Steiner et al., the values of hopt are calculated as h1 = 4.6876 for increase monitoring and h2 = −5.7813 for decrease monitoring. Fig 2 shows that our procedure for increase monitoring first triggers a signal at time t = 122, and the linear logistic regression-based CUSUM triggers a signal at time t = 176; for decrease monitoring, the times triggering a signal for our procedure and for that of Steiner et al. are t = 140 and t = 194, respectively. This shows that our procedure triggers a signal much earlier than that of Steiner et al. In addition, we can also conclude from the chart fluctuation that our procedure performs more stably than that of Steiner et al.

Fig 2. CUSUM charts with RA = 2.0.

Left panel—ours; right panel—[8].

https://doi.org/10.1371/journal.pone.0200915.g002

Fig 3. CUSUM charts with RA = 0.3.

Left panel—ours; right panel—[8].

https://doi.org/10.1371/journal.pone.0200915.g003

Example 2 (Constant coefficient models) Same as Example 1, but with β(u) = 0.5. In this example, since the underlying process has the form of model (1), the CUSUM method of Steiner et al. [8] should work. Since our model (4) contains model (1), the two RA CUSUM charts perform similarly, as expected. The simulation results are reported in Table 2. Since the two procedures have similar values of ARL1 in terms of the mean and standard deviation (Std), the two charts are comparable.

Real data analysis

Here we monitor the surgical performance of the UK cardiac centre using the RA CUSUM charts of [8] and ours. Some details of this dataset were described in the introduction. For our procedure, we employ the varying-coefficient logistic model

(7) logit(pt) = μ + β(Ut)Pt,

where Pt is the t-th patient’s Parsonnet score and Ut is the patient’s age, which is an extension of the models previously used [8, 14]. As in [8], we treat the data from 1992–1993 as the in-control process and begin monitoring in 1994.

We use the approximation in Eq (5) for the coefficient function β(⋅), and the optimal number of knots is calculated as 5. Fig 4 displays the fitted curve of β(⋅). It seems that the effect of the Parsonnet score depends nonlinearly on age. In particular, the Parsonnet score strongly correlates with the mortality rate in patients aged less than 20 years.

To assess the actual performance of the proposed procedure, we compare it with the method of [8] for detecting 1.5 times the odds of death (R0 = 1, RA = 1.5) and half the odds of death (R0 = 1, RA = 0.5). By the bisection and bootstrap methods, the values of hopt for the RA CUSUM charts based on models (4) and (1) are 3.43 and 3.50 for the increase detection, and −4.16 and −4.22 for the decrease detection, respectively. Figs 5 and 6 plot the RA CUSUM charts. As shown in the figures, the step number triggering a signal for the increase monitoring is 1572 for our RA CUSUM and 1584 for that of [8]. For the decrease monitoring, the corresponding step numbers are 2921 and 2998, respectively. In all cases, the resulting values of ARL0 for both methods are 2000. These results show that our proposed procedure can detect an abnormal signal much earlier in the decrease monitoring.

Fig 5. CUSUM charts with RA = 1.5.

Left panel—ours; right panel—[8].

https://doi.org/10.1371/journal.pone.0200915.g005

Fig 6. RA CUSUM charts with RA = 0.5.

Left panel—ours; right panel—[8].

https://doi.org/10.1371/journal.pone.0200915.g006

Conclusion

In this paper, we have proposed a nonparametric RA CUSUM chart based on the varying-coefficient logistic regression model for monitoring surgical outcomes. Maximum likelihood with a cubic B-spline approximation has been used for estimation. Bisection and bootstrap methods have been incorporated to determine the testing limit values. Numerical studies show the advantages of our method over that of [8].

The relationship between the covariates and the mortality rate is usually unknown in applications. Thus, other nonparametric or semiparametric regression models may be employed to capture this relationship. The proposed RA monitoring procedure can be extended to other control charts and/or mixtures of several control charts, such as charts based on Bayesian approaches [7] and combinations of EWMA and CUSUM charts [23–29].

Acknowledgments

We would like to thank Dr. Tom Treasure at Guy’s Hospital, St. Thomas Street, London, Greater London, SE1 9RT U.K. and Dr. Landon Sego at Pacific Northwest National Laboratory, Richland, WA, U.S.A. for access to the cardiac surgery data (E-mail: Landon.Sego@pnl.gov). Jianbo Li is supported by NSFC (Grant 11571148), Priority Academic Program Development (PAPD) of Jiangsu Higher Education Institutions, and Qinglan Project in Jiangsu. Xuejun Jiang is supported by Natural Science Foundation of Guangdong province of China (2017A030313012) and Shenzhen Sci-Tech Fund (JCYJ20170307110329106).

References

  1. Page ES. Continuous inspection schemes. Biometrika. 1954;41:100–114.
  2. Williams SM, Parry BJ, Schlup MM. Quality control: an application of the CUSUM. British Medical Journal. 1992;304(6838):1359–1361. pmid:1611337
  3. de Leval MR, Francois K, Bull C, Brawn W, Spiegelhalter DJ. Analysis of a cluster of surgical failures: application to a series of neonatal arterial switch operations. The Journal of Thoracic and Cardiovascular Surgery. 1994;107:914–924. pmid:8127123
  4. Steiner SH, Cook RJ, Farewell VT. Monitoring paired binary surgical outcomes using cumulative summation charts. Statistics in Medicine. 1999;18:69–86. pmid:9990693
  5. Grigg O, Farewell V. An overview of risk-adjusted charts. Journal of the Royal Statistical Society Series A (Statistics in Society). 2004;167(3):523–539.
  6. Biau DJ, Rigon MR, Petit GG, Nizard RS, Porcher R. Quality control of surgical and interventional procedures: a review of the CUSUM. Quality and Safety in Health Care. 2007;16:203–207.
  7. Zeng L. Risk-adjusted performance monitoring in healthcare quality control. In: Pham H, editor. Quality and Reliability Management and Its Applications. London: Springer-Verlag; 2016.
  8. Steiner SH, Cook RJ, Farewell VT, Treasure T. Monitoring surgical performance using risk-adjusted cumulative sum charts. Biostatistics. 2000;1:441–452. pmid:12933566
  9. Sego LH, Reynolds MR Jr, Woodall WH. Risk-adjusted monitoring of survival times. Statistics in Medicine. 2009;28:1386–1401. pmid:19247982
  10. Lie RT, Heuch I, Irgens LM. A new sequential procedure for surveillance of Down’s syndrome. Statistics in Medicine. 1993;12(1):13–25. pmid:8446800
  11. Chiu JE, Chen ZH, Tsai HH. Applying of risk-adjusted CUSUM control chart monitoring of medical information in shoulder surgery study. In: 2013 10th International Conference on Service Systems and Service Management. IEEE; 2013. p. 792–794.
  12. Steiner SH, Cook RJ, Farewell VT. Risk-adjusted monitoring of binary surgical outcomes. Medical Decision Making. 2001;21(3):163–169. pmid:11386623
  13. Woodall WH. The use of control charts in health-care and public-health surveillance (with discussion). Journal of Quality Technology. 2006;38(2):89–104.
  14. Novick RJ, Fox SA, Stitt LW, Forbes TL, Steiner S. Direct comparison of risk-adjusted and non-risk-adjusted CUSUM analyses of coronary artery bypass surgery outcomes. The Journal of Thoracic and Cardiovascular Surgery. 2006;132(2):386–391. pmid:16872967
  15. Zeng L, Zhou S. A Bayesian approach to risk-adjusted outcome monitoring in healthcare. Statistics in Medicine. 2011;33:3431–3446.
  16. Biswas P, Kalbfleisch JD. A risk-adjusted CUSUM in continuous time based on the Cox model. Statistics in Medicine. 2008;27:3382–3406. pmid:18288785
  17. Stone M. Cross-validatory choice and assessment of statistical predictions (with discussion). Journal of the Royal Statistical Society B. 1974;36:111–147.
  18. Golub G, Heath M, Wahba G. Generalized cross-validation as a method for choosing a good ridge parameter. Technometrics. 1979;21(2):215–223.
  19. Hastie T, Tibshirani R. Generalized Additive Models. London: Chapman and Hall; 1990.
  20. De Boor C. A Practical Guide to Splines. New York: Springer; 1978.
  21. Moustakides GV. Optimal stopping times for detecting changes in distribution. Annals of Statistics. 1986;14:1379–1387.
  22. Ritov Y. Decision theoretic optimality of the CUSUM procedure. Annals of Statistics. 1990;18:1464–1469.
  23. Lorden G. Procedures for reacting to a change in distribution. Annals of Mathematical Statistics. 1971;42:520–527.
  24. Lorden G, Eisenberger I. Detection of failure rate increases. Technometrics. 1973;15:167–175.
  25. Lucas JM. Combined Shewhart-CUSUM quality control scheme. Journal of Quality Technology. 1982;14:51–59.
  26. Rowlands J, Nix B, Abdollahian M, Kemp K. Snub-nosed V-mask control schemes. The Statistician. 1982;31:1–10.
  27. Dragalin V. The design and analysis of 2-CUSUM procedure. Communications in Statistics - Simulation and Computation. 1997;26:67–81.
  28. Sparks RS. CUSUM charts for signalling varying location shifts. Journal of Quality Technology. 2000;32:157–171.
  29. Han D, Tsung F, Hu X, Wang K. CUSUM and EWMA multi-charts for detecting a range of mean shifts. Statistica Sinica. 2007;17:1139–1164.