| CUSUM chart | |
|---|---|
| Proposer | E. S. Page |
| Subgroup size | $n = 1$ |
| Measurement type | Cumulative sum of a quality characteristic |
| Quality characteristic type | Variables data |
| Underlying distribution | Normal distribution |
| Size of shift to detect | ≤ 1.5σ |
| Center line | The target value, $T$, of the quality characteristic |
| Upper control limit | $C_i^+ = \max\left[0,\; x_i - (T + K) + C_{i-1}^+\right]$ |
| Lower control limit | $C_i^- = \max\left[0,\; (T - K) - x_i + C_{i-1}^-\right]$ |
| Plotted statistic | $C_i = \sum_{j=1}^{i} \left(\bar{x}_j - T\right)$ |
In statistical quality control, the CUSUM (or cumulative sum control chart) is a sequential analysis technique developed by E. S. Page of the University of Cambridge. It is typically used for change detection, i.e., monitoring a process for shifts.[1] CUSUM was announced in Biometrika in 1954, a few years after the publication of Wald's sequential probability ratio test (SPRT).[2]
E. S. Page referred to a "quality number" $\theta$, meaning a parameter of the probability distribution of the monitored quantity (for example, its mean), and devised CUSUM as a method for detecting changes in it. A few years later, George Alfred Barnard developed a visualization method, the V-mask chart, to detect both increases and decreases in $\theta$.
As its name implies, CUSUM involves the calculation of a cumulative sum (which is what makes it "sequential"). Samples from a process $x_n$ are assigned weights $\omega_n$ and summed as follows:

$$S_0 = 0$$
$$S_{n+1} = \max\left(0,\; S_n + x_{n+1} - \omega_n\right)$$
When the value of $S$ exceeds a chosen threshold, a change has been detected. The formula above only detects changes in the positive direction. When negative changes need to be detected as well, the min operation should be used instead of the max operation, and a change is detected when the value of $S$ falls below the (negative) threshold.
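The recursion above translates directly into a few lines of code. The following is a minimal Python sketch (function and parameter names are illustrative, not from Page's paper) of the one-sided scheme: it subtracts the reference weight $\omega$ from each sample, accumulates whatever remains above zero, and raises an alarm when the sum crosses a threshold.

```python
def cusum_positive(samples, omega, threshold):
    """One-sided CUSUM: detect an upward shift.

    samples   -- sequence of process observations x_1, x_2, ...
    omega     -- reference value (weight) subtracted at each step
    threshold -- decision level; an alarm is raised when S exceeds it
    """
    s = 0.0  # S_0 = 0
    for n, x in enumerate(samples, start=1):
        # S_{n+1} = max(0, S_n + x_{n+1} - omega)
        s = max(0.0, s + x - omega)
        if s > threshold:
            return n  # index of the first sample that triggers the alarm
    return None  # no change detected


# Example: the mean drifts upward halfway through the series.
data = [0.1, -0.2, 0.0, 0.3, -0.1] + [0.8, 1.1, 0.9, 1.2, 1.0]
print(cusum_positive(data, omega=0.5, threshold=2.0))  # alarms on the 10th sample
```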
Page did not explicitly say that $\omega$ represents the likelihood function, but this is common usage. This differs from the SPRT in that the lower "holding barrier" is always the zero function (the $\max(0,\cdot)$ reset) rather than a separate lower boundary.[1] Also, CUSUM does not require the use of the likelihood function.
As a means of assessing CUSUM's performance, Page defined the average run length (A.R.L.) metric: "the expected number of articles sampled before action is taken." He further wrote:[2]
When the quality of the output is satisfactory the A.R.L. is a measure of the expense incurred by the scheme when it gives false alarms, i.e., Type I errors (Neyman & Pearson, 1936[4]). On the other hand, for constant poor quality the A.R.L. measures the delay and thus the amount of scrap produced before the rectifying action is taken, i.e., Type II errors.
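The A.R.L. rarely has a convenient closed form, so it is often estimated by simulation. The sketch below (Gaussian observations and all parameter names are assumptions for illustration, not from Page's paper) simulates the one-sided recursion repeatedly and averages the number of samples drawn before an alarm; with no shift this approximates the false-alarm A.R.L., and with a shifted mean it approximates the detection delay.

```python
import random


def run_length(omega, threshold, mu=0.0, sigma=1.0, max_n=100_000):
    """Number of Gaussian samples drawn before the one-sided CUSUM alarms."""
    s = 0.0
    for n in range(1, max_n + 1):
        s = max(0.0, s + random.gauss(mu, sigma) - omega)
        if s > threshold:
            return n
    return max_n  # censored: no alarm within max_n samples


def estimate_arl(omega, threshold, mu=0.0, sigma=1.0, trials=2_000):
    """Monte Carlo estimate of the average run length."""
    return sum(run_length(omega, threshold, mu, sigma) for _ in range(trials)) / trials


# In-control A.R.L. (false alarms) versus out-of-control A.R.L. (detection delay)
print(estimate_arl(omega=0.5, threshold=4.0, mu=0.0))  # large: false alarms are rare
print(estimate_arl(omega=0.5, threshold=4.0, mu=1.0))  # small: shifts are caught quickly
```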
The following example shows 20 observations of a process variable $X$. From the $Z$ column it can be seen that $X$ never deviates from its mean by as much as $3\sigma$, so simply alerting on a large individual deviation would not detect a failure, whereas the cumulative column $S_H$ accumulates the small deviations and makes the sustained shift visible.
| Column | Description |
|---|---|
| $X$ | The observations of the process, with expected mean $\bar{x}$ and standard deviation $\sigma_X$ |
| $Z$ | The normalized observations, i.e. centered around the mean and scaled by the standard deviation: $Z_n = \dfrac{x_n - \bar{x}}{\sigma_X}$ |
| $S_H$ | The high CUSUM value, detecting a positive anomaly: ${S_H}_{n+1} = \max\left(0,\; {S_H}_n + Z_{n+1} - \omega\right)$ |
| $S_L$ | The low CUSUM value, detecting a negative anomaly: ${S_L}_{n+1} = \max\left(0,\; {S_L}_n - Z_{n+1} - \omega\right)$ |
where $\omega$ is a tunable parameter that adjusts the sensitivity of the detection: a larger $\omega$ makes the CUSUM less sensitive to a change, and a smaller $\omega$ makes it more sensitive.
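A brief Python sketch of the two-sided scheme in the table (the known mean, standard deviation, and the names used here are assumptions for illustration): each observation is normalized to $Z_n$, and the $S_H$ and $S_L$ recursions are updated exactly as written above.

```python
def cusum_two_sided(samples, mean, std, omega, threshold):
    """Two-sided CUSUM on normalized observations.

    Returns (index, "high" | "low") for the first alarm, or None if no
    alarm is raised within the given samples.
    """
    sh = sl = 0.0
    for n, x in enumerate(samples, start=1):
        z = (x - mean) / std             # Z_n = (x_n - mean) / std
        sh = max(0.0, sh + z - omega)    # accumulates positive anomalies
        sl = max(0.0, sl - z - omega)    # accumulates negative anomalies
        if sh > threshold:
            return n, "high"
        if sl > threshold:
            return n, "low"
    return None


# Example: a downward shift appears after the first five observations.
obs = [10.1, 9.8, 10.0, 10.2, 9.9] + [9.0, 8.8, 9.1, 8.7, 8.9]
print(cusum_two_sided(obs, mean=10.0, std=0.5, omega=0.5, threshold=3.0))  # (7, 'low')
```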
Cumulative observed-minus-expected plots are a related method.