Quality Control Systems
Dr. Tony Badrick
CEO, RCPAQAP Australia
When we think of internal quality control, we usually consider putting the quality control samples onto the analyzer, reviewing the sample results using certain rules, and not much more. But there is more than this to a quality control system [2]. It consists of the following components: an understanding of error; a sample used to determine if the analytical system is in control; a set of rules and acceptable ranges for the QC result; a process to follow if the QC sample is not in control, including troubleshooting possible causes and rectifying the problem, with verification that the problem is fixed; identification of any patient results that may need to be rerun; and a process to escalate these patient-risk results.
Let us consider each of these components of the QC system.
Firstly, an understanding of error. We design a QC system to detect errors, but only the errors we expect to find. In analytical systems, we identify two types of error: a shift in the mean of the QC sample (bias) and an increase in the standard deviation (SD) of the distribution of QC results (imprecision). These two errors are related to the two components of the Gaussian distribution of the QC results, that is, the mean and SD of repeated assays of the QC sample. This is a critical point to understand, as the success of the rules we will use ultimately depends on the accuracy of this mean and SD. There are guidelines that provide advice on how many QC samples are needed to determine these two parameters.
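As a sketch of this step, the mean and SD can be estimated from repeated assays of the QC sample while the assay is known to be in control, and control limits derived from them. The values below are hypothetical:

```python
import statistics

# Hypothetical QC results (mmol/L) from 20 repeated assays of the same
# QC sample while the assay is known to be in control.
qc_results = [5.02, 4.98, 5.10, 4.95, 5.05, 5.00, 4.97, 5.08, 5.03, 4.99,
              5.01, 5.06, 4.94, 5.04, 5.00, 4.96, 5.07, 5.02, 4.98, 5.05]

mean = statistics.mean(qc_results)
sd = statistics.stdev(qc_results)  # sample SD (n-1 denominator)

# Control limits from the Gaussian distribution:
# ~95% of in-control results should fall within mean ± 2 SD,
# ~99% within mean ± 3 SD.
limits_2sd = (mean - 2 * sd, mean + 2 * sd)
limits_3sd = (mean - 3 * sd, mean + 3 * sd)

print(f"mean = {mean:.3f}, SD = {sd:.3f}")
print(f"2SD limits: {limits_2sd[0]:.3f} to {limits_2sd[1]:.3f}")
```

The accuracy of these two estimates governs everything that follows: limits set from too few replicates will make the rules alarm too often or too rarely.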
To assess whether the assay is in control, we run a QC sample that we expect to behave in the assay the same way a patient sample does, so that a change in the assay that affects patient results will be reflected in the QC sample [3, 4]. The QC sample is usually prepared from pooled human serum, which has been modified by spiking to produce higher values, filtering, adding preservatives, and lyophilization [5]. These steps are necessary to make the material stable and useful, but they may alter the way the sample behaves in the assay compared to a patient sample.
Next, we select rules that will identify when a QC sample result falls outside the expected, statistically acceptable variation. Recall that the Gaussian (normal) distribution predicts that 95% of results should lie within the mean plus or minus two SDs, and 99% within the mean plus or minus three SDs. Using the properties of the Gaussian distribution, we can derive the probability of two or more QC sample results occurring in certain patterns, for example, two consecutive results falling beyond the mean plus or minus two SDs. These rules are the Westgard rules, and they rely on QC sample results following the Gaussian distribution [6–8]. Note that these rules are designed to identify statistically significant departures from the QC sample distribution observed when the assay is in control. They will not reduce patient harm from incorrect results if the smallest deviation the rules can detect is larger than a clinically significant change! [9]
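The pattern rules described above can be sketched in code. The function below evaluates a few common Westgard rules against z-scores of QC results; it is a simplified single-level illustration, not a full multi-rule, multi-level QC procedure:

```python
def westgard_flags(z_scores):
    """Evaluate a sequence of QC z-scores ((result - mean) / SD),
    oldest first, against a few common Westgard rules.
    Returns the names of the rules violated by the latest result.
    """
    flags = []
    if abs(z_scores[-1]) > 3:
        flags.append("1-3s")   # one result beyond mean ± 3 SD
    if len(z_scores) >= 2 and (all(v > 2 for v in z_scores[-2:])
                               or all(v < -2 for v in z_scores[-2:])):
        flags.append("2-2s")   # two consecutive beyond the same 2 SD limit
    if len(z_scores) >= 4 and (all(v > 1 for v in z_scores[-4:])
                               or all(v < -1 for v in z_scores[-4:])):
        flags.append("4-1s")   # four consecutive beyond the same 1 SD limit
    if len(z_scores) >= 10 and (all(v > 0 for v in z_scores[-10:])
                                or all(v < 0 for v in z_scores[-10:])):
        flags.append("10-x")   # ten consecutive on one side of the mean
    return flags

print(westgard_flags([0.5, 2.3, 2.6]))  # shift (bias) pattern: ['2-2s']
print(westgard_flags([0.2, -3.4]))      # random-error pattern: ['1-3s']
```

Note how the two error types map onto rule patterns: consecutive-result rules such as 2-2s and 4-1s respond to a shift in the mean (bias), while a single extreme result triggering 1-3s is more typical of increased imprecision.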
When a control rule alarms, meaning that the QC sample result is outside the statistically expected variation, patient results may have an error that could affect their interpretation. Note the assumption here: a shift in bias or SD that is statistically significant may not be clinically significant. There are common and uncommon causes of a change in the QC value. The most common cause is a problem with the QC material itself [10]. It may have been incorrectly prepared (reconstitution or thawing), maintained (evaporation or deterioration), or handled (wrong QC sample used). Other causes include reagent lot changes, calibrator lot changes, instrument malfunction, or inappropriate operator intervention. To determine which of these has occurred, and whether patient results have been affected, there needs to be a documented and understood process of troubleshooting, rectification, and verification that the assay is back in control [11].
If a problem with the assay has been detected by the QC sample, then it is essential that any affected patient samples are identified. Again, there should be a documented process, which may involve re-assaying every fifth or tenth patient sample and looking for a significant difference between the two results for the same patient. These patient results should then be amended, and the clinicians notified where this is deemed necessary in terms of patient risk.
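A minimal sketch of such a re-assay check follows, assuming a hypothetical repeat-difference limit of three analytical SDs; each laboratory should set its own clinically based criterion:

```python
def flag_reruns(pairs, sd, k=3):
    """Compare original and repeat results for the same patient samples
    after a QC failure.

    pairs: {sample_id: (original_result, repeat_result)}
    sd: analytical SD of the assay
    k: difference limit in SD multiples (hypothetical default of 3)

    Returns the pairs whose difference exceeds k * sd, i.e. the results
    that may need amending and clinician notification.
    """
    return {sid: (a, b) for sid, (a, b) in pairs.items()
            if abs(a - b) > k * sd}

pairs = {"S01": (5.1, 5.2), "S02": (7.8, 6.9), "S03": (4.4, 4.5)}
print(flag_reruns(pairs, sd=0.1))  # only S02 differs by more than 3 SD
```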
The system above rests on several assumptions: that the QC sample behaves the same way as a patient sample, that there is a relationship between a statistically significant difference detected by the QC rule and a clinically significant change in a patient result, and that the frequency of QC samples will detect an error in a timely manner. So how can we incorporate these concerns into our QC strategy?
The first problem relates to the commutability of the QC sample: does it behave as a patient sample does in the assay? This is a difficult issue, as many QC materials do not, and it is not an easy task to prove that a QC material is commutable. There are other approaches to QC [12], but conventional QC will be with us for some time, so we need to be aware of this issue. When changing reagent lots, it is worth re-assaying previous patient samples to ensure that the change does not introduce a drift in patient results that may not be detected by conventional QC.
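A paired patient-sample comparison at a reagent lot change can be sketched as follows; the function name and the idea of judging the mean paired difference against the assay's performance specification are illustrative assumptions, not a prescribed procedure:

```python
import statistics

def lot_change_bias(old_lot, new_lot):
    """Paired patient-sample comparison at a reagent lot change.

    old_lot / new_lot: results for the same patient samples measured
    with the outgoing and incoming reagent lots, in the same order.

    Returns the mean paired difference (an estimate of lot-to-lot bias
    as seen in patient samples, which QC material may miss if it is not
    commutable) and the SD of the differences.
    """
    diffs = [n - o for o, n in zip(old_lot, new_lot)]
    return statistics.mean(diffs), statistics.stdev(diffs)

old = [4.8, 5.6, 6.1, 5.0, 7.2]
new = [4.9, 5.8, 6.3, 5.1, 7.4]
bias, sd_diff = lot_change_bias(old, new)
print(f"mean lot-to-lot bias in patient samples: {bias:.2f}")
```

The point of using patient samples here, rather than QC material, is exactly the commutability concern above: a non-commutable QC sample may show no shift across lots while patient results drift.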
The next problem is the frequency of QC samples [13]. When the Westgard rules were originally developed, patient samples were bracketed by QC samples, and a 'batch' of patient results was not released until the QC sample at the end of the batch passed assessment. Now patient results are usually released as soon as they are analysed, so a QC failure will be detected only after results have been sent to the referring clinician. This has occurred because modern instrumentation is extremely reliable and stable, and analytical failures are now rare. However, this reliability can breed complacency in operators [14]. There are few QC failures, and most of those occur because of the QC material itself, so operators see few real failures due to the reagents or instrument. These infrequent failures carry the risk that when a real failure does occur, it will not be detected or responded to.
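The run-length idea behind QC frequency can be illustrated with a deliberately simplified calculation: if an error can begin at any point with equal probability, then on average half of the patient samples between two QC events are analysed after the error starts but before the next QC sample can detect it. This is only the crudest form of the argument in Parvin and Gronowski [13]; a real model also weights by the probability that the QC rule actually detects the error.

```python
def expected_results_at_risk(qc_interval):
    """Simplified illustration of the run-length effect.

    qc_interval: number of patient samples analysed between QC events.

    If an error onset is equally likely at any point in the interval,
    on average half the interval's patient results are produced under
    the error condition before the next QC event can catch it.
    """
    return qc_interval / 2

# Halving the interval between QC events halves the average exposure.
print(expected_results_at_risk(100))  # 50.0 results at risk
print(expected_results_at_risk(50))   # 25.0 results at risk
```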
There are some steps laboratories can put in place to build processes that mitigate these potential errors [15].
These can be usefully applied every day. But QC is poorly understood and practiced today [16]. We all need to think about what we do and why we do it!
1. ISO 15189:2022 Medical laboratories — Requirements for quality and competence, ISO. (2022).
2. T. Badrick, The quality control system, Clin Biochem Rev. 29 Suppl 1 (2008) S67–70.
3. W.G. Miller, G.L. Myers, Commutability still matters, Clin Chem. 59 (2013) 1291–1293. https://doi.org/10.1373/clinchem.2013.208785.
4. W.G. Miller, G.L. Myers, R. Rej, Why commutability matters, Clin Chem. 52 (2006) 553–554. https://doi.org/10.1373/clinchem.2005.063511.
5. P.M.S. Clark, L.J. Kricka, T.P. Whitehead, Matrix effects in clinical analysis: commutability of control materials between the Ketchum, Beckman and SMA 12/60 glucose and urea methods, Clinica Chimica Acta. 113 (1981) 293–303. https://doi.org/10.1016/0009-8981(81)90282-5.
6. J.O. Westgard, S.A. Westgard, Establishing evidence-based statistical quality control practices, Am J Clin Pathol. 151 (2019) 364–370. https://doi.org/10.1093/AJCP/AQY158.
7. J.O. Westgard, Statistical Quality Control Procedures, Clin Lab Med. 33 (2013) 111–124. https://doi.org/10.1016/j.cll.2012.10.004.
8. A. Katayev, J.K. Fleming, Past, present, and future of laboratory quality control: patient-based real-time quality control or when getting more quality at less cost is not wishful thinking, J Lab Precis Med. 5 (2020) 28–28. https://doi.org/10.21037/jlpm-2019-qc-03.
9. M. Panteghini, F. Ceriotti, G. Jones, W. Oosterhuis, M. Plebani, S. Sandberg, Strategies to define performance specifications in laboratory medicine: 3 years on from the Milan Strategic Conference, Clin Chem Lab Med. 55 (2017) 1849–1856. https://doi.org/10.1515/cclm-2017-0772.
10. P.J. Howanitz, G.A. Tetrault, S.J. Steindel, Clinical laboratory quality control: A costly process now out of control, Clinica Chimica Acta. 260 (1997) 163–174. https://doi.org/10.1016/S0009-8981(96)06494-7.
11. G. Jones, J. Calleja, D. Chesher, C. Parvin, J. Yundt-Pacheco, M. Mackay, T. Badrick, Collective Opinion Paper on a 2013 AACB Workshop of Experts seeking Harmonisation of Approaches to Setting a Laboratory Quality Control Policy, Clin Biochem Rev. 36 (2015) 87–95.
12. H.H. van Rossum, A. Bietenbeck, M.A. Cervinski, A. Katayev, T.P. Loh, T.C. Badrick, Benefits, limitations and controversies on patient-based real-time quality control (PBRTQC) and the evidence behind the practice, Clin Chem Lab Med. 59 (2021) 1213–1220. https://doi.org/10.1515/cclm-2021-0072.
13. C.A. Parvin, A.M. Gronowski, Effect of analytical run length on quality-control (QC) performance and the QC planning process, Clin Chem. 43 (1997) 2149–2154.
14. T. Badrick, A.S. Brown, Identifying human factors as a source of error in laboratory quality control, J Lab Precis Med. (2023).
15. C. Parvin, J. Jones, QC design: It's easier than you think, Medical Laboratory Observer. 45 (2013) 18–22.
16. C. Parvin, J. Jones, QC design: It's easier than you think, Medical Laboratory Observer. 45 (2013) 18–22.