To determine the agreement and reliability of fully automated coronary artery calcium (CAC) scoring in a lung cancer screening population.
Materials and Methods
1793 low-dose, non-contrast-enhanced, non-gated chest CT scans were analyzed. To establish the reference standard for CAC, automated calcium scoring was first performed using a preliminary version of a method that combines a coronary calcium atlas with a machine learning approach. Thereafter, each scan was inspected by one of four trained raters, who corrected the automatically identified results where needed. An independent observer subsequently inspected the manually corrected results and discarded scans with gross segmentation errors. Finally, fully automatic coronary calcium scoring was performed. Agatston score, CAC volume and number of calcifications were computed. Agreement was determined by calculating the proportion of agreement and examining Bland-Altman plots. Reliability was determined by calculating the linearly weighted kappa (κ) for Agatston strata and the intraclass correlation coefficient (ICC) for continuous values.
44 (2.5%) scans were excluded due to metal artifacts or gross segmentation errors. In the remaining 1749 scans, the median automated Agatston score was 39.6 (P25–P75: 0–345.9), the median volume score was 60.4 mm3 (P25–P75: 0–361.4) and the median number of calcifications was 2 (P25–P75: 0–4). The κ demonstrated very good reliability (0.85) for Agatston risk categories between the automated and reference scores. The Bland-Altman plots showed underestimation of calcium score values by automated quantification. The median difference was 2.5 (P25–P75: 0.0–53.2) for Agatston score, 7.6 (P25–P75: 0.0–94.4) for CAC volume and 1 (P25–P75: 0–5) for number of calcifications. The ICC was very good for Agatston score (0.90), very good for calcium volume (0.88) and good for number of calcifications (0.64).
Citation: Takx RAP, de Jong PA, Leiner T, Oudkerk M, de Koning HJ, Mol CP, et al. (2014) Automated Coronary Artery Calcification Scoring in Non-Gated Chest CT: Agreement and Reliability. PLoS ONE 9(3): e91239. https://doi.org/10.1371/journal.pone.0091239
Editor: Arrate Muñoz-Barrutia, University of Navarra, Spain
Received: October 2, 2013; Accepted: February 8, 2014; Published: March 13, 2014
Copyright: © 2014 Takx et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: The NELSON-trial was supported by the Netherlands Organisation for Health Research and Development (ZonMw, grant number 22000130, http://www.zonmw.nl/); Dutch Cancer Society Koningin Wilhelmina Fonds; Stichting Centraal Fonds Reserves van Voormalig Vrijwillige Ziekenfondsverzekeringen (RVVZ); Siemens Germany provided 4 digital workstations and LungCARE for the performance of 3D measurements; Rotterdam Oncologic Thoracic Steering Committee; and the G. Ph. Verhagen Trust, Flemish League Against Cancer, Foundation Against Cancer, and Erasmus Trust Fund. The contribution of I. Isgum was financially partly supported by the project Care4Me (Cooperative Advanced Research for Medical Efficiency, grant number ITEA2 Call3-08004, http://www.compassis.com/care4me/) in the framework of the EU research programme ITEA (Information Technology for European Advancement). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have the following interests. HJ de Koning is member of the medical advisory board of Roche Diagnostics. Siemens Germany provided 4 digital workstations and LungCARE for the performance of 3D measurements. This does not alter the authors’ adherence to all the PLOS ONE policies on sharing data and materials.
Smoking is an important factor in the etiology of cardiovascular disease (CVD). Coronary artery calcification (CAC) is observed frequently in patients with cardiovascular events and in advanced atherosclerotic plaques. CAC scoring with ECG-gated computed tomography (CT) has emerged as an important imaging biomarker for CVD and all-cause mortality. Based on CAC scores, patients can be assigned to CVD risk categories to guide treatment.
Low-dose non-gated chest CT has been applied for lung cancer screening in smokers. In spite of the suboptimal image acquisition, CAC scoring from lung cancer screening CT has been shown to be a strong and independent predictor of cardiovascular events and all-cause mortality. Also, several studies demonstrated good agreement between CAC scores determined using low-dose non-gated CT, as acquired in lung cancer screening, and CAC scores quantified using gated cardiac CT. Budoff et al. and Kim et al. found correlations of 0.96 and 0.89, respectively, between Agatston CAC scores obtained with and without gating. Furthermore, the intraclass correlation coefficient (ICC) between absolute Agatston scores on two low-dose ungated CT scans within four months was very good (0.94). These findings indicate that CAC scores obtained in a lung cancer screening setting can be used to identify subjects at risk of CVD events. Integrated screening for lung cancer and CVD in smokers could optimize risk prediction without additional radiation exposure for the participant. Manual scoring of CAC on low-dose non-gated CT is time-consuming, because of the increased number of slices and the high prevalence of coronary calcification, and difficult because of cardiac motion; it is thus cumbersome and expensive in a screening setting. Moreover, manual scoring may add to interrater variability, although a previous study found an ICC between human raters of 0.97 in a small set of 50 randomly selected CT scans. Automated quantification of CAC could overcome these limitations, and previous studies demonstrated its preliminary feasibility using non-gated CT.
The objective of our study was to determine the agreement and reliability of automated CAC scoring compared with reference scores in a large set of scans acquired in a lung cancer screening trial.
This study included participants of a lung cancer screening trial who had smoked 15 or more cigarettes per day for 25 years or 10 or more cigarettes per day for 30 years, and who were current smokers or had quit less than 10 years earlier.
This study is an ancillary study of the Dutch-Belgian Randomized Lung Cancer Screening Trial (Dutch acronym: NELSON study) (ISRCTN63545820) and was approved by the institutional ethics boards of the participating medical centers (University Medical Centre Groningen, University Medical Centre Utrecht, Kennemer Gasthuis Haarlem [the Netherlands], and University Hospital Leuven [Belgium]). Furthermore, the Ministry of Health approved the NELSON trial following positive advice from the Dutch Health Council. Written informed consent was obtained from all participants. The NELSON study was designed to investigate whether lung cancer screening by low-dose CT reduces 10-year lung cancer mortality by at least 25% in high-risk (ex-)smokers between ages 50 and 75, compared with a control group without screening.
Images were obtained at University Medical Center Utrecht on a 16-slice CT scanner with 16×0.75 mm collimation (Mx8000 IDT or Brilliance-16P, Philips Medical Systems, Best, the Netherlands). A tube voltage of 120 kV was applied in participants weighing less than 80 kg; in participants weighing more than 80 kg the tube voltage was increased to 140 kV. The mAs settings depended on the CT hardware used and were adjusted accordingly. All scans were reconstructed to a slice thickness of 3.1 mm with an increment of 1.4 mm.
Manual CAC scoring in chest CT scans from a lung cancer screening study is extremely time-consuming and cumbersome owing to cardiac motion, image noise and the numerous calcifications in this high-risk population. Hence, to set a reference standard that enables evaluation of the automatic method in a large data set, the following approach was used. First, coronary calcifications were identified automatically using a preliminary version of the evaluated algorithm for automated CAC scoring. Thereafter, four trained raters, a radiologist with six years of experience in cardiac CT and three medical students, set the reference standard for this study. The raters inspected, and where deemed necessary corrected, the errors of the algorithm. Each scan was inspected by one of the four raters. Prior to this, the medical students received extensive training for this study (e.g. they reviewed at least 100 scans) by a board-certified chest radiologist. Readers were blinded to the participants' age, sex and clinical data. Visually identified stents were excluded from quantification. The raters also discarded scans with artifacts caused by metal implants. Finally, to ensure the high quality of the reference standard, one research physician with four years of experience in cardiac CT evaluated all cases and excluded those containing gross segmentation errors, i.e. lesions incorrectly identified as coronary calcifications, or coronary calcifications missed by the raters. The coronary calcifications identified in this way served as the reference standard for further evaluation.
CAC scores were quantified automatically, without any user interaction, using a previously published algorithm. The software applied a threshold of 130 HU in combination with three-dimensional connected component labeling to mark potential calcifications (candidates). Subsequently, each candidate was described by size, spatial and texture characteristics. The volume of each candidate was used as a size feature. Spatial features were determined using a coronary calcium atlas providing an a priori probability for the spatial appearance of coronary calcifications in a chest CT scan (i.e. the spatial probability that a candidate is a coronary calcification). Texture features were computed using Gaussian filters at multiple scales. Based on these features, coronary calcifications were identified using a supervised pattern recognition system with k-nearest neighbor and support vector machine classifiers. Finally, the identified coronary calcifications were quantified as Agatston score and total calcium volume (mm3). To determine the CVD risk of subjects, the Agatston score was divided into five strata (0, 1–10, 11–100, 101–400, and >400).
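The candidate-detection step and the Agatston strata described above can be sketched as follows. This is an illustrative simplification, not the authors' implementation: the atlas-based spatial features, the texture features and the kNN/SVM classification stage are omitted, per-slice Agatston scoring is approximated volume-wise, and the voxel spacing is a hypothetical example value.

```python
import numpy as np
from scipy import ndimage

def agatston_score(volume_hu, spacing_mm, threshold_hu=130.0):
    """Score candidate lesions found by thresholding at 130 HU followed by
    three-dimensional connected-component labeling (a simplified, volume-wise
    variant of the per-slice Agatston convention)."""
    labels, n_lesions = ndimage.label(volume_hu >= threshold_hu)
    in_plane_area = spacing_mm[1] * spacing_mm[2]  # mm^2; spacing is (z, y, x)
    total = 0.0
    for lesion in range(1, n_lesions + 1):
        lesion_mask = labels == lesion
        peak_hu = volume_hu[lesion_mask].max()
        # Agatston density weight derived from the lesion's peak attenuation.
        weight = 1 if peak_hu < 200 else 2 if peak_hu < 300 else 3 if peak_hu < 400 else 4
        total += lesion_mask.sum() * in_plane_area * weight
    return total

def risk_stratum(agatston):
    """Map an Agatston score onto the five strata used in the study:
    0, 1-10, 11-100, 101-400 and >400."""
    return sum(agatston > bound for bound in (0, 10, 100, 400))
```

For instance, a single 4-voxel lesion with a peak of 250 HU on a hypothetical 0.6×0.6 mm in-plane grid would score 4×0.36×2 = 2.88.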
To determine human interrater reliability and to establish whether presegmentation of coronary calcifications by automatic software, i.e. initial automatic identification of coronary artery calcifications, influenced the reference scores, the same four raters independently scored a subset of 199 consecutive CT scans fully manually, thus without any presegmentation.
Normally distributed data are presented as mean ± standard deviation (SD) and non-normally distributed data as median plus 25th–75th percentile (P25–P75). The quartile coefficient of dispersion (QCD) was calculated to determine dispersion. Interrater agreement and reliability were calculated. Agreement is the degree to which the scores are identical; reliability is defined as the ratio of the variability between CT scans to the total variability of all quantifications in the sample. Agreement is especially important when assessing the usability of a score for monitoring health-status changes over time with repeated measurements. Agreement was determined by calculating the proportion of subjects assigned the same CVD risk category by the reference and the automated scores, and by examining Bland-Altman plots with 95% limits of agreement. The measurement error of the CAC score increases with higher CAC scores. Accordingly, we applied a regression approach for non-uniform differences to model the variation of the absolute differences between the two measurement techniques. The 95% repeatability limits were calculated by multiplying the predicted absolute difference by 1.96×(π/2)^0.5, since the absolute difference has a half-normal distribution. Reliability is the degree to which the test can effectively distinguish between study participants, regardless of rater error. Reliability is important in diagnostic practice to distinguish between affected and non-affected persons at a single time point. Reliability between automated and reference quantification, and between fully manual scoring and reference scoring, was determined by calculating the linearly weighted kappa (κ) for Agatston strata and the two-way mixed ICC for continuous values. Interrater reliability of fully manual scoring was calculated using Kendall's coefficient of concordance (Kendall's W) for Agatston risk categories and the two-way random ICC for continuous values. P values <0.05 were considered statistically significant.
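The linearly weighted kappa used for the Agatston strata can be written out directly. The sketch below is a plain-NumPy illustration (the study's own statistics were computed in SPSS and R): linear weights give full credit on the diagonal of the contingency table and partial credit for near-miss categories.

```python
import numpy as np

def linear_weighted_kappa(rater_a, rater_b, n_categories):
    """Linearly weighted Cohen's kappa for two raters on an ordinal scale
    with categories 0 .. n_categories - 1."""
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    # Observed contingency table, normalized to proportions.
    observed = np.zeros((n_categories, n_categories))
    np.add.at(observed, (a, b), 1.0)
    observed /= observed.sum()
    # Expected table under independence of the two raters' marginals.
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))
    # Linear weights: 1 on the diagonal, decreasing with category distance.
    idx = np.arange(n_categories)
    weights = 1.0 - np.abs(idx[:, None] - idx[None, :]) / (n_categories - 1)
    p_obs, p_exp = (weights * observed).sum(), (weights * expected).sum()
    return (p_obs - p_exp) / (1.0 - p_exp)
```

Perfect agreement yields κ = 1, and disagreement by a single stratum is penalized less than a larger shift, matching the ordinal nature of the Agatston risk categories.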
Statistical analyses were performed with SPSS version 19 (SPSS Inc, Chicago, Illinois, USA) and R version 2.10.2 (R Foundation for Statistical Computing, Vienna, Austria).
1793 participants (median age 60.1 years, P25–P75: 56.7–64.3; 97 females) underwent non-contrast-enhanced, non-ECG-gated CT of the chest. 44 scans were discarded because of beam hardening artifacts (18) or gross segmentation errors (26). The median Agatston score was 55.8 (P25–P75: 1.1–449.0; QCD: 1.00; range: 0–12080.9), the median volume score was 87.4 mm3 (P25–P75: 3.2–509.7; QCD: 0.99; range: 0–9610.9) and the median number of calcifications was 3 (P25–P75: 1–9; QCD: 0.80; range: 0–53) based on the reference scores; the corresponding automated scores were 39.6 (P25–P75: 0–345.9; QCD: 1.00; range: 0–8363.3), 60.4 mm3 (P25–P75: 0–361.4; QCD: 1.00; range: 0–6656.1) and 2 (P25–P75: 0–4; QCD: 1.00; range: 0–35), respectively.
Agreement between Reference and Automated CAC Score
The Agatston strata of the reference and automated CAC scores agreed in 1386 (79.2%) of 1749 participants (Table 1). Further analysis of discordant pairs revealed that most occurred in the right coronary artery (RCA) and were due to calcifications unaccounted for by the automated method (Table 2, Figures 1, 2). A shift of more than one Agatston stratum was observed in 83 participants (4.7%). Bland-Altman plots (Figure 3) with the limits of agreement show a systematic error: automated quantification underestimated the CAC scores and the number of calcifications. The median difference was 2.5 (P25–P75: 0.0–53.2; QCD: 1.00) for Agatston score, 7.6 (P25–P75: 0.0–94.4; QCD: 1.00) for CAC volume and 1 (P25–P75: 0–5; QCD: 1.00) for number of calcifications.
Example of a calcification in the LM missed by the automated scoring method (A), compared with the reference calcification in green (B). No stent was present.
Example of an 'outlier' in automated quantification (A) compared with the reference (B). In the LAD a severe calcification and black voids are visible. No stent was present.
The average of the reference and automated quantifications is plotted against the difference between the two methods. The plots reveal underestimated calcium scores by automated quantification and an increasing difference with a higher average score. The regression formulas for the absolute difference are multiplied by ±1.96×(π/2)^0.5 to obtain the 95% limits of agreement. For the Agatston score: Y = (−64.482 + 15.332·x^0.5)×1.96×(π/2)^0.5; for the CAC volume score: Y = (−74.202 + 16.530·x^0.5)×1.96×(π/2)^0.5; and for the number of calcifications: Y = (−1.743 + 3.073·x^0.5)×1.96×(π/2)^0.5.
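Given these coefficients, the score-dependent 95% limit can be evaluated directly. A minimal sketch using the Agatston-score coefficients from the caption; note that the fitted line is only meaningful within the range of average scores observed in the study (at very low scores the predicted limit goes negative).

```python
import math

def agatston_loa(avg_score, intercept=-64.482, slope=15.332):
    """95% limit of agreement at a given average Agatston score:
    (intercept + slope * sqrt(x)) * 1.96 * sqrt(pi / 2)."""
    predicted_abs_diff = intercept + slope * math.sqrt(avg_score)
    return predicted_abs_diff * 1.96 * math.sqrt(math.pi / 2)
```

The limits widen with the average score: agatston_loa(400) is roughly 2.7 times agatston_loa(100), reflecting the larger absolute error at higher calcium burdens.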
Reliability of Reference and Automated CAC Score
For Agatston risk categories, the linearly weighted kappa demonstrated very good reliability (κ = 0.85). For continuous values, despite the underestimation of CAC scores by automated quantification, the ICC was very good for the Agatston score (0.90), very good for calcium volume (0.88) and good for the number of calcifications (0.64).
Human Interrater Reliability
Human interrater reliability was calculated based on a subset of 199 consecutive participants. Kendall's W for Agatston risk categories among the four human raters was very good (0.88). The ICC among the four human observers was very good for the Agatston score (0.95), for calcium volume (0.96) and for the number of calcifications (0.89). The ICC between fully manual scoring and reference scoring was at least 0.96 for the Agatston score, 0.97 for calcium volume and 0.90 for the number of calcifications. Bland-Altman plots (Figure 4) with the limits of agreement compare the performance of the board-certified chest radiologist with the reference standard and with each observer.
This study demonstrates that the CAC score can be quantified on non-gated chest CT using automated software. The agreement and reliability of the fully automated scoring are good when compared with reference scores. Lung cancer screening, for which guidelines have been published, additionally enables identification of subjects at risk of CVD. Given the large number of potential participants, automated quantification may prove of great value.
The application of CAC quantification with CT as a screening test has been proposed, and it adds incremental information for the prediction of all-cause mortality and cardiovascular events. Moreover, lung cancer screening participants are at increased risk of a cardiovascular event, since aging and smoking are important risk factors for both conditions. Automated quantification of CAC would allow cardiovascular risk stratification without additional costs and without additional radiation exposure for the participants. To employ automatic quantification of CAC, high agreement and reliability of the algorithm are essential for longitudinal follow-up and to guide treatment.
This study demonstrates good agreement and very good reliability of the evaluated algorithm. Nevertheless, errors were present and the automatically obtained scores were systematically lower than those defined by the raters. However, a comparison with interscan agreement in low-dose, non-ECG-synchronized chest scans reveals that the errors of the automatic scoring are similar to those that would be obtained by manual expert scoring on another scan. Namely, the software incorrectly assigned a calcium score of zero in 8.2% (144/1749) of scans. For comparison, owing to interscan variation, 5.3% (31/584) of scans had a positive score on the first and a zero score on the second scan. Furthermore, in our study a shift of more than one Agatston risk category was found in less than 5% of subjects. The majority of these shifts were in the risk categories with an Agatston score of less than 100. Scores higher than 100 are related to an increased atherosclerotic burden, multi-vessel disease, coronary heart disease and overall cardiovascular events. Previous research showed that the main causes of discordance are a higher level of noise, motion artifacts and motion unsharpness congruent with cardiac motion on low-dose non-gated CT. In particular, visualization of the right coronary artery is known to be difficult because of motion artifacts. In the present study we found a high prevalence of CAC; we therefore had enough power to assess agreement and reliability, since variability in the CAC score is strongly linked with the total amount of CAC.
A recent meta-analysis determined the reliability between gated and non-gated CT and found a very good pooled Cohen's kappa (κ = 0.89); however, in the non-gated group the cardiovascular event rate was higher in subjects without CAC, showing that it is not possible to exclude CAC on non-gated scans. One previous study, in which Agatston scores were derived from non-gated chest CT scans, demonstrated good interscan reliability for Agatston risk categories (unweighted κ = 0.67) and very good interrater reliability (ICC = 0.97). The interrater reliability we observed in this study was only slightly lower, which may be caused by the difference in experience between the raters. In line with previous research evaluating automated CAC scoring using non-gated CT, we also observed an underestimation of the CAC score.
Evaluation of CVD risk in lung cancer screening studies could also be performed manually, in a semi-quantitative manner using an ordinal scale. Such an evaluation might relate well to CVD events. However, such scoring would require expert time. This study demonstrates that fully automatic quantitative CAC scoring is feasible in large-scale lung cancer screening trials without additional expert time.
Our study has several limitations. First, scans were obtained using low-dose non-gated chest CT, resulting in increased levels of noise and artifacts due to cardiac motion. However, this is current practice in lung cancer screening. Moreover, earlier studies demonstrated that coronary calcium scores determined with low-dose non-ECG-synchronized chest CT correlate well with scores obtained with dedicated ECG-gated cardiac CT, and that they are strong and independent predictors of cardiovascular events. Second, the reference standard for CAC was defined using a preliminary version of the automated software with subsequent manual correction. This made establishing the reference standard easier and quicker, and thus made the study feasible in a large set of scans. However, the readers might have been biased by the presented results; therefore, we investigated whether this induced errors in the reference segmentations. The ICC between fully manual scoring and the manually corrected reference results was very good (all >0.90), indicating little effect of the automatic presegmentation on the reference standard. Another limitation of our study was that manual scoring was performed partly by medical students. They, however, received intensive training for this study by a board-certified chest radiologist and, in addition, an independent reader inspected the results of manual scoring and excluded scans with gross segmentation errors and metal artifacts. In patients with metal coronary stents, calcium scoring would not result in risk reclassification. Also, in the remaining data set, the ICC between the four raters was very good. Finally, the method was evaluated with lung cancer screening scans acquired at a single site. Future work will aim to broaden the evaluation of the method to scans acquired in multiple centers and possibly to scans made in multiple lung cancer screening trials.
In summary, automated quantification of CAC is feasible in non-gated, non-contrast-enhanced chest CT, with good reliability and agreement when compared with reference scores. Nevertheless, CAC scores are lower when quantified automatically. The false-negative zero scores raise concern about the ability to accurately identify subjects with a zero or low calcium score. The application of automated quantification of CAC in a lung cancer screening population can widen the scope of screening and help identify participants at high risk for cardiovascular events.
Conceived and designed the experiments: RAPT PAdJ TL MO HJdK CPM MAV II. Performed the experiments: RAPT PAdJ II. Analyzed the data: RAPT PAdJ II. Contributed reagents/materials/analysis tools: CPM MAV II. Wrote the paper: RAPT PAdJ TL MO HJdK CPM MAV II. Designed the NELSON trial, obtained permission and gathered funds: MO HJdK.
- 1. Hansson GK (2005) Inflammation, atherosclerosis, and coronary artery disease. N Engl J Med 352: 1685–1695.
- 2. Unverdorben M, von Holt K, Winkelmann BR (2009) Smoking and atherosclerotic cardiovascular disease: part II: role of cigarette smoking in cardiovascular disease development. Biomark Med 3: 617–653.
- 3. Wexler L, Brundage B, Crouse J, Detrano R, Fuster V, et al. (1996) Coronary artery calcification: pathophysiology, epidemiology, imaging methods, and clinical implications. A statement for health professionals from the American Heart Association. Writing Group. Circulation 94: 1175–1192.
- 4. Yeboah J, McClelland RL, Polonsky TS, Burke GL, Sibley CT, et al. (2012) Comparison of novel risk markers for improvement in cardiovascular risk assessment in intermediate-risk individuals. JAMA 308: 788–795.
- 5. Nasir K, Rubin J, Blaha MJ, Shaw LJ, Blankstein R, et al. (2012) Interplay of coronary artery calcification and traditional risk factors for the prediction of all-cause mortality in asymptomatic individuals. Circ Cardiovasc Imaging 5: 467–473.
- 6. Ma S, Liu A, Carr J, Post W, Kronmal R (2010) Statistical modeling of Agatston score in multi-ethnic study of atherosclerosis (MESA). PLoS One 5: e12036.
- 7. McEvoy JW, Blaha MJ, Nasir K, Blumenthal RS, Jones SR (2012) Potential use of coronary artery calcium progression to guide the management of patients at risk for coronary artery disease events. Curr Treat Options Cardiovasc Med 14: 69–80.
- 8. van den Bergh KA, Essink-Bot ML, Borsboom GJ, Th Scholten E, Prokop M, et al. (2010) Short-term health-related quality of life consequences in a lung cancer CT screening trial (NELSON). Br J Cancer 102: 27–34.
- 9. National Lung Screening Trial Research Team, Aberle DR, Adams AM, Berg CD, Black WC, et al (2011) Reduced lung-cancer mortality with low-dose computed tomographic screening. N Engl J Med 365: 395–409.
- 10. Mets OM, Vliegenthart R, Gondrie MJ, Viergever MA, Oudkerk M, et al. (2013) Lung cancer screening CT-based prediction of cardiovascular events. JACC Cardiovasc Imaging 6: 899–907.
- 11. Xie X, Zhao Y, de Bock GH, de Jong PA, Mali WP, et al. (2013) Validation and Prognosis of Coronary Artery Calcium Scoring in Non-Triggered Thoracic Computed Tomography: Systematic Review and Meta-Analysis. Circ Cardiovasc Imaging.
- 12. Jacobs PC, Gondrie MJ, van der Graaf Y, de Koning HJ, Isgum I, et al. (2012) Coronary artery calcium can predict all-cause mortality and cardiovascular events on low-dose CT screening for lung cancer. AJR Am J Roentgenol 198: 505–511.
- 13. Wu MT, Yang P, Huang YL, Chen JS, Chuo CC, et al. (2008) Coronary arterial calcification on low-dose ungated MDCT for lung cancer screening: concordance study with dedicated cardiac CT. AJR Am J Roentgenol 190: 923–928.
- 14. Budoff MJ, Nasir K, Kinney GL, Hokanson JE, Barr RG, et al. (2011) Coronary artery and thoracic calcium on noncontrast thoracic CT scans: comparison of ungated and gated examinations in patients from the COPD Gene cohort. J Cardiovasc Comput Tomogr 5: 113–118.
- 15. Kim SM, Chung MJ, Lee KS, Choe YH, Yi CA, et al. (2008) Coronary calcium screening using low-dose lung cancer screening: effectiveness of MDCT with retrospective reconstruction. AJR Am J Roentgenol 190: 917–922.
- 16. Jacobs PC, Isgum I, Gondrie MJ, Mali WP, van Ginneken B, et al. (2010) Coronary artery calcification scoring in low-dose ungated CT screening for lung cancer: interscan agreement. AJR Am J Roentgenol 194: 1244–1249.
- 17. Isgum I, Prokop M, Niemeijer M, Viergever MA, van Ginneken B (2012) Automatic coronary calcium scoring in low-dose chest computed tomography. IEEE Trans Med Imaging 31: 2322–2334.
- 18. Rutten A, Isgum I, Prokop M (2011) Calcium scoring with prospectively ECG-triggered CT: using overlapping datasets generated with MPR decreases inter-scan variability. Eur J Radiol 80: 83–88.
- 19. Isgum I, Prokop M, Jacobs PC, Gondrie MJ, Mali WPTM, et al. (2010) Automatic coronary calcium scoring in low-dose non-ECG-synchronized thoracic CT scans. Proc SPIE Medical Imaging vol. 7624: 76240M.
- 20. Rumberger JA, Brundage BH, Rader DJ, Kondos G (1999) Electron beam computed tomographic coronary calcium scanning: a review and guidelines for use in asymptomatic persons. Mayo Clin Proc 74: 243–252.
- 21. Buckens CF, de Jong PA, Mol C, Bakker E, Stallman HP, et al. (2013) Intra and interobserver reliability and agreement of semiquantitative vertebral fracture assessment on chest computed tomography. PLoS One 8: e71204.
- 22. Kottner J, Audige L, Brorson S, Donner A, Gajewski BJ, et al. (2011) Guidelines for Reporting Reliability and Agreement Studies (GRRAS) were proposed. J Clin Epidemiol 64: 96–106.
- 23. Hokanson JE, MacKenzie T, Kinney G, Snell-Bergeon JK, Dabelea D, et al. (2004) Evaluating changes in coronary artery calcium: an analytic method that accounts for interscan variability. AJR Am J Roentgenol 182: 1327–1332.
- 24. Sevrukov AB, Bland JM, Kondos GT (2005) Serial electron beam CT measurements of coronary artery calcium: Has your patient’s calcium score actually changed? AJR Am J Roentgenol 185: 1546–1553.
- 25. Bland JM (2005) The half-normal distribution method for measurement error: two case studies. Unpublished talk available on http://www-users.york.ac.uk/~mb55/talks/halfnor.pdf.
- 26. Landis JR, Koch GG (1977) The measurement of observer agreement for categorical data. Biometrics 33: 159–174.
- 27. Wender R, Fontham ET, Barrera E Jr, Colditz GA, Church TR, et al. (2013) American Cancer Society lung cancer screening guidelines. CA Cancer J Clin 63: 107–117.
- 28. Polonsky TS, McClelland RL, Jorgensen NW, Bild DE, Burke GL, et al. (2010) Coronary artery calcium score and risk classification for coronary heart disease prediction. JAMA 303: 1610–1616.
- 29. Shaw LJ, Raggi P, Schisterman E, Berman DS, Callister TQ (2003) Prognostic value of cardiac risk factors and coronary artery calcium screening for all-cause mortality. Radiology 228: 826–833.
- 30. Mets OM, Vliegenthart R, Gondrie MJ, Viergever MA, Oudkerk M, et al. (2013) Lung Cancer Screening CT-Based Prediction of Cardiovascular Events. JACC Cardiovasc Imaging.
- 31. Callister TQ, Cooil B, Raya SP, Lippolis NJ, Russo DJ, et al. (1998) Coronary artery disease: improved reproducibility of calcium scoring with an electron-beam CT volumetric method. Radiology 208: 807–814.
- 32. Schmermund A, Voigtlander T (2011) Predictive ability of coronary artery calcium and CRP. Lancet 378: 641–643.
- 33. Liu YC, Sun Z, Tsay PK, Chan T, Hsieh IC, et al. (2013) Significance of coronary calcification for prediction of coronary artery disease and cardiac events based on 64-slice coronary computed tomography angiography. Biomed Res Int 2013: 472347.
- 34. Bielak LF, Kaufmann RB, Moll PP, McCollough CH, Schwartz RS, et al. (1994) Small lesions in the heart identified at electron beam CT: calcification or noise? Radiology 192: 631–636.
- 35. Achenbach S, Ropers D, Holle J, Muschiol G, Daniel WG, et al. (2000) In-plane coronary arterial motion velocity: measurement with electron-beam CT. Radiology 216: 457–463.
- 36. Shemesh J, Henschke CI, Farooqi A, Yip R, Yankelevitz DF, et al. (2006) Frequency of coronary artery calcification on low-dose computed tomography screening for lung cancer. Clin Imaging 30: 181–185.
- 37. Mets OM, de Jong PA, Prokop M (2012) Computed tomographic screening for lung cancer: an opportunity to evaluate other diseases. JAMA 308: 1433–1434.