September 2014, Volume 203, Number 3

Letters

Methodologic Concerns in Reliability of Noncalcified Coronary Artery Plaque Burden Quantification

Affiliation: Safety Promotion and Injury Prevention Research Center, Shahid Beheshti University of Medical Sciences, Tehran, Iran

Citation: American Journal of Roentgenology 2014; 203:W343. doi:10.2214/AJR.14.12649

I read with interest the article by Oberoi and colleagues [1] in the January 2014 issue of the AJR. The authors aimed to evaluate the reproducibility of noncalcified coronary artery plaque burden quantification from coronary CT angiography across different commercial analysis platforms.

As the authors pointed out, the log-transformed volume data were compared using the Pearson correlation coefficient and Bland-Altman analysis. They reported that differences in plaque volume on intraplatform repeat measurements were not statistically significant (p = 0.923) [1]. However, the Pearson correlation coefficient is not a measure of reliability [2–4]. Why did the authors not use one of the well-established reliability indices, such as the intraclass correlation coefficient (ICC) or weighted kappa [2–4]? For assessing reliability or agreement, the ICC should be used for quantitative variables and weighted kappa for qualitative variables (not simple kappa, which has its own limitations) [2–4].
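A small numeric sketch makes this distinction concrete. Using synthetic values (not data from the study), two platforms whose measurements are related by a systematic bias show a perfect Pearson correlation even though agreement, here quantified by Lin's concordance correlation coefficient [4], is poor:

```python
# Synthetic illustration: a perfect linear relation with systematic bias
# yields Pearson r = 1.0, yet an agreement index (Lin's CCC) stays low.

def mean(v):
    return sum(v) / len(v)

def pearson(x, y):
    """Pearson correlation coefficient: sensitive only to linear association."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient (population moments),
    which penalizes both location and scale shifts between raters."""
    n = len(x)
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx2 = sum((a - mx) ** 2 for a in x) / n
    sy2 = sum((b - my) ** 2 for b in y) / n
    return 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)

# Hypothetical paired measurements: platform B systematically doubles
# platform A's values and adds an offset.
platform_a = [1.0, 2.0, 3.0, 4.0, 5.0]
platform_b = [2 * v + 1 for v in platform_a]

print(round(pearson(platform_a, platform_b), 3))  # 1.0
print(round(lin_ccc(platform_a, platform_b), 3))  # 0.308
```

The two platforms rank every case identically, which is all Pearson correlation rewards; the concordance coefficient exposes the systematic disagreement that a correlation-based "reliability" analysis would miss.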

Oberoi et al. [1] also reported that the Pearson correlation coefficient was found to be 0.677 (p < 0.001; 95% CI, 0.608–0.735) between software platforms 1 and 2, 0.672 (p < 0.001; 0.603–0.732) between software platforms 1 and 3, and 0.550 (p < 0.001; 0.463–0.627) between software platforms 2 and 3.
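The Bland-Altman analysis the authors also performed is the agreement-oriented complement to these correlations. As a minimal sketch, using made-up paired plaque volumes rather than the study's data, the 95% limits of agreement are the mean difference ± 1.96 standard deviations of the differences:

```python
# Sketch of Bland-Altman 95% limits of agreement on hypothetical
# paired plaque volumes (mm^3); values are illustrative only.

def bland_altman_limits(x, y):
    """Return (lower, upper) limits of agreement: mean diff +/- 1.96 SD."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    mean_diff = sum(diffs) / n
    sd = (sum((d - mean_diff) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return mean_diff - 1.96 * sd, mean_diff + 1.96 * sd

vol_platform_1 = [110.0, 95.0, 130.0, 80.0, 120.0]  # hypothetical
vol_platform_2 = [118.0, 90.0, 141.0, 85.0, 128.0]  # hypothetical

lower, upper = bland_altman_limits(vol_platform_1, vol_platform_2)
print(round(lower, 1), round(upper, 1))
```

Wide limits relative to the clinically tolerable measurement error, not a small p value, are what would signal poor interplatform agreement here.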

It is crucial to realize that clinical importance is a completely different issue from statistical significance, and clinical importance should take priority. Therefore, even if the Pearson correlation coefficient were the correct test for assessing reliability, clinical judgment should rest on the strength of the relation, that is, its clinical importance, rather than on statistical significance [2–4]. Moreover, statistics cannot provide a simple substitute for clinical judgment [2–4].

Oberoi et al. [1] concluded that currently available noncalcified plaque quantification software provides good intraplatform reproducibility but poor interplatform reproducibility. Such a conclusion is a misleading message: clinicians should guard against misinterpretation arising from an inappropriate choice of statistical tests for assessing reliability; otherwise, we will face mismanagement of patients in routine clinical care [2–4].

WEB—This is a web exclusive article.

References
1. Oberoi S, Meinel FG, Schoepf UJ, et al. Reproducibility of noncalcified coronary artery plaque burden quantification from coronary CT angiography across different image analysis platforms. AJR 2014; 202:W43–W49
2. Rothman KJ, Lash TL, Greenland S. Modern epidemiology, 3rd ed. Baltimore, MD: Lippincott Williams & Wilkins, 2010
3. Sabour S, Dastjerdi EV. Reliability of assessment of nasal flow rate for nostril selection during nasotracheal intubation: common mistakes in reliability analysis (letter). J Clin Anesth 2013; 25:162
4. Lin LI. A concordance correlation coefficient to evaluate reproducibility. Biometrics 1989; 45:255–268