Interpretations of Examinations Outside of Radiologists' Fellowship Training: Assessment of Discrepancy Rates Among 5.9 Million Examinations From a National Teleradiology Databank
Please see the Editorial Comment by Jonathan L. Mezrich discussing this article.
Abstract
BACKGROUND. In community settings, radiologists commonly function as multispecialty radiologists, interpreting examinations outside of their area of fellowship training.
OBJECTIVE. The purpose of this article was to compare discrepancy rates for preliminary interpretations of acute community-setting examinations that are concordant versus discordant with interpreting radiologists' area of fellowship training.
METHODS. This retrospective study used the databank of a U.S. teleradiology company that provides preliminary interpretations for client community hospitals. The analysis included 5,883,980 acute examinations performed from 2012 to 2016 that were preliminarily interpreted by 269 teleradiologists with fellowship training in neuroradiology, abdominal radiology, or musculoskeletal radiology. When providing final interpretations, client on-site radiologists voluntarily submitted quality assurance (QA) requests if preliminary and final interpretations were discrepant; the teleradiology company's QA committee categorized discrepancies as major (n = 8444) or minor (n = 17,208). Associations among examination type (common vs advanced), relationship between examination subspecialty and the teleradiologist's fellowship (concordant vs discordant), and major and minor discrepancies were assessed using three-way conditional analyses with generalized estimating equations.
RESULTS. For examinations with a concordant subspecialty, the major discrepancy rate was lower for common than for advanced examinations (0.13% vs 0.26%; relative risk [RR], 0.50; 95% CI, 0.42–0.60; p < .001). For examinations with a discordant subspecialty, the major discrepancy rate was lower for common than for advanced examinations (0.14% vs 0.18%; RR, 0.81; 95% CI, 0.72–0.90; p < .001). For common examinations, the major discrepancy rate was not different between examinations with concordant versus discordant subspecialty (0.13% vs 0.14%; RR, 0.90; 95% CI, 0.81–1.01; p = .07). For advanced examinations, the major discrepancy rate was higher for examinations with concordant versus discordant subspecialty (0.26% vs 0.18%; RR, 1.45; 95% CI, 1.18–1.79; p < .001). The minor discrepancy rate was higher among advanced examinations for those with concordant versus discordant subspecialty (0.34% vs 0.29%; RR, 1.17; 95% CI, 1.00–1.36; p = .04), but not different for other comparisons (p > .05).
CONCLUSION. Major and minor discrepancy rates were not higher for acute community-setting examinations outside of interpreting radiologists' fellowship training. Discrepancy rates increased for advanced examinations.
CLINICAL IMPACT. The findings support multispecialty radiologist practice in acute community settings. Efforts to match examination and interpreting radiologist subspecialty may not reduce diagnostic discrepancies.
HIGHLIGHTS
Key Finding
• Among 5,883,980 preliminary teleradiology interpretations of acute community-setting examinations, common examinations' major and minor discrepancy rates were not different when concordant versus discordant with radiologists' fellowship training (p > .05); advanced examinations' major and minor discrepancy rates were higher when concordant with radiologists' fellowship (RR, 1.45 and 1.17, respectively; p < .05).
Importance
• Radiology practice leaders should carefully consider the merits of current efforts to match interpretation of subspecialty examinations with radiologists' fellowship training in the acute community setting.
Fellowship training has been reported by radiology practices to be one of the most desirable attributes in newly hired radiologists [1]. Indeed, more than 90% of radiologists enter the job market having completed fellowship training [2]. Fellowships provide opportunities for radiology trainees to advance their interpretive expertise in a subspecialty area beyond that which is typically attained during residency [3]. Nevertheless, although many radiologists identify their practice by their area of fellowship training [4], most radiologists practice as multispecialty radiologists, interpreting examinations that are outside of their fellowship area [5, 6]. In fact, the operation of many nonacademic radiology practices is contingent on the practice's radiologists maintaining a broad scope of practice and interpreting examinations across a range of subspecialties [7, 8].
Prior studies have compared primary imaging interpretations with secondary interpretations performed by subspecialists and have compared interpretations by different subspecialized radiologists in terms of clinical outcomes among emergency department patients [9–15]. However, we are unaware of studies that have compared, in a controlled fashion, interpretations of examinations that are concordant versus discordant with the interpreting radiologist's fellowship training. Further, those earlier studies involved subspecialized academic radiologists and tertiary care patient cohorts, even though multispecialty radiologist practice is most commonly encountered in nonacademic community settings.
The aim of this study was to compare discrepancy rates for preliminary interpretations of imaging examinations that are concordant versus discordant with the interpreting radiologist's fellowship training among acute examinations performed in community settings.
Methods
This retrospective study was approved by the institutional review board of the University of Michigan and compliant with HIPAA. The requirement for informed consent was waived.
Studies
This study used the databank of a large U.S. teleradiology company, Virtual Radiologic. The company contracts with client hospitals in primarily community (nonacademic) settings as well as some academic hospitals. The teleradiology company offers services 24 hours a day, 7 days a week, although it arranges specific hours of coverage with each client facility. During the arranged hours, radiologists working for the teleradiology company provide interpretations for primarily acute examinations performed at the client hospitals. The teleradiology company generally aims to provide a preliminary report within 30 minutes of examination completion. Typically, the client's on-site radiologist provides a final interpretation at a later time. Examinations interpreted by the teleradiology company are mostly ordered in emergency departments, though occasionally they are ordered in inpatient or outpatient settings. The examinations evaluated in this study included radiography, CT, MRI, and ultrasound. All examinations were assigned a subspecialty designation according to the examination's Current Procedural Terminology (CPT) code [16]. The most common subspecialties among the examinations in the retrieved data set were neuroimaging, abdominal imaging, and musculoskeletal imaging. The client's on-site radiologist who provides the final interpretation may voluntarily enter a quality assurance (QA) request if they disagree with the teleradiologist's preliminary interpretation. For examinations of multiple body regions within a single imaging encounter (e.g., a CT examination of the chest, abdomen, and pelvis), QA requests are provided at the patient level and bundled across the individual examinations comprising the encounter.
The teleradiology company's databank was accessed to identify all examinations performed from January 1, 2012, to December 31, 2016, for which both a preliminary report was made by the teleradiologist and a final report was made by the client's on-site radiologist. Examinations across multiple body regions performed within a single session were excluded because the bundling of QA requests precluded QA assessment at the examination level. Finally, studies interpreted by radiologists who did not report a fellowship or who reported a fellowship other than neuroradiology, abdominal radiology, or musculoskeletal radiology (corresponding with the three most common subspecialties for examinations within the databank) were excluded to form this study's final sample.
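As an illustration only, this exclusion cascade can be expressed as a short sequence of filters. The sketch below uses R (the language the authors report using for data preprocessing) with hypothetical table and column names, since the databank's actual schema is not described in the article.

```r
# Illustrative sketch of the study's exclusion cascade. All names
# (exams, has_preliminary_report, encounter_id, body_region, fellowship)
# are hypothetical stand-ins, not the databank's actual schema.
library(dplyr)

eligible <- exams %>%
  # 14,678,419 examinations with both preliminary and final reports
  filter(has_preliminary_report & has_final_report) %>%
  # exclude multiregion encounters, whose QA requests were bundled
  group_by(encounter_id) %>%
  filter(n_distinct(body_region) == 1) %>%
  ungroup() %>%
  # keep only the three fellowships of interest
  filter(fellowship %in% c("neuroradiology", "abdominal", "musculoskeletal"))
# In the study, this yielded the final sample of 5,883,980 examinations.
```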
Common Versus Advanced Imaging Examinations
Examinations were labeled in the teleradiology company's information systems as common or advanced according to the type of examination and/or the imaging protocol used to acquire the examination. For neuroimaging, common examinations included CT examinations performed without IV contrast material, whereas advanced examinations included CTA and MRI examinations. For abdominal imaging, common examinations included radiographs, CT examinations performed without IV contrast material, and contrast-enhanced CT examinations obtained with standard protocols, whereas advanced examinations included CTA examinations and examinations obtained with complex protocols. For musculoskeletal imaging, common examinations included radiographs and CT examinations, and advanced examinations included MRI examinations. Although some of the advanced examinations (e.g., abdominal CT using a liver or renal mass protocol) may not typically be deemed urgent, most had been designated as acute by the client hospital.
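These labeling rules amount to a lookup on subspecialty, modality, and protocol. A minimal sketch follows, with hypothetical input values; in the study, the labels came from the company's information systems rather than from code of this form.

```r
# Sketch of the common-vs-advanced rules described above; inputs are
# hypothetical stand-ins for fields in the company's information systems.
classify_exam <- function(subspecialty, modality, protocol = NA) {
  switch(subspecialty,
    neuro = if (modality == "CT" && identical(protocol, "noncontrast")) "common"
            else if (modality %in% c("CTA", "MRI")) "advanced"
            else NA_character_,
    abdominal = if (modality == "radiography" ||
                    (modality == "CT" &&
                     protocol %in% c("noncontrast", "standard_contrast"))) "common"
                else if (modality == "CTA" || identical(protocol, "complex")) "advanced"
                else NA_character_,
    musculoskeletal = if (modality %in% c("radiography", "CT")) "common"
                      else if (modality == "MRI") "advanced"
                      else NA_character_,
    NA_character_)
}

classify_exam("neuro", "CT", "noncontrast")  # "common"
classify_exam("musculoskeletal", "MRI")      # "advanced"
```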
Quality Assurance and Determination of Discrepancies
The QA requests were evaluated by the teleradiology company's QA committee, composed of fellowship-trained radiologists, which rendered a final determination as to whether a discrepancy had occurred. When the committee determined that a discrepancy had occurred, the severity of the discrepancy was graded as major or minor on the basis of several factors, including impact on patient safety and clinical outcomes. The present study's analysis was based on the QA committee's determinations of whether a discrepancy had occurred and of whether discrepancies were major or minor; QA requests by the clients' radiologists that the QA committee deemed not to represent discrepancies were not considered in the analysis.
Statistical Analysis
For each imaging examination, the radiologist's area of fellowship training was compared with the examination's designated subspecialty in the teleradiology company's information system, which was based on the examination's CPT code. Examinations were denoted as concordant when the teleradiologist's fellowship training (neuroradiology, abdominal radiology, or musculoskeletal radiology) was the same as the examination subspecialty and as discordant when the fellowship area differed from the examination subspecialty.
A conditional analysis was used to assess the relationship among three factors: presence of discrepancy (classified as major or minor), relationship of examination and radiologist subspecialty (concordant or discordant), and examination classification (common or advanced). The conditional analysis was used to reduce spurious associations that could result from a marginal analysis. The analysis was first conditioned on whether examinations were concordant or discordant with respect to the radiologist's fellowship to determine discrepancy rates for common and advanced examinations. The analysis was then conditioned on whether the examination was common or advanced to determine discrepancy rates for examinations that were concordant or discordant with respect to the radiologist's fellowship training.
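To make the conditioning concrete, the stratified rates can be tabulated directly before any modeling. A minimal sketch, again with hypothetical column names (and omitting, for brevity, the study's convention of computing each rate only among studies with that discrepancy type or no discrepancy):

```r
# Stratified (conditional) major discrepancy rates, mirroring the
# two-stage conditioning described above. Column names are hypothetical.
library(dplyr)

eligible <- eligible %>%
  mutate(concordant = exam_subspecialty == fellowship)  # concordance label

# Stage 1: within concordant and discordant strata, compare common vs advanced
eligible %>%
  group_by(concordant, exam_class) %>%
  summarise(major_rate = mean(major_discrepancy), .groups = "drop")

# Stage 2: within common and advanced strata, compare concordant vs discordant
eligible %>%
  group_by(exam_class, concordant) %>%
  summarise(major_rate = mean(major_discrepancy), .groups = "drop")
```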
A generalized estimating equation (GEE) was used to account for intraradiologist correlation and interradiologist variability, given that each radiologist in the database had typically interpreted thousands of examinations [17], as further described in the Supplemental Methods (available in the electronic supplement to this article at https://doi.org/10.2214/AJR.21.26656). Because the outcomes of interest, namely major and minor discrepancies, had probabilities close to zero, the odds ratio was interpreted as an approximation of the relative risk (RR), which was used for results reporting. For significant differences, the percentage difference in the likelihood of discrepancies between groups was computed as 100 × |1 − RR|. The statistical significance level was set to p = .05. Data preprocessing and other descriptive statistics were performed in R (version 3.6.2, R Foundation for Statistical Computing). The GEE model output was generated using SAS software (version 9.4, SAS Institute).
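The authors fit the GEE in SAS; purely as an illustration of the approach, the sketch below uses the geepack package in R with assumed variable names. It fits a logit-link binomial GEE with exchangeable correlation within radiologist, then applies the rare-outcome odds-ratio-to-RR approximation and the percentage-difference formula described above.

```r
# Illustrative GEE sketch (geepack); the study's models were fit in SAS,
# so this shows the general approach only. Variable names are hypothetical.
library(geepack)

# geeglm expects each cluster's rows (here, each radiologist's
# examinations) to be contiguous in the data frame.
d <- eligible[order(eligible$radiologist_id), ]

fit <- geeglm(major_discrepancy ~ concordant,  # binary outcome vs concordance
              id     = radiologist_id,         # cluster on radiologist
              family = binomial(link = "logit"),
              corstr = "exchangeable",
              data   = d)

or <- exp(coef(fit)[["concordantTRUE"]])  # odds ratio for concordant exams
rr <- or                        # with outcome rates near 0.1-0.3%, OR ~ RR
pct_diff <- 100 * abs(1 - rr)   # percentage difference in likelihood
```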
Results
Study Sample
During the study period, both a preliminary interpretation by the teleradiologist and a final interpretation by the client's on-site radiologist were available for 14,678,419 imaging examinations (Fig. 1). Of these, 1,445,196 examinations were excluded because of bundled QA assessments, and 7,349,243 examinations were excluded because the preliminary interpretation was not rendered by a radiologist with fellowship training in neuroradiology, abdominal radiology, or musculoskeletal radiology. These exclusions resulted in a final study sample of 5,883,980 examinations (Fig. 1). Mean patient age at the time of the examinations was 50 years (range, 0–110 years). The examinations were performed in 5,883,350 unique patients (2,538,216 men and 3,290,836 women; for 54,298 patients, sex was not identified in the teleradiology database).

Table 1 summarizes characteristics of included examinations. The examinations were interpreted by 269 teleradiologists with fellowship training in neuroradiology (n = 82; 30.5%), abdominal radiology (n = 132; 49.1%), and musculoskeletal radiology (n = 55; 20.4%). Of the 5,883,980 examinations, 2,352,749 (40.0%) were concordant and 3,531,231 (60.0%) were discordant with respect to the interpreting radiologist's fellowship. In addition, 5,340,108 (90.8%) were classified as common and 543,872 (9.2%) were advanced. According to the determinations of the teleradiology company's QA committee, a total of 25,652 discrepancies occurred, yielding an overall discrepancy rate of 0.43%, including 8444 major discrepancies (rate of 0.14%) and 17,208 minor discrepancies (rate of 0.29%).
TABLE 1: Characteristics of Included Examinations

| Characteristic | Value |
| --- | --- |
| No. of interpreting teleradiologists | 269 |
| Examination subspecialty | |
|   Neuroradiology | 1,776,043 (30.2) |
|   Abdominal imaging | 3,045,198 (51.8) |
|   Musculoskeletal imaging | 1,062,739 (18.0) |
| Interpreting radiologist's fellowship with respect to examination subspecialty | |
|   Concordant | 2,352,749 (40.0) |
|   Discordant | 3,531,231 (60.0) |
| Examination type | |
|   Common | 5,340,108 (90.8) |
|   Advanced | 543,872 (9.2) |
| No. (rate) of discrepancies | |
|   Major | 8444 (0.14) |
|   Minor | 17,208 (0.29) |
|   Overall | 25,652 (0.43) |

Note—Unless otherwise indicated, values are expressed as number of examinations, with percentage in parentheses.
Comparison of Common Versus Advanced Examinations
Table 2 summarizes major and minor discrepancy rates among subsets of examinations. Among examinations for which the subspecialty was concordant with the radiologist's fellowship training, the frequency of major discrepancies was significantly lower (p < .001) for common examinations (0.13%) than for advanced examinations (0.26%), with an RR of 0.50 (95% CI, 0.42–0.60; 49.8% decreased likelihood of a major discrepancy for common versus advanced examinations among examinations with a concordant subspecialty). The frequency of minor discrepancies was not significantly different (p = .11) between common examinations (0.30%) and advanced examinations (0.34%), with an RR of 0.89 (95% CI, 0.77–1.03).
TABLE 2: Major and Minor Discrepancy Rates Among Subsets of Examinations

| Subset^a | No. | Major Discrepancies, n (%)^b | Major p | Major RR ± SE (95% CI) | Minor Discrepancies, n (%)^c | Minor p | Minor RR ± SE (95% CI) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Examination subspecialty vs radiologist fellowship training | | | | | | | |
| Concordant | 2,352,749 | 3272 (0.14) | < .001 | 0.50 ± 0.88 (0.42–0.60) | 7088 (0.30) | .11 | 0.89 ± 0.08 (0.77–1.03) |
|   Common | 2,167,173 | 2796 (0.13) | | | 6465 (0.30) | | |
|   Advanced | 185,576 | 476 (0.26) | | | 623 (0.34) | | |
| Discordant | 3,531,231 | 5172 (0.15) | < .001 | 0.81 ± 0.06 (0.72–0.90) | 10,120 (0.29) | .95 | 1.00 ± 0.05 (0.91–1.09) |
|   Common | 3,172,935 | 4538 (0.14) | | | 9091 (0.29) | | |
|   Advanced | 358,296 | 634 (0.18) | | | 1029 (0.29) | | |
| Examination classification | | | | | | | |
| Common | 5,340,108 | 7334 (0.14) | .07 | 0.90 ± 0.06 (0.81–1.01) | 15,556 (0.29) | .48 | 1.04 ± 0.06 (0.93–1.17) |
|   Concordant | 2,167,173 | 2796 (0.13) | | | 6465 (0.30) | | |
|   Discordant | 3,172,935 | 4538 (0.14) | | | 9091 (0.29) | | |
| Advanced | 543,872 | 1110 (0.20) | < .001 | 1.45 ± 0.11 (1.18–1.79) | 1652 (0.30) | .04 | 1.17 ± 0.08 (1.00–1.36) |
|   Concordant | 185,576 | 476 (0.26) | | | 623 (0.34) | | |
|   Discordant | 358,296 | 634 (0.18) | | | 1029 (0.29) | | |

Note—RR = relative risk, SE = standard error. Each p value and RR compares the two indented rows within its subset.
^a Common and advanced refer to classification of examinations within the databank; concordant and discordant refer to the examination's subspecialty versus the radiologist's area of fellowship training.
^b Percentages computed only among studies with major or no discrepancy.
^c Percentages computed only among studies with minor or no discrepancy.
Among examinations for which the subspecialty was discordant with the radiologist's fellowship training, the frequency of major discrepancies was significantly lower (p < .001) for common examinations (0.14%) than for advanced examinations (0.18%), with an RR of 0.81 (95% CI, 0.72–0.90; 19.2% decreased likelihood of a major discrepancy for common versus advanced examinations among studies with a discordant subspecialty). The frequency of minor discrepancies was not significantly different (p = .95) between common examinations (0.29%) and advanced examinations (0.29%), with an RR of 1.00 (95% CI, 0.91–1.09).
Comparison of Examinations With Concordant Versus Discordant Subspecialty Relative to Radiologist Fellowship Training
Among common examinations, the frequency of major discrepancies was not significantly different (p = .07) between examinations with a subspecialty concordant (0.13%) versus discordant (0.14%) with the radiologist's fellowship, with an RR of 0.90 (95% CI, 0.81–1.01). The frequency of minor discrepancies was not significantly different (p = .48) between examinations with a concordant (0.30%) versus discordant (0.29%) subspecialty, with an RR of 1.04 (95% CI, 0.93–1.17).
Among advanced examinations, the frequency of major discrepancies was significantly higher (p < .001) among examinations with a subspecialty concordant (0.26%) versus discordant (0.18%) with the radiologist's fellowship, with an RR of 1.45 (95% CI, 1.18–1.79; 45.0% increased likelihood of a major discrepancy for examinations with a concordant versus discordant subspecialty among advanced examinations). The frequency of minor discrepancies was also significantly higher (p = .04) among studies with a concordant (0.34%) versus discordant (0.29%) subspecialty, with an RR of 1.17 (95% CI, 1.00–1.36; 17.0% increased likelihood of a minor discrepancy for examinations with a concordant versus discordant subspecialty among advanced examinations).
Discussion
In this study, we assessed discrepancy rates for preliminary teleradiology interpretations of nearly 6 million acute examinations performed in community settings. Whether interpreting examinations with a subspecialty concordant or discordant with their fellowship training, radiologists had lower discrepancy rates for common than for advanced examinations. Among common examinations, radiologists had no significant difference in major or minor discrepancy rates for concordant versus discordant studies. This observation is not surprising. Acute common examinations are encountered frequently beginning in residency training, and accurate interpretation of such examinations is generally expected as a core skill of practicing radiologists [18, 19]. For the radiologists in the present study, the high-volume nature of the teleradiology practice may also help to maintain proficiency in a broad skill set, given the association between experience and competence [7].
Although imaging utilization volumes have increased in the acute setting, radiology practices continue to face pressures to provide subspecialized interpretations [20, 21]. The American College of Radiology has advocated for the value of multispecialist radiologists in addressing such challenges and in providing solutions for practices' operations and workflows [4, 7, 8]. Given that the overwhelming majority of radiologists complete a fellowship, such multispecialty practice typically entails fellowship-trained radiologists interpreting examinations outside of their fellowship subspecialty. Our findings support multispecialty radiologist practice by showing maintained performance when radiologists interpret acute common examinations outside of their fellowship training.
Among advanced examinations, radiologists had significantly higher major and minor discrepancy rates for examinations concordant with their fellowship training. This finding was unexpected, and its cause is unclear. Perhaps the assignment of advanced cases was not random, whereby the most complex examinations, whether through assignment or through voluntary case selection, were interpreted by radiologists with a concordant fellowship. Alternatively, perhaps radiologists had lower confidence in discordant examinations and exhibited greater caution in their interpretation. Further investigation of this finding is warranted, including stratifying the results by specific subspecialty and considering additional variables (e.g., the time of night or day at which the preliminary interpretation was provided, imaging volumes during given shifts) [22].
We note a number of study limitations. First, we did not account for radiologist experience, whether in terms of years in practice or numbers of examinations interpreted. We chose to assess the impact of fellowship training alone in response to prior work suggesting that fellowship training alone is of limited use in categorizing radiologists and that the distribution of radiologists' work relative value units across subspecialties may better reflect practice patterns [23, 24]. Second, the examination's subspecialty designation may have been incorrectly matched to the teleradiologist's fellowship training for examinations with potentially ambiguous or overlapping CPT codes (e.g., pelvic MRI performed to evaluate for prostate cancer incorrectly designated as musculoskeletal imaging); nonetheless, we expect that such classification errors are uncommon. Third, not all examinations were systematically reviewed for misinterpretation, and our analysis relied on voluntary reporting of discrepancies by the clients' on-site radiologists. It is possible that discrepancies were underreported. Indeed, the discrepancy rates in this study are substantially lower than previously reported imaging discrepancy rates [9, 25, 26]. Fourth, QA requests submitted by the client's on-site radiologist may have reflected a difference in opinion without any true discrepancy, a larger amount of clinical information available to the on-site radiologist [27], or a misinterpretation by the client's on-site radiologist [28]. However, the teleradiology company's QA committee evaluated all QA requests and is expected to have arbitrated such cases. Fifth, we excluded imaging encounters encompassing multiple body regions because QA discrepancy reports for such encounters could not be assigned to the individual examinations. Multiregion examinations, such as CT of the chest, abdomen, and pelvis, are obtained in trauma evaluation and are commonly encountered in the emergency department. Such examinations may be expected to incur high discrepancy rates given the large number of images that must be reviewed within short turnaround times. Sixth, the fellowship training of the client's on-site radiologist was unknown and could not be compared with that of the teleradiologist. Seventh, we did not control for various factors possibly contributing to discrepancies, including case volume, time of night or day, hours into a shift, radiologist years in practice, and case complexity [21]. Of note, the distinction between common and advanced examinations does not directly correspond with case complexity. Eighth, emergency radiology is a distinct radiology subspecialty focused on acute imaging and for which fellowship training is available [19]; none of the radiologists included in this study had completed an emergency radiology fellowship. Finally, our primary study outcome was the discrepancy rate between preliminary and final interpretations. We did not directly assess diagnostic accuracy or the impact of the interpretations on downstream patient outcomes.
In conclusion, when interpreting common acute imaging examinations performed in community settings, radiologists exhibited no significant difference in discrepancy rates between studies within versus outside of their area of fellowship training. This finding of maintained performance supports multispecialty radiologist practice. The observation of higher discrepancy rates for advanced examinations concordant with radiologists' fellowship training was unexpected and may reflect factors not captured in the analysis; further study is warranted. Radiology practice leaders may find these results useful for operational decision-making regarding coverage arrangements for acute examinations. In particular, radiology practice leaders should carefully consider the merits of efforts to match interpretation of subspecialty examinations with radiologists' fellowship training in acute community settings.
References
1. Smith SM, Demissie S, Raden M, Yarmish G. A survey of academic radiology department chairs on hiring recent graduates as new attending physicians. Acad Radiol 2015; 22:1471–1476
2. Baker SR, Luk L, Clarkin K. The trouble with fellowships. J Am Coll Radiol 2010; 7:446–451
3. Herr KD, Risk B, Hanna TN. Diagnostic radiology resident perspectives on fellowship training and career interest in emergency radiology. Emerg Radiol 2018; 25:653–658
4. Fleishon HB, Pyatt RS Jr. Multispecialty radiology: bridging the gap. J Am Coll Radiol 2021; 18:1223–1224
5. Rosenkrantz AB, Wang W, Hughes DR, Duszak R Jr. Generalist versus subspecialist characteristics of the U.S. radiologist workforce. Radiology 2018; 286:929–937
6. Rosenkrantz AB, Fleishon HB, Friedberg EB, Duszak R Jr. Practice characteristics of the United States general radiologist workforce: most generalists work as multispecialists. Acad Radiol 2020; 27:715–719
7. Friedberg E, Chong ST, Pyatt RS Jr, et al. Unifying the silos of subspecialized radiology: the essential role of the general radiologist. J Am Coll Radiol 2018; 15:1158–1163
8. Liebscher L, Sherry C, Breslau J, et al. The general radiologist in the 21st century. J Am Coll Radiol 2012; 9:554–559
9. Wu MZ, McInnes MD, Macdonald DB, Kielar AZ, Duigenan S. CT in adults: systematic review and meta-analysis of interpretation discrepancy rates. Radiology 2014; 270:717–735
10. Chalian M, Del Grande F, Thakkar RS, Jalali SF, Chhabra A, Carrino JA. Second-opinion subspecialty consultations in musculoskeletal radiology. AJR 2016; 206:1217–1221
11. Rozenberg A, Kenneally BE, Abraham JA, et al. Clinical impact of second-opinion musculoskeletal subspecialty interpretations during a multidisciplinary orthopedic oncology conference. J Am Coll Radiol 2017; 14:931–936
12. Zan E, Yousem DM, Carone M, Lewin JS. Second-opinion consultations in neuroradiology. Radiology 2010; 255:135–141
13. Carter BW, Erasmus JJ, Truong MT, et al. Quality and value of subspecialty reinterpretation of thoracic CT scans of patients referred to a tertiary cancer center. J Am Coll Radiol 2017; 14:1109–1118
14. Davenport MS, Khalatbari S, Keshavarzi N, et al. Differences in outcomes associated with individual radiologists for emergency department patients with headache imaged with CT: a retrospective cohort study of 25,596 patients. AJR 2020; 214:1122–1130
15. Davenport MS, Khalatbari S, Ellis JH, Cohan RH, Chong ST, Kocher KE. Novel quality indicators for radiologists interpreting abdominopelvic CT images: risk-adjusted outcomes among emergency department patients with right lower quadrant pain. AJR 2018; 210:1292–1300
16. American Medical Association. Current procedural terminology, professional edition 2015. American Medical Association, 2015
17. Zeger SL, Liang KY. Longitudinal data analysis for discrete and continuous outcomes. Biometrics 1986; 42:121–130
18. Ruma J, Klein KA, Chong S, et al. Cross-sectional examination interpretation discrepancies between on-call diagnostic radiology residents and subspecialty faculty radiologists: analysis by imaging modality and subspecialty. J Am Coll Radiol 2011; 8:409–414
19. Mellnick V, Raptis C, McWilliams S, Picus D, Wahl R. On-call radiology resident discrepancies: categorization by patient location and severity. J Am Coll Radiol 2016; 13:1233–1238
20. Levin DC, Rao VM, Parker L, Frangos AJ. Continued growth in emergency department imaging is bucking the overall trends. J Am Coll Radiol 2014; 11:1044–1047
21. Chong ST, Robinson JD, Davis MA, et al. Emergency radiology: current challenges and preparing for continued growth. J Am Coll Radiol 2019; 16:1447–1455
22. Hanna TN, Lamoureux C, Krupinski EA, Weber S, Johnson JO. Effect of shift, schedule, and volume on interpretive accuracy: a retrospective analysis of 2.9 million radiologic examinations. Radiology 2018; 287:205–212
23. Rosenkrantz AB, Hughes DR, Duszak R Jr. Increasing subspecialization of the national radiologist workforce. J Am Coll Radiol 2020; 17:812–818
24. Rosenkrantz AB, Wang W, Hughes DR, Ginocchio LA, Rosman DA, Duszak R Jr. Academic radiologist subspecialty identification using a novel claims-based classification system. AJR 2017; 208:1249–1255
25. Rosenkrantz AB, Duszak R Jr, Babb JS, Glover M, Kang SK. Discrepancy rates and clinical impact of imaging secondary interpretations: a systematic review and meta-analysis. J Am Coll Radiol 2018; 15:1222–1231
26. Chung R, Rosenkrantz AB, Shanbhogue KP. Expert radiologist review at a hepatobiliary multidisciplinary tumor board: impact on patient management. Abdom Radiol (NY) 2020; 45:3800–3808
27. Mullins ME, Lev MH, Schellingerhout D, Koroshetz WJ, Gonzalez RG. Influence of availability of clinical history on detection of early stroke using unenhanced CT and diffusion-weighted MR imaging. AJR 2002; 179:223–228
28. Abujudeh HH, Boland GW, Kaewlai R, et al. Abdominal and pelvic computed tomography (CT) interpretation: discrepancy rates among experienced radiologists. Eur Radiol 2010; 20:1952–1957
STUDY GUIDE
Interpretations of Examinations Outside of Radiologists' Fellowship Training: Assessment of Discrepancy Rates Among 5.9 Million Examinations From a National Teleradiology Databank
Joseph J. Budovec, MD1, Alan Mautz, MD2
1Medical College of Wisconsin, Milwaukee, WI.
2Northern Light AR Gould Hospital, Presque Isle, ME.
*Please note that the authors of the Study Guide are distinct from those of the companion article.
Introduction
1. What have previous studies comparing primary imaging interpretation versus secondary interpretation shown?
2. Is there a difference between the interpretations of subspecialized radiologists in terms of clinical outcomes among emergency department patients?
3. What is the intended aim of this study? Is the question relevant and timely? Is an appropriate rationale provided for performing the study?
Methods
4. What study design was used? How does this study differ from previous studies? What studies were included in the study sample? What studies were excluded?
5. How were common versus advanced imaging examinations determined? Does the study address how the teleradiology databank that was queried had both preliminary and final interpretations available for review?
6. How were the data analyzed? Did the study provide appropriate rationale for the different types of analysis used?
7. What are the limitations of this study? Are these limitations adequately discussed?
Results
8. Did the study accomplish its intended aim?
9. What was the overall discrepancy rate? The major discrepancy rate? The minor discrepancy rate?
Discussion
10. How does this study differ from previously published studies evaluating discrepancy rates?
11. Why was there a significantly higher rate of major and minor discrepancies for examinations that were concordant with fellowship training?
12. Does your practice or institution use a teleradiology service? If so, how do the results of this study compare with your practice or institution's experience?
13. If you were to design a similar study, what changes would you make to the study design?
14. How do you interpret the clinical impact statement provided at the end of this study?
Background Reading
1. Davenport MS, Khalatbari S, Ellis JH, Cohan RH, Chong ST, Kocher KE. Novel quality indicators for radiologists interpreting abdominopelvic CT images: risk-adjusted outcomes among emergency department patients with right lower quadrant pain. AJR 2018; 210:1292–1300
2. Davenport MS, Khalatbari S, Keshavarzi N, et al. Differences in outcomes associated with individual radiologists for emergency department patients with headache imaged with CT: a retrospective cohort study of 25,596 patients. AJR 2020; 214:1122–1130
3. Liebscher L, Sherry C, Breslau J, et al. The general radiologist in the 21st century. J Am Coll Radiol 2012; 9:554–559
4. Rosenkrantz AB, Fleishon HB, Friedberg EB, Duszak R Jr. Practice characteristics of the United States general radiologist workforce: most generalists work as multispecialists. Acad Radiol 2020; 27:715–719
Information & Authors
Information
Published In
Copyright
© American Roentgen Ray Society.
History
Submitted: July 31, 2021
Revision requested: August 19, 2021
Revision received: September 17, 2021
Accepted: October 25, 2021
First published: November 3, 2021