Review
Medical Physics and Informatics
August 23, 2013

Cognitive and System Factors Contributing to Diagnostic Errors in Radiology

Abstract

OBJECTIVE. In this article, we describe some of the cognitive and system-based sources of detection and interpretation errors in diagnostic radiology and discuss potential approaches to help reduce misdiagnoses.
CONCLUSION. Every radiologist worries about missing a diagnosis or giving a false-positive reading. The retrospective error rate among radiologic examinations is approximately 30%, and real-time errors in daily radiology practice average 3–5%. Nearly 75% of all medical malpractice claims against radiologists are related to diagnostic errors. As medical reimbursement trends downward, radiologists attempt to compensate by undertaking additional responsibilities to increase productivity. The increased workload, rising quality expectations, cognitive biases, and adverse system factors all contribute to diagnostic errors in radiology. Diagnostic errors remain underrecognized and underappreciated in radiology practice because of the inability to obtain reliable national estimates of their impact, the difficulty of evaluating the effectiveness of potential interventions, and the poor response to systemwide solutions. Most of our clinical work is executed through type 1 processes to minimize cost, anxiety, and delay; however, type 1 processes are also vulnerable to error. Instead of trying to eliminate completely the cognitive shortcuts that serve us well most of the time, becoming aware of common biases and using metacognitive strategies to mitigate their effects has the potential to produce a sustainable reduction in diagnostic errors.
Diagnostic errors are estimated to account for 40,000–80,000 deaths annually in U.S. hospitals alone [1]. These figures only partially account for patients whose ambulatory misdiagnoses lead to death, and they do not include nonlethal disability, which may be just as common as death [2]. Tort claims for negligent diagnostic errors result in billions of dollars in payouts annually [2]. Nearly 75% of all medical malpractice claims against radiologists are related to diagnostic errors [3]. Every radiologist worries about missing a diagnosis or erring too heavily on the side of caution and giving a false-positive reading [4].

Definition, Prevalence, and Impact of Diagnostic Errors

Diagnostic error has been defined as a diagnosis that is missed, wrong, or delayed, as detected by some subsequent definitive test or finding [5]. Here we use the terms “diagnostic error” and “misdiagnosis” interchangeably. In radiology, the most common problems leading to medical malpractice lawsuits are failures to diagnose [6], that is, the oversight of abnormalities or the misinterpretation of radiologic images [3, 7].
Errors in diagnostic radiology have long been recognized, beginning with the pioneering revelation of Garland [8] in 1949. Multiple studies have identified suboptimal radiology processes as contributors to the overwhelming number of medical errors and escalating economic costs, which are estimated at more than $38 billion annually [9, 10]. Overall, approximately 30% of abnormal radiographic studies are missed. Approximately 4% of radiologic interpretations rendered by radiologists in daily practice contain errors [11]. Quekel et al. [12] found that 19% of lung cancers presenting as a nodule with a median diameter of 16 mm on chest radiographs were missed, and even higher miss rates, between 25% and 90%, have been reported in the literature [13–15].
Mammography has been the standard of care for the detection of breast carcinoma. However, a diagnosis of breast cancer is missed in 4–30% of screening mammography studies according to multiple randomized controlled trials [16, 17]. Given that 38,294,403 mammography studies were performed annually in the United States as of 2013, it is evident why radiologic misdiagnosis is an important public health issue [18]. Moreover, screening mammography also results in overdiagnosis in 1–54% of cases; overdiagnosis refers to the detection of cancers that would never have become symptomatic during a woman's lifetime had no screening taken place [19].
With radiologic diagnostic testing, as in laboratory medicine [20], diagnostic errors may result from failures related to test ordering before a radiologist is ever involved or in the ordering clinician's use of the results after the radiologist's work is complete. Diagnostic errors attributed to radiologists have been grouped as related to failures in detection, interpretation, communication of results, or suggesting an appropriate follow-up test [6].
Despite the high prevalence and serious consequences of diagnostic errors, until recently they have received relatively little attention. For example, a text search of the 1999 Institute of Medicine (IOM) report To Err Is Human, which focused on the importance of medical error, found the term “diagnostic errors” mentioned only twice, compared with 70 times for “medication errors” [21].

The Cause of Error in Radiology: System-Related Causal Factors and Cognitive-Perceptual Causal Factors

Diagnostic error in internal medicine is commonly multifactorial in origin, typically arising from a mix of cognitive and system factors [22]. In radiology, cognitive errors (e.g., a missed lung nodule when interpreting a chest radiograph) are usually linked to problems of visual perception (scanning, recognition, interpretation). System errors (e.g., failure to communicate the presence of a nodule to the ordering physician) are usually linked to problems with the health system or the context of care delivery. As with general medical diagnosis, errors often result from a combination of, or interaction between, the two (e.g., overnight preliminary reports by resident radiologists that are revised in the final report but not fully communicated to caregivers) [23]. As described later in this article, certain system factors (e.g., lighting conditions, shift length, required reading pace) have a profound effect on the likelihood of cognitive diagnostic errors in radiology.

Cognitive Errors in Radiology

The dual-process theory of reasoning has emerged as the dominant theoretic model for cognitive processing during human decision making in real-world settings [24]. This model proposes two general classes of cognitive operations and suggests causal explanations of where and how diagnostic errors occur in clinical reasoning [25]. Early in diagnosis, radiologists must assess the features of an imaging finding for pattern recognition. If the condition is recognized, so-called “type 1” (automatic) processes will rapidly and effortlessly make the diagnosis and nothing further may be required. If it is not, then linear, analytical, deliberate, and effortful “type 2” processes are engaged instead. Dynamic oscillation may occur between the two systems throughout the decision-making process. Certain types of errors are prone to occur when type 1 processes are used because mental shortcuts (heuristics) used in this type of cognitive processing are particularly susceptible to human biases [25], which will be further described later. Errors occurring during type 2 processes are believed to be less frequent in everyday practice but may be no less consequential [25]. These cognitive processes are also impacted by internal (e.g., fatigue, stress) and external (e.g., lighting) factors.
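To make the branching between the two systems concrete, the short Python fragment below is a purely didactic sketch of the dual-process flow described above; the pattern table, confidence threshold, and function names are our own illustrative inventions, not part of the cited model.

# Didactic sketch of dual-process reasoning; all names and numbers are
# hypothetical illustrations, not a validated cognitive model.
PATTERNS = {
    # finding -> (diagnosis suggested by pattern recognition, confidence)
    "spiculated upper-lobe mass": ("primary lung carcinoma", 0.95),
    "subtle ground-glass opacity": ("indeterminate", 0.40),
}

def type1_recognize(finding):
    """Fast, automatic pattern recognition; heuristic and bias-prone."""
    return PATTERNS.get(finding, ("unknown", 0.0))

def type2_deliberate(finding):
    """Slow, effortful analytic reasoning, e.g., working a differential."""
    return f"systematic differential diagnosis for '{finding}'"

def diagnose(finding, threshold=0.9):
    diagnosis, confidence = type1_recognize(finding)
    if confidence >= threshold:       # recognized: nothing further required
        return diagnosis
    return type2_deliberate(finding)  # otherwise engage type 2 processing

print(diagnose("spiculated upper-lobe mass"))   # resolved by type 1
print(diagnose("subtle ground-glass opacity"))  # escalated to type 2
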
Graber et al. [22] showed that cognitive factors contribute to diagnostic error in 74% of cases. Cognitive errors include faulty perception, failed heuristics, and biases. We rely on these shortcuts in reasoning to minimize delay, cost, and anxiety in our clinical decision making. Over the past three decades, the cognitive revolution in psychology has given rise to an extensive literature on cognitive bias in decision making. Cognitive bias is best defined as a replicable pattern of perceptual distortion, inaccurate judgment, and illogical interpretation [26]. Cognitive biases are the result of psychologic distortions in the human mind that persistently lead to the same pattern of poor judgment, often triggered by a particular situation. Some authors suggest that metacognition (thinking about thinking) may enable us to avoid being trapped by these biases through deliberate type 2 cognitive forcing strategies [27, 28]. Rather than eliminating cognitive shortcuts that serve us well most of the time, we might be better served by recognizing the diagnostic dangers that arise from specific shortcuts and overriding them when appropriate.
Dozens of cognitive biases have been described [29]. Some of these biases likely play only a small role in radiologic diagnostic error (e.g., certain emotional biases associated with direct patient interaction) [30]. On the basis of a review of the recent literature, we identified five cognitive biases particularly likely to lead to diagnostic errors in radiology (anchoring, framing, availability, search satisficing, and premature closure) and potential metacognitive strategies to reduce them [27, 31].

Anchoring Bias

Anchoring is relying on an initial impression and failing to adjust this impression in light of subsequent information [32]. For example, in a patient with multiple sclerosis who develops a new enhancing brain lesion seen on MRI, the most appropriate diagnosis might be another demyelinating plaque. A repeat image a week later showing additional enhancing lesions might be dismissed as more demyelinating plaques during a multiple sclerosis exacerbation without a closer inspection that might identify features suggestive of CNS lymphoma. Anchoring is particularly dangerous when combined with confirmation bias, when the radiologist seeks confirming evidence to support the hypothesis rather than contradictory evidence to refute it [29].
Corrective strategy: Avoid early guesses; seek to disprove the initial diagnosis rather than merely confirm it (i.e., seek disconfirming rather than confirmatory information); when findings are worsening, reconsider the diagnosis or obtain a second opinion.

Framing Bias or Effect

Framing is being strongly influenced by subtle ways in which the problem is worded or framed [32]. For example, a radiologist detects multiple foci of abnormal activity in bilateral ribs on a bone scan in a frail elderly patient. If a truncated clinical indication states, “history of weight loss, chest pain,” this finding might be erroneously interpreted as strongly suggesting metastatic lesions. If the full indication, “history of weight loss, chest pain after recent fall down stairs” were provided, a diagnosis of multiple rib fractures would be made. Because radiologists rely on abridged clinical details, framing may be a major contributor to diagnostic error in radiology.
Corrective strategy: Perform a masked read before reviewing the clinical indication; seek additional clinical information from treating physicians when image interpretation is tightly coupled with the clinical context or when abnormal findings are likely to alter management.

Availability Bias

Availability bias is the tendency to consider diagnoses more likely if they come readily to mind. For example, a radiologist who has recently missed a lung cancer on a chest radiograph is more likely to overcall suspected lung nodules on subsequent chest radiographs despite their low likelihood. Radiologists should be aware of the tendency to overestimate the frequency of previously missed, unusual, or otherwise memorable cases.
Corrective strategy: Obtain and use objective information to estimate the true base rate of a diagnosis; benchmark diagnostic performance against peers (e.g., for screening mammography, radiologists should enroll in the American College of Radiology National Mammography Database to compare their recall rate, cancer detection rate, and positive predictive value with established local, regional, and national benchmarks) [33, 34].
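As a concrete illustration of such benchmarking, the sketch below computes the three audit metrics named above from one year of hypothetical counts; the counts are invented, and the metric definitions used here (recall rate as recalls per screen, cancer detection rate per 1000 screens, and positive predictive value as cancers per recall) are stated as working assumptions.

# Hypothetical annual audit counts for one radiologist (invented numbers).
screens = 5000          # screening mammograms interpreted
recalls = 450           # examinations recalled for additional workup
cancers_detected = 20   # screen-detected cancers confirmed by pathology

recall_rate = recalls / screens
cancer_detection_rate = 1000 * cancers_detected / screens  # per 1000 screens
ppv_recall = cancers_detected / recalls  # PPV of an abnormal interpretation

print(f"Recall rate:           {recall_rate:.1%}")
print(f"Cancer detection rate: {cancer_detection_rate:.1f} per 1000 screens")
print(f"PPV of recall:         {ppv_recall:.1%}")
# Comparing each value against registry benchmarks can reveal drift, e.g.,
# a rising recall rate after a memorable miss (availability bias).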

Search Satisficing (Satisfaction of Search)

Search satisficing is the tendency to stop searching once one likely abnormality has been found, thereby missing additional findings. For example, when a brain mass is identified on CT in a patient with headache, a radiologist might miss ethmoid or sphenoid sinus opacification (especially if the radiologist does not know that the brain tumor diagnosis is old or that the patient has a fever).
Corrective strategy: Use a checklist or algorithmic approach to ensure a systematic search, particularly for “do-not-miss” diagnoses [35]; always commence a secondary search after the first search has been completed; be mindful of findings that commonly occur in combination (e.g., multiple foreign bodies, multiple fractures or contusions, or infarction with vascular occlusion).

Premature Closure

Premature closure is the tendency to accept a diagnosis before full verification. For example, in a patient with myasthenia gravis and a homogeneous mediastinal mass seen on chest CT, a diagnosis of thymoma might be made, even though thymic hyperplasia, lymphoma, and germ cell tumors remain on the differential diagnosis. A general limitation of imaging diagnoses is that pathologic diagnoses are inferred and not confirmed until tissue pathology is obtained.
Corrective strategy: Always generate a differential diagnosis (use checklists for common lesion differentials); never convert a working diagnosis to a final diagnosis before full (pathologic) verification.

System-Related Error in Radiology

In internal medicine, system-related factors contribute to diagnostic error in 65% of cases [22]. The vast majority of system-related error in these cases relates to problems with policies and procedures, inefficient processes, teamwork, communication, and technical and equipment failures [22]. Factors such as equipment failures and the methods of communicating dangerous radiographic findings to treating clinicians influence the likelihood of radiographic diagnostic error from a patient perspective. However, our focus is primarily on those factors that affect the likelihood of diagnostic error by the radiology provider. System issues, such as lighting conditions, shift length and timing, task repetitiveness, pace of reading images, and environmental distractions, all may impact the psychophysical process of visual diagnosis. Many of these issues ultimately exert their effects through visual and mental fatigue for radiologists [36].
Fatigue is a subcategory of system-related error in radiology because health care providers are constantly required to deliver quality patient care under the stress of disrupted circadian rhythms. Although many other system issues coexist and contribute to misdiagnosis, we choose fatigue as the primary example for this discussion because it has been well studied. As medical reimbursement continues to trend downward, radiologists attempt to compensate by undertaking additional responsibilities and increasing organizational productivity. The increased workload and rising quality expectations, poor communication, cognitive biases, and imperfect information systems serve as major sources of fatigue, often leading to diagnostic errors [37]. Despite continuous technologic refinement and development, the current medical imaging system has developed as a relatively inflexible, one-size-fits-all model, which can impede workflow and productivity as well as cause end-user fatigue [36]. As imaging volume and complexity continue to grow, the impact of visual fatigue on diagnostic accuracy is becoming increasingly important [38].

Visual Fatigue

Krupinski et al. [39] studied the direct impact of fatigue using the detection of fractures on skeletal radiographs as the test task. They found a significant reduction in diagnostic accuracy after a day of clinical work (p < 0.05), with an associated increase in myopia. As expected, subjective ratings of physical discomfort, eye strain, and lack of motivation also increased by the end of the workday. Interestingly, residents suffered greater effects of fatigue on all measures than attending radiologists [39]. The effects of visual fatigue seen with static radiographs also appear to apply to cross-sectional imaging examinations, which are displayed dynamically [40]. After a work shift, radiologists show increased variability in their ocular convergence capabilities, indicating increased oculomotor strain and visual fatigue. In that study, detection accuracy for pulmonary nodules on dynamically displayed CT images was reduced in the resident group only, with no significant effect on attending physicians; the same difference between residents and attending physicians was seen in the earlier fracture detection study. Accommodative relaxation (shifting the focal point from near to far or vice versa) is effective in reducing visual fatigue, and a radiologist can even become more resistant to visual fatigue by undergoing automated accommodative training [36, 41, 42].

Decision (Mental) Fatigue

Radiologists also experience decision fatigue as a consequence of continuous and prolonged decision making [43]. Decision fatigue is thought to increase later in the day or after a work shift, when cognitive processes respond to mental strain by taking shortcuts, leading to poor judgment and diagnostic errors [37]. Those working prolonged shifts, working off-hours, or performing high-volume or high-complexity tasks are at greatest risk [43]. One of the most vulnerable populations is radiology residents, who provide preliminary interpretations independently during off-hours [43].

Potential Solutions

The ultimate goal in reducing diagnostic errors is first to describe, analyze, and research cognitive biases in the context of medical decision making and then to find effective ways of ridding ourselves and our peers of their effects. Rather than attempting to eliminate completely the cognitive shortcuts that often serve us well, becoming aware of the common biases is more likely to lead to sustained improvement in patient care. Moreover, there is no single simple solution to diagnostic errors. Improving diagnostic accuracy will require a multidimensional approach that includes renewed emphasis on traditional teaching of clinical skills, exploration of new methods for diagnostic education (e.g., simulation or gaming), improvements in health information technology systems, and investment in the basic science of clinical diagnosis [44].

Feedback System: Radiology-Pathology Correlation

Radiologic-pathologic correlation of many clinical diagnoses has been described in the literature, but its adoption as a quality measure to assess radiologists' diagnostic accuracy is a relatively new concept. For diagnoses with pathologic correlation, we can track data on positive predictive value, disease detection rates, and abnormal interpretation rates to determine the interpretive accuracy of individual radiologists [45]. A Cornell Medical Center study showed that radiologic-pathologic correlation in suspected acute appendicitis is a feasible and effective measure of the interpretive accuracy of radiologists [46]. This study provided documentation of departmental diagnostic accuracy. Further research with larger, multiinstitutional studies may enable the development of national benchmarks for radiologic-pathologic concordance in acute appendicitis and other conditions. Each radiologist would need to interpret a sufficient number of cases to support statistically meaningful conclusions about individual accuracy.
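To illustrate the sample-size point, the sketch below computes a radiologic-pathologic concordance rate with a 95% Wilson score confidence interval; the case counts are invented, and the interval simply shows how conclusions about an individual radiologist firm up only as case volume grows.

import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Same 92% concordance at two hypothetical case volumes: the interval
# is far narrower at 500 cases than at 100.
for concordant, total in [(92, 100), (460, 500)]:
    lo, hi = wilson_ci(concordant, total)
    print(f"{concordant}/{total} concordant: 92.0% (95% CI {lo:.1%}-{hi:.1%})")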

Peer Review

Multiple regulatory organizations require the ongoing practice-based evaluation of physician performance. In radiology, the single most important measure of performance is diagnostic accuracy of interpretation because errors can directly result in patient harm.
Peer review is the continuous, systematic, and critical reflection on and evaluation of physician performance using structured procedures, and it is an essential tool for assessing radiologists' performance and improving diagnostic accuracy. Setting up a successful peer review program requires a committed team and a positive culture [47], along with careful attention to the consumption of radiologists' valuable time and the disruption of workflow.

Education

The problem of misdiagnosis cannot be solved without education, but it also cannot be solved with education alone. Five evidence-based educational recommendations should be considered: First, teach from cases that are numerous, varied, and unknown; second, focus learners on real-world diagnostic decisions; third, force integration of analytic and intuitive thinking; fourth, make meta-awareness part of the curriculum; and fifth, take a multidimensional approach to evaluation (Newman-Toker DE, presented at 2012 Grand Rounds of the Johns Hopkins Armstrong Institute). Training programs for medical students, residents, and fellows should include structured practice in diagnostic reasoning with model patients and simulations that include opportunities for self-reflection on reasoning processes and formative feedback [1]. Trainees should be taught not to miss certain key diagnoses, and board certification organizations need to emphasize key elements of diagnostic accuracy as part of robust evaluation methods. These key competencies include the knowledge to make correct diagnoses, the ability to use electronic resources effectively to find information, awareness of common cognitive biases and the metacognitive strategies to mitigate them, mature clinical judgment, and the ability to engage in efforts to eliminate cognitive bias [21].

Empower Information Technology Tools to Improve Training

A critical step in reducing diagnostic errors is defining radiology quality metrics and developing the information technology tools to quantify and track them. For example, the IOM cited a lack of adequate resident supervision and excessive fatigue as significant contributors to diagnostic errors, which contributed to the implementation of the Accreditation Council for Graduate Medical Education restriction of resident work hours to 80 per week [48]. To evaluate trainee performance while on call, the University of Pennsylvania radiology department developed a software application (Orion) to facilitate the identification and monitoring of major discrepancies in preliminary reports issued on call [49]. The study included 19,200 on-call studies interpreted by residents and 13,953 studies interpreted by fellows. Standard macros were used to classify these reports as “agreement,” “minor discrepancy,” or “major discrepancy” on the basis of the potential to impact patient outcome or management. This software enables the residency director to use the major discrepancy rate to identify outliers and knowledge gaps in specific subspecialty areas within the training program [50]. The program can also be used to evaluate the rate of diagnostic errors by length of shift and volume of studies [51]. Through a powerful information technology tool such as this, we can better understand the contributory factors in misdiagnosis and design solutions to improve physician training and reduce errors.
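As a sketch of how such monitoring can work, the fragment below tallies per-trainee major discrepancy rates from classified preliminary reports and flags outliers. Orion's actual data model and thresholds are not detailed here, so the data structure, sample records, and 2% cutoff are our own hypothetical choices.

from collections import Counter

# Hypothetical (trainee, classification) pairs, one per preliminary report,
# using the three classification macros described above.
reports = [
    ("resident_a", "agreement"), ("resident_a", "major discrepancy"),
    ("resident_a", "agreement"), ("resident_b", "agreement"),
    ("resident_b", "minor discrepancy"), ("resident_b", "agreement"),
]

totals = Counter(trainee for trainee, _ in reports)
majors = Counter(t for t, c in reports if c == "major discrepancy")

THRESHOLD = 0.02  # assumed cutoff: flag major discrepancy rates above 2%
for trainee, n in totals.items():
    rate = majors[trainee] / n  # Counter returns 0 for trainees with none
    flag = "  <-- review" if rate > THRESHOLD else ""
    print(f"{trainee}: {majors[trainee]}/{n} major ({rate:.1%}){flag}")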

Structured Reporting Systems

Structured reporting has gained attention in the radiology community as a means of improving communication between referring physicians and radiologists [52]. The written report is the most tangible product of a radiologist's work, and structured reporting is intended to improve the organization, content, readability, and usefulness of the radiology report as well as to advance the efficiency and effectiveness of the reporting process [52]. Despite the importance of the radiology report, it has historically been created with free-style conventional dictation, leading to nonstandardized, error-prone, vague, incomplete, or untimely delivery of findings with significant interobserver variability. Both referring clinicians and radiologists have found that structured reports have better content and greater clarity than conventional reports for body CT [53], although structured reporting did not significantly improve report accuracy or completeness in one cohort study [54]. According to a survey study, more than 80% of clinicians prefer to receive standardized reports that consist of templates with separate headings for each organ system [55]. However, most radiology residency programs in the United States do not provide residents with more than 1 hour of reporting instruction per year [56]. According to 92% of clinicians and 95% of radiologists, structured reporting should be an obligatory part of residency training [55]. This is a promising area for prospective studies to determine whether structured reporting can improve diagnostic accuracy, particularly given fears of “copy and paste” errors [57].
Structured reporting also serves the important role of a checklist (i.e., a cognitive job aid), a metacognitive tool that can help circumvent some cognitive biases and prompt reflection on the cognitive shortcuts that often lead to diagnostic errors. Mindfully using checklists encourages the user to decrease reliance on memory; step back to examine the thinking process (metacognition); develop strategies to avoid predictable biases (cognitive forcing); and recognize altered emotional states caused by fatigue, sleep deprivation, or other stressful conditions [58]. Diagnostic checklists have been shown to be effective in reducing errors in other fields of medicine, such as emergency medicine and anesthesiology [58–62]. Using structured reporting as a diagnostic checklist can help users consider common and particularly serious (“do-not-miss”) diagnoses in a systematic manner [35].
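A structured template enforced at sign-off is one simple way to operationalize this checklist role. The sketch below is a hypothetical illustration only: the section headings, the do-not-miss list, and the sign-off rule are invented examples, not a published standard.

# Hypothetical chest CT template: every section must be dictated and every
# do-not-miss diagnosis explicitly addressed before the report is signed.
TEMPLATE_CHEST_CT = {
    "Lungs and airways": None,
    "Pleura": None,
    "Heart and pericardium": None,
    "Mediastinum and hila": None,
    "Chest wall and bones": None,
    "Upper abdomen": None,
}
DO_NOT_MISS = ["pulmonary embolism", "pneumothorax", "aortic dissection"]

def ready_to_sign(report, addressed):
    """Return True only when no section or do-not-miss item is left blank."""
    empty_sections = [s for s, text in report.items() if not text]
    unaddressed = [d for d in DO_NOT_MISS if d not in addressed]
    if empty_sections or unaddressed:
        print("Blocked at sign-off:", empty_sections + unaddressed)
        return False
    return True

report = dict(TEMPLATE_CHEST_CT)
report["Lungs and airways"] = "No suspicious nodule or consolidation."
ready_to_sign(report, addressed={"pneumothorax"})  # blocked: items remain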

Computer-Aided Detection

In this age of digital information, new clinical decision support tools empower physicians in many areas, such as constructing a differential diagnosis and ordering appropriate diagnostic testing. Within radiology, clinical decision support primarily takes the form of computer-aided detection, which has gained clinical acceptance for assisting imaging diagnosis. Using mammography as an example, studies have shown that computer-aided detection can improve the sensitivity of a single reader, with an incremental cancer detection rate ranging between 1% and 19% [63]. However, computer-aided detection also substantially decreases specificity, causing unnecessary further testing in approximately 6–35% of women [63]. Evidence indicates that computer-aided detection does not perform as well as double human reading in breast screening mammography, leaving room for refinement of computer-aided detection algorithms [63]. Computer-aided detection has also been used in CT for the detection of pulmonary nodules. As in mammography, computer-aided detection substantially increases the sensitivity of lung nodule detection with a concomitant decrease in specificity [64]. The role of computer-aided detection in diagnostic imaging is still emerging and necessitates well-designed prospective studies.
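The magnitude of this trade-off is easy to work through arithmetically. In the sketch below, the prevalence, sensitivity, and specificity values are invented but chosen to fall within the ranges cited above; the point is that even a small specificity loss generates far more false-positives than extra cancers found, because nearly all screened women are cancer free.

# Back-of-envelope CAD trade-off per 1000 screened women (assumed numbers).
n_women = 1000
prevalence = 0.005                      # ~5 cancers per 1000 screens
sens_single, spec_single = 0.80, 0.90   # single reader without CAD
sens_cad, spec_cad = 0.84, 0.84         # single reader with CAD

cancers = n_women * prevalence
extra_cancers_found = cancers * (sens_cad - sens_single)
extra_false_positives = n_women * (1 - prevalence) * (spec_single - spec_cad)

print(f"Extra cancers detected per 1000 screens: {extra_cancers_found:.1f}")
print(f"Extra false positives per 1000 screens:  {extra_false_positives:.0f}")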

Design Workload to Align With Productivity Benchmarks

There is a wealth of literature on the negative impact of excessive workload, long work hours, and fatigue on patient safety and medical errors [65–67]. The clinical productivity of radiologists is most commonly measured with the resource-based relative value scale, with a relative value unit (RVU) assigned to each radiology examination [68]. By design, the RVU scale does not account for important administrative, leadership, or academic efforts, nor does it assess the quality of services or the professionalism of the radiologist [69]. For these reasons, academic and private radiology practices have different benchmarks of productivity. Lu et al. [70] reported a mean clinical workload of 9671 annual examinations or 7136 RVUs per full-time academic radiologist, reflecting increases of 15% and 22%, respectively, from their previous survey in 2003 [71].
In comparison, a full-time private practice radiologist interprets on average 12,669 examinations or 7429 RVUs per year [72]. Armed with these published benchmarks of radiologist productivity in academic and private practice environments, we have a more realistic perspective of what constitutes an excessive workload, which can negatively affect radiologists' performance and diagnostic accuracy. Solutions that limit workload will be difficult to implement because of the associated loss in productivity and profitability, but alternative, less costly strategies can help improve diagnostic performance. Examples include instituting double reads, limiting the length of work shifts, establishing structured breaks, and switching between modalities during the workday. It is commonly believed that looking at a distant object at least twice an hour during computer use is sufficient to prevent visual fatigue [73]. In addition, proper lighting, better workstation ergonomics, and eyeglass correction have also been suggested as effective solutions [36].
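For a rough sense of scale, the sketch below converts these published annual benchmarks into per-day figures and flags an individual workload against them; the 230 working days and the 20% tolerance are illustrative assumptions, not established cutoffs.

# Published annual benchmarks (examinations, RVUs) per full-time radiologist.
BENCHMARKS = {"academic": (9671, 7136), "private": (12669, 7429)}
WORKING_DAYS = 230  # assumed clinical days per year

for setting, (exams, rvus) in BENCHMARKS.items():
    print(f"{setting:>8}: {exams / WORKING_DAYS:.1f} exams/day, "
          f"{rvus / WORKING_DAYS:.1f} RVUs/day")

def workload_excessive(annual_rvus, setting="academic", tolerance=1.2):
    """Flag workloads more than 20% above the benchmark (assumed cutoff)."""
    return annual_rvus > tolerance * BENCHMARKS[setting][1]

print(workload_excessive(9000))  # True: ~26% above the academic benchmark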

Conclusion

Diagnostic errors are underrecognized and underappreciated in radiology practice because of the inability to obtain reliable national estimates of their impact, the difficulty of evaluating the effectiveness of potential interventions, and the poor response to systemwide solutions. Most clinical work is executed through type 1 processes to minimize cost, anxiety, and delay; however, type 1 processes are also vulnerable to error. Instead of trying to eliminate completely the cognitive shortcuts that serve us well most of the time, becoming aware of common biases and using metacognitive strategies to mitigate their effects has the potential to produce a sustainable reduction in diagnostic errors.
For diagnostic errors to receive the resources and attention they deserve in the field of patient safety, multiple approaches are required. First, we need methodology to accurately measure diagnostic errors and thereby evaluate the effectiveness of potential interventions. Second, we need to encourage research into the basic science of diagnostic errors to better understand why we make mistakes and how we can prevent them. Third, we need to maximize and refine available information technology tools, such as computer-aided detection. Finally, training programs should include instruction in diagnostic reasoning, and board certification organizations need to emphasize key elements of diagnostic accuracy in the licensing process. In summary, health information technology, improved education, and increasing acknowledgment of diagnostic errors hold promise for error reduction.

References

1.
Newman-Toker DE, Pronovost PJ. Diagnostic errors: the next frontier for patient safety. JAMA 2009; 301:1060–1062
2.
Saber-Tehrani AS, Lee HW, Matthews SC, et al. 20-year summary of US malpractice claims for diagnostic errors from 1985–2005 (abstr). In: Proceedings of the Fourth Annual Diagnostic Error in Medicine Conference. Chicago, IL: Johns Hopkins University School of Medicine, 2011
3.
Berlin L, Berlin JW. Malpractice and radiologists in Cook County, IL: trends in 20 years of litigation. AJR 1995; 165:781–788
4.
Berlin L. Accuracy of diagnostic procedures: has it improved over the past 5 decades? AJR 2007; 188:1173–1178
5.
Graber M. Diagnostic errors in medicine: a case of neglect. Jt Comm J Qual Patient Saf 2005; 31:106–113
6.
Pinto A, Brunese L. Spectrum of diagnostic errors in radiology. World J Radiol 2010; 2:377–383
7.
Berlin L. Malpractice and radiologists, update 1986: an 11.5-year perspective. AJR 1986; 147:1291–1298
8.
Garland LH. On the scientific evaluation of diagnostic procedures. Radiology 1949; 52:309–328
9.
Johnson CD, Krecke KN, Miranda R, et al. Quality initiatives: developing a radiology quality and safety program—a primer. RadioGraphics 2009; 29:951–959
10.
Kruskal JB, Anderson S, Yam CS, et al. Strategies for establishing a comprehensive quality and performance improvement program in a radiology department. RadioGraphics 2009; 29:315–329
11.
Borgstede JP, Lewis RS, Bhargavan M, Sunshine JH. RADPEER quality assurance program: a multifacility study of interpretive disagreement rates. J Am Coll Radiol 2004; 1:59–65
12.
Quekel LG, Kessels AG, Goei R, van Engelshoven JM. Miss rate of lung cancer on the chest radiograph in clinical practice. Chest 1999; 115:720–724
13.
Heelan RT, Flehinger BJ, Melamed MR, et al. Non-small-cell lung cancer: results of the New York screening program. Radiology 1984; 151:289–293
14.
Muhm JR, Miller WE, Fontana RS, et al. Lung cancer detected during a screening program using 4-month chest radiographs. Radiology 1983; 148:609–615
15.
Stitik FP, Tockman MS. Radiographic screening in the early detection of lung cancer. Radiol Clin North Am 1978; 16:347–366
16.
Giess CS, Frost EP, Birdwell RL. Difficulties and errors in diagnosis of breast neoplasms. Semin Ultrasound CT MR 2012; 33:288–299
17.
Humphrey LL, Helfand M, Chan BK. Breast cancer screening: a summary of the evidence for the U.S. Preventive Services Task Force. Ann Intern Med 2002; 137:347–360
18.
U.S. Food and Drug Administration website. Mammography Quality Standards Program. www.fda.gov/Radiation-EmittingProducts/MammographyQualityStandardsActandProgram/facilityScorecard/ucm113858.htm. Accessed July 1, 2013
19.
de Gelder R, Heijnsdijk EA, van Ravesteyn NT, Fracheboud J, Draisma G, de Koning HJ. Interpreting overdiagnosis estimates in population-based mammography screening. Epidemiol Rev 2011; 33:111–121
20.
Plebani M. Errors in clinical laboratories or errors in laboratory medicine? Clin Chem Lab Med 2006; 44:750–759
21.
Wachter RM. Why diagnostic errors don’t get any respect: and what can be done about them. Health Aff 2010; 29:1605–1610
22.
Graber ML, Franklin N, Gordon R. Diagnostic error in internal medicine. Arch Intern Med 2005; 165:1493–1499
23.
McCreadie G, Oliver TB. Eight CT lessons that we learned the hard way: an analysis of current patterns of radiological error and discrepancy with particular emphasis on CT. Clin Radiol 2009; 64:491–499
24.
Weaver SJ, Newman-Toker DE, Rosen MA. Cognitive skill decay and diagnostic error: best practices for continuing education in healthcare. J Contin Educ Health Prof 2010; 30:208–220
25.
Croskerry P. Clinical cognition and diagnostic errors: applications of a dual process model of reasoning. Adv Health Sci Educ Theory Pract 2009; 14:27–35
26.
Ariely D. Predictably irrational: the hidden forces that shape our decisions. New York, NY: HarperCollins, 2008
27.
Croskerry P. Cognitive forcing strategies in clinical decision making. Ann Emerg Med 2003; 41:110–120
28.
Croskerry P. The cognitive imperative: thinking about how we think. Acad Emerg Med 2000; 7:1223–1231
29.
Croskerry P. Achieving quality in clinical decision making: cognitive strategies and detection of bias. Acad Emerg Med 2002; 9:1184–1204
30.
Croskerry P, Abbass A, Wu AW. Emotional influences in patient safety. J Patient Saf 2010; 6:199–205
31.
Redelmeier DA. The cognitive psychology of missed diagnoses. Ann Intern Med 2005; 142:115–120
32.
Croskerry P. The importance of cognitive errors in diagnosis and strategies to minimize them. Acad Med 2003; 78:775–780
33.
Burnside ES, Sickles EA, Bassett LW, et al. The ACR BI-RADS experience: learning from history. J Am Coll Radiol 2009; 6:851–860
34.
American College of Radiology website. National Mammography Database. www.acr.org/Quality-Safety/National-Radiology-Data-Registry/National-Mammography-DB. Accessed October 10, 2012
35.
Graber ML, Wachter RM, Cassel CK. Bringing diagnosis into the quality and safety equations. JAMA 2012; 308:1211–1212
36.
Blehm C, Vishnu S, Khattak A, Mitra S, Yee RW. Computer vision syndrome: a review. Surv Ophthalmol 2005; 50:253–262
37.
Reiner BI, Krupinski E. The insidious problem of fatigue in medical imaging practice. J Digit Imaging 2012; 25:3–6
38.
Vertinsky T, Foster B. Prevalence of eye strain among radiologists: influence of viewing variables on symptoms. AJR 2005; 184:681–686
39.
Krupinski EA, Berbaum KS, Caldwell RT, Schartz KM, Kim J. Long radiology workdays reduce detection and accommodation accuracy. J Am Coll Radiol 2010; 7:698–704
40.
Krupinski EA, Berbaum KS, Caldwell RT, Schartz KM, Maden MT, Kramer DJ. Do long radiology workdays affect nodule detection in dynamic CT interpretation? J Am Coll Radiol 2012; 9:191–198
41.
Iwasaki T, Tawara A, Miyake N. Reduction of asthenopia related to accommodative relaxation by means of far point stimuli. Acta Ophthalmol Scand 2005; 83:81–88
42.
Cooper J, Feldman J, Selenow A, et al. Reduction of asthenopia after accommodative facility training. Am J Optom Physiol Opt 1987; 64:430–436
43.
Gaba DM, Howard SK. Patient safety: fatigue among clinicians and the safety of patients. N Engl J Med 2002; 347:1249–1255
44.
Eva KW. What every teacher needs to know about clinical reasoning. Med Educ 2005; 39:98–106
45.
Lee JK. Quality: a radiology imperative—interpretation accuracy and pertinence. J Am Coll Radiol 2007; 4:162–165
46.
Gurian MS, Kovanlikaya A, Beneck D, Baron KT, John M, Brill PW. Radiologic-pathologic correlation in acute appendicitis: can we use it as a quality measure to assess interpretive accuracy of radiologists. Clin Imaging 2011; 35:421–423
47.
Kaewlai R, Abujudeh H. Peer review in clinical radiology practice. AJR 2012; 199:W158–W162
48.
Accreditation Council for Graduate Medical Education (ACGME) website. Common program requirements. duty hours: ACGME standards. Effective July 1, 2011. www.acgme.org/acgmeweb/Portals/0/PDFs/commonguide/CompleteGuide_v2%20.pdf. Accessed August 5, 2012
49.
Itri JN, Kim W, Scanlon MH. Orion: a web-based application designed to monitor resident and fellow performance on-call. J Digit Imaging 2011; 24:897–907
50.
Ruutiainen AT, Scanlon MH, Itri JN. Identifying benchmarks for discrepancy rates in preliminary interpretations provided by radiology trainees at an academic institution. J Am Coll Radiol 2011; 8:644–648
51.
Itri JN, Redfern RO, Scanlon MH. Using a web-based application to enhance resident training and improve performance on-call. Acad Radiol 2010; 17:917–920
52.
Kahn CE, Langlotz CP, Burnside ES, et al. Toward best practices in radiology reporting. Radiology 2009; 252:852–856
53.
Schwartz LH, Panicek DM, Berk AR, Li Y, Hricak H. Improving communication of diagnostic radiology findings through structured reporting. Radiology 2011; 260:174–181
54.
Johnson AJ, Chen MY, Swan JS, Applegate KE, Littenberg B. Cohort study of structured reporting compared with conventional dictation. Radiology 2009; 253:74–80
55.
Bosmans JM, Weyler JJ, De Schepper AM, Parizel PM. The radiology report as seen by radiologists and referring clinicians: results of the COVER and ROVER surveys. Radiology 2011; 259:184–195
56.
Sistrom C, Lanier L, Mancuso A. Reporting instruction for radiology residents. Acad Radiol 2004; 11:76–84
57.
Hirschtick R. A piece of my mind: copy-and-paste. JAMA 2006; 295:2335–2336
58.
Ely JW, Graber ML, Croskerry P. Checklists to reduce diagnostic errors. Acad Med 2011; 86:307–313
59.
Hales BM, Pronovost PJ. The checklist: a tool for error management and performance improvement. J Crit Care 2006; 21:231–235
60.
Gawande A. The checklist: if something so simple can transform intensive care, what else can it do? New Yorker 2007; 86–101
61.
Pronovost P, Needham D, Berenholtz S, et al. An intervention to decrease catheter-related bloodstream infections in the ICU. N Engl J Med 2006; 355:2725–2732
62.
Hart EM, Owen H. Errors and omissions in anesthesia: a pilot study using a pilot’s checklist. Anesth Analg 2005; 101:246–250
63.
Houssami N, Given-Wilson R, Ciatto S. Early detection of breast cancer: overview of the evidence on computer-aided detection in mammography screening. J Med Imaging Radiat Oncol 2009; 53:171–176
64.
Saba L, Caddeo G, Mallarini G. Computer-aided detection of pulmonary nodules in computed tomography: analysis and review of the literature. J Comput Assist Tomogr 2007; 31:611–619
65.
Beckmann U, Baldwin I, Durie M, et al. Problems associated with nursing staff shortage: an analysis of the first 3600 incident reports submitted to the Australian Incident Monitoring Study (AIMS-ICU). Anaesth Intensive Care 1998; 26:396–400
66.
Tarnow-Mordi WO, Hau C, Warden A, et al. Hospital mortality in relation to staff workload: a 4-year study in an adult intensive care unit. Lancet 2000; 356:185–189
67.
Pronovost PJ, Jenckes MW, Dorman T, et al. Organizational characteristics of intensive care units related to outcomes of abdominal aortic surgery. JAMA 1999; 281:1310–1317
68.
Hsiao WC, Braun P, Becker ER, Thomas SR. The resource-based relative value scale: toward the development of an alternative physician payment system. JAMA 1987; 258:799–802
69.
Duszak R, Muroff LR. Measuring and managing radiologist productivity. Part 1. Clinical metrics and benchmarks. J Am Coll Radiol 2010; 7:452–458
70.
Lu Y, Zhao S, Chu PW, Arenson RL. An update survey of academic radiologists’ clinical productivity. J Am Coll Radiol 2008; 5:817–826
71.
Lu Y, Arenson RL. The academic radiologist’s clinical productivity: an update. Acad Radiol 2005; 12:1211–1223
72.
Monaghan DA, Kassak KM, Ghomrawi HM. Determinants of radiologists’ productivity in private group practices in California. J Am Coll Radiol 2006; 3:108–114
73.
Cheu RA. Good vision at work. Occup Health Saf 1998; 67:20–24

Information & Authors

Published In

American Journal of Roentgenology
Pages: 611 - 617
PubMed: 23971454

History

Submitted: November 17, 2012
Accepted: November 29, 2012

Keywords

  1. cognitive biases
  2. diagnostic errors
  3. fatigue
  4. medical errors
  5. misdiagnosis

Authors

Affiliations

Cindy S. Lee
Paul G. Nagy
Sallie J. Weaver
David E. Newman-Toker
All authors: The Russell H. Morgan Department of Radiology, Johns Hopkins University School of Medicine, 22 S Greene St, Baltimore, MD 21201.

Notes

Address correspondence to P. G. Nagy ([email protected]).
