Journal Club
Residents' Section
July 2010

Evidence-Based Radiology: A Primer in Reading Scientific Articles


OBJECTIVE. In this article we provide a basic guide to reading scientific articles that we hope will improve the reader's ability to read and critically appraise the primary literature.
CONCLUSION. We provide a series of guidelines and questions to consider when reading the primary literature. This guide is intended to help individuals read and critically appraise the primary literature and participate more fully in journal clubs and evidence-based radiology.


During their training, radiologists acquire practical skills (performing and interpreting radiologic examinations) and core radiologic knowledge. Medicine is a dynamic field, however, and radiologists must have the skills to think critically and to analyze the medical literature [1, 2]. A fundamental principle of medicine is that clinical practice should be based on the critical analysis and evaluation of methodologically sound research. This principle is termed evidence-based medicine (EBM), and it is defined as the integration of the best research evidence with clinical expertise and the incorporation of patient values [3–5]. Consequently, critical appraisal and assessment of the applicability of the literature are fundamentals of EBM [3, 6–10]. Critical analysis in EBM is also included in the competencies of “medical knowledge” and “practice-based learning and improvement” as defined by the Accreditation Council for Graduate Medical Education (ACGME) outcome project report [11]. In addition to mandates put forth by the American Board of Radiology and the ACGME, critical-thinking skills are relevant to enhancing resident research [12–14].
Journal clubs can provide an effective format to teach EBM and critical analysis [15, 16]. One difficulty with creating an effective journal club is the method of picking appropriate articles [2, 16]. Heilbrun [16] reports that residents often commented that they had difficulty in selecting articles and that they wanted more direct guidance. In addition, van Beek and Malone [2] point out that residents may have difficulty analyzing the literature if they are unfamiliar with the pathophysiology and imaging appearances involved. The goals of a journal club are to learn from primary sources and to learn to think critically and analytically. This article offers a basic framework in which to read and critically appraise the primary literature.

Approaches to Reading the Article

There are myriad approaches that one may take in initially reading an article. Regardless of which approach is taken, a thorough reading of the article is necessary. When initially reading the article, a few questions that one should keep in mind are
What is the question that the authors are trying to address?
What are the general issues surrounding the research question or hypothesis?
Where do the authors' specific aims fit into what is already known about the subject?
Is the topic timely and relevant?

A Systematic Approach to Critically Reading an Article

The Abstract

The purpose of the abstract is to provide a concise overview of the entire study. A good abstract should contain a concise summary of the work, highlight the primary results, and make a brief statement about the significance of the findings. For original research articles published in the AJR, for example, the abstract will contain Objective, Materials and Methods, Results, and Conclusion sections. Other journals may structure their abstracts differently; however, all abstracts should contain the article's essential information. In the Objective section, the authors should briefly explain the major objective of the study. In the Materials and Methods section, the authors should explain how the study was conducted; in the Results section, they should describe the major findings. In the Conclusion, the authors should describe whether or not the major objective of the study was met and make a statement about the significance of the findings.
Questions to ask when reading the abstract include
If one were unable to read the entire article, would the abstract adequately summarize its content?
Are there major discrepancies between the abstract and the body of the article? Pitkin et al. [17] found that discrepancies occurred in 18–68% of the articles that they reviewed.
Does the abstract's conclusion address the specific aim of the investigation?


The Introduction

The Introduction section of the article should provide a rationale for the study and explain the study's specific goals. Specific questions to address while reading the Introduction section include
What is the question that the authors are trying to address?
Does the introduction provide a conceptual framework for the research question?
What are the general issues surrounding the authors' question?
How does the authors' specific question fit into what is already known about the subject?
Do the authors build a logical case and context for their hypothesis?
Although it does not have to be in the last paragraph of the introduction, the authors' hypothesis and study aim should be easy to identify. If the authors' question is not clearly discernible, concerns are raised about the validity of the research.
Additional questions to ask while reading the Introduction section include
Has the authors' specific research question previously been answered? If so, does this article add to the fund of medical knowledge?
Does this article cover an important topic? Although a study may be well designed and executed, the information derived from the study may not be of clinical use.

The Materials and Methods Section

The Materials and Methods section contains the pertinent information regarding the study population, the study design, the data collection techniques, and the methods of data analysis. As described by Provenzale and Stanley [18], the Materials and Methods section is in essence a blueprint by which another investigator should be able to reproduce the study.
The Materials and Methods section is, in many ways, the most important section within the article. A well-written Materials and Methods section explains the authors' study methodology. Ideally, the study methodology should include information regarding subject recruitment, including inclusion and exclusion criteria, subject allocation, the intervention or test performed (including a sufficient description of the technical parameters), and the methods of data analysis.
When reading the Materials and Methods section, it is always important to keep the authors' primary question in mind. One should ask the following question: “Is the method that the authors used a reasonable approach to answer the question?” A common flaw in experimental design is that the research methodology fails to adequately test the hypothesis.
The internal validity of a study refers to the study's quality and is based on the adequacy of the research methodology. A well-designed study attempts to minimize or eliminate bias and confounding factors. Bias is not a pejorative term; rather, it reflects the degree to which a result deviates from the truth, and it differs from random error. Confounding factors are patient characteristics or other causal factors that are separate from the characteristic being measured but may affect the outcome of the study. Consequently, a well-designed study should attempt to match the subject groups as closely as possible and ensure that the management of the respective groups is identical in every way except the intervention of interest. Questions to keep in mind while reading the Materials and Methods section include
How were the subjects selected? Was subject selection based on randomization, consecutive entry, or a convenience sample?
How were subjects included or excluded?
Do subjects appropriately represent the population of interest?
Are the study groups comparable?
Were the study groups managed so that the only difference between them is the intervention or test of interest?
Were the subjects adequately followed up? For those subjects not completing the study, did the authors conduct an intention-to-treat analysis?
When assessing the internal validity of a study, attention should be directed to the diagnostic test or intervention being evaluated. In the radiology literature, it is typical to analyze the performance characteristics of a particular test or the effectiveness of a therapeutic intervention. When considering the study methodology, the following questions should be kept in mind [19, 20]:
Was the diagnostic test or intervention evaluated in an appropriate selection of patients? For example, was the test or intervention evaluated in patients in whom it would be routinely used in clinical practice?
Was the diagnostic test or intervention compared with an independent, reference standard? If so, was the comparison performed in a blinded fashion?
Was the reference standard applied, regardless of the test result?
Was the test or intervention validated in a second, independent group of subjects?
Just as it is important that study groups are allocated or adjusted to ensure comparability, it is equally important to ensure that outcomes are measured fairly so as to eliminate measurement bias. Studies may be single-, double-, or triple-blinded, or not blinded at all. In double-blind studies, neither the subjects nor the investigators know who has been assigned to the various groups. When considering a study that evaluates the diagnostic performance of a test, it is important to ask whether or not the interpreting radiologist was blinded not only with respect to the results of the reference standard, but also with respect to the patients' demographics, clinical history, and previous imaging or laboratory examinations.
Other questions to keep in mind concern intraobserver and interobserver variability. If the same investigator made multiple measurements of the same subject, was intraobserver variability reported? In a similar fashion, if multiple investigators interpreted the studies, was interobserver agreement reported? If so, what was the level of agreement, as reflected by the kappa coefficient?
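Interobserver agreement is commonly summarized with Cohen's kappa, which corrects raw agreement for the agreement expected by chance alone. The following sketch uses entirely hypothetical reader ratings (not data from any study) to show the standard calculation for two readers:

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two readers of the same studies."""
    n = len(ratings_a)
    # Observed agreement: fraction of cases on which the readers agree
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected agreement if the readers rated independently at their own base rates
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical example: two radiologists rate 10 studies positive (1) or negative (0)
reader1 = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
reader2 = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1]
print(round(cohens_kappa(reader1, reader2), 2))  # 0.58
```

A kappa of 1.0 indicates perfect agreement, and 0 indicates agreement no better than chance; the conventional verbal labels (e.g., 0.41–0.60 as "moderate") are interpretive guidelines only.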
Additional questions to consider include [21]
Has the imaging method been described in sufficient detail to be reproduced by one's own practice?
Has the imaging test or procedure being evaluated been performed to the same level of quality as the reference standard? Has the reference standard been performed adequately?
Have the varying generations of technology development within the same technique been adequately considered in the study design?
Has radiation exposure been considered? Did the study follow the "as low as reasonably achievable" (ALARA) principle?
How were the images reviewed? Were the images reviewed on a PACS or on hard copy?

The Results Section

The Results section presents the findings from the study. In a well-written Results section, the authors should logically and clearly describe the study findings. Frequently, the data are presented in tables, graphs, and charts. The data in the figures should agree with the data in the text of the article. In addition, the Results section should not contain unexpected data sets.
When reading the Results section, consider questions such as
What data are presented?
Do the data follow from the investigators' methods? Is it clear where the data came from?
Is it clear how the data were obtained?
Are all the data presented, and are all groups accounted for?
If all the subjects or groups are not accounted for, how did the authors address this issue? Did the investigators perform an intention-to-treat analysis?
What do the results show?
Could these results have occurred by chance?
Myriad statistical tests may be performed depending on the research methodology. The description of these tests is beyond the scope of this article. Nonetheless, the Results section should answer the question: Are the results real and relevant? The authors should include p values and 95% CIs, where appropriate.
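As a brief illustration of what a 95% CI conveys, a CI for a proportion such as sensitivity can be approximated with the normal (Wald) method. The sketch below uses hypothetical counts and the standard textbook formula, not anything specific to this article:

```python
import math

def wald_ci_95(successes, n):
    """Approximate 95% CI for a proportion (normal/Wald approximation)."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)  # standard error of the proportion
    z = 1.96                         # z value for 95% confidence
    return p - z * se, p + z * se

# Hypothetical: a test detected 45 of 50 proven lesions (sensitivity 0.90)
low, high = wald_ci_95(45, 50)
print(f"sensitivity 0.90, 95% CI ({low:.2f}, {high:.2f})")  # (0.82, 0.98)
```

The Wald interval is a rough approximation that performs poorly for small samples or proportions near 0 or 1; exact (Clopper-Pearson) or Wilson intervals are preferred in those settings.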
The study results may also include other assessments, including sensitivity and specificity, positive and negative predictive values, positive and negative likelihood ratios, absolute risk reduction and relative risk reduction, numbers needed to treat, and receiver operating characteristic curves.
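Most of these measures follow directly from a 2 × 2 contingency table of test results against the reference standard. A minimal sketch, using hypothetical counts rather than data from any study:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Test-performance measures from a 2 x 2 table vs. the reference standard."""
    sens = tp / (tp + fn)            # sensitivity: true-positive rate
    spec = tn / (tn + fp)            # specificity: true-negative rate
    return {
        "sensitivity": sens,
        "specificity": spec,
        "PPV": tp / (tp + fp),       # positive predictive value
        "NPV": tn / (tn + fn),       # negative predictive value
        "LR+": sens / (1 - spec),    # positive likelihood ratio
        "LR-": (1 - sens) / spec,    # negative likelihood ratio
    }

# Hypothetical counts: 90 true-positive, 10 false-positive,
# 20 false-negative, 80 true-negative
for name, value in diagnostic_metrics(tp=90, fp=10, fn=20, tn=80).items():
    print(f"{name}: {value:.2f}")
```

Note that, unlike sensitivity and specificity, the predictive values depend on disease prevalence in the study sample, which is one reason the subject-selection questions raised earlier matter when judging applicability.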

The Discussion

In the Discussion section, the authors of the article should state whether or not their hypothesis was verified. In addition, the authors should review and comment on other studies relating to their investigation, explain what differences, if any, exist between their findings and those reported in the literature, and attempt to provide an explanation for any discrepancies. Frequently, the authors will also use the Discussion section to explain methodologic limitations within the study and to help strengthen their conclusions. Questions to keep in mind when reading the Discussion section include
What conclusions did the authors draw from the data? Would I draw the same conclusions?
Are the authors' conclusions supported by the methods and data? If one's own reading of the data disagrees with the authors' conclusions, going back to the Results section to see where the interpretations diverge may be helpful.
Do the results and conclusion apply to the patients in my practice?
How does the study advance knowledge?
Do the authors acknowledge limitations of the study? Are there additional limitations that should be included?
Do the authors adequately account for any unexpected results?
The questions in these sections are designed to provide a basic framework for critically analyzing an article. The questions are by no means complete. Research methodologies vary considerably depending on the hypothesis being tested. To improve the quality of studies of diagnostic performance, the Standards for Reporting of Diagnostic Accuracy (STARD) initiative was created. STARD includes a checklist for studies on diagnostic performance [22, 23], consisting of 25 elements. A similar statement, the Consolidated Standards of Reporting Trials (CONSORT), aims to improve the reporting of clinical trials [24, 25]. In a similar fashion, the Quality of Reporting of Meta-Analyses (QUOROM) statement also establishes rules for improving the quality of reports of meta-analyses [26].


Medicine remains an ever-changing field, with a continually growing corpus of knowledge. Ideally, our practice should reflect the “systematic application of the best evidence to evaluate the available options and decision making” [27]. Our goal as physicians is to be more than consumers of scientific and medical research. The validity or strength of the research relies on the methodology used. In an ideal study, all biases would be removed and all confounding variables eliminated. Unfortunately, in clinical research, elimination of all biases and confounding variables is an impossible task. All studies have limitations. Consequently, it is easy to become overly critical when analyzing a study.
The goal of critical analysis is to determine whether or not, given the limitations of a study, the study conclusion is valid and useful. In the end, do we believe the results? Should we apply the study results to our patients and clinical practice? Only through close reading and thoughtful analysis can we adequately answer these questions. This article provides a contextual framework within which to approach this goal.


Address correspondence to J. J. Budovec ([email protected]).
This is a Web exclusive article.


References

1. Stolberg HO, Norman GR, Moran LA, Gafni A. A core curriculum in the evaluative sciences for diagnostic imaging. Can Assoc Radiol J 1998; 49:295–306
2. van Beek EJR, Malone DE. Evidence-based practice in radiology education: why and how should we teach it? Radiology 2007; 243:633–640
3. Malone DE. Evidence-based practice in radiology: an introduction to the series. Radiology 2007; 242:12–14
4. Sackett DL, Rosenberg WM, Gray JA, Haynes RB, Richardson WS. Evidence based medicine: what it is and what it isn't. BMJ 1996; 312:71–72
5. Sardanelli F, Hunink MG, Gilbert FJ, Di Leo G, Krestin GP. Evidence-based radiology: why and how? Eur Radiol 2010; 20:1–15
6. Malone DE. Evidence-based practice in radiology: what color is your parachute? Abdom Imaging 2008; 33:3–5
7. Kelly AM. Evidence-based radiology: step 1—ask. Semin Roentgenol 2009; 44:140–146
8. Kelly AM. Evidence-based practice: an introduction and overview. Semin Roentgenol 2009; 44:131–139
9. Dodd JD. Evidence-based practice in radiology: steps 3 and 4—appraise and apply diagnostic radiology literature. Radiology 2007; 242:342–354
10. Malone DE, Staunton M. Evidence-based practice in radiology: step 5 (evaluate)—caveats and common questions. Radiology 2007; 243:319–328
11. Collins J, de Christenson MR, Gray L, et al. General competencies in radiology residency training: definitions, skills, education and assessment. Acad Radiol 2002; 9:721–726
12. Gunderman RB, Nyce JM, Steele J. Radiologic research: the residents' perspective. Radiology 2002; 223:308–310
13. Rogers LF. The "win-win" of research (commentary). AJR 1999; 172:877
14. Medina LS, Blackmore CC. Evidence-based radiology: review and dissemination. Radiology 2007; 244:331–336
15. Kelly AM, Cronin P. Evidence-based practice journal club: how we do it. Semin Roentgenol 2009; 44:209–213
16. Heilbrun ME. Should radiology residents be taught evidence-based radiology? An experiment with "the EBR Journal Club". Acad Radiol 2009; 16:1549–1554
17. Pitkin RM, Branagan MA, Burmeister LF. Accuracy of data in abstracts of published research articles. JAMA 1999; 281:1110–1111
18. Provenzale JM, Stanley RJ. A systematic guide to reviewing a manuscript. AJR 2005; 185:848–854
19. Cronin P. Evidence-based medicine: step 3—critical appraisal of therapeutic literature. Semin Roentgenol 2009; 44:166–169
20. Cronin P. Evidence-based radiology: step 3—critical appraisal of diagnostic literature. Semin Roentgenol 2009; 44:158–165
21. Dodd JD, MacEneaney PM, Malone DE. Evidence-based radiology: how to quickly assess the validity and strength of publications in the diagnostic radiology literature. Eur Radiol 2004; 14:915–922
22. Bossuyt PM, Reitsma JB, Bruns DE, et al. Towards complete and accurate reporting of studies of diagnostic accuracy: the STARD initiative. AJR 2003; 181:51–55
23. Bossuyt PM, Reitsma JB, Bruns DE, et al. Towards complete and accurate reporting of studies of diagnostic accuracy: the STARD initiative. Radiology 2003; 226:24–28
24. Moher D, Jones A, Lepage L. Use of the CONSORT statement and quality of reports of randomized trials: a comparative before-and-after evaluation. JAMA 2001; 285:1992–1995
25. Moher D, Schulz KF, Altman D. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomized trials. JAMA 2001; 285:1987–1991
26. Moher D, Cook DJ, Eastwood S, Olkin I, Rennie D, Stroup DF. Improving the quality of reports of meta-analyses of randomised controlled trials: the QUOROM statement—Quality of Reporting of Meta-Analyses. Lancet 1999; 354:1896–1900
27. Evidence-Based Radiology Working Group. Evidence-based radiology: a new approach to the practice of radiology. Radiology 2001; 220:566–575



Published In

American Journal of Roentgenology
Pages: W1–W4
PubMed: 20566774


Submitted: March 10, 2010
Accepted: April 5, 2010


Keywords: critical thinking, evidence-based medicine, evidence-based radiology



Joseph J. Budovec
Charles E. Kahn, Jr.
Both authors: Department of Radiology, Medical College of Wisconsin, 9200 W Wisconsin Ave., Milwaukee, WI 53226.
