January 2019, Volume 212, Number 1


FOCUS ON: Neuroradiology/Head and Neck Imaging

Review

Machine Learning in Neurooncology Imaging: From Study Request to Diagnosis and Treatment

Affiliations:
1Department of Radiology and Biomedical Imaging, University of California, San Francisco, 505 Parnassus Ave, L-352, San Francisco, CA 94143.

2Department of Radiology, Thomas Jefferson University Hospitals, Philadelphia, PA.

Citation: American Journal of Roentgenology. 2019;212:52–56. doi:10.2214/AJR.18.20328

ABSTRACT

OBJECTIVE. Machine learning has the potential to play a key role across a variety of medical imaging applications. This review seeks to elucidate the ways in which machine learning can aid and enhance diagnosis, treatment, and follow-up in neurooncology.

CONCLUSION. Given the rapid pace of development in machine learning over the past several years, a basic proficiency in the key tenets and use cases of the field is critical to assessing the potential opportunities and challenges of this exciting new technology.

Keywords: artificial intelligence, machine learning, neuroimaging, neurooncology

Interest in machine learning has grown substantially over the past 5 years, particularly in the realm of medical imaging. This article focuses on the role machine learning can play in the journey from initial diagnosis through treatment and follow-up in neurooncology. Imaging, clinical history, and a detailed physical examination are critical for high-quality diagnosis and treatment. Future applications range from extending the structural and physiologic information that can be inferred from images to streamlining complex noninterpretive processes that affect patient satisfaction and care. This article explores both the current state of the art and near-future applications of machine learning. A discussion of the technical concepts behind creating machine learning algorithms is presented in detail in a prior AJR article [1].

At the Time of Study Request

Selecting the appropriate imaging protocol is a common quality assurance problem in radiology. Inappropriate choice of protocol contributes to health care cost and waste. This time-consuming process relies on the radiologist's knowledge of imaging protocols and attention to the clinician's specific requests, which often requires reading through the medical record, reviewing prior imaging studies, or both. Although much of the interest in machine learning has focused on interpretation of pixel data, algorithms can also be applied to gaining knowledge from text using a set of techniques called natural language processing (NLP). NLP based on narrative clinical information from the electronic medical record has been used to identify the correct imaging study to order (decision support) as well as to automate examination protocol selection and prioritization. Recent studies show that machine learning algorithms accurately acquire knowledge from text and use ordering information such as study indications to determine protocols for brain and body examinations, including the need for a contrast agent [2–6].
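As a concrete illustration of the NLP approach described above, the sketch below trains a tiny multinomial naive Bayes text classifier that maps free-text study indications to protocol choices. The training phrases, protocol names, and vocabulary here are invented for illustration only; the published systems cited above use far larger corpora and richer NLP pipelines.

```python
from collections import Counter, defaultdict
import math

# Illustrative training data: (study indication text, protocol label).
# These examples and labels are hypothetical, not from any real system.
TRAIN = [
    ("new glioma followup contrast", "brain_tumor_with_contrast"),
    ("known glioblastoma treatment response", "brain_tumor_with_contrast"),
    ("headache rule out acute pathology", "routine_brain_without_contrast"),
    ("chronic headache screening", "routine_brain_without_contrast"),
]

def train(examples):
    word_counts = defaultdict(Counter)  # protocol -> word frequencies
    class_counts = Counter()            # protocol -> number of examples
    vocab = set()
    for text, label in examples:
        words = text.split()
        word_counts[label].update(words)
        class_counts[label] += 1
        vocab.update(words)
    return word_counts, class_counts, vocab

def predict(text, word_counts, class_counts, vocab):
    total = sum(class_counts.values())
    scores = {}
    for label in class_counts:
        # log prior + Laplace-smoothed log likelihood of each word
        score = math.log(class_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.split():
            score += math.log((word_counts[label][w] + 1) / denom)
        scores[label] = score
    return max(scores, key=scores.get)

model = train(TRAIN)
print(predict("glioma treatment followup", *model))
# -> brain_tumor_with_contrast
```

Even this toy model shows the core idea: word statistics in the order text carry enough signal to route a study to a protocol, with no hand-written rules.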

Scheduling is a persistent source of patient and ordering provider dissatisfaction. Some algorithms have shown promise in routing patients to imaging site on the basis of patient location and setting expectations by predicting examination and wait times [7, 8]. For example, patients referred for newly diagnosed tumors could be scheduled to be imaged at the site of neurosurgical consultation with a neuronavigation protocol that may be more extensive than a routine follow-up examination. Patients with long-term stability could be scanned at locations that are more convenient to them with a tailored protocol, which may be shorter and not require gadolinium-based contrast medium.

At the Time of Image Acquisition

Machine learning methods can be used to improve quality at various stages of image acquisition and reconstruction. Differences in vendors, field strength, sequence parameters, and acquisition orientation can lead to image quality heterogeneity. Prescribing repeatable coverage from scan to scan, ensuring that sequence parameters are standardized, and assessing image quality after acquisition are areas of active investigation with some commercially available products [9–12]. Furthermore, the era of low-dose CT and fast MRI has heralded a boom of machine learning applications to acquire high-quality images in less time. Using advanced machine learning techniques such as sparse dictionary learning and convolutional neural networks (CNNs) for noise reduction in low-dose CT can reduce dose substantially [13–16]. Similarly, these techniques are being applied in MRI reconstruction from highly undersampled k-space [17–19]. Machine learning has been used to simulate higher or lower field strength after acquisition of paired data, generate superresolution images, improve the signal-to-noise ratio of perfusion imaging, and reduce scan time for lengthy acquisitions such as advanced diffusion imaging [20–25].
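The CNN-based denoisers cited above are built from stacks of learned convolutional filters. The sketch below shows only the convolution primitive itself, applied with a fixed 3 × 3 averaging kernel to a synthetic noisy image; a trained denoising network would instead learn many such kernels from paired low-dose and routine-dose images.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution: slide the kernel over the image."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

rng = np.random.default_rng(0)
clean = np.ones((32, 32))                               # flat synthetic "tissue"
noisy = clean + 0.5 * rng.standard_normal((32, 32))     # simulated low-dose noise
kernel = np.full((3, 3), 1 / 9)                         # fixed smoothing filter

denoised = conv2d(noisy, kernel)
# Local averaging reduces pixelwise noise variance at the cost of resolution;
# learned filters aim to suppress noise while preserving edges.
print(noisy.var(), denoised.var())
```

The trade-off visible here (noise suppression versus spatial blurring) is exactly what learned filter banks in the cited low-dose CT work are optimized to improve upon.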

Machine learning can also be applied to decrease the amount of time needed to perform complex image reconstructions, bringing these techniques closer to clinical applicability [26]. Additionally, machine learning has been explored for modality conversion or creating a synthetic CT image from a conventional MRI study, which could then be used for surgical or radiotherapy planning [27–29]. Because multiparametric imaging is needed to arrive at the appropriate diagnosis and help guide therapy in many complex neurooncologic cases, the field would greatly benefit from efforts to improve image quality, reduce scan time, and potentially eliminate the need for redundant examinations.

Before Interpretation

Nearly every radiology practice prioritizes worklists by a number of factors including the time when a study was obtained, patient location, ordering provider concern (urgent vs routine), or even advanced criteria like same-day appointments. Recent work has used machine learning to improve triaging by identifying critical findings (e.g., hemorrhage on head CT) within the image data [30–33]. An important component of triage is early notification of the treatment team. One commercial product recently approved by the U.S. Food and Drug Administration (FDA) includes an algorithm for detecting potential stroke when evaluating CT images [34]. These algorithms are tireless and can evaluate studies regardless of patient location or class, increasing the likelihood that acute complications like hemorrhage or infection in patients with brain tumors will be detected quickly.
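Mechanically, model-driven triage amounts to reordering the reading worklist by a predicted probability of a critical finding. The hypothetical sketch below uses a priority queue for this step; the study names and risk scores are invented, standing in for a detection model's output.

```python
import heapq

def build_worklist(studies):
    """Yield (study, risk) in descending order of predicted risk.

    heapq is a min-heap, so the score is negated to pop the
    highest-risk study first; arrival order breaks ties.
    """
    heap = [(-score, order, name)
            for order, (name, score) in enumerate(studies)]
    heapq.heapify(heap)
    while heap:
        neg_score, _, name = heapq.heappop(heap)
        yield name, -neg_score

# Hypothetical studies with model-predicted critical-finding probabilities.
arrivals = [
    ("routine tumor follow-up MR", 0.02),
    ("outpatient head CT", 0.91),   # model flags possible hemorrhage
    ("spine MR", 0.10),
]

print([name for name, _ in build_worklist(arrivals)])
# -> ['outpatient head CT', 'spine MR', 'routine tumor follow-up MR']
```

The flagged outpatient head CT moves to the top of the list regardless of arrival order or patient class, which is the practical benefit the cited triage studies report.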

Consistent display of imaging sequences (a hanging protocol) regardless of vendor or protocol is a constant headache for radiologists and is a particular challenge with the myriad brain tumor follow-up MRI studies. Identifying pulse sequences from DICOM metadata (e.g., TR/TE, series description) relies on a set of hand-coded rules built individually at each institution. These rules must account for variation between sequence names applied by different vendors, repeated series, and manual editing [35]. There is tremendous potential in training models to use not only metadata but also pixel data to accurately identify modality, body part, image plane, and pulse sequence to drive hanging protocols. Studies have shown that high-quality hanging protocols have a direct impact on radiologist productivity. In neurooncology, assessing treatment response requires review of multiple pulse sequences over serial examinations, so high-quality hanging protocols would improve radiologist satisfaction and likely quality of care.
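To make the hand-coded rules described above concrete, the sketch below maps series descriptions and timing parameters to canonical sequence names. The patterns and TR/TE thresholds are illustrative only, not drawn from any specific institution or vendor, and real rule sets are far longer.

```python
import re

# Illustrative, ordered rule table: first matching pattern wins.
RULES = [
    (re.compile(r"flair", re.I), "FLAIR"),
    (re.compile(r"dwi|diffusion", re.I), "DWI"),
    (re.compile(r"t2", re.I), "T2"),
    (re.compile(r"t1.*(post|gad|\+c)", re.I), "T1_POST"),
    (re.compile(r"t1", re.I), "T1"),
]

def classify_series(description, tr=None, te=None):
    """Assign a canonical sequence label from DICOM-style metadata."""
    for pattern, label in RULES:
        if pattern.search(description):
            return label
    # Fall back on timing parameters when the description is uninformative
    # (illustrative thresholds: short TR/TE suggests T1 weighting).
    if tr is not None and te is not None:
        return "T1" if tr < 1000 and te < 30 else "T2"
    return "UNKNOWN"

print(classify_series("AX T1 SE POST GAD"))        # -> T1_POST
print(classify_series("Ax BRAIN", tr=500, te=15))  # -> T1
```

The brittleness is apparent: every vendor naming variation needs a new rule, which is why models trained on metadata plus pixel data are an attractive replacement.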

Radiologists understand the value of detailed patient context during image interpretation and that reviewing the electronic medical record is a time-consuming endeavor. Machine learning is being used to generate context-based summaries of the electronic health record [36, 37]. In neurooncology, medical histories often are complex and span long periods of time with several providers. As a result, much of the effort spent trying to put together the full clinical picture could be mitigated by electronic health record mining to present the interpreting radiologist with the most current and relevant information.

At the Time of Image Interpretation
Segmentation and Lesion Detection

Computer-aided detection (CAD) was first developed to help radiologists interpret radiographic images, including mammograms and chest radiographs [38, 39]. The prevalent CAD paradigm highlights suspicious findings during image interpretation. These suspicious findings are identified using hand-crafted features derived from human knowledge of disease appearance. The difficulty of generalizing and scaling these hand-crafted features across modalities or to rare diseases is a major limitation of traditional CAD. Advances in machine learning, including deep CNNs, no longer rely on hand-crafted features and instead identify features for a particular diagnosis during the training process without human intervention. CNNs show promise in both lesion detection and segmentation [40].

Although lesion detection identifies the location of a potential abnormality within images, lesion segmentation marks individual pixels containing an abnormality (a segmentation mask). A segmentation mask can be used to calculate lesion volume and quantify signal characteristics, edge morphology, and texture. In neurooncology, radiologists measure lesion size over time to assess treatment response. These measurements rely on manual segmentation, which is time-consuming and tedious. Because of the time and complexity of this task, radiologists often use approximations like largest single diameter or 2D measurements rather than tumor volume. In addition to being less time-consuming, segmentation masks avoid human subjectivity and can be reliably reproduced. Whereas many traditional machine learning techniques have been used for segmentation with mixed results, recent deep learning–based algorithms have pushed the state of the art to near-human performance on a variety of benchmarks. Among the most commonly cited applications is the automated segmentation of various brain tumor components including regions of enhancement, edema, and necrosis. Such a tool would greatly impact a wide range of indications including diagnosis, surgical guidance, radiation therapy planning, and follow-up. Other well-studied segmentation tasks in neuroradiology with potential clinical utility include tools for detection and quantification of normal gray and white matter, microhemorrhage, and infarct [41–51].
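The quantitative payoff of a segmentation mask is straightforward to show: given a binary mask and the voxel spacing, lesion volume and first-order signal statistics fall out directly. The sketch below uses a synthetic image and mask as placeholders for a real MR volume and model output.

```python
import numpy as np

def lesion_metrics(image, mask, spacing_mm=(1.0, 1.0, 1.0)):
    """Compute volume and first-order signal statistics from a binary mask."""
    voxel_volume_ml = np.prod(spacing_mm) / 1000.0  # mm^3 -> mL
    voxels = image[mask > 0]                        # signal inside the lesion
    return {
        "volume_ml": float(mask.sum() * voxel_volume_ml),
        "mean_signal": float(voxels.mean()),
        "signal_sd": float(voxels.std()),
    }

# Synthetic stand-ins: a 10 x 10 x 10-voxel "lesion" of uniform signal 100.
image = np.zeros((64, 64, 64))
mask = np.zeros((64, 64, 64), dtype=np.uint8)
mask[20:30, 20:30, 20:30] = 1
image[mask > 0] = 100.0

metrics = lesion_metrics(image, mask, spacing_mm=(1.0, 1.0, 1.0))
print(metrics)  # 1000 voxels of 1 mm^3 each -> 1.0 mL
```

Unlike a largest-single-diameter measurement, these numbers are fully reproducible from the same mask, which is the reproducibility advantage noted above.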

The success of machine learning research is directly linked to high-quality large datasets. Several open-source neuroradiology datasets are available, most arising from the Medical Image Computing and Computer Assisted Intervention Society. The Brain Tumor Image Segmentation dataset and The Cancer Imaging Archive/The Cancer Genome Atlas provide data for brain tumors specifically [52, 53]. Although publicly available datasets of labeled brain tumor images have facilitated the development of several promising applications, a number of associated challenges persist. These large datasets lack consistent labeling strategies, especially of fine details. Further, included images are often older, acquired at lower field strengths, and of lower resolution. Because these images typically depict newly diagnosed lesions, an area for continued growth is the evaluation of real-world complexity like resection cavities and treatment effects. Machine learning algorithms are sensitive to the data used for training; although studies of their efficacy have shown promise, their performance can still be improved.

Diagnosis, Classification, and Outcome Prediction

Radiomics, a process that converts medical images into mineable high-dimensional data for diagnosis, classification, and outcome prediction, has broadened the study of tumors beyond established imaging features and metrics [54, 55]. The 2016 World Health Organization classification of CNS tumors underscores the importance of genetic information for brain tumor diagnosis and treatment [56]. Correspondingly, the study of the relationship between imaging features and genetic data (radiogenomics) has seen a sharp rise. This increase is particularly evident in brain tumor research because of the large amount of imaging data collected on individual tumors. Identification of imaging phenotypes that correlate with genetic markers such as isocitrate dehydrogenase (IDH) mutation, 1p/19q codeletion, ATRX, and telomerase reverse transcriptase is highly relevant because these markers have been strongly associated with prognosis. The utility of CNNs in radiogenomics was first described in 2015 by Pan et al. [57], who used anatomic MR images to predict tumor grade. More recently, machine learning has been used to predict IDH mutation, 1p/19q codeletion, and methylguanine-DNA methyltransferase methylation, with the most promising results to date achieving prediction accuracies of 83–94% [58–62]. Further applications include survival prediction integrating anatomic imaging with clinical and therapeutic response assessment data.
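The "mineable high-dimensional data" of radiomics starts with feature extraction from an ROI. The sketch below computes a handful of first-order features from a synthetic ROI; it is a minimal illustration only, since real radiomics pipelines add shape, texture, and filter-based features and then feed the resulting vectors to a classifier.

```python
import numpy as np

def first_order_features(roi, bins=32):
    """A small, illustrative first-order radiomic feature vector."""
    hist, _ = np.histogram(roi, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                         # drop empty bins before log
    mean, std = roi.mean(), roi.std()
    return {
        "mean": float(mean),
        "variance": float(roi.var()),
        "skewness": float(((roi - mean) ** 3).mean() / std ** 3),
        "entropy": float(-(p * np.log2(p)).sum()),   # histogram entropy
        "energy": float((roi.astype(float) ** 2).sum()),
    }

# Synthetic ROI intensities standing in for tumor voxels.
rng = np.random.default_rng(1)
roi = rng.normal(100.0, 10.0, size=(50, 50))

features = first_order_features(roi)
print(sorted(features))
```

Each tumor then becomes a fixed-length numeric vector, which is what makes downstream mining for genotype or outcome associations tractable.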

One limitation of radiogenomics is that classification can be impeded by overlap of certain imaging features among different genetic alterations and the spatial heterogeneity of features that can change during the course of treatment. Recent studies have shown that genetic alterations caused by treatment result in intratumor heterogeneity; the recurrent portion of the tumor can have an entirely different genetic make-up than the original tumor site and typically manifests in a much more aggressive phenotype [63]. A strategy for incorporating multimodal imaging and machine learning to identify such differences, both spatially and in terms of severity, can help surgeons sample the most malignant tumor region and resect infiltrative tumor beyond the contrast-enhancing lesion. Such a strategy will also facilitate prediction of subsequent outcome measures at different points of care. To date, implementations in brain tumor imaging have focused primarily on anatomic and DW images and have not taken advantage of methodologic advances from other fields that can incorporate diverse datasets from multiple time points. Many new and flexible machine learning techniques including long short-term memory recurrent neural networks have been optimized for this task and show tremendous promise for incorporating time-series data for analysis and prediction [64]. Integrating other physiologic and metabolic imaging results from MRI, PET, or both will also be critical for accurate diagnosis.
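The recurrent architectures mentioned above consume serial examinations one time point at a time, folding each new feature vector into a hidden state that summarizes the history. The sketch below shows only the basic (non-gated) recurrence with random, untrained weights; an LSTM adds learned gating on top of exactly this update.

```python
import numpy as np

rng = np.random.default_rng(42)
n_features, n_hidden = 8, 4
W = rng.normal(scale=0.1, size=(n_hidden, n_features))  # input weights
U = rng.normal(scale=0.1, size=(n_hidden, n_hidden))    # recurrent weights

def run_sequence(timepoints):
    """Fold a series of per-examination feature vectors into one summary."""
    h = np.zeros(n_hidden)
    for x in timepoints:            # one vector per follow-up examination
        h = np.tanh(W @ x + U @ h)  # simple recurrence; LSTMs add gates
    return h

# Five synthetic examinations, each reduced to an 8-feature vector
# (e.g., radiomic features from serial imaging).
series = [rng.normal(size=n_features) for _ in range(5)]
summary = run_sequence(series)
print(summary.shape)  # fixed-size summary regardless of series length
```

The key property for time-series prediction is visible in the last line: however many follow-up examinations a patient has, the model emits a fixed-size representation that a survival or response classifier can consume.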

As machine learning becomes more prevalent and data sources become more complex, the need for consistency in acquisition is magnified. Radiologists will need to pay increased attention to standardization of imaging and data collection methods. In the meantime, datasets collated from multiple institutions must accurately represent the range of acquisition heterogeneity.

Other Considerations

Recently, the FDA recognized that the traditional approach to medical device regulation does not fit well with the iterative nature of digital health application development [65]. The agency outlined plans to address this gap including draft guidance on software designed for patient and clinical decision support (CDS) [66]. The draft guidance aims to identify types of decision support that are not considered medical devices and thus are not subject to regulation. Examples of exempt CDS include software that matches patient-specific information with current practice and treatment guidelines and software that identifies drug-drug or drug-allergy interactions. In the draft guidance, the FDA explicitly states that applications to process or analyze medical images remain medical devices and are therefore subject to regulation under existing legislation.

Although the draft guidance does not mention machine learning directly, it states that to avoid regulation as a medical device “the CDS function must be intended to enable health care professionals to independently review the basis for the recommendations presented by the software so that they do not rely primarily on such recommendations, but rather on their own judgment, to make clinical decisions for individual patients” [66]. Even though these draft guidances are subject to change, machine learning algorithms relevant to neurooncology are unlikely to be exempt from review as medical devices in the foreseeable future.

In their response letter to the draft guidance, the American Medical Informatics Association called for the FDA to host a public forum to discuss standards for transparency and performance of decision support software in machine learning–based environments [67]. Some open challenges to be addressed include developing quality assurance processes and validation paradigms as well as determining responsibility for algorithm mistakes. This space undoubtedly will continue to evolve.

Conclusion

Application of machine learning provides radiologists with tools to increase consistency and productivity and to uncover new diagnostic possibilities. Because of a complex interplay of factors, including federal regulation of algorithms that provide diagnosis, radiologists are likely to see the impact of machine learning in areas such as acquisition and workflow enhancements before general diagnostic support. The modern radiologist must therefore have a functional understanding of machine learning concepts and play an active role in developing and implementing these techniques.

References
1. Kohli M, Prevedello LM, Filice RW, Geis JR. Implementing machine learning in radiology practice and research. AJR 2017; 208:754–760 [Abstract] [Google Scholar]
2. Wang X, Peng Y, Lu L, Lu Z, Summers RM. TieNet: text-image embedding network for common thorax disease classification and reporting in chest x-rays. Computer Vision Foundation website. openaccess.thecvf.com/content_cvpr_2018/papers/Wang_TieNet_Text-Image_Embedding_CVPR_2018_paper.pdf. Accessed October 8, 2018 [Google Scholar]
3. Brown AD, Marotta TR. A natural language processing-based model to automate MRI brain protocol selection and prioritization. Acad Radiol 2017; 24:160–166 [Google Scholar]
4. Brown AD, Marotta TR. Using machine learning for sequence-level automated MRI protocol selection in neuroradiology. J Am Med Inform Assoc 2018; 25:568–571 [Google Scholar]
5. Lakhani P, Prater AB, Hutson RK, et al. Machine learning in radiology: applications beyond image interpretation. J Am Coll Radiol 2018; 15:350–359 [Google Scholar]
6. Trivedi H, Mesterhazy J, Laguna B, Vu T, Sohn JH. Automatic determination of the need for intravenous contrast in musculoskeletal MRI examinations using IBM Watson's natural language processing algorithm. J Digit Imaging 2018; 31:245–251 [Google Scholar]
7. Spyropoulos CD. AI planning and scheduling in the medical hospital environment. Artif Intell Med 2000; 20:101–111 [Google Scholar]
8. Joseph A, Hijal T, Kildea J, Hendren L, Herrera D. Predicting waiting times in radiation oncology using machine learning. In: Chen X, Luo B, Luo F, Palade V, Wani MA, eds. 16th IEEE International Conference on Machine Learning and Applications. Piscataway, NJ: IEEE, 2017:1024–1029 [Google Scholar]
9. Itti L, Chang L, Ernst T. Automatic scan prescription for brain MRI. Magn Reson Med 2001; 45:486–494 [Google Scholar]
10. Benner T, Wisco JJ, van der Kouwe AJW, et al. Comparison of manual and automatic section positioning of brain MR images. Radiology 2006; 239:246–254 [Google Scholar]
11. Zheng Y, Liu D, Georgescu B, Nguyen H, Comaniciu D. 3D deep learning for efficient and robust landmark detection in volumetric data. In: Navab N, Hornegger J, Wells W, Frangi A, eds. Medical image computing and computer-assisted intervention: MICCAI 2015. Cham, Switzerland: Springer, 2015:565–572 [Google Scholar]
12. Pizarro RA, Cheng X, Barnett A, et al. Automated quality assessment of structural magnetic resonance brain images based on a supervised machine learning algorithm. Front Neuroinform 2016; 10:52 [Google Scholar]
13. Fang R, Chen T, Sanelli PC. Towards robust deconvolution of low-dose perfusion CT: sparse per-fusion deconvolution using online dictionary learning. Med Image Anal 2013; 17:417–428 [Google Scholar]
14. Chen H, Zhang Y, Zhang W, et al. Low-dose CT via convolutional neural network. Biomed Opt Express 2017; 8:679–694 [Google Scholar]
15. Yang Q, Yan P, Zhang Y, et al. Low dose CT image denoising using a generative adversarial network with Wasserstein distance and perceptual loss. IEEE Trans Med Imaging 2018; 37:1348–1357 [Google Scholar]
16. Tang Y, Cai J, Lu L, et al. CT image enhancement using stacked generative adversarial networks and transfer learning for lesion segmentation improvement. arXiv website. arxiv.org/pdf/1807.07144.pdf. Published July 18, 2018. Accessed October 8, 2018 [Google Scholar]
17. Yang Y, Sun J, Li H, Yu Z. Deep ADMM-Net for compressive sensing MRI. In: Lee DD, von Luxburg U, Garnett R, Sugiyama M, Guyon I, eds. Advances in neural information processing systems 30. Red Hook, NY: Curran Associates, 2016:1–9 [Google Scholar]
18. Ravishankar S, Bresler Y. MR image reconstruction from highly undersampled k-space data by dictionary learning. IEEE Trans Med Imaging 2011; 30:1028–1041 [Google Scholar]
19. Schlemper J, Caballero J, Hajnal JV, Price AN, Rueckert D. A deep cascade of convolutional neural networks for dynamic MR image reconstruction. IEEE Trans Med Imaging 2018; 37:491–503 [Google Scholar]
20. Golkov V, Dosovitskiy A, Sämann P, et al. q-Space deep learning for twelve-fold shorter and model-free diffusion MRI scans. In: Navab N, Hornegger J, Wells W, Frangi A, eds. Medical image computing and computer-assisted intervention: MICCAI 2015. Cham, Switzerland: Springer, 2015:37–44 [Google Scholar]
21. Kim KH, Choi SH, Park SH. Improving arterial spin labeling by using deep learning. Radiology 2018; 287:658–666 [Google Scholar]
22. Bahrami K, Shi F, Rekik I, Shen D. Convolutional neural network for reconstruction of 7T-like images from 3T MRI using appearance and anatomical features. In: Carneiro G, Mateus D, Loïc P, et al., eds. Deep learning and data labeling for medical applications. Cham, Switzerland: Springer International, 2016:39–47 [Google Scholar]
23. Oktay O, Bai W, Lee M, et al. Multi-input cardiac image super-resolution using convolutional neural networks. In: Ourselin S, Joskowicz L, Sabuncu MR, Unal G, Wells W, eds. Medical image computing and computer-assisted intervention: MICCAI 2016. Cham, Switzerland: Springer, 2016:246–254 [Google Scholar]
24. Pham CH, Ducournau A, Fablet R, Rousseau F. Brain MRI super-resolution using deep 3D convolutional networks. In: 14th IEEE international symposium on biomedical imaging. Piscataway, NJ: IEEE, 2017:197–200 [Google Scholar]
25. Umehara K, Ota J, Ishimaru N, et al. Super-resolution convolutional neural network for the improvement of the image quality of magnified images in chest radiographs. In: Styner MA, Angelini ED, eds. Medical imaging 2017: image processing—proceedings of SPIE. Bellingham, WA: SPIE, 2017:101331 [Google Scholar]
26. Yu S, Dong H, Yang G, et al. Deep dealiasing for fast compressive sensing MRI. arXiv website. arxiv.org/pdf/1705.07137.pdf. Published May 19, 2017. Accessed October 8, 2018 [Google Scholar]
27. Nie D, Cao X, Gao Y, Wang L, Shen D. Estimating CT image from MRI data using 3D fully convolutional networks. In: Carneiro G, Mateus D, Loïc P, et al., eds. Deep learning and data labeling for medical applications. Cham, Switzerland: Springer International, 2016:170–178 [Google Scholar]
28. Leynes AP, Yang J, Wiesinger F, et al. Zero-echo-time and Dixon deep pseudo-CT (ZeDD CT): direct generation of pseudo-CT images for pelvic PET/MRI attenuation correction using deep convolutional neural networks with multiparametric MRI. J Nucl Med 2018; 59:852–858 [Google Scholar]
29. Ben-Cohen A, Klang E, Raskin SP, et al. Cross-modality synthesis from CT to PET using FCN and GAN networks for improved automated lesion detection. arXiv website. arxiv.org/pdf/1802.07846.pdf. Published February 21, 2018. Updated July 23, 2018. Accessed October 8, 2018 [Google Scholar]
30. Desai V, Flanders AE, Lakhani P. Application of deep learning in neuroradiology: automated detection of basal ganglia hemorrhage using 2D-convolutional neural networks. arXiv website. arxiv.org/ftp/arxiv/papers/1710/1710.03823.pdf. Published October 10, 2017. Updated October 27, 2017. Accessed October 8, 2018 [Google Scholar]
31. Chilamkurthy S, Ghosh R, Tanamala S, et al. Development and validation of deep learning algorithms for detection of critical findings in head CT scans. arXiv website. arxiv.org/pdf/1803.05854.pdf. Published March 13, 2018. Updated April 12, 2018. Accessed October 8, 2018 [Google Scholar]
32. Grewal M, Srivastava MM. RADnet: Radiologist level accuracy using deep learning for hemorrhage detection in CT scans. In: 15th IEEE international symposium on biomedical imaging. Piscataway, NJ: IEEE, 2018:281–284 [Google Scholar]
33. Arbabshirani MR, Fornwalt BK, Mongelluzzo GJ, et al. Advanced machine learning in action: identification of intracranial hemorrhage on computed tomography scans of the head with clinical workflow integration. npj Digit Med 2018; 1:9 [Google Scholar]
34. U.S. Food and Drug Administration website. FDA permits marketing of clinical decision support software for alerting providers of a potential stroke in patients. www.fda.gov/NewsEvents/Newsroom/PressAnnouncements/ucm596575.htm. Published February 13, 2018. Accessed June 25, 2018 [Google Scholar]
35. Morioka CA, Valentino DJ, Duckwiler G, et al. Disease specific intelligent pre-fetch and hanging protocol for diagnostic neuroradiology workstations. Proc AMIA Symp 2001:468–472 [Google Scholar]
36. Hsu W, Taira RK, El-Saden S, Kangarloo H, Bui AAT. Context-based electronic health record: toward patient specific healthcare. IEEE Trans Inf Technol Biomed 2012; 16:228–234 [Google Scholar]
37. Senders JT, Arnaout O, Karhade AV, et al. Natural and artificial intelligence in neurosurgery: a systematic review. Neurosurgery 2018; 83:181–192 [Google Scholar]
38. Kakeda S, Moriya J, Sato H, et al. Improved detection of lung nodules on chest radiographs using a commercial computer-aided diagnosis system. AJR 2004; 182:505–510 [Abstract] [Google Scholar]
39. Baker JA, Rosen EL, Lo JY, Gimenez EI, Walsh R, Soo MS. Computer-aided detection (CAD) in screening mammography: sensitivity of commercial CAD systems for detecting architectural distortion. AJR 2003; 181:1083–1088 [Abstract] [Google Scholar]
40. Tang Y, Harrison AP, Bagheri M, Xiao J, Summers RM. Semi-automatic RECIST labeling on CT scans with cascaded convolutional neural networks. arXiv website. arxiv.org/pdf/1806.09507.pdf. Published June 25, 2018. Accessed October 9, 2018 [Google Scholar]
41. Yan K, Bagheri M, Summers RM. 3D context enhanced region-based convolutional neural network for end-to-end lesion detection. arXiv website. arxiv.org/pdf/1803.09648.pdf. Published June 25, 2018. Updated July 29, 2018. Accessed October 9, 2018 [Google Scholar]
42. Cai J, Tang Y, Lu L, et al. Accurate weakly supervised deep lesion segmentation on CT scans: self-paced 3D mask generation from RECIST. arXiv website. arxiv.org/pdf/1801.08614.pdf. Published January 25, 2018. Accessed October 8, 2018 [Google Scholar]
43. Zhuge Y, Krauze AV, Ning H, et al. Brain tumor segmentation using holistically nested neural networks in MRI images. Med Phys 2017; 44:5234–5243 [Google Scholar]
44. Ding Y, Dong R, Lan T, et al. Multi-modal brain tumor image segmentation based on SDAE. Int J Imaging Syst Technol 2018; 28:38–47 [Google Scholar]
45. Xia X, Kulis B. W-net: a deep model for fully unsupervised image segmentation. arXiv website. arxiv.org/pdf/1711.08506.pdf. Published November 22, 2017. Accessed October 9, 2018 [Google Scholar]
46. Zhao L, Jia K. Multiscale CNNs for brain tumor segmentation and diagnosis. Comput Math Methods Med 2016; 2016:1–7 [Google Scholar]
47. Havaei M, Davy A, Warde-Farley D, et al. Brain tumor segmentation with deep neural networks. arXiv website. arxiv.org/pdf/1505.03540.pdf. Published May 13, 2015. Updated May 20, 2016. Accessed October 9, 2018 [Google Scholar]
48. Pereira S, Pinto A, Alves V, Silva CA. Brain tumor segmentation using convolutional neural networks in MRI images. IEEE Trans Med Imaging 2016; 35:1240–1251 [Google Scholar]
49. Hussain S, Anwar SM, Majid M. Brain tumor segmentation using cascaded deep convolutional neural network. In: 39th annual international conference of the IEEE Engineering in Medicine and Biology Society. Piscataway, NJ: IEEE, 2017:1998–2001 [Google Scholar]
50. Zhao X, Wu Y, Song G, Li Z, Zhang Y, Fan Y. A deep learning model integrating FCNNs and CRFs for brain tumor segmentation. Med Image Anal 2018; 43:98–111 [Google Scholar]
51. Korfiatis P, Kline TL, Erickson BJ. Automated segmentation of hyperintense regions in FLAIR MRI using deep learning. Tomography 2016; 2:334–340 [Google Scholar]
52. Bakas S. The multimodal brain tumor segmentation challenge 2018. Perelman School of Medicine website. www.braintumorsegmentation.org. Accessed June 25, 2018 [Google Scholar]
53. The Cancer Imaging Archive website. TCIA collections. www.cancerimagingarchive.net. Accessed June 25, 2018 [Google Scholar]
54. Lambin P, Leijenaar RT, Deist TM, et al. Radiomics: the bridge between medical imaging and personalized medicine. Nat Rev Clin Oncol 2017; 14:749–762 [Google Scholar]
55. Kickingereder P, Burth S, Wick A, et al. Radiomic profiling of glioblastoma: identifying an imaging predictor of patient survival with improved performance over established clinical and radiologic risk models. Radiology 2016; 280:880–889 [Google Scholar]
56. Louis DN, Perry A, Reifenberger G, et al. The 2016 World Health Organization classification of tumors of the central nervous system: a summary. Acta Neuropathol 2016; 131:803–820 [Google Scholar]
57. Pan Y, Huang W, Lin Z, et al. Brain tumor grading based on neural networks and convolutional neural networks. In: 37th annual international conference of the IEEE Engineering in Medicine and Biology Society. Piscataway, NJ: IEEE, 2015:699–702 [Google Scholar]
58. Chang K, Bai HX, Zhou H, et al. Residual convolutional neural network for the determination of IDH status in low- and high-grade gliomas from MR imaging. Clin Cancer Res 2018; 24:1073–1081 [Google Scholar]
59. Akkus Z, Ali I, Sedlář J, et al. Predicting deletion of chromosomal arms 1p/19q in low-grade gliomas from MR images using machine intelligence. J Digit Imaging 2017; 30:469–476 [Google Scholar]
60. Han L, Kamdar MR. MRI to MGMT: predicting methylation status in glioblastoma patients using convolutional recurrent neural networks. Pac Symp Biocomput 2018; 23:331–342 [Google Scholar]
61. Korfiatis P, Kline TL, Lachance DH, Parney IF, Buckner JC, Erickson BJ. Residual deep convolutional neural network predicts MGMT methylation status. J Digit Imaging 2017; 30:622–628 [Google Scholar]
62. Chang P, Grinband J, Weinberg BD, et al. Deep-learning convolutional neural networks accurately classify genetic mutations in gliomas. AJNR 2018; 39:1201–1207 [Google Scholar]
63. van Thuijl HF, Mazor T, Johnson BE, et al. Evolution of DNA repair defects during malignant progression of low-grade gliomas after temozolomide treatment. Acta Neuropathol 2015; 129:597–607 [Google Scholar]
64. Sainath TN, Vinyals O, Senior A, Sak H. Convolutional, long short-term memory, fully connected deep neural networks. In: IEEE international conference on acoustics, speech and signal processing. Piscataway, NJ: IEEE, 2015:4580–4584 [Google Scholar]
65. Gottlieb S. Statement from FDA Commissioner Scott Gottlieb, M.D., on advancing new digital health policies to encourage innovation, bring efficiency and modernization to regulation. U.S. Food and Drug Administration website. www.fda.gov/NewsEvents/Newsroom/PressAnnouncements/ucm587890.htm. Published December 7, 2017. Accessed October 8, 2018 [Google Scholar]
66. U.S. Food and Drug Administration website. Clinical and patient decision support software: draft guidance for industry and Food and Drug Administration staff. www.fda.gov/downloads/MedicalDevices/DeviceRegulationandGuidance/GuidanceDocuments/UCM587819.pdf. Published December 8, 2017. Accessed October 8, 2018 [Google Scholar]
67. Fridsma DB. AMIA response to FDA draft guidance on clinical and patient decision support software. American Medical Informatics Association website. www.amia.org/sites/default/files/AMIA-Response-to-FDA-Draft-Guidance-on-Clinical-and-Patient-Decision-Support-Software.pdf. Published February 6, 2018. Accessed October 8, 2018 [Google Scholar]
Address correspondence to J. E. Villanueva-Meyer.
