November 2020, Volume 215, Number 5

Medical Physics and Informatics

Original Research

Artificial Intelligence Predictive Analytics in the Management of Outpatient MRI Appointment No-Shows

Affiliation: Department of Radiology, Changi General Hospital, 2 Simei St 3, Singapore 529889, Republic of Singapore.

Citation: American Journal of Roentgenology 2020; 215:1155–1162. doi:10.2214/AJR.19.22594

ABSTRACT

OBJECTIVE. Outpatient appointment no-shows are a common problem. Artificial intelligence predictive analytics can potentially facilitate targeted interventions to improve efficiency. We describe a quality improvement project that uses machine learning techniques to predict and reduce outpatient MRI appointment no-shows.

MATERIALS AND METHODS. Anonymized records from 32,957 outpatient MRI appointments between 2016 and 2018 were acquired for model training and validation along with a holdout test set of 1080 records from January 2019. The overall no-show rate was 17.4%. A predictive model developed with XGBoost, a decision tree-based ensemble machine learning algorithm that uses a gradient boosting framework, was deployed after various machine learning algorithms were evaluated. The simple intervention measure of using telephone call reminders for patients with the top 25% highest risk of an appointment no-show as predicted by the model was implemented over 6 months.

RESULTS. The ROC AUC for the predictive model was 0.746 with an optimized F1 score of 0.708; at this threshold, the precision and recall were 0.606 and 0.852, respectively. The AUC for the holdout test set was 0.738 with an optimized F1 score of 0.721; at this threshold, the precision and recall were 0.605 and 0.893, respectively. The no-show rate 6 months after deployment of the predictive model was 15.9% compared with 19.3% in the preceding 12-month preintervention period, corresponding to a 17.2% improvement from the baseline no-show rate (p < 0.0001). The no-show rates of contactable and noncontactable patients in the group at high risk of appointment no-shows as predicted by the model were 17.5% and 40.3%, respectively (p < 0.0001).

CONCLUSION. Machine learning predictive analytics perform moderately well in predicting complex problems involving human behavior using a modest amount of data with basic feature engineering, and they can be incorporated into routine workflow to improve health care delivery.

Keywords: artificial intelligence, machine learning, MRI, no-show, XGBoost

Hospital outpatient appointment no-shows are a common problem and a burden to health care systems worldwide, with clinic no-show rates reported in Africa (43.0%), South America (27.8%), Asia (25.1%), North America (23.5%), Europe (19.3%), and Oceania (13.2%) [1]. No-shows waste limited health care resources and contribute to inefficiencies of overwhelmed health care systems, resulting in scarce resources not being optimized, long appointment lead times, and patients being denied timely care [2]. Given the aging population in many developed countries and the rising costs of health care delivery, there is an imperative need to keep health care both accessible and affordable.

Although multiple studies have investigated factors for predicting specialist outpatient appointment no-shows [3–10], mainly through traditional logistic regression statistical techniques with a limited number of variables [9, 11–13], fewer studies exist that describe the use of nonlinear supervised machine learning predictive models trained using higher-dimensional datasets. In the past few years, supervised machine learning algorithms have shown tremendous success in a variety of classification and regression tasks, with examples of industry applications including prediction of fraudulent credit card transactions [14], website advertisement click-through rates [15], and viewer ratings of movies [16]. These powerful machine learning techniques are gradually being deployed in health care systems to improve the delivery of quality clinical care and to better facilitate planning of resource allocation, manpower deployment, equipment acquisition, and capital expenditure.

Appointment no-shows are a multifaceted problem given the multitude of behavioral, social, medical, physical, logistic, and geographic factors that interact in a complex and unpredictable fashion to influence the outcome of appointment attendance; nonetheless, this problem may be tractable given the capabilities and successes of recent machine learning techniques trained using high-dimensional datasets to produce complex high-performance predictive models. For radiology departments, accurately predicting the individual patient risk of an outpatient scan appointment no-show may enable more intelligent and reliable intervention strategies, such as selective appointment overbooking and telephone call, e-mail, and short message service (SMS) text reminders [17–20]. For example, overbooking decisions can be informed by predicting the likelihood of a no-show at the time that the appointment slot is booked, because no-shows are known to not occur randomly [1]. In this way, overbooking can be tailored to the likelihood of individual patients not turning up, minimizing the risk of appointment collisions, which could lead to patient dissatisfaction if overbooking were not performed using a robust systematic process [10]. Successful development and deployment of such prediction tools could reduce outpatient scan wait times and increase the scanner utilization rate, optimizing the use of scarce medical resources to improve accessibility and contain rising health care costs.

In this study, we describe a quality improvement project at our institution designed to predict outpatient MRI appointment no-shows for workflow deployment and evaluate the impact of this predictive method after implementation. We used state-of-the-art machine learning models that were developed with a modest amount of data acquired from frontline information technology systems used in daily routine operations in the radiology department.

Materials and Methods

A waiver of institutional review board consent was granted. Records from 32,957 outpatient MRI appointments scheduled for 25,461 unique patients in the radiology department at our institution between January 2016 and December 2018 were extracted from the hospital radiology information system and outpatient appointment system used in frontline department operations for model training and validation. A further holdout test set of 1080 records from January 2019 was also acquired.

All direct patient identifiers were scrubbed from the records. Dates of birth were used to calculate patient age and then were discarded, whereas residential postal codes were used to map patients to one of 28 postal districts in Singapore as a surrogate indicator of patients' physical distance from the hospital and then were also discarded. The datasets contained a total of 21 categoric and numeric patient features, excluding the target variable (i.e., no-show status) (Table 1). No information on individual patient-specific medical conditions was acquired.

TABLE 1: Patient and MRI Appointment Features Extracted From the Hospital Radiology Information System and Outpatient Appointment System
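The age and postal district derivations described above can be sketched as follows. This is a minimal illustration, not the study's actual code: the field names are hypothetical, and the sector-to-district dictionary is an illustrative stub (the real mapping covers all 28 Singapore postal districts).

```python
from datetime import date

# Illustrative subset only, NOT the actual district table; the real
# mapping covers all 28 Singapore postal districts.
SECTOR_TO_DISTRICT = {"52": "17", "53": "19"}

def derive_features(record, today=date(2019, 1, 1)):
    """Derive model features from identifiers, then discard the identifiers."""
    dob = record.pop("date_of_birth")        # direct identifier, discarded
    postal_code = record.pop("postal_code")  # direct identifier, discarded
    # Age at the reference date.
    record["age"] = today.year - dob.year - (
        (today.month, today.day) < (dob.month, dob.day))
    # The first two digits of a Singapore postal code give the postal sector,
    # which maps onto a postal district (surrogate for distance from hospital).
    record["postal_district"] = SECTOR_TO_DISTRICT.get(postal_code[:2], "unknown")
    return record
```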

Initial evaluation of various machine learning predictive models developed with widely used open-source software tools (Python, version 2.7 [Guido van Rossum]; XGBoost, version 0.80 [Tianqi Chen]; scikit-learn, version 0.20.2 [David Cournapeau]; TensorFlow, version 1.8.0 [Google Brain Team]; and Keras, version 2.1.5 [François Chollet]) was performed (Table 2). The 2016–2018 MRI appointment data were randomly split into training and validation sets in an 80:20 ratio, and the models were trained and validated using 10-fold cross-validation. Class imbalance was addressed by applying appropriate weight factors or by undersampling of the majority class. Hyperparameters were tuned through a random grid search optimized for the maximum ROC AUC. Cutoff thresholds were optimized to yield the highest F1 scores, representing the harmonic means of precision (positive predictive value) and recall (sensitivity). The best-performing model (XGBoost) was selected for workflow deployment after evaluation against the final holdout test set.

TABLE 2: Performance Metrics of the Machine Learning Prediction Models Evaluated
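The training and threshold-selection procedure can be sketched on synthetic data. This is a simplified stand-in, not the deployed pipeline: scikit-learn's GradientBoostingClassifier substitutes for XGBoost, the 10-fold cross-validation and random grid search are omitted for brevity, and the synthetic dataset merely mimics the 21-feature table and ~17% no-show class imbalance.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import precision_recall_curve, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the appointment table: 21 features, ~17% positive
# (no-show) class, mirroring the class imbalance reported in the study.
X, y = make_classification(n_samples=2000, n_features=21,
                           weights=[0.83], random_state=0)

# 80:20 training/validation split, as described above.
X_tr, X_val, y_tr, y_val = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# Address class imbalance by upweighting the minority (no-show) class.
w = np.where(y_tr == 1, (y_tr == 0).sum() / (y_tr == 1).sum(), 1.0)
clf = GradientBoostingClassifier(random_state=0)
clf.fit(X_tr, y_tr, sample_weight=w)

p_val = clf.predict_proba(X_val)[:, 1]
auc = roc_auc_score(y_val, p_val)

# Choose the cutoff that maximizes F1, the harmonic mean of precision
# (positive predictive value) and recall (sensitivity).
prec, rec, thr = precision_recall_curve(y_val, p_val)
f1 = 2 * prec * rec / np.maximum(prec + rec, 1e-12)
cutoff = thr[np.argmax(f1[:-1])]  # last PR point has no threshold
```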

For workflow implementation, the XGBoost model was calibrated at a conservative cutoff threshold to intervene in 25% of outpatient MRI appointments, because this would entail no additional manpower costs and departmental resources compared with the preexisting nonsystematic workflow. This preintervention baseline workflow entailed placing telephone call reminders to a similar proportion of patients, depending on the likelihood of those patients being no-shows as perceived by the individual MRI technicians who were rostered to call these patients each day (e.g., some technicians may prioritize calling patients undergoing MRI examinations of multiple regions, whereas others may contact patients with evening appointments).
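Calibrating a cutoff so that roughly the top 25% of predicted risks are flagged can be sketched as follows (the score distribution here is a synthetic stand-in for the model's predicted probabilities):

```python
import numpy as np

def reminder_cutoff(predicted_risk, fraction=0.25):
    """Probability cutoff above which roughly `fraction` of appointments
    (those at highest predicted no-show risk) receive a reminder call."""
    return np.quantile(predicted_risk, 1.0 - fraction)

rng = np.random.default_rng(0)
risk = rng.beta(2, 6, size=10_000)   # stand-in predicted no-show probabilities
cut = reminder_cutoff(risk)
flagged = risk >= cut                # appointments selected for telephone calls
```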

Commencing in March 2019, upcoming outpatient appointment lists that included the probabilities of individual patients being no-shows were generated weekly, and these reports were used by the rostered MRI technicians to place telephone call reminders to the top 25% of patients with the highest risk of appointment no-shows as predicted by the model one business day before the scheduled appointments. Patients who could be contacted through telephone calls made by the MRI technicians, both before and after deployment of the XGBoost predictive model workflow, were verbally reminded of their upcoming MRI appointments (including timing and any specific prescanning preparation required) and were specifically asked whether they intended to keep the appointment for their scheduled scans. Appointments were rescheduled or canceled accordingly depending on the outcome of the telephone calls. We did not implement double booking of appointments or the use of SMS text reminders for the duration of this project. The outpatient appointment system captures no-show events as patients failing to show up for their originally scheduled appointments; it does not treat rescheduled or canceled appointments as no-shows.

Outpatient MRI appointment no-show rates over the 6 months (March to August 2019) after implementation of the intervention measures were compared with those of the immediately preceding 12-month preintervention baseline period (March 2018 to February 2019). The comparison was performed using the two-sample test for equality of proportions in the open-source statistical program R (version 3.6.0, R Foundation), with the null hypothesis being no difference in no-show rates before and after implementation of the machine learning–aided workflow intervention measures. A similar comparison was made with the overall no-show rate across the 12 calendar quarters preceding implementation of the intervention measures. We also performed subgroup analysis of contactable and noncontactable patients at high risk of appointment no-shows as predicted by the model.
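The two-sample test for equality of proportions can be sketched in plain Python using the pooled z-test. This is an approximation of R's prop.test, without the continuity correction R applies by default; the counts are those reported in the Results.

```python
from math import erf, sqrt

def two_proportion_test(x1, n1, x2, n2):
    """Two-sample test for equality of proportions (pooled z-test,
    no continuity correction); returns (z, two-sided p-value)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal tail
    return z, p_value

# No-show counts from the Results: 2112/10,967 before vs 961/6027 after.
z, p = two_proportion_test(2112, 10967, 961, 6027)
```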

Results

Of the 32,957 outpatient MRI appointment records from January 2016 to December 2018, a total of 21,520 (65.3%) were for male patients and 11,437 (34.7%) were for female patients. The patient age range was 11–101 years, with a mean patient age of 49 years. The overall no-show rate during this period was 17.4% (5734 of 32,957 appointments). The no-show rate, which showed an increase from 16.4% in 2016 to 18.3% in 2017 and then to 19.1% in 2018, served as the main impetus for this quality improvement project and provided the rationale for postintervention comparison with the rate for the immediately preceding 12-month preintervention baseline period. Any values for ordering department (1.5%), postal district (7.7%), and language (29.2%) that were missing from the dataset were imputed as regular values in the analysis; there were no missing values for the other features.

The threshold-independent training metrics of the XGBoost model included a ROC AUC of 0.746 and an area under the precision-recall curve of 0.732 (Figs. 1 and 2). At the optimized F1 score threshold (F1 = 0.708), precision and recall were 0.606 and 0.852, respectively, for an overall accuracy of 0.654. The hyperparameters of the XGBoost model are presented in Table 3.

Fig. 1 —ROC of deployed XGBoost (version 0.80, Tianqi Chen) prediction model with AUC of 0.746. Diagonal line denotes reference, and circles on ROC curve denote data points.

Fig. 2 —Precision-recall curve of deployed XGBoost (version 0.80, Tianqi Chen) prediction model with AUC of 0.732. Circles on curve denote data points.

TABLE 3: Hyperparameters for the Deployed XGBoost Prediction Model Optimized for ROC AUC

When evaluated against the final holdout test set of 1080 cases not involved in model training and validation, the XGBoost model showed consistent, robust performance, with a ROC AUC of 0.738 and an optimized F1 score of 0.721; at this threshold, the precision and recall were 0.605 and 0.893, respectively, with an overall accuracy of 0.656.

The top 10 factors and their relative importance in predictive performance for the best-performing XGBoost model for our dataset are presented in Figure 3. In our institutional setting, these factors are mainly related to patient age and the MRI appointment wait time (expressed in days), with lesser contributions from the number of appointment reschedulings, male sex, certain postal districts, and certain weekdays. Note that these factors are associated with high conditional probabilities of no-shows occurring independent of the study data distribution; for instance, although the overall sex distribution of the study data is skewed toward male patients, the conditional probability of a no-show given a male patient is independently higher than the conditional probability of a no-show given a female patient.

Fig. 3 —Relative importance of various factors identified by XGBoost (version 0.80, Tianqi Chen) prediction model.

For workflow deployment, the cutoff threshold of this model was set to intervene in 25% of all outpatient MRI appointments, for the previously described reasons, yielding an expected F1 score of 0.507, precision of 0.770, and recall of 0.378. By contrast, 69% of all patients would have needed to be called at the optimized F1 score threshold.

The outpatient MRI appointment no-show rate for the 6 months after implementation of the targeted intervention measures was 15.9% (961 of 6027 appointments) compared with 19.3% (2112 of 10,967 appointments) for the preceding 12-month preintervention baseline period. This corresponds to a statistically significant absolute difference of 3.31% (95% confidence interval [CI], 2.13–4.50%) and a 17.2% (95% CI, 11.2–22.8%) relative improvement from the baseline no-show rate through simple intervention measures with the predictive model set at a conservative threshold (Fig. 4), thus rejecting the null hypothesis (p < 0.0001).
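The reported effect sizes follow directly from the raw counts:

```python
pre = 2112 / 10_967    # 12-month preintervention no-show rate (19.3%)
post = 961 / 6_027     # 6-month postintervention no-show rate (15.9%)
absolute = pre - post          # absolute difference in rates (3.31%)
relative = absolute / pre      # relative improvement from baseline (17.2%)
```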

Fig. 4 —Weekly outpatient MRI appointment no-show rates for 1 year before (19.3%) and 6 months after (15.9%) implementation of intervention measures in March 2019, as guided by XGBoost (version 0.80, Tianqi Chen) prediction model (p < 0.0001). Squares denote data points.

The numbers of MRI appointment no-shows for the 12 calendar quarters before (March 2016 to February 2019) and the two calendar quarters after (March to August 2019) implementation of the XGBoost predictive model–guided intervention measures are presented in Table 4. There was an absolute difference of 2.23% (95% CI, 1.21–3.24%) between the overall no-show rate after implementation of the intervention measures (15.9%) and the overall no-show rate of 18.2% for the preceding 3 years (p < 0.0001).

TABLE 4: No-Show Rates for the 12 Calendar Quarters Before (March 2016 to February 2019) and Two Quarters After (March to August 2019) Implementation of the XGBoost Model–Guided Intervention Measures

Subgroup analysis was performed between the contactable and noncontactable patients in the group at high risk of appointment no-shows as predicted by the model. Among the 930 contactable patients, 192 appointments were rescheduled or canceled, freeing those appointment slots; these appointments were excluded from the analysis (Table 5). The no-show rates of the remaining contactable and noncontactable patients were 17.5% (129 of 738 appointments) and 40.3% (208 of 516 appointments), respectively, with an absolute difference of 22.8% (95% CI, 17.8–27.9%) and a relative difference of 56.6% (p < 0.0001).

TABLE 5: Appointment Attendance During Selected Months in 2019 for Contactable and Noncontactable Patients at High Risk of Appointment No-Shows as Predicted Using the XGBoost Model
Discussion

Given limited specialist health care resources, increasing demand, and long appointment wait times for imaging studies in many public health care systems worldwide, outpatient MRI appointment no-shows are a pressing problem that needs to be addressed. Indeed, the lead times for outpatient imaging appointments are tracked as a clinical quality indicator by the Ministry of Health in Singapore. At our institution, the overall outpatient MRI appointment no-show rate between 2016 and 2018 was 17.4%, with an increasing trend noted.

Appointment no-shows are a multifactorial problem involving the interaction of complex human and nonhuman factors. Some studies have investigated factors associated with outpatient radiology appointment no-shows [12, 21], using mainly traditional logistic regression–based statistical modeling. There is emerging literature describing the use of state-of-the-art artificial intelligence predictive models to study appointment no-shows. In a recent study, Nelson et al. [22] investigated outpatient MRI appointment no-shows using a dataset of 22,318 appointments containing 81 features and achieved a ROC AUC of 0.852 with a precision of 0.511 using a gradient-boosted machine learning model. Lee et al. [23] developed a machine learning model trained using the highly feature-engineered data of 1 million specialist outpatient clinic appointments at a tertiary hospital in Singapore, with 42 features extracted from the institutional business enterprise data warehouse, achieving a ROC AUC of up to 0.832. Several machine learning models were evaluated in the latter study, with the best-performing model based on the XGBoost algorithm. On the basis of these studies, we likewise evaluated and deployed an XGBoost predictive machine learning model specific to radiology outpatient MRI appointment no-shows, albeit with a substantially smaller dataset and only basic feature engineering. More important, given the absence of available studies in this aspect, we also evaluated the quality improvement impact on outpatient MRI appointment no-show rates after sustained deployment of the machine learning model into our routine departmental workflow.

Recently developed open-source algorithms such as XGBoost have found success with state-of-the-art performance, dominating online applied machine learning competitions for structured and tabular data, such as those found on websites like Kaggle [24]. XGBoost is an ensemble gradient-boosted decision tree algorithm optimized for execution speed and model performance [25]. XGBoost models have been evaluated in a number of recent clinical studies, examples of which include studies predicting hospital admissions originating from emergency departments [26], patient outcomes of acute ischemic strokes [27], breast cancer survival from multiparametric breast MRI [28], and computer-aided diagnosis of lung nodules [29].

For workflow deployment of our model, given the practical limits of available manpower resources in our department, a relatively conservative threshold was applied to provide telephone reminders for 25% of all outpatient MRI appointments. This systematic approach potentially yields up to 4.2 times the performance of random calling (expected positive predictive value, 0.770 vs 0.174), with an expected sensitivity of 0.378 for predicting no-shows. Although long-term measures of the effectiveness of our predictive model are not available because of the relatively short period of implementation, given the large number of MRI scans performed in our department, we were able to achieve within 6 months a highly statistically significant and meaningful 17.2% relative improvement in the outpatient MRI appointment no-show rate through simple intervention measures involving a minority of patients, with the absolute no-show rate decreasing from the preintervention baseline of 19.3% to 15.9%. On the basis of this finding, we estimate that the efficiency gain for the department will be approximately $180,000 per year (≈ 3.3% × 11,000 MRI examinations performed a year × a mean MRI scan cost of $500 at our hospital).
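The efficiency-gain estimate above is simple arithmetic:

```python
rate_reduction = 0.033   # absolute no-show reduction (3.3 percentage points)
scans_per_year = 11_000  # approximate annual MRI volume at the department
cost_per_scan = 500      # mean MRI scan cost at the hospital
saving = rate_reduction * scans_per_year * cost_per_scan  # ~181,500/year
```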

Subgroup analysis also showed a strong prospective predictive ability of the model in identifying patients at high risk of having no-show appointments, with a no-show rate of 40.3% for noncontactable patients in the group predicted to be at high risk for appointment no-shows. Implementing the simple intervention measure of providing reminder telephone calls for the high-risk group led to a substantially lower no-show rate for contactable patients (17.5%), which is comparable to the overall prevalence rate. We anticipate that a larger decrease in no-show rates could be achieved if the predictive model was set at a lower threshold, albeit at the cost of having to make an increased number of reminder telephone calls.

Although enterprise business intelligence data warehouses commonly used in large institutional health care facilities are ideal data sources for developing machine learning models, there are potential practical challenges in acquiring initial training data and subsequently maintaining sustainable data pipeline access to information contained within for implementation purposes. These should be considered during the planning stages when developing and deploying artificial intelligence solutions. Country or institutional data privacy issues and policies may place limits on unfettered access to patient data, and full access to a complete list of features contained in these warehouses is unlikely in most operations. However, a modest amount of sustainable, albeit more limited patient data and features may nonetheless remain useful, as illustrated in the present study. We had access to only limited resource utilization data on MRI appointment attendance and generic patient demographics captured by the radiology information system and outpatient appointment system databases used in daily business operations in our department, and we intentionally used sustainable datasets that were reliably retrievable during training and deployment.

As with many machine learning solutions, such predictive models have narrow use in that they apply to specific cases and are tailored to the requirements of each facility and institution. Machine learning models also have to be periodically retrained and reevaluated with updated data to maintain their usefulness, particularly as institutional workflow practices, patient demographics, and equipment evolve with time, and to counter concept drift [30], in which the validity of the learned concept or target variable gradually deviates over time because of known and unknown changes in the context or environment in which the model was developed, leading to less accurate predictions and deteriorating model performance.

This study has several limitations. We elected to use a prospective cohort over a randomized controlled study design because the main objective was to implement a quality improvement initiative that could be deployed quickly with immediate impact to address the real-life workflow problem of an increasing MRI appointment no-show rate. A randomized controlled design would have required a longer time to implement but may better address the heterogeneous unknown baseline workflow variability between MRI technician estimates of the likelihood of no-shows and serve as a cleaner comparison. Time-series autoregressive model training from the appointment scheduling data may also provide more accurate forecasts and may better account for seasonality of no-show rates; however, we did not identify an obvious pattern in no-show rates from the raw data and have included periodic parameters such as month, week, and day of the week as input categories in the models (with seasonal weather being a minor factor in Singapore because it is a small, tropical equatorial country). Our available training dataset was also modest by most machine learning standards, comprising only 32,957 records in two classes with 21 features (most of which contained less than 100 distinct values). By comparison, the simple and widely used Modified National Institute of Standards and Technology dataset of handwritten digits for beginner machine learning projects, described by Geoff Hinton as the “Drosophila of machine learning” [31], includes 70,000 examples in 10 classes with 784 (28 × 28) features each of 256 possible values. Even with the modest amount of data used in our project, it was possible to develop a moderately well-performing machine learning predictive model suitable for workflow deployment to achieve significant improvement in service quality.

We believe that the main strength of the present study lies in its empirical approach, given the lack of published literature quantifying the impact of actual workflow implementation; previous studies have only postulated the potential benefits of applying machine learning techniques to this problem. We envisage improved performance of our model with higher-quality data containing more relevant features, such as International Classification of Diseases, 10th Revision, codes (for some patients, for instance, low back pain may have resolved by the time of the scan appointment); time-series autoregressive model training from appointment scheduling data; and advanced feature engineering measures such as natural language processing or tokenization of free-text features, synthetic oversampling techniques (e.g., SMOTE [synthetic minority oversampling technique] [32]), and processing of postal codes into actual physical distances. We note that the study by Nelson et al. [22] used advanced feature engineering, such as conversion of addresses to longitudes and latitudes and SMOTE oversampling. Although Nelson et al. [22] and Lee et al. [23] evaluated potential cost savings on the basis of theoretical modeling, in contrast to the present study, neither reported quality improvement metrics from actual sustained deployment of the model. The aim of our study was not to produce a highly complex model but, rather, to produce one that could be developed relatively quickly, would require minimal data processing, and would be readily deployable in workflow practice for quality improvement.
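The SMOTE technique cited above can be sketched with plain NumPy. This is a minimal toy version of the algorithm of Chawla et al. [32] (interpolating between a minority-class sample and one of its nearest minority-class neighbors); the imbalanced-learn library provides a production implementation.

```python
import numpy as np

def smote_minimal(X_min, n_new, k=5, rng=None):
    """Toy SMOTE: synthesize n_new minority-class samples by interpolating
    between a random minority sample and one of its k nearest minority
    neighbors (after Chawla et al., reference 32)."""
    if rng is None:
        rng = np.random.default_rng(0)
    # Pairwise distances within the minority class only.
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nn = np.argsort(d, axis=1)[:, :k]  # k nearest minority neighbors
    out = np.empty((n_new, X_min.shape[1]))
    for j in range(n_new):
        i = rng.integers(len(X_min))           # random minority sample
        nb = X_min[rng.choice(nn[i])]          # one of its neighbors
        out[j] = X_min[i] + rng.random() * (nb - X_min[i])  # interpolate
    return out

X_min = np.random.default_rng(1).normal(size=(20, 3))
X_syn = smote_minimal(X_min, n_new=40)
```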

Conclusion

State-of-the-art artificial intelligence predictive analytics can perform moderately well in solving complex multifactorial operational problems such as outpatient MRI appointment no-shows, using a modest amount of data and basic feature engineering. Such data may be readily retrievable from frontline information technology systems commonly used in most hospital radiology departments, and they can be readily incorporated into routine workflow practice to improve the efficiency and quality of health care delivery.

Based on a presentation at the Singapore Radiological Society 2019 annual meeting, Singapore.

References
1. Dantas LF, Fleck JL, Cyrino Oliveira FL, Hamacher S. No-shows in appointment scheduling: a systematic literature review. Health Policy 2018; 122:412–421 [Google Scholar]
2. LaGanga LR, Lawrence SR. Clinic overbooking to improve patient access and increase provider productivity. Decis Sci 2007; 38:251–276 [Google Scholar]
3. Peng Y, Erdem E, Shi J, Masek C, Woodbridge P. Large-scale assessment of missed opportunity risks in a complex hospital setting. Inform Health Soc Care 2016; 41:112–127 [Google Scholar]
4. Blumenthal DM, Singal G, Mangla SS, Macklin EA, Chung DC. Predicting non-adherence with outpatient colonoscopy using a novel electronic tool that measures prior non-adherence. J Gen Intern Med 2015; 30:724–731 [Google Scholar]
5. Kheirkhah P, Feng Q, Travis LM, Tavakoli-Tabasi S, Sharafkhaneh A. Prevalence, predictors and economic consequences of no-shows. BMC Health Serv Res 2016; 16:13 [Google Scholar]
6. Kempny A, Diller GP, Dimopoulos K, et al. Determinants of outpatient clinic attendance amongst adults with congenital heart disease and outcome. Int J Cardiol 2016; 203:245–250 [Google Scholar]
7. Menendez ME, Ring D. Factors associated with non-attendance at a hand surgery appointment. Hand (N Y) 2015; 10:221–226 [Google Scholar]
8. Torres O, Rothberg MB, Garb J, Ogunneye O, Onyema J, Higgins T. Risk factor model to predict a missed clinic appointment in an urban, academic, and underserved setting. Popul Health Manag 2015; 18:131–136 [Google Scholar]
9. Alaeddini A, Yang K, Reddy C, Yu S. A probabilistic model for predicting the probability of no-show in hospital appointments. Health Care Manage Sci 2011; 14:146–157 [Google Scholar]
10. Huang Y, Hanauer DA. Patient no-show predictive model development using multiple data sources for an effective overbooking approach. Appl Clin Inform 2014; 5:836–860 [Google Scholar]
11. Chua SL, Chow WL. Development of predictive scoring model for risk stratification of no-show at a public hospital specialist outpatient clinic. Proc Singapore Healthcare 2019; 28:96–104 [Google Scholar]
12. AlRowaili MO, Ahmed AE, Areabi HA. Factors associated with no-shows and rescheduling MRI appointments. BMC Health Serv Res 2016; 16:679 [Google Scholar]
13. Huang YL, Hanauer DA. Time dependent patient no-show predictive modelling development. Int J Health Care Qual Assur 2016; 29:475–488 [Google Scholar]
14. Awoyemi JO, Adetunmbi AO, Oluwadare SA. Credit card fraud detection using machine learning techniques: a comparative analysis. In: Proceedings of the 2017 IEEE International Conference on Computing, Networking, and Informatics (ICCNI). Piscataway, NJ: IEEE, 2017:1–9 [Google Scholar]
15. McMahan HB, Holt G, Sculley D, et al. Ad click prediction: a view from the trenches. In: Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. Chicago, IL: Association for Computing Machinery, 2013:1222–1230 [Google Scholar]
16. Bennett J, Lanning S. The Netflix Prize. In: Proceedings of the KDD Cup and Workshop 2007. New York, NY: Association for Computing Machinery, 2007:3–6 [Google Scholar]
17. Hasvold PE, Wootton R. Use of telephone and SMS reminders to improve attendance at hospital appointments: a systematic review. J Telemed Telecare 2011; 17:358–364 [Google Scholar]
18. Gurol-Urganci I, de Jongh T, Vodopivec-Jamsek V, Atun R, Car J. Mobile phone messaging reminders for attendance at healthcare appointments. Cochrane Database Syst Rev 2013; (12):CD007458 [Google Scholar]
19. Robotham D, Satkunanathan S, Reynolds J, Stahl D, Wykes T. Using digital notifications to improve attendance in clinic: systematic review and meta-analysis. BMJ Open 2016; 6:e012116 [Google Scholar]
20. Parikh A, Gupta K, Wilson AC, Fields K, Cosgrove NM, Kostis JB. The effectiveness of outpatient appointment reminder systems in reducing no-show rates. Am J Med 2010; 123:542–548 [Google Scholar]
21. Mander GTW, Reynolds L, Cook A, Kwan MM. Factors associated with appointment non-attendance at a medical imaging department in regional Australia: a retrospective cohort analysis. J Med Radiat Sci 2018; 65:192–199 [Google Scholar]
22. Nelson A, Herron D, Rees G, Nachev P. Predicting scheduled hospital attendance with artificial intelligence. NPJ Digit Med 2019; 2:26 [Google Scholar]
23. Lee G, Wang S, Dipuro F, et al. Leveraging on predictive analytics to manage clinic no show and improve accessibility of care. In: Proceedings of the 2017 IEEE International Conference on Data Science and Advanced Analytics (DSAA). Piscataway, NJ: IEEE, 2017:429–438 [Google Scholar]
24. Kaggle website. Competitions. www.kaggle.com/competitions. Accessed August 30, 2020 [Google Scholar]
25. Chen T, Guestrin C. XGBoost. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York, NY: Association for Computing Machinery, 2016:785–794 [Google Scholar]
26. Hong WS, Haimovich AD, Taylor RA. Predicting hospital admission at emergency department triage using machine learning. PLoS One 2018; 13:e0201016 [Google Scholar]
27. Xie Y, Jiang B, Gong E, et al. Use of gradient boosting machine learning to predict patient outcome in acute ischemic stroke on the basis of imaging, demographic, and clinical information. AJR 2019; 212:44–51 [Abstract] [Google Scholar]
28. Tahmassebi A, Wengert GJ, Helbich TH, et al. Impact of machine learning with multiparametric magnetic resonance imaging of the breast for early prediction of response to neoadjuvant chemo-therapy and survival outcomes in breast cancer patients. Invest Radiol 2019; 54:110–117 [Google Scholar]
29. Nishio M, Nishizawa M, Sugiyama O, et al. Computer-aided diagnosis of lung nodule using gradient tree boosting and Bayesian optimization. PLoS One 2018; 13:e0195875 [Google Scholar]
30. Sammut C, Webb GI. Encyclopedia of machine learning and data mining. New York, NY: Springer, 2017 [Google Scholar]
31. Goodfellow I, Bengio Y, Courville A. Deep learning. Cambridge, MA: The MIT Press, 2016:800 [Google Scholar]
32. Chawla NV, Bowyer KW, Hall LO, Kegelmeyer WP. SMOTE: synthetic minority over-sampling technique. J Artif Intell Res 2002; 16:321–357 [Google Scholar]
Address correspondence to L. R. Chong ().
