Fenn, Alexander. “Development of Machine Learning Models to Predict Admission from ED to Inpatient and Intensive Units.” Poster Presentation presented at the 2020 Society of Academic Emergency Medicine (SAEM) National Conference, Denver, CO, May 12, 2020. https://www.saem.org/annual-meeting

TITLE

Development of Machine Learning Models to Predict Admission from ED to Inpatient and Intensive Units

AUTHORS AND AFFILIATIONS

Alexander Fenn1, Connor Davis2, Neel Kapadia3, Daniel Buckland3, Marshall Nichols2, Michael Gao2, William Knechtle2, Suresh Balu2, Mark Sendak2, B. Jason Theiling3

1Duke University School of Medicine, 2Duke Institute of Health Innovation, 3Division of Emergency Medicine, Department of Surgery, Duke University School of Medicine

ABSTRACT

The purpose of this study was to develop a machine learning model to accurately predict the need for admission to an inpatient ward or intensive care unit (ICU), with the goal of reducing the time from a patient's arrival at the ED to securing an appropriate inpatient bed.

Data were curated from 624,391 adult patient encounters at the three EDs of a large academic health system between October 2014 and October 2018. Of these encounters, 83.7% of patients were discharged, 14.7% were admitted to an inpatient ward, and 1.6% were admitted to an ICU. The following features were built: age, race, gender, chief complaint, mode of arrival, and Emergency Severity Index (ESI) from ED intake; vitals, orders, medication administrations, and analyte results during the ED encounter; and comorbidities and admissions within the year preceding the ED encounter. A logistic regression model was developed and evaluated at four timepoints during the ED encounter: preadmission (using patient history alone), initial triage presentation, one hour after arrival, and the last hour of the encounter. The encounters were split into a 75% train set and a 25% test set. Both the area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (AUPRC) were used to compare model performance.
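The modeling pipeline described above can be sketched in a few lines. This is an illustrative example only, not the study's code: it fits a logistic regression to synthetic, fabricated encounter-like data, applies the 75%/25% train-test split described in the abstract, and reports AUROC and AUPRC on the held-out set.

```python
# Sketch of the described pipeline on synthetic data (all values fabricated).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(0)
n = 10_000
# Stand-ins for engineered features such as age, ESI, vitals, and order counts.
X = rng.normal(size=(n, 5))
logits = X @ np.array([1.2, -0.8, 0.5, 0.0, 0.3]) - 1.7
# Binary outcome: admitted (1) vs. discharged (0).
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

# 75% train / 25% test split, as in the abstract.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]

auroc = roc_auc_score(y_test, scores)
auprc = average_precision_score(y_test, scores)  # area under precision-recall curve
print(f"AUROC: {auroc:.2f}, AUPRC: {auprc:.2f}")
```

AUPRC is reported alongside AUROC because, with only 1.6% of encounters ending in ICU admission, precision-recall metrics are more sensitive to performance on the rare positive class.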

All results are reported on the test set. The preadmission model (305 features) yielded AUROCs of 0.82 (95% CI 0.81-0.82) for admission prediction and 0.83 (95% CI 0.82-0.84) for ICU prediction; AUPRCs were 0.54 for admission prediction and 0.07 for ICU prediction. The triage model (357 features) yielded AUROCs of 0.90 (95% CI 0.89-0.90) for admission prediction and 0.93 (95% CI 0.93-0.94) for ICU prediction; AUPRCs were 0.67 for admission prediction and 0.21 for ICU prediction. The one-hour model (577 features) yielded AUROCs of 0.91 (95% CI 0.91-0.92) for admission prediction and 0.95 (95% CI 0.94-0.95) for ICU prediction; AUPRCs were 0.71 for admission prediction and 0.32 for ICU prediction. The last-hour model (577 features) yielded AUROCs of 0.96 (95% CI 0.96-0.96) for admission prediction and 0.97 (95% CI 0.96-0.97) for ICU prediction; AUPRCs were 0.76 for admission prediction and 0.37 for ICU prediction.
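Each AUROC above is reported with a 95% confidence interval. The abstract does not state how these intervals were computed; one common approach is bootstrap resampling of the test set, sketched here on synthetic scores (all data and score distributions fabricated for illustration).

```python
# Bootstrap 95% CI for AUROC on a synthetic test set (an assumed method,
# not necessarily the study's): resample the test set with replacement,
# recompute AUROC each time, and take the 2.5th and 97.5th percentiles.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5_000
y = (rng.random(n) < 0.15).astype(int)        # ~15% positives, like the admission rate
scores = rng.normal(loc=y * 1.5, scale=1.0)   # fabricated model scores

boot_aurocs = []
for _ in range(1000):
    idx = rng.integers(0, n, size=n)          # resample test set with replacement
    if y[idx].min() == y[idx].max():          # skip degenerate single-class resamples
        continue
    boot_aurocs.append(roc_auc_score(y[idx], scores[idx]))

lo, hi = np.percentile(boot_aurocs, [2.5, 97.5])
print(f"AUROC: {roc_auc_score(y, scores):.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

Narrow intervals like those reported (e.g., 0.96-0.96) are typical when the test set is large, as it is here with roughly 156,000 held-out encounters.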

A machine learning model can accurately predict patient admission to an inpatient ward or ICU bed. Implementation of this model may improve patient flow by expediting triage and admission to the appropriate unit.