Article

Development and Validation of an Artificial Intelligence Electrocardiogram Recommendation System in the Emergency Department

1 Institute of Life Sciences, School of Public Health, National Defense Medical Center, Taipei 11499, Taiwan
2 Department of Emergency Medicine, Tri-Service General Hospital, National Defense Medical Center, Taipei 11499, Taiwan
3 School of Nursing, National Defense Medical Center, Taipei 11499, Taiwan
4 Planning and Management Office, Tri-Service General Hospital, National Defense Medical Center, Taipei 11490, Taiwan
* Author to whom correspondence should be addressed.
J. Pers. Med. 2022, 12(5), 700; https://doi.org/10.3390/jpm12050700
Submission received: 1 April 2022 / Revised: 25 April 2022 / Accepted: 25 April 2022 / Published: 27 April 2022
(This article belongs to the Special Issue Artificial Intelligence Application in Health Care System)

Abstract
The machine learning-assisted electrocardiogram (ECG) is increasingly recognized for its unprecedented capabilities in diagnosing and predicting cardiovascular diseases. Identifying the need for ECG examination early in emergency department (ED) triage is key to timely artificial intelligence-assisted analysis. We used machine learning to develop and validate a clinical decision support tool to predict ED triage patients’ need for ECG. Data from 301,658 ED visits from August 2017 to November 2020 in a tertiary hospital were divided into a development cohort, validation cohort, and two test cohorts that included admissions before and during the COVID-19 pandemic. Models were developed using logistic regression, decision tree, random forest, and XGBoost methods. Their areas under the receiver operating characteristic curves (AUCs), positive predictive values (PPVs), and negative predictive values (NPVs) were compared and validated. In the validation cohort, the AUCs were 0.887 for the XGBoost model, 0.885 for the logistic regression model, 0.878 for the random forest model, and 0.845 for the decision tree model. The XGBoost model was selected for subsequent application. In test cohort 1, the AUC was 0.891, with sensitivity of 0.812, specificity of 0.814, PPV of 0.708, and NPV of 0.886. In test cohort 2, the AUC was 0.885, with sensitivity of 0.816, specificity of 0.812, PPV of 0.659, and NPV of 0.908. In the cumulative incidence analysis, patients not receiving an ECG yet positively predicted by the model had a significantly higher probability of receiving the examination within 48 h compared with those negatively predicted by the model. A machine learning model based on triage datasets was developed to predict ECG acquisition with high accuracy. The ECG recommendation system can effectively predict whether patients presenting at ED triage will require an ECG, prompting subsequent analysis and decision-making in the ED.

1. Introduction

An electrocardiogram (ECG) is a noninvasive and readily available tool that provides vital information about cardiovascular diseases such as acute coronary syndrome, arrhythmia, and hemodynamic instability in the emergency department (ED). Guidelines for acute coronary syndrome (ACS) suggest an ECG be taken within 10 min as an initial step of risk stratification to identify high-risk patients for timely management [1,2]. Therefore, most ED triages have pre-established screening criteria, such as acute chest pain, epigastric pain or pressure, or pain radiating to the jaw or left arm, to identify patients who should receive an immediate ECG examination. Otherwise, the decision for an ECG examination is generally symptom-based and driven by physicians. Thus, the time to complete an ECG acquisition in the ED depends on when a physician’s assessment of the patient begins, except for patients with cardiac chest pain, who are prioritized for immediate ECG examination once pre-established ACS criteria are met at triage [3,4,5,6].
Currently, there is no clinical decision tool for ECG acquisition in conditions other than ACS. For example, the rule-based rapid ECG by Graff et al. [7]; the history, ECG, age, risk factors, and troponin (HEART) pathway [8]; the prioritization rule for an immediate ECG by Glickman et al. [4]; and the Emergency Department Assessment of Chest Pain Score (EDACS) [9] are designed specifically for ACS and are not intended to cover other indications. Several ED triage systems, such as the Emergency Severity Index [10], Manchester Triage System (MTS) [11], and Canadian Triage and Acuity Scale (CTAS) [12], are designed to assess all complaints and are neither sensitive nor specific as indicators of the need for an ECG, or even as indicators of ACS [13,14]. Furthermore, in most circumstances, an ECG will only be analyzed when a physician is available, typically tens of minutes to an hour after ED registration, depending on workload. Making effective use of this time window may expedite clinical decision-making by identifying insidious abnormalities that are critical during initial assessment, or that herald a deteriorating patient yet go unrecognized during the ED stay, particularly in an overcrowded ED.
With artificial intelligence (AI) incorporated into digital ECG analysis, the featurization techniques of ECG analysis have substantially outperformed conventional ECG interpretation [15]. Rapid advances in AI-based deep learning algorithms with high diagnostic performance have opened a new frontier for ECG in cardiology and even crossed over into other medical fields. For example, an AI-based 12-lead ECG can identify left ventricular dysfunction [16], mitral regurgitation, aortic stenosis [17,18], and hypertrophic cardiomyopathy with high accuracy [19]. Previously, we demonstrated the superiority of AI-ECG in detecting acute myocardial infarction in the ED [20]. Using deep learning techniques, a 12-lead ECG can assist early identification of certain metabolic disorders, such as hypokalemia in thyrotoxic periodic paralysis, hyperkalemia, and digoxin intoxication [21,22]. Moreover, AI-ECG has been studied as a means of predicting certain risks, with promising results such as predicting ventricular dysfunction in asymptomatic individuals [16], estimating atrial fibrillation risk for a patient whose ECG shows sinus rhythm [16], predicting HbA1c levels that correlate with the progression of diabetes mellitus [23], and sending alerts for patients at high risk of in-hospital cardiac arrest. Furthermore, by integrating AI-ECG with AI-chest X-ray, a 12-lead ECG could also enhance the stratification of patients with chest pain who are at risk of aortic dissection [24]. These applications of AI-assisted ECG analysis have not only transformed our knowledge of its capabilities but also underscored its vital value in clinical applications, particularly in emergency settings. To initiate and coordinate AI effectively and in a timely manner, an AI-assisted decision support tool for ECG acquisition applied upon a patient’s arrival at the ED is needed to help build a smart ECG surveillance system.
ECG is well known for its benefits in the early identification of patients with acute myocardial infarction. Recent studies have also demonstrated that early ECG acquisition improves early identification of life-threatening conditions, such as hyperkalemia [25], digoxin intoxication [22], and pneumothorax [26], which prompt immediate medical interventions. The objective of the present study was to develop and validate a tool to predict the need for ECG acquisition as the patient arrives at the ED. Using machine learning to analyze ED triage data relevant to ECG acquisition, we developed a model to predict ECG acquisition for patients at ED triage. By integrating such a decision support tool, the ED can shorten the time to first ECG acquisition, thus facilitating clinical decision-making and, in combination with active analysis by AI, alerting physicians to potential risks necessitating early intervention.

2. Methods

2.1. Population

We retrospectively collected data from August 2017 to November 2020 in the ED of a tertiary hospital in Taipei, which has an estimated annual volume of 90,000 ED visits. The study included all adult ED visits with a clear, recorded disposition of either admission or discharge. Patients aged less than 20 years or with incomplete records were excluded. The institutional review board of Tri-Service General Hospital, Taipei, Taiwan approved this study.
From August 2017 to November 2020, a total of 354,576 patient visits were recorded in the ED triage registry. Of these, 52,868 visits by patients younger than 20 years and 50 visits with incomplete records were excluded, leaving an analytic cohort of 301,658 ED visits. Data on ED visits were divided into four sets (cohorts): the development cohort (August 2017–December 2018), the validation cohort (January 2019–June 2019), test cohort 1 (July 2019–6 February 2020), and test cohort 2 (7 February 2020–November 2020, during the COVID-19 pandemic). The test sets were data collected before and during the COVID-19 pandemic, with the onset date defined as 7 February 2020 by Taiwan’s central epidemic command center. A diagram of the sampling process designed to assure a robust and reliable data set of training, validation, and testing for the model development and validation is shown in Figure 1. Once a patient’s data were placed in one of the data sets, those data were not used in other sets.
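The date-based cohort assignment described above can be sketched as a simple lookup. This is an illustrative Python sketch (the study’s analysis was performed in R); the exact start/end days within the quoted months are assumed from the ranges given in the text:

```python
from datetime import date

def assign_cohort(visit_date):
    """Assign an ED visit to one of the four cohorts by visit date.
    Boundaries follow the paper; 7 Feb 2020 is the COVID-19 onset date
    defined by Taiwan's central epidemic command center."""
    if date(2017, 8, 1) <= visit_date <= date(2018, 12, 31):
        return "development"
    if date(2019, 1, 1) <= visit_date <= date(2019, 6, 30):
        return "validation"
    if date(2019, 7, 1) <= visit_date <= date(2020, 2, 6):
        return "test1"
    if date(2020, 2, 7) <= visit_date <= date(2020, 11, 30):
        return "test2"
    return None  # outside the study period
```

Assigning each visit to exactly one window also enforces the stated rule that data placed in one set are never reused in another.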

2.2. Data Source

The study included variables of demographics, triage assessment, and chief complaints. Demographic information was either collected at triage or available from electronic health records at the time of the patient encounter, and included age, sex, height, and weight. Triage assessment variables included those routinely collected at triage, such as arrival time, vital signs, and triage acuity levels assigned by the triage nurses. Vital signs included systolic and diastolic blood pressure, pulse, respiratory rate, and body temperature. Thirteen variables of demographics and triage assessment were collected. Chief complaints were recorded in free text format and contained patients’ main descriptions of illnesses collected at triage, which were categorized into 404 different variables using word segmentation technology. We defined the outcome variable as ECG acquisition within 2 h of the ED visit. The detailed information is described in the next section.

2.3. Model Training and Development

All data elements were obtained from the enterprise data warehouse, using SQL queries to extract relevant raw data in comma-separated values (CSV) format. All subsequent processing was performed in R version 3.4 (R Foundation for Statistical Computing, Vienna, Austria). Below, we summarize the processing steps for each category.

2.4. Response Variable

The primary response variable was whether the patient received an ECG examination within 2 h of the ED visit, encoded as a binary variable (1 = ECG performed, 0 = not performed).
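The 2 h response variable can be derived from the arrival and ECG timestamps. A minimal Python sketch (illustrative only; the original processing was done in R):

```python
from datetime import datetime, timedelta

def ecg_within_2h(arrival, ecg_time):
    """Binary response: 1 if an ECG was acquired within 2 h of ED arrival,
    else 0. A missing ecg_time means no ECG was recorded for the visit."""
    if ecg_time is None:
        return 0
    return int(timedelta(0) <= ecg_time - arrival <= timedelta(hours=2))
```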

2.5. Demographic Variable

Thirteen variables in total were included. Values beyond physiologic limits were replaced with missing values. Missing data were imputed using multiple imputations in multivariate analysis [27].
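Replacing out-of-range values with missing values before imputation might look like the following Python sketch. The plausibility ranges here are hypothetical examples; the paper does not list its exact physiologic limits:

```python
# Hypothetical physiologic plausibility ranges (NOT the paper's exact limits).
LIMITS = {
    "pulse": (20, 300),      # beats per minute
    "sbp": (40, 300),        # systolic blood pressure, mmHg
    "resp_rate": (4, 60),    # breaths per minute
    "temp_c": (30.0, 44.0),  # body temperature, Celsius
}

def censor_implausible(record):
    """Replace values outside plausible physiologic ranges with None
    (missing), so they can later be filled by multiple imputation."""
    cleaned = dict(record)
    for key, (lo, hi) in LIMITS.items():
        v = cleaned.get(key)
        if v is not None and not (lo <= v <= hi):
            cleaned[key] = None
    return cleaned
```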

2.6. Chief Complaint Variable

Given the high number of unique values (>500) for chief complaints, we classified the main descriptions into 404 variables using jiebaR and encoded them with one-hot encoding.
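The mapping from free-text complaints to one-hot indicators can be illustrated as below. This Python sketch uses simple English keyword matching over a tiny made-up vocabulary as a stand-in for the jiebaR word segmentation actually used (the real model has 404 complaint categories):

```python
# Illustrative subset only; the study encoded 404 complaint categories
# derived by word segmentation, not this hypothetical keyword list.
COMPLAINT_VOCAB = ["chest pain", "dizziness", "fever", "shortness of breath"]

def one_hot_complaint(free_text):
    """Map a free-text chief complaint onto a fixed vocabulary
    as 0/1 indicator features."""
    text = free_text.lower()
    return {term: int(term in text) for term in COMPLAINT_VOCAB}
```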

2.7. Model Selection

We trained logistic regression, decision tree, random forest, and gradient boosting (XGBoost) models using the development cohort, and decided on a final model based on their discriminative performance in the validation cohort. All models were trained only once, using default hyperparameters. The R functions “ctree” in the “party” package, “randomForest” in the “randomForest” package, and “xgb.train” in the “xgboost” package were used to implement the above models. The test cohorts were evaluated for the likelihood of ECG acquisition only once by the final model.

2.8. Variables of Importance

Information gain is a metric that quantifies the improvement in accuracy of a tree-based algorithm from a split, based on a given variable [28]. We calculated the mean information gain for each variable based on 100 training iterations of the full XGBoost model and listed the top 20 important variables.
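The intuition behind split gain can be shown with the classical entropy-based information gain for a binary split. Note this is a didactic sketch: XGBoost’s reported gain is computed from gradient statistics, not Shannon entropy, but the ranking idea is the same:

```python
from math import log2

def entropy(pos, neg):
    """Shannon entropy of a binary label distribution."""
    total = pos + neg
    h = 0.0
    for c in (pos, neg):
        if c > 0:
            p = c / total
            h -= p * log2(p)
    return h

def information_gain(labels, feature):
    """Information gain of a binary split: parent entropy minus the
    size-weighted entropy of the two child nodes."""
    parent = entropy(sum(labels), len(labels) - sum(labels))
    left = [y for y, f in zip(labels, feature) if f == 0]
    right = [y for y, f in zip(labels, feature) if f == 1]
    gain = parent
    for child in (left, right):
        if child:
            w = len(child) / len(labels)
            gain -= w * entropy(sum(child), len(child) - sum(child))
    return gain
```

A feature that separates the labels perfectly yields the full parent entropy as gain; an uninformative feature yields zero.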

2.9. Statistical Analysis

The area under the receiver operating characteristic curve (AUC) was used to assess model performance, with 95% confidence intervals constructed using the DeLong method implemented in the pROC package [29]. Youden’s index was used to find the optimal cutoff point based on the validation cohort, and we applied this cutoff to calculate the sensitivity, specificity, positive predictive value, and negative predictive value for each model [30]. The Kaplan–Meier (KM) method was used to calculate the probability of an ECG acquisition event in the 48 h following admission. The log-rank test was used to test significance.
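The two key statistics above, AUC and the Youden-optimal cutoff, can be computed directly from predicted scores. A self-contained Python sketch (the study used R’s pROC; this is not the authors’ code):

```python
def roc_auc(labels, scores):
    """AUC via the rank (Mann-Whitney) formulation: the probability that a
    random positive scores higher than a random negative (ties count 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def youden_cutoff(labels, scores):
    """Cutoff maximizing Youden's J = sensitivity + specificity - 1,
    scanning each observed score as a candidate threshold (score >= t
    is classified positive). Returns (cutoff, J)."""
    best_j, best_t = -1.0, None
    for t in sorted(set(scores)):
        tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= t)
        fn = sum(1 for y, s in zip(labels, scores) if y == 1 and s < t)
        tn = sum(1 for y, s in zip(labels, scores) if y == 0 and s < t)
        fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= t)
        j = tp / (tp + fn) + tn / (tn + fp) - 1
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j
```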

3. Results

3.1. Demographics of the Development, Validation, and Test Cohorts

In Table 1, the development cohort comprised 129,444 patient visits, with male sex accounting for 49.8%, a mean age of 53.80 years, and a mean body mass index of 23.91 kg/m². Patients in this cohort were predominantly categorized as triage level III (73.5%), followed by triage level II (17.2%). The triage level proportions were relatively comparable among the four cohorts. Trauma visits accounted for less than one-fifth of admissions. The mean pulse rate was 86.9 per minute, the mean respiratory rate was 18.7 per minute, and the mean systolic blood pressure was 135.2 mmHg. In the development set, 33,097 patients had an ECG acquisition within 2 h of their ED admission, accounting for one-fourth of all patient visits, whereas approximately one-third of the patients in the validation and test sets had the examination within 2 h.

3.2. Model Development and Validation

Using logistic regression, decision tree, random forest, and XGBoost methods, we developed four models and compared their performance on the validation cohort. As shown in Figure 2, the ROC curves demonstrated that the XGBoost model had the highest discriminatory ability (AUC 0.887), followed by the logistic regression model (AUC 0.885), the random forest model (AUC 0.878), and the decision tree model (AUC 0.845). Therefore, we selected XGBoost as the final model for further validation in the two test cohorts.

3.3. Performance of the XGBoost Model in the Test Cohorts

As XGBoost outperformed the other three models in 2 h ECG prediction, we further examined its performance in the two test cohorts. As shown in Figure 3, the AUCs of the XGBoost model for predicting 2 h ECG acquisition were 0.891 in test cohort 1 and 0.885 in test cohort 2, with sensitivities of 0.812 and 0.816, and specificities of 0.814 and 0.812, respectively. The positive predictive values (PPVs) were 0.708 and 0.659, and the negative predictive values (NPVs) were 0.886 and 0.908, for test cohorts 1 and 2, respectively, suggesting that the XGBoost method had good discriminatory ability. Relevant parameters of the model are shown in Table 2.
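The four reported metrics follow directly from the 2x2 confusion table at the chosen cutoff. A small Python sketch (illustrative; not the authors’ code):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV, and NPV from confusion-table counts
    (tp: true positives, fp: false positives, tn: true negatives,
    fn: false negatives)."""
    return {
        "sensitivity": tp / (tp + fn),  # recall among true ECG cases
        "specificity": tn / (tn + fp),  # recall among non-ECG cases
        "ppv": tp / (tp + fp),          # precision of positive predictions
        "npv": tn / (tn + fn),          # precision of negative predictions
    }
```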

3.4. Variable Significance in the XGBoost Model

To gain insights into the relevance of each predictor, the top 20 variables of significance for ECG acquisition in the XGBoost model were listed (Figure 4). In descending order of importance, these were triage level, age, chest pain, trauma, severely acute peripheral pain, dizziness, chest tightness, irregular heartbeat, temperature, low back pain, pulse, systolic blood pressure (SBP), pregnancy greater than 20 weeks, urinary retention, shortness of breath, diastolic blood pressure (DBP), acute central pain, BMI, weight, and severely acute central pain.

3.5. ECG Acquisition in Initially ECG Non-Acquisition Patients Stratified by the AI Model

In Figure 5, we used Kaplan–Meier analysis to compare patients positively and negatively predicted by the AI model among those not receiving an ECG examination within 2 h in the two test cohorts. In test cohort 1, out of 36,290 ED visits, 6499 were positively predicted for an ECG acquisition yet did not initially receive the examination. The cumulative probability of not receiving an ECG within 48 h of follow-up was significantly lower in this positively predicted group than in the negatively predicted group (p < 0.001). In test cohort 2, with samples collected during the COVID-19 pandemic, out of 44,841 ED visits, 8233 were positively predicted for an ECG acquisition yet did not receive it, and the cumulative probability of non-acquisition within 48 h of follow-up remained significantly lower in the positively predicted group (20% vs. 50%, p < 0.001).
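The Kaplan–Meier estimate used for the 48 h follow-up can be computed from (time, event) pairs, where the event is ECG acquisition and other outcomes are censored. A minimal pure-Python sketch (the study used R; this is illustrative only):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate. times: follow-up hours;
    events: 1 if the event (ECG acquired) occurred at that time,
    0 if censored. Returns (time, survival) step points at event times."""
    order = sorted(zip(times, events))
    at_risk = len(order)
    surv, curve, i = 1.0, [], 0
    while i < len(order):
        t = order[i][0]
        d = c = 0  # events / censorings at time t
        while i < len(order) and order[i][0] == t:
            d += order[i][1]
            c += 1 - order[i][1]
            i += 1
        if d:
            surv *= 1 - d / at_risk
            curve.append((t, surv))
        at_risk -= d + c
    return curve
```

The "cumulative probability of non-acquisition" in the figure corresponds to this survival curve; the log-rank test then compares the curves of the two predicted groups.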

4. Discussion

Using more than 120,000 ED visits from a single hospital, we developed a triage-based ECG acquisition prediction tool that reached an AUC of 0.89. In two test cohorts, the ECG decision model demonstrated consistent performance, with AUCs of 0.89 and 0.88, respectively, suggesting that ECG acquisition for ED patients can be predicted with high accuracy at the initial encounter using triage data. Among the variables in the prediction model, triage acuity, age, and chest pain were, in descending order, the three most important predictors of a need for ECG acquisition. Machine learning techniques applied to ED triage assessment can provide real-time clinical decision support for ECG acquisition with high accuracy (Figure 6).
Machine learning techniques have been widely applied in varied medical fields to prioritize patients for specific fast healthcare services, such as triage, disease detection, prediction, and classification [31]. To our knowledge, this is the first study to design a decision support tool for predicting the need for ECG acquisition using machine learning techniques to analyze ED triage data. The prediction tool is designed to connect triage-based ECG acquisition to an integrated AI-assisted ECG analysis that may aid the decision-making that leads to early identification of critical conditions. The major advantage of using triage data for predictive modeling is that these data are immediately available once a patient arrives at the ED, so clinically relevant information can be obtained to estimate the odds of ECG acquisition in time for a prompt response. Moreover, in the validation and testing processes, we examined large data sets of more than 170,000 ED visit samples, testing and comparing four different prediction models and identifying the XGBoost model as having the best discriminatory ability. XGBoost is a gradient boosting method that not only delivers strong performance and accuracy in both regression and classification of tabular data, but can also quickly run multiple training cycles while tuning the hyperparameters [32,33]. The decision support tool responds to information automatically, thus avoiding adding calculations or cumbersome checklist screening to the already-heavy clinical burdens of ED personnel.
An AI-assisted ECG recommendation tool can be a critical element of an ED with an intelligent decision support system (IDSS). With the integration of machine learning and modern computing, decision support systems have evolved to exhibit smart behavior and support users’ decision-making interactively [34]. An IDSS can learn from previous experiences, recognize the relative importance of certain elements of the decision, gather information, and respond spontaneously according to predefined authorization of the decision-making algorithms, which can potentially improve efficiency and constitute a critical step in building a smart ED. Many ED patients present with non-chest pain or atypical symptoms of cardiovascular or pulmonary diseases, such as painless aortic dissection [35], painless acute coronary syndrome, or coexistence of acute myocardial infarction and aortic dissection [36], which are challenging for physicians and warrant earlier identification and clarification for appropriate treatment. By interrogating the need for ECG in real time with the IDSS, the system can signal conditions of relevant risk early. Moreover, AI-aided analysis of ECG can predict heart failure, pulmonary embolism, electrolyte imbalances, and high risk of mortality. An early response to ECG acquisition can initiate subsequent decisions and diagnoses to identify these potentially critical conditions while managing them with prompt interventions.
Time to ECG acquisition reflects the time-sensitive nature of the conditions an ECG is meant to detect. Hence, we defined receipt of an ECG within 2 h of arrival as the outcome in developing the model. This minimized the number of ECGs acquired due to conditions other than presenting illnesses or to admission routines. Overall, the model had an AUC close to 0.89, with sensitivity and specificity over 80%. Moreover, in the follow-up analysis of patients who did not receive an ECG within 2 h of the initial encounter, those who were positively predicted by the model were significantly more likely to receive an ECG examination within 48 h, further suggesting the model’s robust discriminative ability. Of note, despite the comparable performance of the prediction model on the two test cohorts, the predictive precision on test cohort 2 was slightly inferior to that on test cohort 1. The differences between these two cohorts are likely attributable to temporal differences in the composition of ED patients before and during the COVID-19 pandemic. Contrary to the situation in many other countries, in Taiwan the number of ED visits for febrile conditions dropped by 30 percent during the COVID-19 pandemic [22]. The pandemic also decreased the number of low-acuity ED visits and shortened lengths of stay in the ED. There have also been reports of delayed medical attention for chest pain during the pandemic, leading to poor prognosis in myocardial infarction cases [37]. The public panic and changes in ED patient composition might have reduced the model’s precision.
The model included vital signs, triage acuity, and chief complaints as predictive variables that are readily available in ED records. The inclusion of comorbidity factors could conceivably improve the model’s fit; however, data on these variables were frequently incomplete or unavailable at the initial presentation, and thus we did not include these features in model training. Among all predictors in the prediction model, the triage acuity variable carried the highest weight, and age ranked as the second most important variable. Patients with increased disease acuity and age often require more medical attention, and an ECG is often an indispensable examination. In addition to cardiopulmonary symptoms and trauma, features of pain-associated conditions were also weighted highly in information gain, indicating that an early ECG survey was commonly needed for patients with pain-related syndromes in the ED. There is a growing number of studies on ED triage systems using machine learning techniques to predict various outcomes such as ICU or hospital admission [38,39], length of stay [40], mortality [38], ED revisits [41], and need for critical care [42], with the results mostly suggesting that predictive performance improves with machine learning techniques compared with conventional analysis. By integrating such a decision support tool, the ED may accelerate ECG diagnostic processes by activating an integrated AI-assisted ECG analysis to increase diagnostic proficiency and identify crucial abnormalities early.
Currently, the recommendation tool is not intended to replace physicians’ clinical judgment, but rather to serve as an ancillary tool to facilitate the clinical pathway, particularly in a chaotic ED where task-switching burdens are high during rush hours. The system likewise does not replace the chest pain protocol at triage, which remains unchanged in its current routines, but rather works as an adjunct for physicians during patient assessment, recommending those who may need an ECG for subsequent assessment. This strategy would not divert manpower from those in urgent need of an ECG, but would serve as a backup reminder to avoid delay in treating those who may otherwise benefit from early ECG interpretation. Therefore, the ECG recommendation can alert ED physicians to obtain ECG results, identify potentially critical diseases or life-threatening conditions, and avoid delayed intervention in the crowded ED.
The main limitation of our study is that the model was derived from a single hospital, limiting its generalizability. Despite differences in disease distribution among hospitals, most ED triage processes are conceptually similar, and the variables collected are broadly generalizable. It is reasonable for a hospital to develop its own predictive model based on local data to obtain the most precise algorithm. Second, relevant information such as comorbidities and their control status was not captured and included in the assessment. Inclusion of comorbidity data through synchronous acquisition from previous medical records would likely have improved the model’s precision, but at the expense of increased computing load. However, the prediction model was intended to capture characteristics of acute illnesses through a symptom- and acuity-based status immediately collected from registry data for a real-time response. Third, despite being validated with two test cohorts and reaching 90 percent accuracy, the model misclassified 10 percent of patient visits. Because the prediction model is heavily weighted by triage acuity, which relies solely on triage nurses’ subjective judgment, and because patients’ initial presenting symptoms are sometimes obscure, assessment variability may exist [34]. Fourth, the model has not been implemented in clinical practice, and its clinical impact, such as on quality of care and outcomes, warrants further study.
In conclusion, an ECG recommendation system can assist clinical decision-making, prompting examinations and activating analyses and timely feedback for physicians. Implementation of an ECG decision support system may enhance decision-making, reducing delay in the chaotic ED environment.

Author Contributions

Conceptualization, S.-J.C. and H.-H.C.; methodology, S.-H.T.; software, D.-J.T.; formal analysis, D.-J.T.; investigation, S.-J.C. and H.-H.C.; resources, C.-C.L.; data curation, D.-J.T.; writing—original draft preparation, S.-J.C. and D.-J.T.; writing—review and editing, S.-H.T. and H.-H.C.; visualization, C.-C.L. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported in part by funding from the Medical Affairs Bureau Ministry of National Defense, Taiwan (MND-MAB-C11-111041 to S.-J.C.), the Tri-Service General Hospital, Taiwan (TSGH-E-111210 to S.-J.C.), and the Ter-Zer Foundation for Educational Achievement (B1081030 to S.-J.C.).

Institutional Review Board Statement

The Institutional Review Board of the Tri-Service General Hospital approved this study (A202005159).

Informed Consent Statement

Patient consent was waived because data were collected retrospectively, anonymized, and encrypted before transfer from the hospital to the data controller.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Antman, E.M.; Anbe, D.T.; Armstrong, P.W.; Bates, E.R.; Green, L.A.; Hand, M.; Hochman, J.S.; Krumholz, H.M.; Kushner, F.G.; Lamas, G.A.; et al. ACC/AHA guidelines for the management of patients with ST-elevation myocardial infarction—Executive summary: A report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines (Writing Committee to Revise the 1999 Guidelines for the Management of Patients with Acute Myocardial Infarction). Circulation 2004, 110, 588–636.
2. Ibanez, B.; James, S.; Agewall, S.; Antunes, M.J.; Bucciarelli-Ducci, C.; Bueno, H.; Caforio, A.L.P.; Crea, F.; Goudevenos, J.A.; Halvorsen, S.; et al. 2017 ESC Guidelines for the management of acute myocardial infarction in patients presenting with ST-segment elevation: The Task Force for the management of acute myocardial infarction in patients presenting with ST-segment elevation of the European Society of Cardiology (ESC). Eur. Heart J. 2018, 39, 119–177.
3. Luepker, R.V.; Apple, F.S.; Christenson, R.H.; Crow, R.S.; Fortmann, S.P.; Goff, D.; Goldberg, R.J.; Hand, M.M.; Jaffe, A.S.; Julian, D.G.; et al. Case definitions for acute coronary heart disease in epidemiology and clinical research studies: A statement from the AHA Council on Epidemiology and Prevention; AHA Statistics Committee; World Heart Federation Council on Epidemiology and Prevention; the European Society of Cardiology Working Group on Epidemiology and Prevention; Centers for Disease Control and Prevention; and the National Heart, Lung, and Blood Institute. Circulation 2003, 108, 2543–2549.
4. Glickman, S.W.; Shofer, F.S.; Wu, M.C.; Scholer, M.J.; Ndubuizu, A.; Peterson, E.D.; Granger, C.B.; Cairns, C.B.; Glickman, L.T. Development and validation of a prioritization rule for obtaining an immediate 12-lead electrocardiogram in the emergency department to identify ST-elevation myocardial infarction. Am. Heart J. 2012, 163, 372–382.
5. O’Donnell, S.; Monahan, P.; McKee, G.; McMahon, G.; Curtin, E.; Farrell, S.; Moser, D. Towards prompt electrocardiogram acquisition in triage: Preliminary testing of a symptom-based clinical prediction rule for the Android tablet. Eur. J. Cardiovasc. Nurs. 2019, 18, 289–298.
6. Takakuwa, K.M.; Burek, G.A.; Estepa, A.T.; Shofer, F.S. A method for improving arrival-to-electrocardiogram time in emergency department chest pain patients and the effect on door-to-balloon time for ST-segment elevation myocardial infarction. Acad. Emerg. Med. 2009, 16, 921–927.
7. Graff, L.; Palmer, A.C.; Lamonica, P.; Wolf, S. Triage of patients for a rapid (5-min) electrocardiogram: A rule based on presenting chief complaints. Ann. Emerg. Med. 2000, 36, 554–560.
8. Backus, B.E.; Six, A.J.; Kelder, J.C.; Bosschaert, M.A.; Mast, E.G.; Mosterd, A.; Veldkamp, R.F.; Wardeh, A.J.; Tio, R.; Braam, R.; et al. A prospective validation of the HEART score for chest pain patients at the emergency department. Int. J. Cardiol. 2013, 168, 2153–2158.
9. Than, M.; Flaws, D.; Sanders, S.; Doust, J.; Glasziou, P.; Kline, J.; Aldous, S.; Troughton, R.; Reid, C.; Parsonage, W.A.; et al. Development and validation of the Emergency Department Assessment of Chest pain Score and 2 h accelerated diagnostic protocol. Emerg. Med. Australas. 2014, 26, 34–44.
10. Gilboy, N.; Tanabe, T.; Travers, D.; Rosenau, A.M. Emergency Severity Index (ESI): A Triage Tool for Emergency Department Care, 4th ed.; Emergency Nurses Association: Schaumburg, IL, USA, 2020.
11. Nishi, F.A.; de Oliveira Motta Maia, F.; de Souza Santos, I.; de Almeida Lopes Monteiro da Cruz, D. Assessing sensitivity and specificity of the Manchester Triage System in the evaluation of acute coronary syndrome in adult patients in emergency care: A systematic review. JBI Database Syst. Rev. Implement. Rep. 2017, 15, 1747–1761.
12. Beveridge, R.; Ducharme, J.; Janes, L.; Beaulieu, S.; Walter, S. Reliability of the Canadian emergency department triage and acuity scale: Interrater agreement. Ann. Emerg. Med. 1999, 34, 155–159.
13. Frisch, S.O.; Faramand, Z.; Leverknight, B.; Martin-Gill, C.; Sereika, S.M.; Sejdić, E.; Hravnak, M.; Callaway, C.W.; Al-Zaiti, S. The Association Between Patient Outcomes and the Initial Emergency Severity Index Triage Score in Patients with Suspected Acute Coronary Syndrome. J. Cardiovasc. Nurs. 2020, 35, 550–557.
14. Nishi, F.A.; Polak, C.; Cruz, D. Sensitivity and specificity of the Manchester Triage System in risk prioritization of patients with acute myocardial infarction who present with chest pain. Eur. J. Cardiovasc. Nurs. 2018, 17, 660–666.
15. Attia, Z.I.; Harmon, D.M.; Behr, E.R.; Friedman, P.A. Application of artificial intelligence to the electrocardiogram. Eur. Heart J. 2021, 42, 4717–4730.
16. Attia, Z.I.; Kapa, S.; Lopez-Jimenez, F.; McKie, P.M.; Ladewig, D.J.; Satam, G.; Pellikka, P.A.; Enriquez-Sarano, M.; Noseworthy, P.A.; Munger, T.M.; et al. Screening for cardiac contractile dysfunction using an artificial intelligence-enabled electrocardiogram. Nat. Med. 2019, 25, 70–74.
17. Kwon, J.M.; Kim, K.H.; Akkus, Z.; Jeon, K.H.; Park, J.; Oh, B.H. Artificial intelligence for detecting mitral regurgitation using electrocardiography. J. Electrocardiol. 2020, 59, 151–157.
  18. Kwon, J.M.; Lee, S.Y.; Jeon, K.H.; Lee, Y.; Kim, K.H.; Park, J.; Oh, B.H.; Lee, M.M. Deep Learning-Based Algorithm for Detecting Aortic Stenosis Using Electrocardiography. J. Am. Heart Assoc. 2020, 9, e014717. [Google Scholar] [CrossRef]
  19. Ko, W.Y.; Siontis, K.C.; Attia, Z.I.; Carter, R.E.; Kapa, S.; Ommen, S.R.; Demuth, S.J.; Ackerman, M.J.; Gersh, B.J.; Arruda-Olson, A.M.; et al. Detection of Hypertrophic Cardiomyopathy Using a Convolutional Neural Network-Enabled Electrocardiogram. J. Am. Coll. Cardiol. 2020, 75, 722–733. [Google Scholar] [CrossRef] [PubMed]
  20. Liu, W.C.; Lin, C.S.; Tsai, C.S.; Tsao, T.P.; Cheng, C.C.; Liou, J.T.; Lin, W.S.; Cheng, S.M.; Lou, Y.S.; Lee, C.C.; et al. A deep learning algorithm for detecting acute myocardial infarction. EuroIntervention 2021, 17, 765–773. [Google Scholar] [CrossRef] [PubMed]
  21. Lin, C.S.; Lin, C.; Fang, W.H.; Hsu, C.J.; Chen, S.J.; Huang, K.H.; Lin, W.S.; Tsai, C.S.; Kuo, C.C.; Chau, T.; et al. A Deep-Learning Algorithm (ECG12Net) for Detecting Hypokalemia and Hyperkalemia by Electrocardiography: Algorithm Development. JMIR Med. Inform. 2020, 8, e15931. [Google Scholar] [CrossRef]
  22. Chang, D.W.; Lin, C.S.; Tsao, T.P.; Lee, C.C.; Chen, J.T.; Tsai, C.S.; Lin, W.S.; Lin, C. Detecting Digoxin Toxicity by Artificial Intelligence-Assisted Electrocardiography. Int. J. Environ. Res. Public Health 2021, 18, 3839. [Google Scholar] [CrossRef]
  23. Lin, C.S.; Lee, Y.T.; Fang, W.H.; Lou, Y.S.; Kuo, F.C.; Lee, C.C.; Lin, C. Deep Learning Algorithm for Management of Diabetes Mellitus via Electrocardiogram-Based Glycated Hemoglobin (ECG-HbA1c): A Retrospective Cohort Study. J. Pers. Med. 2021, 11, 725. [Google Scholar] [CrossRef] [PubMed]
  24. Liu, W.T.; Lin, C.S.; Tsao, T.P.; Lee, C.C.; Cheng, C.C.; Chen, J.T.; Tsai, C.S.; Lin, W.S.; Lin, C. A Deep-Learning Algorithm-Enhanced System Integrating Electrocardiograms and Chest X-rays for Diagnosing Aortic Dissection. Can. J. Cardiol. 2021, 38, 160–168. [Google Scholar] [CrossRef]
  25. Lin, C.; Chau, T.; Lin, C.-S.; Shang, H.-S.; Fang, W.-H.; Lee, D.-J.; Lee, C.-C.; Tsai, S.-H.; Wang, C.-H.; Lin, S.-H. Point-of-care artificial intelligence-enabled ECG for dyskalemia: A retrospective cohort analysis for accuracy and outcome prediction. NPJ Digit. Med. 2022, 5, 8. [Google Scholar] [CrossRef]
  26. Lee, C.C.; Lin, C.S.; Tsai, C.S.; Tsao, T.P.; Cheng, C.C.; Liou, J.T.; Lin, W.S.; Lee, C.C.; Chen, J.T.; Lin, C. A deep learning-based system capable of detecting pneumothorax via electrocardiogram. Eur. J. Trauma Emerg. Surg. 2022, 1–10. [Google Scholar] [CrossRef]
  27. Van Buuren, S.; Groothuis-Oudshoorn, K. Mice: Multivariate imputation by chained equations in R. J. Stat. Softw. 2011, 45, 1–67. [Google Scholar] [CrossRef] [Green Version]
  28. Guyon, I.; Elisseeff, A. An introduction to variable and feature selection. J. Mach. Learn. Res. 2003, 3, 1157–1182. [Google Scholar]
  29. DeLong, E.R.; DeLong, D.M.; Clarke-Pearson, D.L. Comparing the areas under two or more correlated receiver operating characteristic curves: A nonparametric approach. Biometrics 1988, 44, 837–845. [Google Scholar] [CrossRef]
  30. Ruopp, M.D.; Perkins, N.J.; Whitcomb, B.W.; Schisterman, E.F. Youden Index and optimal cut-point estimated from observations affected by a lower limit of detection. Biom. J. J. Math. Methods Biosci. 2008, 50, 419–430. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  31. Salman, O.H.; Taha, Z.; Alsabah, M.Q.; Hussein, Y.S.; Mohammed, A.S.; Aal-Nouman, M. A review on utilizing machine learning technology in the fields of electronic emergency triage and patient priority systems in telemedicine: Coherent taxonomy, motivations, open research challenges and recommendations for intelligent future work. Comput. Methods Programs Biomed. 2021, 209, 106357. [Google Scholar] [CrossRef] [PubMed]
  32. Khera, R.; Haimovich, J.; Hurley, N.C.; McNamara, R.; Spertus, J.A.; Desai, N.; Rumsfeld, J.S.; Masoudi, F.A.; Huang, C.; Normand, S.L.; et al. Use of Machine Learning Models to Predict Death after Acute Myocardial Infarction. JAMA Cardiol. 2021, 6, 633–641. [Google Scholar] [CrossRef]
  33. Uddin, S.; Khan, A.; Hossain, M.E.; Moni, M.A. Comparing different supervised machine learning algorithms for disease prediction. BMC Med. Inform. Decis. Mak. 2019, 19, 281. [Google Scholar] [CrossRef]
  34. Fernandes, M.; Vieira, S.M.; Leite, F.; Palos, C.; Finkelstein, S.; Sousa, J.M.C. Clinical Decision Support Systems for Triage in the Emergency Department using Intelligent Systems: A Review. Artif. Intell. Med. 2020, 102, 101762. [Google Scholar] [CrossRef] [PubMed]
  35. Yanamadala, A.; Kumar, S.; Lichtenberg, R. It is a medical emergency! Act fast: A case report of painless aortic dissection. Eur. Heart J. Case Rep. 2019, 3, ytz072. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  36. Lentini, S.; Perrotta, S. Aortic dissection with concomitant acute myocardial infarction: From diagnosis to management. J. Emerg. Trauma Shock 2011, 4, 273–278. [Google Scholar] [CrossRef]
  37. Hammad, T.A.; Parikh, M.; Tashtish, N.; Lowry, C.M.; Gorbey, D.; Forouzandeh, F.; Filby, S.J.; Wolf, W.M.; Costa, M.A.; Simon, D.I.; et al. Impact of COVID-19 pandemic on ST-elevation myocardial infarction in a non-COVID-19 epicenter. Catheter. Cardiovasc. Interv. Off. J. Soc. Card. Angiogr. Interv. 2020, 97, 208–214. [Google Scholar] [CrossRef] [PubMed]
  38. Raita, Y.; Goto, T.; Faridi, M.K.; Brown, D.F.M.; Camargo, C.A., Jr.; Hasegawa, K. Emergency department triage prediction of clinical outcomes using machine learning models. Crit. Care 2019, 23, 64. [Google Scholar] [CrossRef] [Green Version]
  39. Fenn, A.; Davis, C.; Buckland, D.M.; Kapadia, N.; Nichols, M.; Gao, M.; Knechtle, W.; Balu, S.; Sendak, M.; Theiling, B.J. Development and Validation of Machine Learning Models to Predict Admission from Emergency Department to Inpatient and Intensive Care Units. Ann. Emerg. Med. 2021, 78, 290–302. [Google Scholar] [CrossRef]
  40. Chrusciel, J.; Girardon, F.; Roquette, L.; Laplanche, D.; Duclos, A.; Sanchez, S. The prediction of hospital length of stay using unstructured data. BMC Med. Inform. Decis. Mak. 2021, 21, 351. [Google Scholar] [CrossRef]
  41. Ben-Assuli, O.; Vest, J.R. Data mining techniques utilizing latent class models to evaluate emergency department revisits. J. Biomed. Inf. 2020, 101, 103341. [Google Scholar] [CrossRef]
  42. Yun, H.; Choi, J.; Park, J.H. Prediction of Critical Care Outcome for Adult Patients Presenting to Emergency Department Using Initial Triage Information: An XGBoost Algorithm Analysis. JMIR Med. Inf. 2021, 9, e30770. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Flowchart of the study cohort generation.
Figure 2. ROC curves of the four prediction models in the validation cohort. The ROC curves (x-axis: specificity; y-axis: sensitivity) and AUCs were calculated on the validation set. The operating point (red dot) was selected at the maximum of Youden's index in the validation cohort and used to calculate the corresponding sensitivity and specificity. Sens, sensitivity; Spec, specificity; AUC, area under the receiver operating characteristic curve.
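As the caption notes, the operating point is chosen where Youden's index (J = sensitivity + specificity − 1) is maximal. A minimal sketch of that selection, using invented ROC points rather than the study's data:

```python
def youden_operating_point(thresholds, sensitivities, specificities):
    """Pick the ROC operating point maximizing Youden's J = sensitivity + specificity - 1."""
    j = [se + sp - 1.0 for se, sp in zip(sensitivities, specificities)]
    best = max(range(len(j)), key=j.__getitem__)
    return thresholds[best], sensitivities[best], specificities[best]

# Toy ROC points (illustrative only, not the paper's data):
thr = [0.2, 0.4, 0.6, 0.8]
sens = [0.95, 0.85, 0.70, 0.50]
spec = [0.40, 0.75, 0.85, 0.95]
print(youden_operating_point(thr, sens, spec))  # -> (0.4, 0.85, 0.75), where J = 0.60
```

The threshold fixed this way on the validation cohort is then applied unchanged to the test cohorts, as described for Figure 3.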
Figure 3. ROC curves of the XGBoost model in the two test cohorts. The ROC curves (x-axis: specificity; y-axis: sensitivity) and AUCs were calculated on the test sets. The operating point was selected at the maximum of Youden's index in the validation cohort and used to calculate the corresponding sensitivity and specificity.
Figure 4. Top 20 variables by importance (information gain) in the XGBoost model.
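Figure 4 ranks features by gain. In XGBoost, gain is the reduction in the boosting loss contributed by each split; the classical entropy-based notion of information gain behind the same idea can be sketched as follows (all counts are invented for illustration):

```python
from math import log2

def entropy(pos, neg):
    """Shannon entropy of a binary label distribution given (positive, negative) counts."""
    total = pos + neg
    h = 0.0
    for c in (pos, neg):
        if c:
            p = c / total
            h -= p * log2(p)
    return h

def information_gain(parent, left, right):
    """Entropy reduction from splitting `parent` into `left`/`right`,
    each given as a (n_positive, n_negative) tuple."""
    n = sum(parent)
    nl, nr = sum(left), sum(right)
    return entropy(*parent) - (nl / n) * entropy(*left) - (nr / n) * entropy(*right)

# Hypothetical split on a triage feature for the "ECG needed" label:
print(round(information_gain((40, 60), (30, 10), (10, 50)), 3))  # -> 0.256
```

Summing such per-split contributions over all trees, per feature, is how a gain-based importance ranking like Figure 4 is produced.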
Figure 5. ECG acquisition among patients who did not initially receive an ECG, stratified by the AI model. The red line shows patients correctly stratified by the model (true negatives), and the blue line shows patients incorrectly stratified (false positives). The ordinate shows the cumulative probability of not yet having received an ECG, and the abscissa shows hours from ED admission.
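The curves in Figure 5 track, hour by hour, the fraction of each group still without an ECG. A simplified sketch of how such a curve can be tabulated from acquisition times (ignoring censoring, with invented times):

```python
def cumulative_non_acquisition(event_hours, horizon):
    """Fraction of patients who have NOT yet received an ECG at each hour
    up to `horizon`. None marks a patient never observed to receive one
    (treated here as an event beyond the horizon -- a simplification
    that ignores censoring)."""
    n = len(event_hours)
    curve = []
    for t in range(horizon + 1):
        still_without = sum(1 for h in event_hours if h is None or h > t)
        curve.append(still_without / n)
    return curve

# Hypothetical acquisition times (hours) for a false-positive group,
# most of whom go on to receive an ECG early:
fp = [1, 2, 2, 5, None]
print(cumulative_non_acquisition(fp, 6))  # -> [1.0, 0.8, 0.4, 0.4, 0.4, 0.2, 0.2]
```

A curve that drops faster for the false-positive group than for the true-negative group is what supports the paper's claim that model-positive patients were more likely to receive an ECG within 48 h.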
Figure 6. Scheme of machine learning models for ECG decision support at ED triage.
Table 1. Characteristics of the development, validation, and testing cohorts.

| Characteristic | Development | Validation | Testing 1 | Testing 2 | p-Value |
|---|---|---|---|---|---|
| Male gender, n (%) | 64,462 (49.8) | 23,305 (48.6) | 28,285 (48.7) | 33,857 (51.1) | <0.001 |
| Age (year) | 53.80 ± 21.14 | 53.64 ± 20.91 | 53.25 ± 20.71 | 52.09 ± 20.45 | <0.001 |
| Height (cm) | 163.85 ± 8.90 | 163.73 ± 8.95 | 163.81 ± 8.99 | 164.36 ± 9.00 | <0.001 |
| Weight (kg) | 64.49 ± 13.92 | 64.73 ± 14.14 | 64.80 ± 14.12 | 65.35 ± 14.23 | <0.001 |
| Body mass index (kg/m²) | 23.91 ± 4.21 | 24.03 ± 4.26 | 24.03 ± 4.24 | 24.07 ± 4.24 | <0.001 |
| Temperature (°C) | 36.74 ± 0.88 | 36.69 ± 0.90 | 36.72 ± 0.90 | 36.69 ± 0.84 | <0.001 |
| Triage level, n (%) | | | | | <0.001 |
| I | 4672 (3.6) | 1786 (3.7) | 1974 (3.4) | 2073 (3.1) | |
| II | 22,220 (17.2) | 8311 (17.3) | 9364 (16.1) | 9579 (14.5) | |
| III | 95,136 (73.5) | 35,222 (73.5) | 43,545 (75.0) | 45,905 (69.3) | |
| IV | 6385 (4.9) | 2242 (4.7) | 2739 (4.7) | 3072 (4.6) | |
| V | 1031 (0.8) | 357 (0.7) | 420 (0.7) | 5625 (8.5) | |
| Trauma, n (%) | 24,605 (19.0) | 8505 (17.7) | 10,640 (18.3) | 12,308 (18.6) | <0.001 |
| Pulse (beats/min) | 86.93 ± 18.86 | 86.89 ± 18.73 | 87.43 ± 18.52 | 86.60 ± 18.56 | <0.001 |
| Breath (breaths/min) | 18.74 ± 2.68 | 18.83 ± 2.27 | 18.72 ± 2.40 | 18.59 ± 2.00 | <0.001 |
| SBP (mmHg) | 135.22 ± 24.94 | 135.96 ± 25.02 | 134.36 ± 24.55 | 134.72 ± 24.45 | <0.001 |
| DBP (mmHg) | 77.86 ± 16.15 | 76.59 ± 16.08 | 77.96 ± 15.78 | 78.55 ± 15.81 | <0.001 |
| ECG in 2 h, n (%) | 33,097 (25.6) | 16,825 (35.1) | 20,764 (35.8) | 20,380 (30.8) | <0.001 |

Data are expressed as mean ± standard deviation or as number (percentage), according to data characteristics. SBP, systolic blood pressure; DBP, diastolic blood pressure.
Table 2. Model parameters from the validation and test cohorts.

| Cohort | Accuracy | Recall | Precision | F Score |
|---|---|---|---|---|
| Validation | 0.805 | 0.834 | 0.681 | 0.750 |
| Test 1 | 0.813 | 0.812 | 0.708 | 0.757 |
| Test 2 | 0.814 | 0.816 | 0.659 | 0.729 |
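The metrics in Table 2 follow the usual confusion-matrix definitions, and the F score is the harmonic mean of the reported precision and recall. A small sanity-check sketch (the confusion-matrix counts are invented; only the precision/recall pair comes from the table):

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, recall (sensitivity), precision (PPV), and F1 score
    from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, recall, precision, f1

# Cross-check the validation-cohort F score from the reported
# recall (0.834) and precision (0.681):
f1 = 2 * 0.681 * 0.834 / (0.681 + 0.834)
print(round(f1, 3))  # -> 0.75, matching the table
```

The same check reproduces the F scores of both test cohorts from their respective precision/recall pairs.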
Share and Cite

Tsai, D.-J.; Tsai, S.-H.; Chiang, H.-H.; Lee, C.-C.; Chen, S.-J. Development and Validation of an Artificial Intelligence Electrocardiogram Recommendation System in the Emergency Department. J. Pers. Med. 2022, 12, 700. https://doi.org/10.3390/jpm12050700
