
Discharge Readiness After Surgery Predicted By AI

Postoperative hospital discharge is a pivotal phase in the clinical journey of surgical patients, necessitating intricate decision-making grounded in a diverse array of clinical data contained within electronic medical records. Despite the richness of this information, the inherent subjectivity of human decision-making introduces variability. The research proposes integrating artificial intelligence into the decision-making process to address this challenge. The study focuses on developing and evaluating machine learning algorithms that use routine observations and laboratory parameters to formulate the ‘Adelaide Score,’ predicting the discharge of general surgery patients within 12 and 24 hours. This approach aims to augment the efficiency and consistency of postoperative discharge planning in contemporary electronic healthcare systems.



The research background underscores the critical juncture of postoperative hospital discharge within the clinical trajectory of surgical patients. Decision-making regarding discharge readiness relies on clinical data housed in electronic medical records, encompassing vital signs, symptoms, laboratory parameters, and clinical opinions from various disciplines [1]. However, the subjective nature of this decision-making process introduces inherent variability influenced by environmental and individual factors [1]. This subjectivity challenges the consistency and objectivity of discharge planning, as human opinions are susceptible to bias under varying circumstances.

Recognizing the limitations of exclusive reliance on human interpretation, the research proposes the integration of artificial intelligence to supplement decision-making in postoperative discharge planning [2]. This recommendation stems from acknowledging the ubiquity of electronic data within modern hospital systems and the potential for artificial intelligence to mitigate variability and improve systemic efficiency [2]. By leveraging machine learning algorithms, the study aims to enhance the precision of decision-making in predicting postoperative discharge within specific timeframes. This approach marks a departure from traditional reliance on subjective assessments, introducing a data-driven and systematic method to optimize outcomes in the postoperative phase.

In pursuit of this approach, the research specifically focuses on developing and evaluating machine learning algorithms that use routine observations and laboratory parameters, creating the ‘Adelaide Score’ [3]. This score aims to predict postoperative discharge for general surgery patients within 12 and 24 hours, contributing a novel dimension to discharge planning strategies: integrating artificial intelligence. In this context, artificial intelligence holds promise for improving efficiency and standardizing decision-making processes in the contemporary landscape of electronic healthcare systems.



The research methodology commenced with ethics approval from the Central Adelaide Local Health Network Human Research Ethics Committee (reference number 16409), with a waiver of individual consent. The study comprised a retrospective cohort analysis of consecutive elective and emergency patients admitted under various general surgery services at two tertiary hospitals in South Australia over a two-year period starting in April 2020. Patients with readmission or in-hospital mortality were excluded from the study.

The model of care in the participating hospitals adhered to benchmarks set by the Royal Australasian College of Surgeons, employing a conventional approach to postoperative discharge planning. This involved morning ward rounds conducted by individual surgical teams, with discharge decisions confirmed by the treating consultant surgeon or senior surgical fellow/registrar. While some surgical teams implemented standardized postoperative treatment protocols like enhanced recovery after surgery (ERAS), these were not uniformly applied across all teams in the study. The composition of surgical teams varied, including consultants, fellows, residents, interns, medical students, nursing, and allied health staff.

Data ascertainment involved systematic collection of ward round note timings, demographic data, admission/discharge timings, and information on vital signs, pain scores, bowel movements, and laboratory parameters from electronic medical records and administrative databases. The outcome of interest was discharge within 12 or 24 hours, determined based on recorded discharge times in administrative records. The research employed three machine learning datasets stratified by the admission date, encompassing training, testing, and validation sets, and used XGBoost, random forest, and logistic regression models. Hyperparameter tuning was performed before evaluation on the holdout derivation test dataset and the validation dataset, with each ward round note treated as a different timepoint for analysis.
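The pipeline described above can be sketched in code. This is an illustrative sketch only: the study's actual features, preprocessing, and hyperparameter grids are not published here, so the synthetic data, column choices, and grid below are hypothetical stand-ins, and scikit-learn's random forest is used in place of the study's full model suite.

```python
# Hypothetical sketch of the study design: a chronological (admission-date)
# split into training/testing/validation sets, with hyperparameter tuning
# on the training set only. All data below are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
n = 1200
# Synthetic stand-ins for routine observations and laboratory parameters
# (e.g. heart rate, temperature, white cell count) at each ward round note.
X = rng.normal(size=(n, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0.8).astype(int)
admit_order = np.arange(n)  # proxy for admission date

# Stratify by admission date into training (60%), testing (20%),
# and validation (20%) sets, as the study describes.
train, test, valid = np.split(admit_order, [int(0.6 * n), int(0.8 * n)])

# Hyperparameter tuning before evaluation on the holdout sets.
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    {"n_estimators": [100, 300], "max_depth": [4, 8]},
    cv=3, scoring="roc_auc",
)
search.fit(X[train], y[train])

best = search.best_estimator_
test_acc = best.score(X[test], y[test])    # accuracy on holdout test set
valid_acc = best.score(X[valid], y[valid])  # accuracy on validation set
print(round(test_acc, 2), round(valid_acc, 2))
```

The chronological split matters here: shuffling admissions randomly would leak future practice patterns into training, whereas splitting by admission date mimics deploying the model on patients admitted after it was built.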



The research analysis centered on assessing the efficacy of the best-performing model in predicting postoperative discharge within the subsequent 12 hours, with the validation dataset as the primary focus. Standard performance metrics, such as sensitivity, specificity, positive predictive value, and negative predictive value, were systematically computed using both Python and R. These metrics offered a comprehensive evaluation of the model’s predictive accuracy, shedding light on its ability to correctly identify patients eligible for discharge within the specified timeframe.

Furthermore, the Area Under the Curve (A.U.C.) for the receiver operating characteristic curve was calculated as a crucial indicator of the model’s overall discriminatory power. The A.U.C. provides a consolidated measure of the trade-off between sensitivity and specificity, offering insights into the model’s performance across various decision thresholds. A confidence interval for the A.U.C. was also established using bootstrapping to enhance the robustness of these assessments. This rigorous statistical analysis framework aimed to provide a thorough understanding of the model’s reliability and precision in predicting postoperative discharge within a 12-hour window, contributing valuable insights to the study’s overall findings.
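The bootstrapped A.U.C. interval can be sketched as follows. The study does not specify its exact bootstrap scheme, so this shows a common percentile bootstrap on hypothetical model scores:

```python
# Percentile-bootstrap 95% confidence interval for an AUC.
# Scores and labels are synthetic; the study's actual bootstrap
# parameters (resamples, interval type) are assumptions here.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 500
y_true = rng.integers(0, 2, size=n)
# Hypothetical model scores that separate the classes moderately well.
scores = y_true * 0.8 + rng.normal(scale=0.7, size=n)

point_auc = roc_auc_score(y_true, scores)

boot_aucs = []
for _ in range(1000):
    idx = rng.integers(0, n, size=n)      # resample patients with replacement
    if len(np.unique(y_true[idx])) < 2:   # AUC needs both classes present
        continue
    boot_aucs.append(roc_auc_score(y_true[idx], scores[idx]))

lo, hi = np.percentile(boot_aucs, [2.5, 97.5])  # 95% confidence interval
print(round(point_auc, 3), round(lo, 3), round(hi, 3))
```

Resampling whole observations with replacement preserves the pairing between each patient's score and outcome, which is what makes the resulting interval a valid estimate of the A.U.C.'s sampling variability.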



Patient Characteristics:

– The study involved 8,826 general surgery patients with an average age of 55.2 years.

– Almost half of the participants were female.

– The average duration of stay for these patients was 63 hours.

– The analysis incorporated a comprehensive examination of 42,572 ward round note timings.

Discharge Times:

– A notable 20.7% of cases experienced discharge within the first 12 hours, amounting to 8,800 instances.

– Within a slightly extended 24 hours, 23.2% of cases, or 9,885 instances, saw patients being discharged.

Prediction of Discharge within 12 hours:

– The random forest model emerged as the most effective, with accuracies of 0.84 and 0.85 on the derivation and validation datasets, respectively.

– The XGBoost model followed closely, with accuracies of 0.84 and 0.83 and A.U.C.s of 0.87 and 0.85.

– The logistic regression model demonstrated accuracies of 0.80 and 0.81, with A.U.C.s of 0.72 and 0.73.

– All models displayed a higher specificity than sensitivity in predicting discharge within 12 hours.

Prediction of Discharge within 24 hours:

– Once again, the random forest model excelled as the best-performing model, achieving accuracies of 0.83 and 0.84 on the derivation and validation datasets.

– The logistic regression and XGBoost models yielded comparable performance to their 12-hour discharge predictions.

– Across the board, all models favored specificity over sensitivity when predicting discharge within 24 hours.

These findings illustrate the machine learning models’ performance metrics and outcomes in forecasting postoperative discharge within particular periods, emphasizing the random forest model’s superiority across multiple evaluation criteria.



The study’s discussion section highlights the successful development and evaluation of a machine learning-derived algorithm called the Adelaide Score [3]. This algorithm, incorporating observations and laboratory parameters, demonstrates accurate predictions of postoperative discharge within 12 and 24 hours for general surgery patients [3]. The random forest model emerges as the top-performing algorithm, exhibiting robust calibration and superior performance metrics despite the inherent complexity of predicting hospital discharge [3, 6]. The specificity-focused nature of all models suggests the potential of the Adelaide Score as a safe clinical tool for discharge planning in surgical systems [3].

The research addresses a notable gap in evidence-based measures for predicting hospital discharge after general surgery. While previous studies have produced risk calculators [4, 5], the Adelaide Score introduces an artificial intelligence measure, potentially superseding traditional scoring systems [3]. The score’s reliance on standardized physiological indicators makes it a promising tool for future surgical systems, leveraging real-time electronic medical record data for improved efficiency and risk management in postoperative pathways [3, 8].

Further development and validation studies are warranted, particularly for external validation across diverse settings [3]. The score’s automatic calculation through direct interfacing with institutional electronic medical records is essential for practical implementation, ensuring efficiency within discharge planning processes [3]. Effective communication strategies for the Adelaide Score’s results are crucial, requiring human factor analysis to optimize how, when, and to whom the score is presented [3]. The continuous output of the score, ranging from 0 to 1, can be converted to a 0–100 scale for ease of communication, aiding in ranking patients based on discharge probability [3].

The study concludes by acknowledging the potential impact of the Adelaide Score on standardizing communication and perioperative processes within surgical systems [3]. Implementation studies should consider outcome variations based on the recipients of score notifications, emphasizing healthcare professionals’ diverse roles in postoperative care and discharge planning [3].



Adelaide Score Limitations

  1. Developed based on observation and laboratory data.
  2. Does not capture perturbations beyond specified parameters.
  3. Lacks indicators of patient mobility and post-discharge home environment.
  4. Excludes sociocultural factors due to heterogeneity in natural language data.
  5. Emphasizes a supplementary role, not a substitute for human clinical decision-making.


Study Limitations

  1. Absence of a gold-standard composite measure for surgical patient discharge readiness.
  2. Adelaide Score indicates the most likely, not ideal, discharge time.
  3. Restricted detail in the clinical picture with observation and laboratory data.
  4. Exclusion of social data limits consideration of factors influencing discharge.
  5. Potential confounding factors in real-life clinical practice and discharge planning, including:
     – Variations in individual clinician and treating team practices.
     – System and infrastructure differences between participating institutions.
     – Influence of evolving surgical discharge practices during the COVID-19 pandemic.
  6. Future integration of additional inputs may enhance performance.
  7. Necessity of electronic medical record integration for feasible Adelaide Score calculation.




The Adelaide Score, an A.I. measure, effectively predicts general surgery patient discharge within 12 and 24 hours, with the potential to streamline the discharge process. The random forest model showed optimal performance. Future steps include external validation, exploring additional inputs, and predicting the failure of standardized clinical protocols. Implementation studies are recommended before potential integration into surgical systems, offering real-time updates within electronic medical records for improved surgical care.



  1. De Martino B, Kumaran D, Seymour B, Dolan RJ. Frames, biases, and rational decision-making in the human brain. Science 2006;313:684–7.
  2. Hillestad R, Bigelow J, Bower A et al. Can electronic medical record systems transform health care? Potential health benefits, savings, and costs. Health Aff. 2005;24:1103–17.
  3. He J, Baxter SL, Xu J, Xu J, Zhou X, Zhang K. The practical implementation of artificial intelligence technologies in medicine. Nat. Med. 2019;25:30–6.
  4. Royal Australasian College of Surgeons. Surgical Audit Guide. [Cited May 18, 2023]. Available from URL:
  5. Kovoor JG, Bacchi S, Gupta AK, O’Callaghan PG, Trochsler MI, Maddern GJ. Standardising optimization in surgery. ANZ J. Surg. 2023;93:24–5.
  6. Ljungqvist O, Scott M, Fearon KC. Enhanced recovery after surgery: a review. JAMA Surg. 2017;152:292–8.
  7. Charlson ME, Pompei P, Ales KL, MacKenzie CR. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J. Chronic Dis. 1987;40:373–83.
  8. Australian Bureau of Statistics. Census of Population and Housing: Socio-Economic Indexes for Areas (S.E.I.F.A.), Australia, 2016. Canberra, A.C.T., Australia: Commonwealth Government of Australia. [Cited Feb 2020].
