Final Presentation Format
Non-Moderated Poster Abstract
Eposter Presentation
Eposter in PDF Format
https://storage.unitedwebnetwork.com/files/1237/3bc7cfe94dda94d3c96a9863c4cd422b.pdf
Eposter in Image Format
https://storage.unitedwebnetwork.com/files/1237/623ea479ba5279d78e3b8b80ecdf0c51.jpg
Submission Status
Submitted
Abstract
Abstract Title
Quality Assessment of AI-Generated Versus Physician-Written Admission Summaries in Urology
Presentation Type
Non-Moderated Poster Abstract
Manuscript Type
Clinical Research
Abstract Category *
AI in Urology
Author's Information
Number of Authors (including submitting/presenting author) *
6
Country
Taiwan
Co-author 1
Yun-Sheng Wu (b07401082@ntu.edu.tw), National Taiwan University College of Medicine, School of Medicine, Taipei, Taiwan
Co-author 2
Liang-Chen Huang (sam831009@gmail.com), En Chu Kong Hospital, Division of Urology, Department of Surgery, New Taipei, Taiwan; National Taiwan University Hospital, Department of Urology, Taipei, Taiwan
Co-author 3
Jung-Yang Yu (ericyu29218218@gmail.com), National Taiwan University Hospital, Department of Urology, Taipei, Taiwan
Co-author 4
Chung-Cheng Wang (ericwcc@ms27.hinet.net), En Chu Kong Hospital, Division of Urology, Department of Surgery, New Taipei, Taiwan
Co-author 5
Jung Yu (yujung19960513@gmail.com), National Taiwan University Hospital, Department of Urology, Taipei, Taiwan
Co-author 6
Jian-Hua Hong (cliffordhong622@gmail.com), National Taiwan University Hospital, Department of Urology, Taipei, Taiwan
Abstract Content
Introduction
Recent advances in large language models (LLMs) have opened new possibilities for automating clinical documentation. Using generative AI to produce admission summaries could help relieve the growing documentation burden faced by clinicians, and the standardized workflow of elective surgeries in the urology department provides a well-defined setting in which to explore such automation. This study evaluates the feasibility and performance of LLM-based tools in generating admission notes, comparing their quality and completeness against clinician-written summaries using a standardized assessment framework.
Materials and Methods
Patients scheduled for elective inpatient procedures through outpatient clinics between January and July 2024 were enrolled. Admission summaries were generated by GPT-4o based on unstructured outpatient clinic notes and compared with physician-written admission notes for the corresponding procedures. Documentation quality was assessed using the QNOTE scoring system, a validated tool comprising 12 categories and 44 items for evaluating clinical documentation across multiple domains. Comparisons among the three groups—outpatient clinic notes, AI-generated admission summaries (GPT-4o), and physician-written admission notes—were conducted using non-parametric statistical methods. A p-value < 0.05 was considered statistically significant.
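The scoring comparison described above can be sketched in a few lines of Python. This is a minimal illustration only: the QNOTE totals below are invented placeholders rather than study data, and the paired sign count stands in for the non-parametric tests actually used in the analysis.

```python
from statistics import median

def compare_paired(scores_a, scores_b):
    """Paired sign counts: for each patient, which note type scored higher.
    A simple stand-in for the study's non-parametric tests."""
    a_wins = sum(a > b for a, b in zip(scores_a, scores_b))
    b_wins = sum(b > a for a, b in zip(scores_a, scores_b))
    return a_wins, b_wins

# Invented QNOTE totals (maximum 84) for five hypothetical patients.
outpatient   = [42, 44, 41, 45, 43]  # original outpatient clinic notes
ai_generated = [77, 78, 79, 76, 78]  # GPT-4o admission summaries
physician    = [80, 81, 79, 82, 80]  # physician-written admission notes

medians = {name: median(scores) for name, scores in
           [("outpatient", outpatient),
            ("AI", ai_generated),
            ("physician", physician)]}
```

Because each patient contributes one note of each type, the groups are paired; a rank-based paired test (e.g., Wilcoxon signed-rank or Friedman across all three groups) would be the usual choice in practice.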
Results
A total of 46 patients were included in the analysis. Comparative results are summarized in Table 1. Both GPT-4o-generated and physician-written admission summaries showed markedly higher documentation quality than the original outpatient clinic notes, with median total QNOTE scores of 78.00 and 80.50, respectively (out of a maximum of 84), versus 43.00 for the outpatient notes. Physician-authored summaries scored significantly higher than AI-generated ones in multiple categories, although this difference may reflect the AI's limited input relative to physicians' access to comprehensive clinical data. Notably, in sections emphasizing structured information (e.g., review of systems, physical findings), AI performance was comparable to or exceeded that of physicians, suggesting inconsistencies in physician documentation, with some outpatient findings omitted from structured admission notes.
Conclusions
Generative AI models (LLMs) demonstrated the capacity to produce high-quality admission summaries for elective urology procedures from outpatient clinic notes. While physician-written notes achieved higher overall QNOTE scores, the AI-generated summaries performed comparably in several structured domains and in some instances outperformed physicians, particularly in documenting symptoms and physical findings. Although clinician oversight remains essential, AI-generated notes offer a promising supplement to clinical workflows. Further studies are warranted to evaluate their integration into routine practice, including assessments of efficiency and reliability.
Keywords
AI-Generated, Admission Summaries, Urology, QNOTE
Figure 1
https://storage.unitedwebnetwork.com/files/1237/a7fd6d555b6e71d6821145cd560df9ab.png
Figure 1 Caption
Table 1. Comparison of the assessment of QNOTE for outpatient clinic note, AI-generated note and physician-written note
Character Count
2252