Final Presentation Format
Non-Moderated Poster Abstract
ePoster Presentation
ePoster in PDF Format
Accepted format: PDF. The file size must not exceed 5 MB.
ePoster in Image Format
Accepted format: PNG/JPG/WEBP. The file size must not exceed 2 MB.
Presentation Date / Time
Submission Status
Submitted
Abstract
Abstract Title
THE APPLICATION OF A LARGE LANGUAGE MODEL (ChatGPT) IN DAILY UROLOGICAL PRACTICE: OUR EXPERIENCE FROM A TERTIARY CARE INSTITUTE IN INDIA
Presentation Type
Podium Abstract
Manuscript Type
Clinical Research
Abstract Category *
AI in Urology
Authors' Information
Number of Authors (including submitting/presenting author) *
2
No more than 10 authors can be listed (as per the Good Publication Practice (GPP) Guidelines).
Please ensure the authors are listed in the right order.
Country
India
Co-author 1
Nisanth Puliyath, drnishyurology@gmail.com, Calicut Medical College, Urology, Calicut, India *
Co-author 2
Venugopalan AV, whitestethescope@gmail.com, Calicut Medical College, Urology, Calicut, India
Abstract Content
Introduction
ChatGPT is a large language model (LLM), an artificial intelligence tool that has not yet been validated for routine clinical use. In this study, we evaluated the capabilities of ChatGPT for routine urology practice at our institute.
Materials and Methods
We assessed the capability of ChatGPT to answer common patient queries. Urologists from our department prepared 50 common clinical questions from different subspecialties of urology and graded the ChatGPT-generated answers for accuracy on a 6-point Likert scale (1 = completely incorrect to 6 = completely correct). Scores were summarized with descriptive statistics and then compared across question groups.
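As an illustrative sketch only (not the authors' analysis code), the scoring workflow described above could be summarized in Python roughly as follows; the score values are placeholders, and the Kruskal-Wallis test is one reasonable choice for comparing ordinal Likert scores across groups, assumed here rather than stated in the abstract.

```python
# Illustrative sketch only: hypothetical scores, not the study data.
import numpy as np
from scipy import stats

# Each answer graded on a 6-point Likert scale (1 = completely incorrect,
# 6 = completely correct), grouped here by question difficulty.
scores_by_difficulty = {
    "easy":   [6, 6, 5, 6, 5],    # placeholder values
    "medium": [5, 6, 5, 4, 6],
    "hard":   [5, 4, 5, 6, 4],
}

# Descriptive statistics across all questions.
all_scores = np.concatenate(list(scores_by_difficulty.values()))
print("Median accuracy:", np.median(all_scores))
print("Mean accuracy:  ", round(all_scores.mean(), 2))

# Non-parametric comparison of accuracy across difficulty levels.
h, p = stats.kruskal(*scores_by_difficulty.values())
print(f"Kruskal-Wallis: H={h:.2f}, p={p:.3f}")
```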
Results
ChatGPT was able to perform many text-based tasks, such as drafting consent forms for routine urological procedures, which could later be translated into Malayalam using Google Translate. These consent forms were assessed, and ChatGPT scored 4/5 on the Likert scale. This is of value in a linguistically diverse country like India and assists residents. We also tried the LLM in academic activities such as preparing presentations, cover letters, and book summaries. Across all questions (n = 50), the median accuracy score was 5 (between almost completely and completely correct), with a mean of 5.2. The median completeness score was 3 (complete and comprehensive), with a mean of 2.8. For questions rated easy, medium, and hard, median accuracy scores were 6, 5.5, and 5, respectively (means 5.0, 4.7, and 4.6; p = 0.05). Accuracy scores for binary and descriptive questions were similar (median 6 vs. 5; mean 5 vs. 4.9; p = 0.07). The quality of information was graded using Section 2 of the DISCERN tool, with a median score of 16, corresponding to poor quality.
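For context, a minimal sketch of how a DISCERN Section 2 total could be computed and mapped to a quality label is shown below; the item ratings and the band cut-offs are illustrative assumptions, not the scoring actually used in the study.

```python
# Illustrative sketch only: hypothetical ratings, not the study data.
# DISCERN Section 2 (items 9-15) rates treatment information on 1-5 each,
# so totals range from 7 to 35; the band labels below are assumptions.
section2_ratings = [2, 3, 2, 2, 3, 2, 2]  # placeholder item ratings

total = sum(section2_ratings)

def quality_band(score: int) -> str:
    """Map a Section 2 total to a coarse quality label (assumed bands)."""
    if score < 19:
        return "poor"
    if score < 27:
        return "fair"
    return "good"

print(f"Section 2 total: {total} -> {quality_band(total)}")
```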
Conclusions
ChatGPT generated largely accurate answers to diverse medical queries, as judged by urologists, although with important limitations. It is also helpful for clinical documentation in a multilingual country like India. Further research and model development are needed to correct inaccuracies and to validate the tool for clinical use.
Keywords
artificial intelligence, LLM
Character Count
2020
Vimeo Link
Presentation Details
Session
Date
Time
Presentation Order