Eposter Presentation
 
Accepted format: PDF. The file size should not exceed 5 MB
 
Accepted format: PNG/JPG/WEBP. The file size should not exceed 2 MB
 
Submitted
Abstract
Artificial intelligence to identify surgical anatomy for intraoperative guidance during laparoscopic donor nephrectomy
Video Abstract
Clinical Research
AI in Urology
Authors' Information
7
No more than 10 authors can be listed (as per the Good Publication Practice (GPP) Guidelines).
Please ensure the authors are listed in the right order.
Singapore
Chloe Ong chloeosh@gmail.com National University Hospital, National University Health System Department of Urology Singapore Singapore *
Lin Kyaw linkyawmr@gmail.com National University Hospital, National University Health System Department of Urology Singapore Singapore -
Manchi Leung manchi@smartsurgerytek.com Smart Surgery Tek Taipei Taiwan -
Yu-Chieh Lee julielee@smartsurgerytek.com Smart Surgery Tek Taipei Taiwan -
Bo-An Tsai boan.tsai@smartsurgerytek.com Smart Surgery Tek Taipei Taiwan -
Jeff Shih-Chieh Chueh jeffchueh@gmail.com National Taiwan University Hospital, National Taiwan University Department of Urology Taipei Taiwan -
Ho Yee Tiong surthy@nus.edu.sg National University Hospital, National University Health System Department of Urology Singapore Singapore -
 
 
 
 
 
 
 
 
 
 
 
 
 
Abstract Content
Although the risk of intraoperative complications during laparoscopic donor nephrectomy (LDN) is now acceptably low, efforts continue to minimise technical mishaps during this 'high-stakes' surgery. This video demonstrates the pilot use of a patented, proprietary deep learning (DL)-based computer vision (CV) system to automatically recognise key anatomical structures and help prevent intraoperative injuries, which is especially crucial during the learning curve.
A total of 7,027 images with manual pixel-level annotations were selected from 16 surgical videos (National University Hospital, NUH) to serve as the ground-truth training set, and 2,266 annotated images from 4 separate surgical videos were used for validation. This ensured a balanced validation ratio of nearly 20% for each label (spleen, left kidney, renal artery, renal vein, and ureter). The YOLO (You Only Look Once) v11x DL network (https://docs.ultralytics.com/models/yolo11/), known for its speed and accuracy in real-time detection, was adapted to train our model. For further optimisation, it uses a composite loss function that incorporates the accuracy of each pixel in segmentation tasks (binary cross-entropy loss), compares the predicted bounding box coordinates against ground truth (bounding box loss), and emphasises the importance of difficult-to-detect labels (distribution focal loss). Metrics were calculated from true positives (TP), false positives (FP), and false negatives (FN) as below:
• Precision = TP/(TP+FP)
• Recall = TP/(TP+FN)
• F1 score = 2(Precision×Recall)/(Precision+Recall)
High precision minimises false positives, which could disrupt the surgical workflow, while high recall ensures comprehensive detection, minimising false negatives that could compromise patient safety. The F1 score serves as the harmonic mean of precision and recall.
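For illustration only, the sketch below shows how a YOLO11x segmentation model could be trained with the Ultralytics API and how the metrics above could be computed from per-label counts; the dataset file name, hyperparameters, and example counts are placeholders and not the values used in this study.

# Hedged sketch: training a YOLO11x segmentation model and computing per-label metrics.
# The dataset config, hyperparameters, and example counts are illustrative assumptions.
from ultralytics import YOLO

model = YOLO("yolo11x-seg.pt")            # pretrained YOLO11x segmentation weights
model.train(data="ldn_anatomy.yaml",      # hypothetical dataset config with the 5 labels
            epochs=100, imgsz=640)

def precision_recall_f1(tp: int, fp: int, fn: int):
    """Precision, recall, and F1 score from TP, FP, and FN counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# Example with made-up counts for a single label (e.g. renal artery):
print(precision_recall_f1(tp=180, fp=20, fn=30))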
Quantitative evaluation of the validation dataset using the hold-out validation method yielded the performance metrics shown in the figure below. Prospective evaluation was performed on a video from another surgeon (JC) and institution (National Taiwan University), as well as in real time at NUH.
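As a further sketch under the same assumptions, a hold-out validation pass and frame-by-frame inference on a prospective video might look like the following with the Ultralytics API; the file paths and names are placeholders, not the study's actual evaluation pipeline.

# Hedged sketch: hold-out validation and inference on an unseen surgical video.
# File paths are hypothetical placeholders.
from ultralytics import YOLO

model = YOLO("runs/segment/train/weights/best.pt")   # assumed path to trained weights

# Evaluate on the held-out validation split defined in the dataset config.
val_metrics = model.val(data="ldn_anatomy.yaml", split="val")

# Stream predictions over a prospective video, frame by frame, for a real-time overlay.
for result in model.predict(source="prospective_case.mp4", stream=True):
    annotated_frame = result.plot()   # annotated frame that could be displayed or saved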
Our pilot study demonstrates the ability of an innovative machine learning design to accurately identify vital anatomical structures in LDN. This is a crucial first step towards further artificial intelligence-guided applications such as intraoperative guidance, education, post-hoc operative analysis, and evaluation of operative standards.
artificial intelligence, laparoscopy, donor nephrectomy, anatomy
https://storage.unitedwebnetwork.com/files/1237/719ad681cd70345ad17cd174bfd13ade.png
Performance metrics on the validation dataset
 
 
 
 
 
 
 
 
2047
https://vimeo.com/1065520663
Presentation Details
Free Paper Podium (05): Transplantation
Aug. 15 (Fri.)
14:36 - 14:42
12