Published July 15, 2022 | Submitted
Report | Open access

Quantification of Robotic Surgeries with Vision-Based Deep Learning

Abstract

Surgery is a high-stakes domain where surgeons must navigate critical anatomical structures and actively avoid potential complications while achieving the main task at hand. Such surgical activity has been shown to affect long-term patient outcomes. To better understand this relationship, whose mechanics remain unknown for the majority of surgical procedures, we hypothesize that the core elements of surgery must first be quantified in a reliable, objective, and scalable manner. We believe this is a prerequisite for the provision of surgical feedback and modulation of surgeon performance in pursuit of improved patient outcomes. To holistically quantify surgeries, we propose a unified deep learning framework, entitled Roboformer, which operates exclusively on videos recorded during surgery to independently achieve multiple tasks: surgical phase recognition (the what of surgery), and gesture classification and skills assessment (the how of surgery). We validated our framework on four video-based datasets of two commonly encountered types of steps (dissection and suturing) within minimally invasive robotic surgeries. We demonstrated that our framework can generalize well to unseen videos, surgeons, medical centres, and surgical procedures. We also found that our framework, which naturally lends itself to explainable findings, identified relevant information when achieving a particular task. These findings are likely to instill more confidence in our framework's behaviour among surgeons, increasing the likelihood of clinical adoption and thus paving the way for more targeted surgical feedback.

Additional Information

License: Attribution 4.0 International (CC BY 4.0)

Data availability: The data from the University of Southern California and St. Antonius Hospital are not publicly available.

Code availability: All models were developed using Python and standard deep learning libraries such as PyTorch. The code and model parameters will be made publicly available via GitHub.

Reporting summary: Further information on research design is available in the Nature Research Reporting Summary linked to this article.
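As an illustration of what such a PyTorch pipeline might look like, the sketch below outlines a minimal video classifier in which per-frame features feed a shared transformer encoder, with separate heads for phase, gesture, and skill predictions, mirroring the unified multi-task design described in the abstract. This is a hypothetical sketch only: the class name, backbone, hyperparameters, and label counts are assumptions made for illustration and do not reproduce the authors' Roboformer.

    # Hypothetical sketch of a vision-based multi-task video classifier.
    # All names and hyperparameters are illustrative assumptions, not the
    # paper's actual architecture.
    import torch
    import torch.nn as nn
    from torchvision import models

    class VideoTransformerClassifier(nn.Module):
        def __init__(self, num_phases=7, num_gestures=10, num_skill_levels=2,
                     d_model=512, nhead=8, num_layers=4):
            super().__init__()
            # Frozen ImageNet-pretrained backbone as a per-frame feature
            # extractor (an assumption; the paper's extractor may differ).
            backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
            self.backbone = nn.Sequential(*list(backbone.children())[:-1])
            for p in self.backbone.parameters():
                p.requires_grad = False
            self.proj = nn.Linear(512, d_model)
            encoder_layer = nn.TransformerEncoderLayer(
                d_model=d_model, nhead=nhead, batch_first=True)
            self.encoder = nn.TransformerEncoder(encoder_layer, num_layers)
            # One head per task: the "what" (phase, gesture) and the
            # "how" (skill) of surgery.
            self.phase_head = nn.Linear(d_model, num_phases)
            self.gesture_head = nn.Linear(d_model, num_gestures)
            self.skill_head = nn.Linear(d_model, num_skill_levels)

        def forward(self, frames):
            # frames: (batch, time, 3, H, W)
            b, t = frames.shape[:2]
            feats = self.backbone(frames.flatten(0, 1)).flatten(1)  # (b*t, 512)
            tokens = self.proj(feats).view(b, t, -1)                # (b, t, d)
            encoded = self.encoder(tokens).mean(dim=1)              # temporal pool
            return (self.phase_head(encoded),
                    self.gesture_head(encoded),
                    self.skill_head(encoded))

    # Smoke test on a random clip of 16 frames at 224x224.
    model = VideoTransformerClassifier()
    phase, gesture, skill = model(torch.randn(2, 16, 3, 224, 224))
    print(phase.shape, gesture.shape, skill.shape)

A design like this keeps a single temporal encoder shared across tasks, with only the lightweight output heads differing per task; whether the released code follows this structure can only be confirmed once it is published on GitHub.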

Files

Submitted - 2205.03028.pdf (14.1 MB, md5:02422768bf1908a91ce3bb0699b506cb)

Additional details

Created: August 20, 2023
Modified: October 24, 2023