Published August 2020 | Submitted
Book Section - Chapter | Open Access

Temporal Segmentation of Surgical Sub-tasks through Deep Learning with Multiple Data Sources

Abstract

Many tasks in robot-assisted surgeries (RAS) can be represented by finite-state machines (FSMs), where each state represents either an action (such as picking up a needle) or an observation (such as bleeding). A crucial step towards the automation of such surgical tasks is the temporal perception of the current surgical scene, which requires real-time estimation of the states in the FSMs. The objective of this work is to estimate the current state of the surgical task based on the actions performed or events that have occurred as the task progresses. We propose Fusion-KVE, a unified surgical state estimation model that incorporates multiple data sources, including Kinematics, Vision, and system Events. Additionally, we examine the strengths and weaknesses of different state estimation models in segmenting states with different representative features or levels of granularity. We evaluate our model on the JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS), as well as a more complex dataset involving robotic intra-operative ultrasound (RIOUS) imaging, created using the da Vinci® Xi surgical system. Our model achieves a superior frame-wise state estimation accuracy of up to 89.4%, improving on state-of-the-art surgical state estimation models on both the JIGSAWS suturing dataset and our RIOUS dataset.
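
As a loose illustration of the fusion idea described in the abstract, the sketch below shows how a frame-wise state estimator might encode each data source (kinematics, vision features, system events) separately and concatenate the per-stream features before classifying every time step. This is not the authors' implementation: the module names, feature dimensions, and the choice of simple temporal convolutions are all assumptions made for illustration.

```python
import torch
import torch.nn as nn

class FusionStateEstimator(nn.Module):
    """Hypothetical kinematics/vision/events fusion model for frame-wise
    surgical state estimation. All layer choices and sizes are illustrative."""

    def __init__(self, kin_dim=26, vis_dim=128, evt_dim=4, n_states=10, hidden=64):
        super().__init__()
        # One temporal encoder per data source (1-D convolution over time;
        # padding keeps the output the same length as the input).
        self.kin_enc = nn.Conv1d(kin_dim, hidden, kernel_size=5, padding=2)
        self.vis_enc = nn.Conv1d(vis_dim, hidden, kernel_size=5, padding=2)
        self.evt_enc = nn.Conv1d(evt_dim, hidden, kernel_size=5, padding=2)
        # Frame-wise classifier over the concatenated per-source features.
        self.head = nn.Conv1d(3 * hidden, n_states, kernel_size=1)

    def forward(self, kin, vis, evt):
        # Each input is a (batch, channels, time) tensor for one source.
        feats = torch.cat(
            [torch.relu(self.kin_enc(kin)),
             torch.relu(self.vis_enc(vis)),
             torch.relu(self.evt_enc(evt))], dim=1)
        return self.head(feats)  # (batch, n_states, time) logits per frame

# Example: 30 s of synchronized streams sampled at 10 Hz -> 300 frames.
model = FusionStateEstimator()
logits = model(torch.randn(1, 26, 300),   # kinematics
               torch.randn(1, 128, 300),  # vision features
               torch.randn(1, 4, 300))    # system events
states = logits.argmax(dim=1)  # predicted FSM state index for each frame
```

Keeping a separate encoder per source, as sketched here, lets each stream be weighted and preprocessed independently before fusion; the paper itself should be consulted for the actual Fusion-KVE architecture and training details.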

Additional Information

© 2020 IEEE. This work was funded by Intuitive Surgical, Inc. We would like to thank Dr. Azad Shademan and Dr. Pourya Shirazian for their support of this research.

Attached Files

Submitted - 2002.02921.pdf (2.2 MB, md5:f62357425c856302b1c626e4ff60b4d0)

Additional details

Created: August 19, 2023
Modified: October 20, 2023