Computer Vision News - April 2022

CHALLENGES IN VIDEO FOR ROBOTIC ASSISTED SURGERIES
by Oren Wintner, RSIP Vision

"Our long-term goal is autonomous surgery. Robots can be very accurate, they don't get tired, and they can adapt quickly when necessary. We are still far from autonomous surgery, so we start with smaller steps," says Asher Patinkin, experienced algorithm developer at RSIP Vision.

The imaging modalities used during Robotic Assisted Surgeries (RAS) are numerous, but perhaps the most informative one is the video feed. Surgical videos can be obtained from laparoscopic or operating-room (OR) cameras, and they are the "eyes" of the surgeon.

Keeping the end goal in mind, the first step is to implement surgical phase recognition by applying artificial intelligence (AI) and computer vision (CV) methods. It is essential to detect at every time point which procedural step is currently being conducted, in order to understand which tools are needed, which risks are relevant, and when to alert the staff.

Initially, the team attempted to classify the procedural phase from a single frame using convolutional neural networks (CNNs). This method gave decent results on some frames. For example, it is relatively easy to recognize that suturing is taking place when a needle is visible in the frame. However, when the needle is not visible, the CNN is misled into classifying the frame as part of a non-suturing phase. Furthermore, some phases (e.g. the exploration phase) can be detected only from a series of frames.
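To make the difference between per-frame and temporal classification concrete, here is a minimal sketch of a phase recognizer that extracts CNN features from each frame and aggregates them over time with an LSTM. This is an illustrative example only, not RSIP Vision's actual pipeline; the ResNet-18 backbone, the 16-frame clip length, and the choice of seven phases are all assumptions made for the sake of the sketch.

```python
import torch
import torch.nn as nn
from torchvision import models

class PhaseRecognizer(nn.Module):
    """Per-frame CNN features aggregated over time with an LSTM,
    so that phases characterized by motion (e.g. exploration) can be
    recognized even when no distinctive tool is visible in any
    single frame. Illustrative sketch only."""

    def __init__(self, num_phases=7, hidden_size=256):
        super().__init__()
        # Frame-level feature extractor: a pretrained ResNet-18
        # with its final classification layer removed.
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        self.feat_dim = backbone.fc.in_features
        # Temporal model over a clip of consecutive frames.
        self.lstm = nn.LSTM(self.feat_dim, hidden_size, batch_first=True)
        self.classifier = nn.Linear(hidden_size, num_phases)

    def forward(self, clip):
        # clip: (batch, time, 3, H, W)
        b, t, c, h, w = clip.shape
        x = self.features(clip.view(b * t, c, h, w)).view(b, t, self.feat_dim)
        out, _ = self.lstm(x)
        # Predict the phase at the last time step of the clip.
        return self.classifier(out[:, -1])

# Example: a batch of two 16-frame clips at 224x224 resolution.
model = PhaseRecognizer(num_phases=7)
logits = model(torch.randn(2, 16, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 7])
```

The design point the sketch illustrates: a single-frame classifier can only key on what is visible in that frame (a needle implies suturing), whereas the recurrent layer lets the model accumulate evidence across frames, which is what phases defined by motion rather than by tools require.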
