Every Robotic-Assisted Surgery (RAS) requires some level of navigation. While in open surgery the target is viewed directly, in minimally invasive RAS the view comes from inside the body cavity, with a restricted field-of-view (FOV). In addition, the surgeon's hands are occupied with the tools while the camera is controlled by an assistant, adding another complication: the procedure requires seamless collaboration between the two. Another challenge arises from anatomical and physiological differences between patients, which make it difficult to accurately position surgical tools and recognize target organs. In gastroscopies or colonoscopies, the single wide-angle view is often difficult to interpret, and an objective navigational aid can be beneficial. Recently developed AI technology offers solutions to these challenges, as described below.
Autonomous Camera Control
Directing the camera towards the desired FOV can be achieved in several ways. First, the system can be trained to keep pre-selected key points in view and automatically correct the camera angle to maintain their positions in the image. Using deep learning, the system can detect and segment the surgical tools or key anatomical features and use them as anchors for the image; aligning the camera with these detected (or selected) key points keeps it aimed at the desired FOV.
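The sketch below illustrates this centering loop under simplified assumptions: `detect_keypoints` stands in for any trained detector (its name and output format are not a real API), and the proportional gain and dummy frame are illustrative only.

```python
import numpy as np

def detect_keypoints(frame: np.ndarray) -> np.ndarray:
    # Stand-in for a trained detector: in practice a keypoint or segmentation
    # model returns the pixel coordinates of the chosen anchors.
    h, w = frame.shape[:2]
    return np.array([[0.4 * w, 0.50 * h],
                     [0.6 * w, 0.55 * h]])

def camera_correction(frame: np.ndarray, gain: float = 0.1):
    """Return a (pan, tilt) step that nudges the keypoint centroid toward
    the image center; units and signs depend on the camera-arm controller."""
    h, w = frame.shape[:2]
    centroid = detect_keypoints(frame).mean(axis=0)  # mean anchor position
    error = centroid - np.array([w / 2, h / 2])      # pixel offset from center
    pan, tilt = gain * error                         # proportional control step
    return float(pan), float(tilt)

frame = np.zeros((480, 640, 3), dtype=np.uint8)      # dummy frame
print(camera_correction(frame))                      # small step toward anchors
```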
Another method uses a camera to track the surgeon's eye movements. With well-established eye-tracking algorithms, the system can determine where the surgeon is looking and steer the camera toward the desired FOV accordingly. These methods can move the camera to the ideal position based on surgical needs.
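As a minimal illustration of the gaze-driven variant, the sketch below maps a normalized gaze point (assumed to come from an off-the-shelf eye tracker) to a pan/tilt command; the dead-zone and gain values are illustrative assumptions.

```python
def gaze_to_camera_command(gaze_x: float, gaze_y: float,
                           dead_zone: float = 0.15, gain: float = 5.0):
    """Return (pan, tilt) in degrees steering the scope toward where the
    surgeon is looking; small fixations near the screen center are ignored
    so the view stays stable."""
    dx, dy = gaze_x - 0.5, gaze_y - 0.5        # offset from screen center
    if abs(dx) < dead_zone and abs(dy) < dead_zone:
        return 0.0, 0.0                        # surgeon looks at current FOV
    return gain * dx, gain * dy

print(gaze_to_camera_command(0.8, 0.5))        # gaze at right edge -> pan right
```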
Intraoperative Guidance Using AI
One of the common challenges in video-guided surgical procedures is navigating to the desired destination. Anatomical differences, unusual camera points of view, pathologies, lack of depth perception, and other factors all contribute to navigational difficulties. Navigational assistance can be provided at several levels:
- Anatomical landmark recognition – the system can be trained with deep learning to detect specific organs or key anatomical landmarks and highlight them on screen. These cues help familiarize the surgeon with the anatomy and lead the way to the target site (see the overlay sketch after this list).
- Path suggestion – another step towards fully autonomous robots is automatic navigation. The system can learn from data recorded in previous procedures and suggest preferred paths. This relieves the surgeon of part of the decision-making burden and allows faster, smoother navigation.
- Warning generation – fusing the image with a pre-op MRI/CT scan or with an anatomical atlas can provide information that is not visible to the surgeon. If the surgical tools approach vulnerable blood vessels or nerve bundles that could be compromised by contact, the system can detect the proximity and warn the surgeon (see the proximity sketch after this list). This can significantly reduce surgical complications and adds a layer of quality control to the procedure.
- Real-time tool tracking – for endoscopic procedures, a family of algorithms known as SLAM (simultaneous localization and mapping) can build a map of the environment while simultaneously tracking the position of the scope within it. This enables accurate positioning of the scope throughout the lumen in real time (see the tracking sketch after this list).
- Procedural planning – using pre-op scans, the system can plan an ideal path for the procedure in advance. This feature is especially valuable for endoscopic procedures, where the path is not altered by incisions (see the planning sketch after this list).
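The overlay sketch referenced above: given a binary mask from a trained segmentation model (the model itself is out of scope here), OpenCV can draw the landmark outline on the live frame. The circular mask stands in for a real segmentation result.

```python
import cv2
import numpy as np

def highlight_landmark(frame, mask, color=(0, 255, 0)):
    """Overlay the contour of a segmented landmark on the video frame."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    overlay = frame.copy()
    cv2.drawContours(overlay, contours, -1, color, thickness=2)
    return overlay

frame = np.zeros((480, 640, 3), dtype=np.uint8)
mask = np.zeros((480, 640), dtype=np.uint8)
cv2.circle(mask, (320, 240), 60, 1, -1)        # dummy "organ" mask
highlighted = highlight_landmark(frame, mask)  # outline drawn in green
```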
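The proximity sketch referenced above measures how far the tracked tool tip is from a segmented vessel mask (registered from pre-op imaging) and flags a warning under a threshold; the 5 mm threshold and mm-per-pixel scale are illustrative assumptions.

```python
import cv2
import numpy as np

def proximity_warning(vessel_mask, tool_tip_px, mm_per_px=0.2, threshold_mm=5.0):
    """Return True if the tool tip is within threshold_mm of any vessel pixel."""
    # The distance transform gives, for each pixel, the distance to the
    # nearest vessel pixel (computed on the inverted mask).
    dist_px = cv2.distanceTransform((vessel_mask == 0).astype(np.uint8),
                                    cv2.DIST_L2, 5)
    x, y = tool_tip_px
    return bool(dist_px[y, x] * mm_per_px < threshold_mm)

vessels = np.zeros((480, 640), dtype=np.uint8)
vessels[200:210, :] = 1                        # dummy vessel band
print(proximity_warning(vessels, (320, 215)))  # tip ~1.2 mm away -> True
```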
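The tracking sketch referenced above covers only the localization half of SLAM: estimating the scope's frame-to-frame motion from matched ORB features with OpenCV. A full SLAM system also maintains a persistent map and closes loops; the camera intrinsics K here are assumed values.

```python
import cv2
import numpy as np

K = np.array([[500.0,   0.0, 320.0],   # assumed camera intrinsics
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
orb = cv2.ORB_create(1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def relative_pose(prev_gray, curr_gray):
    """Return (R, t): rotation and unit-scale translation between frames."""
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t  # chain these per frame to track the scope along the lumen
```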
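The planning sketch referenced above runs a breadth-first search for the shortest obstacle-free route through an occupancy grid derived from a pre-op scan. A 2D slice is used for brevity; a real planner would work in 3D and weight tissue risk, not just distance.

```python
from collections import deque
import numpy as np

def plan_path(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None.
    grid: 2D array where 0 = free lumen, 1 = tissue/obstacle."""
    queue, parent = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:                      # reconstruct the path backward
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nxt[0] < grid.shape[0] and 0 <= nxt[1] < grid.shape[1]
                    and grid[nxt] == 0 and nxt not in parent):
                parent[nxt] = cell
                queue.append(nxt)
    return None

grid = np.zeros((5, 5), dtype=int)
grid[1:4, 2] = 1                             # a wall of tissue forcing a detour
print(plan_path(grid, (0, 0), (4, 4)))
```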
These are several aspects in which navigation during RAS can be improved by implementing AI in the system.
AI Contributes to RAS
AI can significantly improve RAS: shorter procedures, better outcomes, fewer adverse events, and a reduced burden on the surgeon. RSIP Vision has a multidisciplinary team of engineers and clinicians with vast experience implementing state-of-the-art algorithms in medical devices. We specialize in custom-tailored solutions and can help create the ideal solution and speed your time to market.