Computer Vision News - June 2021

Surgical Robotics Research

Three further experiments are conducted to evaluate the efficacy of the representations, measured with the accuracy, precision, recall, and F1 classification metrics.

The first is a skill classification experiment, performed by training a 3-class gradient boosting classifier on the representations obtained previously from the encoder-decoder architecture. The results indicate that the representations retain a significant amount of information about surgeon skill and that performance varies little across tasks, with the highest score on the Suturing task (0.812 ± 0.0228) and the lowest on the Knot-Tying dataset (0.768 ± 0.0303). Since a major portion of the classification error comes from surgeons of "intermediate" skill, this also suggests that the boundary between expert and beginner levels is a grey area.

The second experiment is similar to the first but focuses on gesture classification, again training a 3-class gradient boosting classifier on the representations. This task also yields promising results, with an average accuracy of 0.745 across datasets.

Finally, the authors try a transfer learning-based gesture recognition task, which classifies gestures on two datasets using the representations learned from the remaining one; this is repeated for all three combinations. Although a slight decrease in performance is observed, the experiment shows that the representations are sufficiently robust across multiple tasks and might facilitate transfer learning.

Here we are, at the end of another paper review, which opens up this month's section on surgical robotics. It is a demonstration of the interesting research going on in this field and of a novel application of an encoder-decoder architecture.
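For readers curious about the evaluation protocol described above, here is a minimal sketch of training a gradient boosting classifier on learned representations and then probing cross-dataset transfer. The representations and labels are synthetic stand-ins (the paper's actual encoder-decoder embeddings and the Suturing/Knot-Tying data are not reproduced here), so the numbers it prints are illustrative only.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_representations(n=300, dim=64, n_classes=3):
    # Stand-in for encoder-decoder embeddings: one Gaussian
    # cluster per class (novice / intermediate / expert).
    y = rng.integers(0, n_classes, size=n)
    centers = rng.normal(size=(n_classes, dim))
    X = centers[y] + 0.5 * rng.normal(size=(n, dim))
    return X, y

# Within-dataset evaluation: train/test split on one dataset's embeddings.
X, y = make_representations()
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)

clf = GradientBoostingClassifier(random_state=0)
clf.fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"within-dataset accuracy: {acc:.3f}")

# Transfer-style evaluation: score the same classifier on a second,
# independently generated dataset's embeddings (illustrates the
# protocol, not the paper's result).
X2, y2 = make_representations()
acc_transfer = accuracy_score(y2, clf.predict(X2))
print(f"cross-dataset accuracy: {acc_transfer:.3f}")
```

Because the two synthetic datasets here share no structure, the cross-dataset score is near chance; in the paper, the shared encoder-decoder makes the representations transferable, which is exactly what the experiment measures.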
We hope to keep discovering together more papers using deep learning architectures for novel applications and, as suggested by the authors, we share their curiosity to one day soon see a universal technology that tracks surgical progress in real time with high accuracy, giving feedback on possible mistakes, surgical scene depth, next-gesture suggestions, and more.
