Computer Vision News - September 2022

When AI Learns the Unexpected

…before the output, and create a heatmap of these gradients, which can overlay the input. This heatmap is "hotter" in areas where the gradients and activations are larger, and colder where they are smaller. Intuitively, the hotter areas are the ones most important to the network's prediction - the areas which drive the output and are the mechanism behind the black box.

Consider the aforementioned example - the surgical scene classifier. Using Grad-CAM on this network may point to the temporal point where the surgery failed. Statistical analysis of multiple surgeries may reveal the most common mode of failure, or the anatomical area most prone to failures.

Being able to explain the decision-making process of a neural network, or explainability, is important in several respects:

1. Self-assurance - when developing a classifier, calculating the accuracy metric of choice will convey how well the system predicts, but to further trust the results, understanding what the system learns can increase confidence in the system.

2. Clinical value - learning the exact considerations behind an accurate prediction may lead to clinical understanding. Take the example above: if Grad-CAM exposes that extended suturing often leads to surgical failure, this can have clinical implications - an emphasis on faster suturing, and even more detailed training programs on specific suturing tasks.

3. Regulatory pathway - the ability to explain how the classifier works will simplify the regulatory process, as it is no longer a black box, but rather a logical classifier with semantically explainable working methods.

It is uncertain what explainability will teach us about a finalized neural network. We may find that the network learns from hidden features, or from obvious ones. Either way, RSIP Vision has extensive experience using this technique and can uncover the inner workings of trained networks for your application.
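For readers who want to try the technique themselves, below is a minimal Grad-CAM sketch in PyTorch. It is an illustrative assumption, not the exact pipeline behind the surgical scene classifier discussed above: the torchvision ResNet-18 model, its "layer4" target block, and the random input tensor are all placeholders, and in practice a maintained library such as pytorch-grad-cam would typically be preferred.

```python
# Minimal Grad-CAM sketch (illustrative only).
# Assumptions: a torchvision ResNet-18 stands in for the classifier, and its
# last convolutional block ("layer4") is the target layer; any CNN and an
# appropriately preprocessed image work the same way.
import torch
import torch.nn.functional as F
from torchvision import models


def grad_cam(model, target_layer, image, class_idx=None):
    """Return a heatmap (H x W, values in [0, 1]) for one input image."""
    activations, gradients = {}, {}

    # Hooks capture the target layer's activations and their gradients.
    def fwd_hook(_, __, output):
        activations["value"] = output

    def bwd_hook(_, grad_in, grad_out):
        gradients["value"] = grad_out[0]

    fwd = target_layer.register_forward_hook(fwd_hook)
    bwd = target_layer.register_full_backward_hook(bwd_hook)

    try:
        model.eval()
        logits = model(image)                        # forward pass
        if class_idx is None:
            class_idx = logits.argmax(dim=1).item()  # explain the top prediction
        model.zero_grad()
        logits[0, class_idx].backward()              # gradients w.r.t. the chosen class
    finally:
        fwd.remove()
        bwd.remove()

    acts = activations["value"]                      # shape: (1, C, h, w)
    grads = gradients["value"]                       # shape: (1, C, h, w)

    # Global-average-pool the gradients to get one weight per channel,
    # then form a weighted sum of the activation maps.
    weights = grads.mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))

    # Upsample to the input resolution and normalize to [0, 1] for overlay.
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear",
                        align_corners=False)
    cam = cam - cam.min()
    cam = cam / (cam.max() + 1e-8)
    return cam[0, 0].detach()


if __name__ == "__main__":
    model = models.resnet18(weights=None)            # placeholder classifier
    image = torch.randn(1, 3, 224, 224)              # placeholder input frame
    heatmap = grad_cam(model, model.layer4, image)
    print(heatmap.shape)                             # torch.Size([224, 224])
```

Overlaying the returned heatmap on the original frame (for example with alpha blending in matplotlib) reproduces the "hot/cold" visualization described above, highlighting the regions that most influenced the prediction.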
