Computer Vision News - May 2024

Susceptibility to such attacks is intricately linked to financial and fraudulent incentives, alongside technical vulnerabilities within the clinical infrastructure. Cristina and her peers investigated the vulnerability of algorithms to adversarial attacks in three different medical imaging modalities (radiology, ophthalmology, and histopathology). Their study showed the effect of common algorithm design choices (e.g., pre-training on ImageNet) on adversarial robustness and the importance of establishing realistic and standardized robustness studies (Bortsova, González-Gonzalo, Wetstein et al., 2021). A minimal code sketch of what such an attack can look like follows at the end of this piece.

Multi-stakeholder collaborations for seamless algorithm integration

After testing the mentioned (and other) properties for trustworthiness, the next step is the integration of algorithms into the clinical infrastructure. In her most recent study (González-Gonzalo et al., 2022), Cristina highlights how close collaboration between clinicians, AI engineers, and other relevant stakeholders is crucial for developing algorithms that are properly used and trusted. It is also essential that target users receive adequate in-house training on how to use the algorithms; clinical validations, ideally prospective ones, can serve this purpose. So let’s hope to see many more of those collaborations in the future!

I want to thank Cristina again for sharing her insights on trustworthy AI, congratulate her on finishing her PhD, and wish her the best of luck in the future!

Visual evidence results from Cristina’s publication “Iterative Augmentation of Visual Evidence for Weakly-Supervised Lesion Localization in Deep Interpretability Frameworks: Application to Color Fundus Images”.
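To make the adversarial-attack discussion concrete, here is a minimal sketch of the classic Fast Gradient Sign Method (FGSM) in PyTorch. It is purely illustrative and is not the attack pipeline used in the cited study; the model, images, labels, and the epsilon value are assumptions made for the example.

import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.01):
    # Fast Gradient Sign Method: nudge each pixel one small step in the
    # direction that increases the classification loss.
    # `model`, `images`, `labels`, and `epsilon` are hypothetical inputs
    # for this illustration, not values from the cited study.
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adv_images = images + epsilon * images.grad.sign()
    # Keep the perturbed images in the valid pixel range.
    return adv_images.clamp(0, 1).detach()

A robustness study along the lines described above would then compare a model’s accuracy on clean versus perturbed inputs, for example across design choices such as ImageNet pre-training.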
