Computer Vision News - December 2022
A New Definition of Interpretability

"At AI4media, I'm developing new tools for interpretability and looking into the unsupervised discovery of concepts in the latent space and the generation of causal explanations," she tells us. "I'm also working with IBM Research Europe on a slightly different project related to colorectal cancer patients. The idea is to look into the images of their tissues, or histopathology inputs, and their molecular profiles."

Vincent is currently working on various projects with deep learning for medical imaging. "I'm starting a new project to analyze brain metastases," he reveals. "We have agreed on a definition shared in the technical sciences as we design the systems that will deliver interpretability and accountability. But it's not like we've set the Rosetta Stone. It can be changed."

Vincent continues: "We need a common basis to discuss the methods we need to develop, to get feedback, and to have a loop with all the experts. We may even need to give up on certain collaborations. If there is no understanding, there is no start of a collaboration. Then we want to start fostering new collaborations. We had a group of experts, and we tried to include as many experts as possible, but it can still be challenged and adapted in the future."

In the medical domain, too, there are barriers to overcome. Mara says that clinicians always ask: Are you developing something that will replace me in the long run? That is not at all what they are doing, she counters. Instead, they are trying to build systems that can interact with domain experts, because without that, systems can only communicate with tech people who know machine learning, and that's not the point.

In her day job, Mara is currently working on projects for AI4media and IBM Research Europe.