Computer Vision News - March 2023
UM2ii Lab at U of Maryland

“I went through four years of medical school, five years of residency, and a one-year fellowship, where most of my time was spent learning how to do clinical work,” Paul points out. “Vishwa spent a similar amount of time on the computer science side. We have different perspectives and vantage points and can cover each other’s weaknesses.”

Vishwa agrees: “As a computer scientist, I would go off and work on problems that wouldn’t even be useful from a clinical standpoint.”

“What we do here is work on problems that can be translated and on the problems that prevent AI from being translated,” Paul tells us. “Problems like fairness, trustworthiness, and explainability of AI are things that I care very much about as a practicing radiologist. Vishwa brings a robust set of skills on the technical side. How do we translate things from computer science into the medical world?”

By design, the pair are building a lab that rejects the siloed way of working across these two domains. The collaboration allows them to bounce ideas off each other and discover the problems that really require attention.

[Figure] Recent work comparing Vision Transformers (ViT) with Convolutional Neural Networks (CNN) for radiograph diagnosis. Heatmaps for the ViT (DeiT-B, middle) show more precise localization of radiographic abnormalities than the CNN (DenseNet121, right).