Computer Vision News - August 2021

2) Create visual saliency maps, which are employed to show visual support. The GradCAM method is used to visualize the differences between the HE-labels and the OCT-labels: the former focus on hard exudates (yellow lesions), while the latter center around the fovea. However, the generated heatmaps can be tricky to interpret, especially in the medical field: they estimate where the model is looking when making its predictions, but the appearance of medical images is not as straightforward as that of natural images.

3) Train an unpaired image-to-image translation method. The authors modify the classic CycleGAN by adding a 1x1 convolutional path from the input to the output and by modelling the functions f (transforming DME to no-DME predictions) and g (transforming no-DME to DME predictions) as residual networks. To verify that the CycleGAN works, a model M is trained on both original and translated images and the area under the curve (AUC) is recorded. The results show that the CycleGAN successfully fools prediction model M into thinking the images are from the opposite class, and it continues to improve with successive applications of f and g: the AUC for the input source-target pair (x, y) is 0.804, while the AUC for the successive applications (g⁴(x), f⁴(y)) is 0.106.
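As a reminder of how such saliency maps are produced, here is a minimal NumPy sketch of the standard Grad-CAM computation: each feature-map channel of the last convolutional layer is weighted by the global-average-pooled gradient of the class score, the weighted channels are summed, and a ReLU keeps only the positive evidence. This is a generic illustration of the Grad-CAM formula, not the authors' exact implementation; the function name and array shapes are our own.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from the last conv layer.
    activations: (C, H, W) feature maps
    gradients:   (C, H, W) gradient of the class score w.r.t. the activations
    Returns a (H, W) heatmap normalized to [0, 1]."""
    # Channel importance: global average pooling of the gradients.
    weights = gradients.mean(axis=(1, 2))             # (C,)
    # Weighted combination of the feature maps.
    cam = np.tensordot(weights, activations, axes=1)  # (H, W)
    # ReLU: keep only features with a positive influence on the class.
    cam = np.maximum(cam, 0)
    if cam.max() > 0:
        cam /= cam.max()                              # normalize to [0, 1]
    return cam
```

In practice the low-resolution heatmap is then upsampled to the input image size and overlaid on it, which is where the interpretation difficulties mentioned above arise.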
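The generator modification described above can be sketched as follows: a learned 1x1 convolutional path carries the input directly to the output, while the main body only predicts a residual correction on top of it. This is a schematic NumPy illustration of that design, assuming single-image (C, H, W) tensors; the function names and the identity-like initialization are our own choices, not the paper's code.

```python
import numpy as np

def conv1x1(x, w):
    """Pointwise (1x1) convolution: mixes channels at every pixel.
    x: (C_in, H, W), w: (C_out, C_in) -> (C_out, H, W)."""
    return np.einsum('oc,chw->ohw', w, x)

def residual_generator(x, w_skip, body):
    """Modified CycleGAN generator as described in the article:
    the 1x1 conv path passes the input straight through to the output,
    and the body (a residual network in the paper) adds a correction."""
    return conv1x1(x, w_skip) + body(x)
```

With `w_skip` initialized near the identity and a small body, the generator starts close to an identity mapping, which is a natural inductive bias when the translated image should differ from the input only in disease-related regions.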
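To see why an AUC of 0.106 supports the claim, recall that the AUC is the probability that a randomly drawn positive example scores above a randomly drawn negative one, so a value far below 0.5 means model M ranks the two classes in reverse, i.e. it consistently assigns the translated images to the opposite class. A minimal pairwise sketch of this definition (our own helper, for illustration):

```python
def auc(scores_pos, scores_neg):
    """AUC as the probability that a random positive outscores a random
    negative (ties count as half a win)."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))
```

If the model scores every translated "no-DME" image higher than every translated "DME" image, this pairwise count approaches 0, matching the behavior reported for (g⁴(x), f⁴(y)).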
