Computer Vision News - October 2022

Thanos and Hadrien's representation of what someone's heart is doing now, and what it could be doing if it were healthier, offers something tangible. This work falls under the umbrella of counterfactual analysis. In the field of causality in machine learning, counterfactual questions explore alternative scenarios that might have happened had our actions been different. These scenarios can help inform clinicians by offering possibilities that may be hard for them to visualize.

"We've shown it in echocardiograms with the ejection fraction, but our method isn't only constrained there," Thanos points out. "It could be a question like, for example, had I given the patient a different drug, would they have survived? We chose an echocardiogram with the ejection fraction because we felt it was clear-cut and visually easy for people to understand the power of these kinds of counterfactual questions, which otherwise they wouldn't be able to answer."

Hadrien tells us that the biggest challenge on the technical side came from the fact that the neural network is causal. The network has two branches, the factual and the counterfactual, and to train it, they needed ground truth to compute an error on the output of both branches. For the factual branch, that was not difficult because they had a video input and wanted the network to reconstruct that input. However, for the counterfactual branch, they had no way of generating what did not exist.

"That was the hardest part – we had to find a trick," he says. "The solution we found was not to use videos as our ground truth, but instead use two neural networks to compute the loss on metrics that we wanted the network to learn. We wanted the counterfactual branch to generate a video that looked real and a video that had a different ejection fraction. We could train a network to ensure that the video produced looked like a real echocardiogram."

[Figure: This is a representation of the neural network we designed. It shows the factual and counterfactual flow of information, as well as the "trick" we used to train the counterfactual branch, i.e. the combination of the expert model for the LVEF regression and the GAN discriminator for visual quality.]
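To make the "trick" concrete, here is a minimal sketch of how a two-branch training loss of this kind can be composed: the factual branch gets a plain reconstruction loss, while the counterfactual branch is scored by two auxiliary networks, an expert LVEF regressor and a GAN discriminator. All function names, weightings, and the tiny linear stand-ins for the auxiliary networks are our own illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the two auxiliary networks described in the
# article: an "expert" LVEF regressor and a GAN discriminator. Here they
# are fixed random linear maps, purely to make the loss arithmetic concrete.
W_expert = rng.normal(size=(64,))  # maps video features -> predicted LVEF
W_disc = rng.normal(size=(64,))    # maps video features -> realness logit

def expert_lvef(video_feat):
    """Stand-in expert model: regresses an ejection fraction from features."""
    return float(W_expert @ video_feat)

def discriminator_logit(video_feat):
    """Stand-in GAN discriminator: higher logit means 'looks more real'."""
    return float(W_disc @ video_feat)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def training_losses(input_feat, factual_out, counterfactual_out,
                    target_lvef, w_lvef=1.0, w_gan=1.0):
    """Compose the two-branch loss described in the article (schematically).

    Factual branch: reconstruction error against the input video.
    Counterfactual branch: no ground-truth video exists, so the loss is
    computed through the auxiliary networks instead -- the expert model
    penalizes deviation from the requested LVEF, and the discriminator
    penalizes unrealistic output (non-saturating GAN generator loss).
    """
    recon_loss = float(np.mean((factual_out - input_feat) ** 2))
    lvef_loss = (expert_lvef(counterfactual_out) - target_lvef) ** 2
    gan_loss = -np.log(sigmoid(discriminator_logit(counterfactual_out)) + 1e-12)
    total = recon_loss + w_lvef * lvef_loss + w_gan * gan_loss
    return recon_loss, lvef_loss, gan_loss, total

# Toy example: 64-dim feature vectors standing in for echo videos.
x = rng.normal(size=(64,))
losses = training_losses(x, factual_out=0.9 * x,
                         counterfactual_out=x + 0.1, target_lvef=55.0)
```

The key design point this illustrates is that the counterfactual branch never compares against a video at all: both of its loss terms are computed on properties of the generated output, which is what lets the network be trained despite the missing ground truth.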
