Computer Vision News - October 2022
Poster Presentation — BEST OF MICCAI

D’ARTAGNAN: COUNTERFACTUAL VIDEO GENERATION
Thanos Vlontzos and Hadrien Reynaud

Athanasios “Thanos” Vlontzos has just completed his PhD at Imperial College London and recently joined Spotify as a Research Scientist. Hadrien Reynaud is currently studying for his PhD at Imperial. Their paper explores a novel causal generative model applied to echocardiograms, and they spoke to us about it ahead of their poster session.

The ejection fraction is probably the most important metric in the clinical evaluation of cardiac function because it tells the clinician how much blood the heart pumps into the body for every beat. This paper proposes a novel neural network architecture that generates two videos for a given cardiac ultrasound scan, including one modified to change the ejection fraction.

“We want to let the clinician compare the true echocardiogram from the patient and an echocardiogram of the same patient with a different ejection fraction,” Hadrien tells us. “Same heart, same person, but different ejection fraction, which doesn’t exist. It’s purely hypothetical, but it lets the clinician see if the patient’s heart is in bad condition, good condition, or too good condition, which can also be bad, by comparing the two results visually.”

Thanos adds: “In real life, a cardiologist can’t say, what if the patient had a 45% instead of a 40% ejection fraction? With our model, they can.”

This work aims not to treat but to diagnose the patient. It allows them to see what problems they might have. It is abstract to talk about ejection fraction, but a visual
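For readers unfamiliar with the metric the counterfactual videos vary, here is a minimal sketch of the standard clinical definition of ejection fraction, EF = (EDV − ESV) / EDV × 100, where EDV and ESV are the end-diastolic and end-systolic volumes of the left ventricle. The function name and the volume values are illustrative assumptions, not from the paper.

```python
def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    """Ejection fraction (%) from end-diastolic and end-systolic volumes (mL).

    EF = (EDV - ESV) / EDV * 100
    """
    if edv_ml <= 0:
        raise ValueError("end-diastolic volume must be positive")
    return (edv_ml - esv_ml) / edv_ml * 100.0

# A counterfactual question like the one in the interview: same heart,
# but what if the heart emptied a little more on each beat?
# (Volumes below are hypothetical examples.)
factual = ejection_fraction(edv_ml=100.0, esv_ml=60.0)         # the "40%" case
counterfactual = ejection_fraction(edv_ml=100.0, esv_ml=55.0)  # the "45%" case
print(f"factual EF: {factual:.0f}%, counterfactual EF: {counterfactual:.0f}%")
```

The model in the paper answers the second question visually: it synthesizes the echocardiogram a patient's heart would produce at the counterfactual ejection fraction, rather than just reporting the number.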