Computer Vision News

people’s trust in systems, and systems’ trust toward people. To this end, it conducts user studies, collaborates with social scientists, engages with customer groups, and uses its systems to investigate various trust-related factors. One of its main objectives is to minimize the risk, in the current climate, of marginalizing humans in this equation.

Regular readers may recognize Ilke from her Women in Computer Vision feature in 2017. Since then, her career has gone from strength to strength. As the self-declared “mother of FakeCatcher”, her passion for the technology shines through as she talks about it, and she takes pride in the fact that it belongs to her rather than to any corporation.

“My background is in proceduralization, which is computer graphics, computer vision, and machine learning to find interpretable representations from 3D data,” she explains. “I’ve been looking at priors, distributions, and generative models my entire life – 20 years of research. When deepfakes were rising, I thought, ‘Generative models also generate deepfakes, and generative models have those priors. Can we find some human prior in data to depend on in that generated content?’ I was also working on human understanding in virtual reality. At that point, I was like, there are some human priors, and machine learning can predict humans, so we can build something...”

Ilke then saw an MIT paper about PPG signals, which recovers blood flow from videos, and realized its potential for analyzing deepfakes. She and her colleague Umur Aybars Ciftci began running experiments on the data and proving why PPG works. FakeCatcher soon