“We’ve got a collection of images that were never shown to the network, although similar classes were shown to the StyleGAN during training,” Prajwal tells us. “We took a bunch of unseen images, transformed them into the EEG feature space, and tried to reconstruct them from those EEG features with the StyleGAN. Even though our network had never seen those images, it was able to reconstruct a close approximation of what the EEG might correspond to. That shows the model could, in principle, be deployed live.”

To learn more about this work, visit Posters 3 today at 17:15-19:15. Virtual papers are available via the WACV 2024 online interface.

[Figure: Unseen images mapped to the EEG representation space and reconstructed back using EEGStyleGAN-ADA.]
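The zero-shot evaluation Prajwal describes can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the trained image encoder and EEGStyleGAN-ADA generator are stood in for by hypothetical random linear maps, and all dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
IMG_DIM, EEG_FEAT_DIM, LATENT_DIM = 64 * 64 * 3, 128, 512  # assumed sizes

# Hypothetical stand-ins for the trained networks (random linear maps here):
W_enc = rng.standard_normal((IMG_DIM, EEG_FEAT_DIM)) * 0.01            # image -> EEG feature space
W_gen = rng.standard_normal((EEG_FEAT_DIM + LATENT_DIM, IMG_DIM)) * 0.01  # feature-conditioned generator

def reconstruct_from_unseen(images):
    """Zero-shot pipeline: unseen image -> EEG feature space -> generated image."""
    flat = images.reshape(images.shape[0], -1)
    eeg_feats = flat @ W_enc                                 # project into EEG feature space
    z = rng.standard_normal((images.shape[0], LATENT_DIM))   # GAN latent noise
    out = np.concatenate([eeg_feats, z], axis=1) @ W_gen     # generate conditioned on features
    return out.reshape(-1, 3, 64, 64)

unseen = rng.random((4, 3, 64, 64))   # images never shown during training
recon = reconstruct_from_unseen(unseen)
print(recon.shape)  # (4, 3, 64, 64)
```

The key point of the experiment survives even in this toy form: no image in `unseen` is required at training time, because generation is conditioned only on the shared EEG feature space.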