CVPR Daily - Wednesday

approach within the broader category of Test-Time Adaptation (TTA), has emerged as a promising way to address this challenge and enhance model robustness.

“In TTA, we take a model that has been pre-trained on a source dataset and adapt it to a target dataset at test time, without labels and without images from the source dataset,” David explains. “To do this, there is an architecture named TTT, where we introduce an auxiliary task at test time, which is often self-supervised or unsupervised and requires no labeled data. At training time, we train the model, and at test time, we update the feature extractor through this auxiliary task, which improves the model’s performance.”

David and Gustavo introduce Noise-Contrastive Test-Time Training (NC-TTT), a novel unsupervised TTT technique based on the popular theory of Noise-Contrastive Estimation. With NC-TTT, the model learns to classify noisy views of projected feature maps and then adapts accordingly on new domains. It employs the same Y-shaped architecture as previous works, with one branch dedicated to the main classification task and the other to the auxiliary TTT task.

NC-TTT is particularly suited to classification tasks. However, the principles of TTA can be applied to other vision tasks, particularly those prone to domain shifts caused by noise or other corruptions. Even NC-TTT
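To make the auxiliary task concrete, here is a minimal NumPy sketch of the noise-contrastive idea described above: a small discriminator learns to tell clean projected feature maps apart from noisy views of them, in the style of Noise-Contrastive Estimation. Everything here (the projector, the feature map `phi`, the noise scale, the learning rate) is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(z):
    """Fixed feature map [z, z**2, 1] so a linear discriminator can pick up
    the variance gap between clean and noisy views (illustrative choice)."""
    return np.hstack([z, z ** 2, np.ones((len(z), 1))])

def nce_step(z_clean, z_noisy, w, lr=0.05):
    """One gradient step of a logistic discriminator that labels clean
    projected features 1 and their noisy views 0 (NCE-style objective)."""
    X = np.vstack([phi(z_clean), phi(z_noisy)])
    y = np.concatenate([np.ones(len(z_clean)), np.zeros(len(z_noisy))])
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    grad = X.T @ (p - y) / len(y)
    return loss, w - lr * grad

# Toy "projected feature maps": a linear projection of stand-in backbone features.
feats = rng.normal(size=(64, 16))            # stand-in for backbone features
W = rng.normal(size=(16, 4)) / 4.0           # hypothetical projector
z_clean = feats @ W
z_noisy = z_clean + rng.normal(scale=0.5, size=z_clean.shape)  # noisy views

w = np.zeros(phi(z_clean).shape[1])
initial_loss, w = nce_step(z_clean, z_noisy, w)  # loss at w = 0, i.e. ~log 2
for _ in range(300):
    loss, w = nce_step(z_clean, z_noisy, w)
final_loss = loss
# At test time, this same contrastive loss would instead be backpropagated
# into the feature extractor to adapt it to the new, shifted domain.
```

The key design point is that the auxiliary branch needs no labels at all: the "classes" (clean vs. noisy view) are generated for free, which is what lets the feature extractor be updated on unlabeled target data.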

RkJQdWJsaXNoZXIy NTc3NzU=