CVPR Daily - Tuesday
Continual Adaptation of Visual Representations via Domain Randomization and Meta-Learning

Riccardo Volpi is a research scientist at NAVER LABS Europe in Grenoble, France, having graduated in 2019 from the Italian Institute of Technology in Genoa. He is first author of a paper at the intersection of domain adaptation and continual learning. He speaks to us ahead of his presentation today.

Domain adaptation is one thing, but what happens when a model has to adapt again and again to new domains across its lifespan? Without precautions, it can forget its previous knowledge – in the continual learning literature, this is called catastrophic forgetting. Riccardo and his team propose new protocols to study this continual domain adaptation problem. The setting has been studied before, but the team has come up with more realistic ways of training computer vision models in it.

“We use some very simple benchmarks, like digit classification tasks, because it’s faster to run experiments on these datasets. You can prototype faster and do a lot of ablation studies,” he tells us. “Then we use object classification and the PACS dataset from the domain generalization community to classify objects pictured as sketches, cartoons, paintings, or photos.”

The team also experimented with semantic segmentation, looking at different city and weather conditions. They trained a model on clean streets without clouds or fog, then finetuned it on foggy samples, then on cloudy samples, then on different lighting conditions, such as sunrise and sunset. This is particularly relevant for self-driving cars, where many separate models are trained for different conditions and training them takes a lot of time. Ideally, you would be able to finetune a single model without having to retrain again and again on the same data.
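The sequential protocol described above (train on clean data, then finetune on fog, clouds, and new lighting in turn) can be sketched in a few lines. The toy model, domain values, and helper names below are illustrative assumptions, not the paper's method: a one-parameter regressor is finetuned on a stream of shifted "domains", and its error on the first (clean) domain is tracked to show how naive sequential finetuning leads to catastrophic forgetting.

```python
def finetune(w, data, lr=0.5, steps=20):
    """Gradient descent on squared error toward the current domain's samples."""
    for _ in range(steps):
        grad = sum(2 * (w - x) for x in data) / len(data)
        w -= lr * grad
    return w

def error(w, data):
    """Mean squared error of the scalar model w on a domain."""
    return sum((w - x) ** 2 for x in data) / len(data)

# Toy "domains": each condition is a small, progressively shifted distribution.
domains = {
    "clean":  [0.0, 0.1, -0.1],
    "fog":    [1.0, 1.1, 0.9],
    "clouds": [2.0, 2.1, 1.9],
    "night":  [3.0, 3.1, 2.9],
}

w = 0.0
clean_error = {}
for name, data in domains.items():
    w = finetune(w, data)                     # adapt to the new domain only
    clean_error[name] = error(w, domains["clean"])  # re-check the first domain

print(clean_error)
```

After each finetuning step the model fits the latest domain perfectly but its error on the original clean domain grows, which is exactly the forgetting behavior the paper's continual adaptation protocols are designed to measure and mitigate.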