Computer Vision News - June 2022
BOOTSTRAPPED META-LEARNING
Outstanding Paper Award, ICLR 2022

Sebastian Flennerhag is a Research Scientist at DeepMind. He speaks to us after scooping an Outstanding Paper Award at ICLR 2022 in April for his work on bootstrapped meta-learning.

Learning to learn as an idea has been around for centuries, but early approaches in modern times date back to Jürgen Schmidhuber in the 1980s. It is usually applied by unrolling the update rule you are trying to meta-learn for some number of steps and then immediately evaluating performance.

“In few-shot learning, we do this all the time,” Sebastian explains. “We adapt for several steps and then ask: how did we do? We then optimize for that performance. The limitation is that we have no idea what will come after. If we’d trained for longer, we might have got even better.”

This work aims to develop an algorithm that can automatically tune another learning algorithm as it is being applied. A simple example is online hyperparameter tuning. A more ambitious example would be discovering a learning algorithm directly from data. The core idea is to change the meta-objective – the way you’re optimizing your learning rule.

“I took inspiration from looking at how online optimization works and seeing if we could bring some of those ideas into the deep learning setting,” Sebastian tells us. “The basic idea applies very broadly: you have something to optimize, you’re going to look at how it behaves for a couple of gradient steps, and then optimize it such that if you were to train again, it would be much faster.”
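To make the unroll-then-evaluate idea concrete, here is a minimal sketch of online hyperparameter tuning on a toy problem. Everything in it is illustrative and not from the paper: the quadratic task, the step counts, and the use of finite differences for the meta-gradient are all assumptions chosen for simplicity. The inner update rule (plain SGD) is unrolled for a few steps, performance is evaluated at the end of the unroll, and the learning rate itself is then adjusted to make that unrolled performance better.

```python
def inner_loss(w):
    # Hypothetical toy task: a 1-D quadratic with its minimum at w = 1.
    return 0.5 * (w - 1.0) ** 2

def unroll(lr, w0, steps=5):
    # Unroll the update rule we are meta-learning (here: SGD with learning
    # rate `lr`) for a fixed number of steps, then evaluate performance.
    # This is the standard meta-objective the article describes, with its
    # "no idea what will come after" limitation: we only ever look a few
    # steps ahead.
    w = w0
    for _ in range(steps):
        w = w - lr * (w - 1.0)  # gradient of the quadratic is (w - 1)
    return inner_loss(w)

def meta_step(lr, w0, meta_lr=0.01, eps=1e-4):
    # One step of online hyperparameter tuning: estimate the gradient of the
    # unrolled loss with respect to the learning rate (central finite
    # differences, for simplicity) and update the learning rate itself.
    g = (unroll(lr + eps, w0) - unroll(lr - eps, w0)) / (2 * eps)
    return lr - meta_lr * g

lr, w0 = 0.1, 5.0
for _ in range(50):
    lr = meta_step(lr, w0)
print(lr, unroll(lr, w0))
```

After tuning, the learning rate has grown from its initial 0.1 and the five-step unrolled loss is lower than it was at the start. In a deep learning setting the finite-difference estimate would be replaced by differentiating through the unroll, but the structure of the loop is the same.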