Computer Vision News - June 2023

Alexander Meinke

…that literature, known as adversarial training, can also be applied here. Basically, you run adversarial attacks at train time and teach the model to retain low confidence even on the perturbed out-distribution samples. But verifying that your model really is robust is a computationally intractable problem. So even if you have a perfectly robust model, you can never know that for sure. This is where provable methods come to the rescue. In my work, I managed to come up with such a method, one that is robust, scalable, simple to train, and computationally cheap. And the best part? You get a second guarantee for the price of one. Earlier, I said that one can mathematically show that standard neural nets become asymptotically overconfident on far-away data. Our method provably fixes this, so that as you move far away from the training data, the model's confidence decreases. You could say the model knows that it doesn't know!
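The adversarial-training baseline described above can be sketched in a few lines. The snippet below is a minimal PyTorch sketch, not Meinke's provable method: it only illustrates the heuristic approach the passage contrasts with, namely perturbing out-distribution samples to maximize the model's confidence and then training the model towards a uniform (low-confidence) prediction on those perturbed points. All names (model, optimizer, x_in, y_in, x_out) and the hyperparameters are illustrative assumptions.

```python
# Minimal sketch of adversarial training against overconfidence on
# out-of-distribution (OOD) data. Names and hyperparameters are assumptions.
import torch
import torch.nn.functional as F

def attack_confidence(model, x_out, eps=8/255, step_size=2/255, steps=10):
    """PGD that perturbs OOD inputs to maximise the model's top-class confidence."""
    x_adv = x_out.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        log_probs = F.log_softmax(model(x_adv), dim=1)
        conf = log_probs.max(dim=1).values.sum()          # confidence to push up
        grad = torch.autograd.grad(conf, x_adv)[0]
        x_adv = x_adv.detach() + step_size * grad.sign()  # gradient-ascent step
        x_adv = x_out + (x_adv - x_out).clamp(-eps, eps)  # project to the eps-ball
        x_adv = x_adv.clamp(0.0, 1.0).detach()            # stay in the image range
    return x_adv

def train_step(model, optimizer, x_in, y_in, x_out, lam=1.0):
    # Usual cross-entropy on in-distribution data ...
    loss_in = F.cross_entropy(model(x_in), y_in)
    # ... plus a term that drives the prediction on adversarially perturbed
    # OOD data towards the uniform distribution, i.e. low confidence.
    x_out_adv = attack_confidence(model, x_out)
    log_probs_out = F.log_softmax(model(x_out_adv), dim=1)
    loss_out = -log_probs_out.mean()   # cross-entropy against a uniform target
    loss = loss_in + lam * loss_out
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

As the interview notes, training like this can make the model empirically robust, but it comes with no certificate: verifying robustness after the fact is intractable, which is exactly the gap the provable method is meant to close.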
