Computer Vision News - June 2023

Congrats, Doctor Alexander!

Alexander Meinke recently defended his PhD at the University of Tübingen. In his research he focused on the robust quantification of uncertainty in neural networks. He is now working on AI safety and existential risks associated with AGI. Congrats, Doctor Alexander!

Do neural nets know when they don't know?

In general, the answer is no. How could they? That is to say, there are two problems: 1) a neural classifier will give highly confident predictions on samples that don't even contain any of the classes it was trained on, and 2) you can actually mathematically prove that neural classifiers become more and more confident the further you move away from their training data, instead of less and less confident.

Let's take these issues one by one. For the first problem, we are saying: what if we are trying to classify cats and dogs and suddenly somebody shows the model a toaster? Then the model had better assign low confidence. We can achieve this behavior by simply handing the model a whole bunch of samples that do not contain the classes during training and telling the model to output uniform predictions there. For example, in vision you could scrape a huge set of random images off the internet and use this as your training out-distribution. It turns out that in many cases the neural net will learn to generalize its low confidence to unseen out-distributions as well (a sketch of this kind of training objective appears at the end of this piece).

That all sounds well and good, but of course there are some issues with this. As you might know, neural networks are generally not robust to adversarial perturbations. In this context, that means that even if our neural net correctly assigns low confidence to an unknown class, we can adversarially manipulate this out-distribution sample so that it looks almost identical but suddenly receives very high confidence from the model (also sketched at the end). Of course, there is lots of research on adversarial robustness and the most successful method from
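The following is a minimal sketch, not Alexander's exact method, of how one can train a classifier to output near-uniform predictions on out-of-distribution (OOD) samples in the spirit described above. The function name, the data arguments, and the weight lambda_oe are illustrative assumptions.

```python
# Sketch: outlier-exposure-style training step in PyTorch (illustrative only).
import torch
import torch.nn.functional as F

def train_step(model, optimizer, in_batch, in_labels, out_batch, lambda_oe=0.5):
    """One step: standard cross-entropy on in-distribution data, plus a term
    pushing predictions on OOD data towards the uniform distribution."""
    optimizer.zero_grad()

    # Usual classification loss on the in-distribution batch (e.g. cats vs. dogs).
    logits_in = model(in_batch)
    loss_in = F.cross_entropy(logits_in, in_labels)

    # Cross-entropy between the model's predictions on OOD samples and the
    # uniform distribution over classes: minimized when the model is
    # maximally unsure about, say, the toaster.
    logits_out = model(out_batch)
    log_probs_out = F.log_softmax(logits_out, dim=1)
    loss_out = -log_probs_out.mean()  # average CE against the uniform distribution

    loss = loss_in + lambda_oe * loss_out
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice the OOD batch would come from a large scraped image collection, as mentioned above, interleaved with the ordinary training batches.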

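And here is a minimal sketch of the failure mode Alexander describes: starting from an OOD image on which the model is correctly unconfident, a small perturbation is searched for that makes the model highly confident. This is a plain projected-gradient-style loop; the radius eps, step size, and iteration count are illustrative assumptions, not values from the research.

```python
# Sketch: adversarially inflating the confidence on an OOD input (illustrative only).
import torch
import torch.nn.functional as F

def confidence_attack(model, x_ood, eps=8/255, step=2/255, n_steps=20):
    """Maximize the model's highest softmax probability on an OOD input
    within an L-infinity ball of radius eps around the original image."""
    x_adv = x_ood.clone().detach()
    for _ in range(n_steps):
        x_adv.requires_grad_(True)
        probs = F.softmax(model(x_adv), dim=1)
        # Confidence = highest class probability; we ascend its gradient.
        confidence = probs.max(dim=1).values.sum()
        grad, = torch.autograd.grad(confidence, x_adv)
        with torch.no_grad():
            x_adv = x_adv + step * grad.sign()
            # Project back into the eps-ball and the valid image range.
            x_adv = torch.clamp(x_adv, x_ood - eps, x_ood + eps).clamp(0, 1)
    return x_adv.detach()
```

The resulting image looks almost identical to the original toaster photo, yet the classifier now reports high confidence in one of its known classes, which is exactly why plain low-confidence training is not enough on its own.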