
When we want to make a prediction on a new example x, we just need to compute ker(x_i, x) for every training point x_i; in general, this can be done more efficiently than feeding the example through the network. The most interesting question about this family of models is how close their performance is to that of regular neural nets; this is what we describe next (a short sketch of the prediction step appears at the end of this article).

Experiments

Until this section, everything sounds beautiful: in the near future, we will not need to train networks anymore and we will use NTKs instead. But as we all know, these models will not be adopted if their performance does not match that of regular neural nets. To demonstrate the power of the CNTK, the paper compares the performance of the vanilla CNTK with the corresponding regular CNN architecture. It also evaluates the performance of the CNTK with GAP, which stands for Global Average Pooling. All experiments were evaluated on CIFAR-10. The results can be seen in the table below.

As we can observe, the best kernel, CNTK with GAP, achieves an accuracy of more than 77%. These results set a new benchmark for kernel methods on CIFAR-10. On the other hand, there is still a 5%-6% gap between the performance of the CNTK and that of a regular network, which means that regular networks may still have some advantage. Closing this gap is currently one of the most researched questions in the field: can we close it? Can we prove a lower bound on it? These questions are still open.

Conclusion

There is no doubt that CNTKs are interesting and intriguing tools, and that these models stand at the center of many machine learning studies today. NTKs and CNTKs might revolutionize the field of deep learning: no training, one global optimum, and many theoretical justifications. A follow-up work to this paper has also reported accuracy comparable to AlexNet on CIFAR-10 data, which supports the idea that NTKs are the next big thing. We promise to update you with any progress in this exciting field.
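To make the prediction step from the beginning of this section concrete, here is a minimal sketch of kernel regression in NumPy. It is only an illustration, not the paper's code: the kernel is a simple RBF stand-in for the actual NTK/CNTK kernel (which is far more expensive to evaluate), and all function and variable names are our own.

import numpy as np

def rbf_kernel(x1, x2, gamma=0.1):
    # Stand-in kernel; in the paper this would be the (C)NTK kernel.
    return np.exp(-gamma * np.sum((x1 - x2) ** 2))

def fit_kernel_regression(X_train, y_train, ker, reg=1e-6):
    n = len(X_train)
    # Gram matrix K[i, j] = ker(x_i, x_j) over the training set
    K = np.array([[ker(xi, xj) for xj in X_train] for xi in X_train])
    # Solve (K + reg * I) alpha = y for the dual coefficients
    alpha = np.linalg.solve(K + reg * np.eye(n), y_train)
    return alpha

def predict(x, X_train, alpha, ker):
    # Prediction only needs ker(x_i, x) for every training point x_i,
    # followed by a weighted sum -- no forward pass through a network.
    k_vec = np.array([ker(xi, x) for xi in X_train])
    return k_vec @ alpha

# Toy usage on random data
rng = np.random.default_rng(0)
X_train = rng.normal(size=(20, 5))
y_train = rng.normal(size=20)
alpha = fit_kernel_regression(X_train, y_train, rbf_kernel)
print(predict(rng.normal(size=5), X_train, alpha, rbf_kernel))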
