Computer Vision News - January 2020

On Exact Computation with an Infinitely Wide Neural Net
by Amnon Geifman

Every month, Computer Vision News reviews a research paper from our field. This month we have chosen On Exact Computation with an Infinitely Wide Neural Net. The full paper by authors Sanjeev Arora, Simon S. Du, Wei Hu, Zhiyuan Li, Ruslan Salakhutdinov and Ruosong Wang is here.

In recent years, there has been an explosion in the field of deep learning: in many applications, deep nets are the go-to solution for most tasks. On the other hand, two crucial questions remain open: what theoretical guarantees do deep neural nets enjoy (for optimization, expressiveness and generalization)? And can deep learning be computed and trained more efficiently, for example using kernel methods? Surprisingly, in the last year or two, these two questions have been answered under one assumption: that the width of the network is large enough (or tends to infinity). Under this assumption, recent papers were able to establish several theoretical guarantees for these networks:

1) Convergence to a global minimum of the L2 loss, i.e. zero training loss
2) Tight generalization bounds that hold for any data distribution
3) A kernel-based linear model that explains the network's behavior

The paper we review this month builds on these three theoretical results to represent infinite-width networks as a kernel regression, known in the literature as the NTK (neural tangent kernel), or CNTK for its convolutional counterpart. The paper includes a theoretical analysis of the behavior of such kernels, the derivation of a closed-form expression for convolutional kernels, and comparisons to other state-of-the-art methods.

Getting started
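To make the kernel-regression view concrete, below is a minimal sketch assuming a plain fully connected ReLU architecture rather than the convolutional (CNTK) case treated in the paper. The function names, the choice of three hidden layers, and the small ridge term added for numerical stability are illustrative assumptions of ours, not the authors' implementation; the exact infinite-width correspondence uses plain kernel regression on the training data.

```python
import numpy as np

def relu_ntk(X1, X2, depth=3):
    """Closed-form NTK between rows of X1 and X2 for an infinitely wide
    fully connected ReLU network with `depth` hidden layers.
    A sketch based on the standard arc-cosine recursion; not the paper's code."""
    sig = X1 @ X2.T                                        # Sigma^(0)(x, x')
    norm = np.sqrt(np.outer((X1 ** 2).sum(1), (X2 ** 2).sum(1)))
    ntk = sig.copy()                                       # Theta^(0) = Sigma^(0)
    for _ in range(depth):
        rho = np.clip(sig / np.maximum(norm, 1e-12), -1.0, 1.0)
        theta = np.arccos(rho)
        # Gaussian expectations E[relu'(u)relu'(v)] and E[relu(u)relu(v)],
        # scaled by c_sigma = 2 so the diagonal Sigma(x, x) = ||x||^2 is preserved.
        sig_dot = (np.pi - theta) / np.pi
        sig = norm * (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / np.pi
        ntk = sig + ntk * sig_dot                          # Theta^(h) recursion
    return ntk

def ntk_predict(X_train, y_train, X_test, reg=1e-6):
    """Kernel regression with the NTK: the infinite-width net's predictions
    after training to zero squared loss (small `reg` added for stability)."""
    K_train = relu_ntk(X_train, X_train)
    K_test = relu_ntk(X_test, X_train)
    alpha = np.linalg.solve(K_train + reg * np.eye(len(y_train)), y_train)
    return K_test @ alpha

# Toy usage on random data, purely illustrative.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(100, 10)), rng.normal(size=100)
X_test = rng.normal(size=(20, 10))
print(ntk_predict(X_train, y_train, X_test).shape)        # (20,)
```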
