Computer Vision News - June 2016
Tool

This simple process is just one example of how tensor decomposition provides an efficient way to explore your data, making it easy to understand its structure and content.

Now let's quickly explore a second use for tensor decompositions: classifying the face images into groups, one group per subject. For ease of display, and to fit this limited space, we randomly select 10 subjects from the database. In addition, we assume that the left-light and right-light images (the shadowed images) are "noisier" and remove them. That leaves 9 images per subject, giving a tensor of size 243x320x90, which we decompose using the same settings as above.

Taking three values per image - one score from each factor (A13, A12 and A11) - and feeding them into a simple SVM classifier, trained and evaluated with leave-one-out cross-validation, yielded an accuracy above 90%. But even before that, one can again take the exploratory approach and simply inspect the values of the A13, A12 and A11 vectors of Factors 1, 2 and 3 respectively by plotting them on a 3D graph. This reveals a clear pattern of 10 groups, one per subject: each point in the figure corresponds to one image in the database. This is also known as a dimensionality reduction procedure.

Last but not least, how can we write about computer vision without mentioning deep learning? Indeed, researchers today suggest using tensor decompositions to train and analyze deep neural networks. For this you can read:
• Novikov, Alexander et al.: "Tensorizing Neural Networks", in Advances in Neural Information Processing Systems (NIPS), 2015
• Cohen, Nadav, Or Sharir and Amnon Shashua: "On the Expressive Power of Deep Learning: A Tensor Analysis", 2015
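The experiment above can be sketched in a few lines of Python. This is a minimal, self-contained illustration using synthetic stand-in data rather than the real face database; all sizes, names and parameters are assumptions for the sketch. The three per-image scores are taken here from an SVD of the tensor's image-mode unfolding, standing in for the three factor scores (A13, A12, A11) of the decomposition described in the article.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
n_subjects, imgs_per_subject = 10, 9     # 10 subjects x 9 images = 90 images
h, w = 24, 32                            # small stand-in for 243x320 frames

# Build an (h, w, 90) tensor: each subject is a distinct base pattern plus
# noise, mimicking the 9 non-shadowed images kept per subject.
images, labels = [], []
for s in range(n_subjects):
    base = rng.normal(size=(h, w))
    for _ in range(imgs_per_subject):
        images.append(base + 0.1 * rng.normal(size=(h, w)))
        labels.append(s)
X = np.stack(images, axis=-1)            # tensor of size 24 x 32 x 90
labels = np.array(labels)

# Image-mode unfolding (one row per image), then keep 3 components: these
# three scores per image stand in for the three factor scores in the text.
Xu = X.reshape(h * w, -1).T              # shape (90, 768)
U, S, _ = np.linalg.svd(Xu - Xu.mean(axis=0), full_matrices=False)
scores = U[:, :3] * S[:3]                # shape (90, 3)

# Simple SVM evaluated with leave-one-out cross-validation, as in the text.
acc = cross_val_score(SVC(kernel="linear"), scores, labels,
                      cv=LeaveOneOut()).mean()
print(f"Leave-one-out accuracy: {acc:.2f}")
```

On real face images one would stack the 243x320 frames into the tensor instead of the synthetic patterns; when the subjects are well separated, the three scores per image already cluster by subject, which is exactly what the 3D scatter plot in the article visualizes.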