Computer Vision News - August 2018
In the figure below, the authors also evaluate how much using the taxonomy to choose transfer learning networks matters, compared with selecting one at random. The performance of their taxonomy is measured against random transfer policies, and the taxonomy outperformed all other connectivities by a large margin.

Conclusion: The article asks whether there is an underlying affinity between visual tasks, or whether they are unrelated. For instance, can a network that computes an image's surface normals be used to estimate that image's depth? Intuitively, the answer is yes, implying an affinity, i.e., a similarity in internal structure, between neural networks trained for visual tasks. Awareness and understanding of this underlying structural similarity is the foundation of transfer learning and can help identify redundancy between tasks, i.e., cases where one transfer is equivalent to another. Source code and a demonstration can be found here.
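To make the transfer idea concrete, here is a minimal, purely illustrative sketch of reusing a frozen encoder trained for one task (say, surface normals) to solve a new task (say, depth) by fitting only a small readout head. All names, weights, and data here are hypothetical placeholders, not the authors' actual networks:

```python
import numpy as np

# Illustrative sketch only: the "encoder" weights are random stand-ins
# for a network pretrained on a source task (e.g. surface normals).
rng = np.random.default_rng(0)

def encoder(images, W):
    """Frozen feature extractor shared across tasks (hypothetical)."""
    return np.tanh(images @ W)

# Fake data: 100 "images" of 64 features; depth targets for the new task.
images = rng.normal(size=(100, 64))
W_pretrained = rng.normal(size=(64, 16))   # frozen source-task weights
depth = rng.normal(size=(100, 1))          # targets for the new task

# Transfer: keep the encoder frozen and fit only a lightweight readout
# head on the target task (here by ordinary least squares).
features = encoder(images, W_pretrained)
head, *_ = np.linalg.lstsq(features, depth, rcond=None)
pred = features @ head
```

If the two tasks share internal structure, the frozen features carry most of the useful signal and the cheap head suffices; if they are unrelated, the transfer performs poorly, which is exactly the affinity the taxonomy quantifies.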