Computer Vision News - July 2019

Method

DNN architecture for visual feature representation

To represent an image in an informative way, the authors use a pre-trained classification model with the VGG-19 architecture. This network was trained to classify the ImageNet dataset, which spans 1,000 object categories. A trained classification model such as VGG-19 represents an image in a hierarchical manner that is largely invariant to shifts and rotations of the objects; in turn, the reconstruction model inherits this invariance at inference time. VGG-19 consists of 16 convolutional layers followed by 3 fully connected layers. The output of each layer is taken immediately after the weight multiplication (before rectification) and concatenated into a single vector. This vector, called the visual feature vector, is used later in the training and reconstruction process.

fMRI decoding

The authors construct a decoding model that predicts the visual feature vectors from the brain signal using sparse linear regression (SLR). The rationale is that SLR automatically selects the voxels most informative for prediction, which also helps cope with the high dimensionality of the fMRI signal.
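The feature-extraction step can be sketched with a toy numpy network (the layer sizes and weights here are illustrative, not the real VGG-19): each layer's output is collected right after the weight multiplication, before the ReLU, and the collected outputs are concatenated into one visual feature vector.

```python
import numpy as np

def forward_collect(x, weights):
    """Pass x through a stack of linear+ReLU layers, collecting each
    layer's PRE-activation output (before rectification), as described
    for the VGG-19 feature extraction."""
    features = []
    h = x
    for W in weights:
        z = W @ h            # output right after the weight multiplication
        features.append(z)   # collected before the ReLU
        h = np.maximum(z, 0.0)
    return np.concatenate(features)  # the "visual feature vector"

# Toy stand-in for the real network: two layers, 16 -> 8 -> 4.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((8, 16)), rng.standard_normal((4, 8))]
x = rng.standard_normal(16)
v = forward_collect(x, weights)
print(v.shape)  # (12,) = 8 + 4 concatenated pre-activations
```

In the actual pipeline the same concatenation is applied to the pre-rectification outputs of all VGG-19 layers, so the feature vector mixes low-level (early-layer) and high-level (late-layer) information.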

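A minimal sketch of the decoding idea, assuming an L1-regularised least-squares model (Lasso, solved here by iterative soft-thresholding) as a stand-in for the SLR variant the authors use: the L1 penalty drives most voxel weights to exactly zero, so the regression itself performs the voxel selection.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding operator, the proximal map of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_linear_regression(X, y, lam=0.1, n_iter=2000):
    """L1-regularised least squares solved by ISTA.
    X: (n_trials, n_voxels) fMRI responses; y: (n_trials,) one
    visual-feature dimension. Returns a sparse weight vector."""
    w = np.zeros(X.shape[1])
    L = np.linalg.norm(X, 2) ** 2       # Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y)        # gradient of 0.5*||Xw - y||^2
        w = soft(w - grad / L, lam / L) # gradient step + shrinkage
    return w

# Synthetic example: only 3 of 50 "voxels" actually drive the feature.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 50))      # 200 trials, 50 voxels
true_w = np.zeros(50)
true_w[[3, 17, 41]] = [2.0, -1.5, 1.0]
y = X @ true_w + 0.01 * rng.standard_normal(200)
w = sparse_linear_regression(X, y, lam=5.0)
print(np.count_nonzero(np.abs(w) > 0.1))  # 3 — only the true voxels survive
```

The fitted weights are near-zero everywhere except the three informative voxels, which is exactly the automatic feature selection the article attributes to SLR; the real decoder fits one such regression per visual-feature dimension.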
RkJQdWJsaXNoZXIy NTc3NzU=