Computer Vision News - September 2018

Quantitative Results: The authors evaluated several configurations for training the network:
(1) Optimization method: SGD vs. Adam.
(2) The effect of batch normalization (BN) on network performance, comparing the network with BN included in the blue blocks to one without it.
(3) Residual learning (labeled 'with RL'), their proposed approach in which the model is trained to output the noise, vs. plain discriminative training (labeled 'without RL'), in which the model is trained to output the denoised image x̂ directly.

As you can see in the figure above, the best performance and the fastest training were achieved using Adam, with the network trained to output the noise (residual learning) and with batch normalization included. The figure also clearly shows how important batch normalization is for residual learning, which gains considerably in both performance and convergence speed when BN is present. The gap in 'with RL' performance between 'with BN' and 'without BN' is especially large, which is consistent with the authors' claim that residual learning and batch normalization benefit each other.

The following MATLAB snippet demonstrates denoising an image with a pretrained DnCNN network using the Image Processing Toolbox. The original snippet assumes a network net is already in the workspace; the denoisingNetwork call is added here so the example runs end to end:

% Load a pretrained DnCNN denoising network (added for completeness;
% the original snippet assumes net already exists in the workspace).
net = denoisingNetwork('DnCNN');

% Read a test image and corrupt it with zero-mean Gaussian noise (variance 0.01).
I = imread('cameraman.tif');
noisyI = imnoise(I, 'gaussian', 0, 0.01);

figure
imshowpair(I, noisyI, 'montage');
title('Original Image (left) and Noisy Image (right)')

% Denoise with the pretrained network and display the result.
denoisedI = denoiseImage(noisyI, net);
figure
imshow(denoisedI)
title('Denoised Image')
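Conceptually, residual learning means the denoised image is recovered by predicting the noise and subtracting it from the noisy observation. Here is a minimal sketch of that step, assuming a hypothetical trained residual network residualNet (not part of the original snippet) whose output is the estimated noise:

% Residual learning: the network estimates the noise v, not the clean image.
% residualNet is a hypothetical trained network, used here for illustration only.
y = im2single(noisyI);            % noisy observation y = x + v
vHat = predict(residualNet, y);   % network output: estimated noise R(y)
xHat = y - vHat;                  % denoised estimate x^ = y - R(y)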
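The intermediate blocks discussed above pair a convolution with batch normalization and a ReLU, and the winning configuration in the comparison trains with Adam. A hedged sketch of one such block and the corresponding training options in MATLAB follows; the filter count matches common DnCNN configurations, while the learning rate and epoch count are illustrative rather than the paper's exact settings:

% One intermediate DnCNN-style block: Conv + BN + ReLU.
% 64 filters of size 3x3, as in common DnCNN configurations.
block = [
    convolution2dLayer(3, 64, 'Padding', 'same')
    batchNormalizationLayer
    reluLayer
];

% Training with Adam, the best-performing optimizer in the comparison.
opts = trainingOptions('adam', ...
    'InitialLearnRate', 1e-3, ...   % illustrative learning rate
    'MaxEpochs', 50, ...            % illustrative epoch count
    'Shuffle', 'every-epoch');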