textured representations of the image, so they try to reduce as much as possible the shape information and increase the texture information. At this point we could say there was a hypothesis that shape is important, but what happens if we completely ignore shape and all the network has is these little gray-level variations? If you look at the paper, some of these images look really funny actually. They look just like noise, essentially. You have to look very carefully at them to see that they are actually a scan of the brain. What is really interesting about that, in our opinion, is that you can train these really accurate algorithms on something that to the human eye can look just like noise.”

The textural map is defined by first choosing a value for the pixel radius. Ahmed tells us they had an unexpected discovery when they noticed that changing the value of the radius produced a completely different pattern as a textural map. This makes sense, but the interesting thing is that the performance of the neural network actually depends on it (illustrated in the sketch below).

Looking ahead to next steps, Ahmed says they have plenty of ideas. They want to find out whether their findings are specific to the local binary pattern algorithm or apply to texture in general. He thinks the local binary pattern algorithm is a good choice of textural-representation algorithm but would like to explore other texture-analysis algorithms. Another idea is to go in the completely opposite direction.

Figure: A simple image summarizing the explanation usually given for why CNNs are effective; low-level shape features are combined in increasingly complex hierarchies until an object can be classified or detected (referred to as the 'shape hypothesis' in recent literature). We, however, think that subtle visual textures, and not just shape information, play an important role, and we study this in a neuroimaging setting.
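To make the role of the radius concrete, here is a minimal sketch of computing a local binary pattern (LBP) textural map at different radii, assuming the scikit-image implementation (skimage.feature.local_binary_pattern). The synthetic input image, the parameter values, and the choice of 8 sampling points per unit radius are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch: how the chosen pixel radius changes a local binary pattern
# (LBP) textural map. Assumes scikit-image and NumPy are installed; the random
# image below is a stand-in for a gray-level brain scan, not real data.
import numpy as np
from skimage.feature import local_binary_pattern

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)  # synthetic gray-level image

for radius in (1, 2, 3):
    n_points = 8 * radius  # assumed convention: 8 sampling points per unit radius
    lbp_map = local_binary_pattern(image, n_points, radius, method="uniform")
    # Each radius produces a visibly different textural map from the same image,
    # which is the parameter the network's performance was found to depend on.
    print(f"radius={radius}: {len(np.unique(lbp_map))} distinct LBP codes in the map")
```

Training the same network on maps computed at different radii is one straightforward way to probe how strongly its accuracy depends on this choice.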
