Computer Vision News - June 2018
learned: what has the learning process taught each individual neuron to represent? We will demonstrate a number of interpretive approaches you can use as building blocks towards this objective.

A semantic dictionary, like a regular bilingual dictionary, is a list of pairs (neuron, iconic visualization) that shows what each individual neuron of each layer detects. This lets us better grasp the underlying mathematical objects of a deep network's hidden layers, which is the first step towards interpretability. Neuron activations now map to iconic visualizations instead of abstract indices, and many of these visualizations evoke natural language concepts such as "floppy ear", "dog snout" or "fur".

We can use the semantic dictionary as a building block to construct an activation vector: a weighted composite of semantic dictionary icons that visualizes what a specific combination of firing neurons represents. We can group neurons to construct composite activation vectors along different conceptual axes.

Spatial attribution of neurons: this is usually done with saliency maps. To produce a saliency map, we take a label and go backward through the network to the image, using a simple heatmap to show which pixels of the input image contributed most to the classification. Based on the saliency map and the semantic dictionary, we then replace image areas with their activation vectors. Because these are visualizations rather than a simple heatmap, they can represent the relevance of an area to multiple classifications (e.g., dog, snout, white, etc.).

The figure below demonstrates interpretive visualization using the image of a labrador retriever and a tiger cat. As the image is processed through successive layers of the network (from left to right), its resolution is repeatedly reduced, so we get increasingly coarse grids.
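To make the saliency idea concrete, here is a minimal sketch of the "go backward from a label to the pixels" step. It uses a hypothetical one-layer linear classifier so the gradient is analytic and the code stays self-contained; in a real deep network the same per-pixel gradient would be obtained by backpropagation through all layers, and the class count, image size, and weights below are made-up illustration values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny linear classifier over 8x8 grayscale images:
# score_c = W[c] . x, where x is the flattened image.
n_classes, h, w = 3, 8, 8
W = rng.normal(size=(n_classes, h * w))

def saliency_map(W, image, class_idx):
    """Per-pixel |d(class score) / d(pixel)| for the chosen class.

    For a linear model, the gradient of the class score with respect
    to the input is simply that class's weight row; a deep network
    would compute the same quantity via backpropagation.
    """
    grad = W[class_idx]                  # d(score)/d(x), linear case
    return np.abs(grad).reshape(h, w)    # heatmap: one value per pixel

image = rng.random(h * w)
heat = saliency_map(W, image, class_idx=1)
print(heat.shape)  # (8, 8)
```

Pixels with large heatmap values are those whose change would most affect the class score, which is exactly what the article's orange and blue overlays visualize for the two labels.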
Each grid cell's color is determined by overlaying two saliency maps: orange for 'labrador retriever' and blue for 'tiger cat'. The activation vector visualization (built from semantic dictionary icons) for each grid cell is then overlaid on top of the saliency maps, with its size representing the activation magnitude.
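The semantic-dictionary lookup behind each grid cell can be sketched as follows: take the cell's activation vector (one value per neuron), pair each neuron with its icon, and keep the strongest entries. The icon names and activation values below are made up for illustration; in the real tool each icon is a rendered feature visualization of that neuron.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical activations of one spatial grid cell in a hidden layer
# (one value per neuron/channel), with placeholder icon names standing
# in for the neurons' rendered feature visualizations.
activations = rng.normal(size=6)
icons = ["floppy ear", "dog snout", "fur", "whiskers", "stripes", "paw"]

def semantic_dictionary(activations, icons, top_k=3):
    """Pair each neuron's icon with its activation and return the
    top_k entries by magnitude, as a semantic-dictionary display would."""
    order = np.argsort(-np.abs(activations))[:top_k]
    return [(icons[i], float(activations[i])) for i in order]

for icon, value in semantic_dictionary(activations, icons):
    print(f"{icon}: {value:+.2f}")
```

Sizing each icon by its activation magnitude, as in the figure, then turns this ranked list into the weighted composite the article calls an activation vector.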