ECCV 2016 Daily - Friday
In order to generate explanations, Lisa Anne's team proposes a new loss function for training text generation models. Instead of simply minimizing cross-entropy loss between ground-truth and predicted words, as is usually done when training text generation models, they include a "discriminative" loss. During training, sentences are sampled and given a reward based on how class discriminative they are: more class-discriminative sentences receive a higher reward. Because the loss operates over sampled sentences, REINFORCE [2] is employed to backpropagate through the sampling mechanism.

Results on a fine-grained bird classification dataset demonstrate the effectiveness of the proposed approach. The explanation model is evaluated on a variety of metrics and is shown to outperform other baselines, such as a normal description model. Qualitatively, the generated explanations discuss more class-discriminative attributes than descriptions do. For example, in the image below, a description model mentions attributes which are common across bird classes (e.g., "black"), whereas the explanation model mentions attributes which are specific to the White Necked Raven (e.g., "white nape").

As the community continues to use deep networks, providing explanations for network predictions is becoming more important. Lisa Anne and her team envision that future explanation models will provide more insight into the exact mechanisms of deep networks and will be important for the adoption of sophisticated AI systems.

[1] Biran, O., McKeown, K.: Justification narratives for individual classifications. In: Proceedings of the AutoML workshop at ICML 2014.
[2] Williams, R.J.: Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning (1992).
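To make the discriminative loss concrete, here is a minimal toy sketch of the REINFORCE update it relies on: sample words from the generator, score the sample with a reward, and weight the log-probability gradient by that reward. Everything here is a simplifying assumption for illustration: the "generator" is a single softmax over a four-word vocabulary rather than an LSTM sentence model, and `discriminative_reward` is a stub standing in for the paper's learned class-discriminativeness score.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB = ["bird", "black", "white", "nape"]  # hypothetical toy vocabulary
logits = np.zeros(len(VOCAB))               # trivial "generator": one softmax over words

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def discriminative_reward(words):
    # Stub reward: class-specific attributes ("white nape") score higher
    # than attributes shared across classes ("black", "bird").
    return sum(w in {"white", "nape"} for w in words) / len(words)

def reinforce_grad(logits, n_words=3, baseline=0.0):
    """One-sample REINFORCE estimate of d(expected reward)/d(logits)."""
    p = softmax(logits)
    idx = rng.choice(len(VOCAB), size=n_words, p=p)     # sample a "sentence"
    r = discriminative_reward([VOCAB[i] for i in idx])  # reward the sample
    grad_logp = np.zeros_like(logits)
    for i in idx:
        grad_logp[i] += 1.0   # grad of log softmax_i w.r.t. logits
        grad_logp -= p        # is one_hot(i) - p; summed over sampled words
    return (r - baseline) * grad_logp

# Gradient ascent on expected reward shifts probability mass
# toward the class-discriminative words.
for _ in range(2000):
    logits += 0.3 * reinforce_grad(logits)
```

After training, the softmax places most of its mass on "white" and "nape"; this mirrors how the discriminative loss nudges the real sentence generator toward class-specific attributes, while the cross-entropy term (omitted here) keeps the sentences fluent.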