Computer Vision News - April 2020

A Step Towards Explainability

Separation in such a clear way can also be seen in other variables. Exercise-induced angina shows a clear separation, although not in the expected direction: 'no' (blue) increases the probability. Men, shown in red, have a reduced chance of heart disease in this model. Why is this? Domain knowledge tells us that men have a greater chance. Being able to explain how the model works is invaluable, but we can also do better than that. I will conclude with a small example of how one can take an even more granular look at the data: exploring how different variables affect individual patient outcomes. Let's do that by creating a function that uses the TreeExplainer model:

```python
def heart_disease_risk_factors(model, patient):
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(patient)
    shap.initjs()
    return shap.force_plot(explainer.expected_value[1],
                           shap_values[1], patient)

data_for_prediction = X_test.iloc[1, :].astype(float)
heart_disease_risk_factors(model, data_for_prediction)
```

For this specific patient, the prediction is 36% (compared with the baseline of 58.4%). Which features are helping to minimise the prediction? A reversible thalassemia defect, not having a flat st_slope, and a major vessel.

All of the above shows that, using a machine learning model, not only can we predict heart disease in patient groups, but we can also use explainability techniques to investigate further how those models work on the given data, even with a considerably small dataset. That is invaluable to the research community, and it is an increasingly important ongoing field. If you want to experiment more, have a look at datasets and books such as the ones referred to in this article! Most of all: enjoy, share and keep well! Till next time!

All images are Public Domain except if described otherwise.
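To make the idea behind those per-feature contributions concrete, here is a minimal, self-contained sketch (independent of the shap library, and not from the original article) that computes exact Shapley values for a toy risk function by enumerating all feature coalitions. The function `toy_risk`, the patient values and the baseline are invented purely for illustration. The key property it demonstrates is additivity: the contributions sum to the difference between the patient's prediction and the baseline, which is exactly what the force plot visualises.

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley values by brute-force coalition enumeration.

    Features outside a coalition S are held at their baseline value;
    each feature's contribution is its weighted marginal effect over
    all coalitions of the remaining features.
    """
    n = len(x)

    def value(subset):
        z = list(baseline)
        for i in subset:
            z[i] = x[i]
        return predict(z)

    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for k in range(n):
            for S in combinations(others, k):
                # Shapley weight: |S|! * (n - |S| - 1)! / n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi += w * (value(S + (i,)) - value(S))
        phis.append(phi)
    return phis

# Hypothetical linear risk score over three features (illustrative only)
def toy_risk(z):
    return 0.2 * z[0] + 0.3 * z[1] + 0.1 * z[2]

baseline = [0.0, 0.0, 0.0]
patient = [1.0, 2.0, 3.0]
phis = shapley_values(toy_risk, patient, baseline)

# Additivity: contributions sum to prediction minus baseline
assert abs(sum(phis) - (toy_risk(patient) - toy_risk(baseline))) < 1e-9
```

This brute-force version is exponential in the number of features, which is why the article uses `shap.TreeExplainer`: it exploits the tree structure of the model to compute the same values efficiently.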

RkJQdWJsaXNoZXIy NTc3NzU=