Explainability in Artificial Intelligence
At the moment, one approach in machine learning, called deep learning, is becoming increasingly popular. Deep learning is a family of architectures inspired by the functionality of the human brain; it uses deep layers to extract information from given data and learn its characteristics, so that it can pass judgment on unseen but similar data. It has shown its robustness in numerous applications such as image analysis, speech recognition, natural language processing, sound and acoustic classification and detection, and many more. However, the inner learning behavior of these architectures is still poorly understood, which raises the question: why should we trust such networks over our human experts and shallow machine learning algorithms, which can actually explain their decisions and how they arrived at their judgments? This talk will briefly cover the explainability problem, expectations, and approaches used to interpret models.
Anahid Naghibzadeh-Jalali, Junior Scientist