A key challenge of Artificial Intelligence (AI) is that the acceptance of Machine Learning (ML) algorithms is limited by their inability to "explain" their decisions in a way that humans can understand.
Researchers and policymakers are therefore calling for algorithmic accountability and, in particular, for Explainable AI (XAI). However, how to achieve explainability in different usage contexts and for different target groups, especially non-experts, remains poorly understood.
In this project, we compare the interpretability of various visual explanation strategies for ML algorithms in controlled user experiments and investigate the relationship between explainability, user acceptance, and user behavior.
There is great international interest in the topic of Explainable AI, but many open questions remain regarding its real-world implementation and its impact on user acceptance. We are therefore confident that the results of this project will be relevant both to the international research community and to industry.