Explainable AI

Lecturer: Marc Streit


Visualization can not only help us gain insight into our data or present it more clearly, but is also a powerful tool for explaining complex machine learning algorithms. This lecture on Explainable AI (xAI) gives an overview of WHY we visualize, WHO uses xAI, WHAT to visualize, HOW to visualize, and WHEN to visualize.


Goals of Explainable AI

WHY Visualize: Interpretability & Explainability; Debugging & Improving Models; Comparing & Selecting Models

WHO Uses xAI: Model Developers & Builders; Model Users; Non-experts

WHAT to Visualize in xAI: Computational Graph & Network Architecture; Learned Model Parameters; Individual Computational Units; Neurons in High-dimensional Space; Aggregated Information

HOW to Visualize in xAI: Node-link Diagrams for Network Architecture; Dimensionality Reduction & Scatter Plots; Line Charts for Temporal Metrics; Instance-based Analysis & Exploration; Interactive Experimentation; Algorithms for Attribution & Feature Visualization

WHEN to Visualize in xAI: During Training; After Training

Research Directions & Open Problems

Selected Current Research and Case Studies
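To give a flavor of the "Dimensionality Reduction & Scatter Plots" technique listed above: high-dimensional neuron activations can be projected to 2-D (here with PCA via SVD) and then drawn as a scatter plot. This is a minimal illustrative sketch on synthetic data, not material from the course; all names and numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
# 200 hypothetical activation vectors of 50 hidden units,
# drawn as two loose clusters (e.g., two input classes)
activations = np.vstack([
    rng.normal(0.0, 1.0, size=(100, 50)),
    rng.normal(3.0, 1.0, size=(100, 50)),
])

def pca_2d(X):
    """Project the rows of X onto their first two principal components."""
    Xc = X - X.mean(axis=0)                       # center the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T                          # (n, 2) scatter-plot coordinates

coords = pca_2d(activations)
print(coords.shape)  # (200, 2)
```

The resulting 2-D coordinates would then be rendered as a scatter plot (e.g., colored by class) so that cluster structure in the network's internal representation becomes visible.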

Criteria for Evaluation

Written exam (oral exam in exceptional cases)


Methods

Slide presentation combined with case studies and in-class exercises.

Study Material

Fred Hohman et al., Visual Analytics in Deep Learning: An Interrogative Survey for the Next Frontiers.

Additional study material will be provided throughout the semester.