A journey in the land of explainable AI
(beautiful landscapes, horrendous pits and everywhere in between)
Welcome to the companion website for the tutorial A journey in the land of explainable AI (beautiful landscapes, horrendous pits and everywhere in between).
It will take place at ECAI 2024, in Santiago de Compostela, on the afternoon of 19 October 2024.
You may go to the material page to start your journey straight away, or to the about page to get more information.
AI software - both symbolic and connectionist - is becoming pervasive. However, its very dissemination raises various questions from the perspective of software engineering. Indeed, far from being a magic wand that solves all problems, as it is sometimes presented, AI remains a piece of software that must be designed, tested (which includes debugging), integrated within broader systems and deployed. We should therefore expect the same level of quality assessment for AI as for any other software.
Part of the answer lies in the field of explainable AI (xAI). Multiple methods have been proposed since the inception of AI, from exposing counterexamples via UNSAT cores in SMT solving to the diagnosis of models. The explosion of deep learning has brought several new technical and conceptual challenges. How do we interpret and explain the decisions of AI software? How do we debug a faulty model during development and in production? Most approaches to neural network interpretability can be framed within the feature-attribution framework: methodologies that aim to outline which parts of an input are responsible for a particular prediction. Such methods are easy to deploy and usually provide visually appealing results. They also present some limitations: brittleness, sensitivity to manipulation and a lack of generalization abilities. Another approach consists in designing interpretable models from the ground up, in particular case-based neural networks. This tutorial will provide a panorama of those approaches and their limitations. It is aimed at people interested in the field of explainable AI. Basic knowledge of artificial intelligence and deep learning is expected.
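To give a flavour of what feature attribution looks like in practice, here is a minimal sketch of gradient-based saliency, one of the simplest attribution methods. It assumes PyTorch and a toy classifier; it is illustrative only and not the tutorial's own code.

```python
# Minimal gradient-based saliency sketch (assumes PyTorch; toy model for illustration).
import torch
import torch.nn as nn

# A small MLP standing in for any differentiable classifier.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))
model.eval()

x = torch.randn(1, 4, requires_grad=True)  # one input with 4 features
logits = model(x)
target_class = logits.argmax(dim=1)

# Backpropagate the target logit to the input; the gradient magnitude is a
# crude estimate of how much each feature influenced the prediction.
logits[0, target_class].backward()
attribution = x.grad.abs().squeeze()

print("Predicted class:", target_class.item())
print("Feature attributions:", attribution.tolist())
```

The same recipe, with more sophisticated gradient aggregation or perturbation schemes, underlies many of the attribution methods discussed in the tutorial, along with the limitations (brittleness, sensitivity to manipulation) mentioned above.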
Once you are done, please fill in the feedback form!