Special Course: Explainable AI - Counterfactuals and Feature Attributions for Data Science
Page last edited by Lei You (leiyo) 21/11-2024
Course Description
This course delves into the burgeoning field of Explainable AI (XAI), providing participants with a robust understanding of the principles and techniques that make machine learning models transparent and interpretable. Focusing on counterfactual explanations and feature attributions, the course bridges the gap between complex model performance and human interpretability. Participants will gain hands-on experience with cutting-edge tools and methodologies, ensuring they can confidently apply XAI techniques to real-world data science problems.
The course runs from June to November every year.

Lecturer
Lei You, PhD, Assistant Professor in Applied Mathematics

Textbook
Bach F. Learning Theory from First Principles. MIT Press; 2024 Dec 24.

Learning Objectives
By the end of this course, participants will be able to:
1. Demonstrate understanding of the foundational concepts and importance of explainable artificial intelligence (XAI).
2. Explain the necessity of transparency and interpretability in machine learning models.
3. Understand the theory behind counterfactual explanations.
4. Apply counterfactual explanation techniques to various machine learning models to illustrate how
changes in input can affect outputs.
5. Analyze machine learning models using different feature attribution methods.
6. Use feature attribution tools in explainable AI to determine and explain the impact of individual
features on model predictions.
7. Derive metrics and standards for assessing model transparency with respect to specific use cases.
8. Assess and compare the transparency levels of different machine learning models.

Course Content Overview
This course starts with a historical context and the key challenges in current research. It explores
counterfactual explanations, emphasizing theoretical foundations, generation methods, and evaluation
metrics. Feature attribution techniques are analyzed through a comparative study and real-world case
applications. The course covers leading XAI tools and libraries, highlighting their practical applications
and research advancements. It evaluates model transparency and interpretability using established
metrics and experimental studies. Advanced topics include causal inference, explainability in complex
models, and ethical implications. Real-world case studies across various domains illustrate the practical
impact and challenges of XAI. The course culminates in a capstone project where students develop and
present their own research proposals, fostering rigorous XAI research and peer review.

Effort and Working Time
This course is designed to be equivalent to 10 ECTS, reflecting a total workload of approximately 280
hours. This includes:
• 60 hours of interactive sessions
• 100 hours of hands-on lab sessions and practical exercises
• 40 hours of reading and reviewing supplementary materials
• 80 hours dedicated to the project report writing and final presentation
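To give a concrete flavor of the counterfactual explanations covered in the course content, the sketch below trains a small classifier and searches for a minimal single-feature change that flips its prediction. This is an illustrative brute-force approach on synthetic data, not a method prescribed by the course; the dataset, function names, and parameters are hypothetical, and scikit-learn (listed under Prerequisites) is assumed to be available.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Tiny synthetic "loan approval" dataset: columns = [income, debt].
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] - X[:, 1] > 0).astype(int)  # approved when income exceeds debt

model = LogisticRegression().fit(X, y)

def counterfactual(x, model, step=0.05, max_steps=200):
    """Greedy single-feature search: nudge one feature at a time until
    the predicted class flips. Returns the modified input, or None."""
    target = 1 - model.predict(x.reshape(1, -1))[0]
    for i in range(x.size):
        for sign in (+1, -1):
            cand = x.copy()
            for _ in range(max_steps):
                cand[i] += sign * step
                if model.predict(cand.reshape(1, -1))[0] == target:
                    return cand
    return None

x = np.array([-0.5, 0.5])  # a rejected applicant
cf = counterfactual(x, model)
print("original prediction:", model.predict(x.reshape(1, -1))[0])
print("counterfactual input:", cf)
```

The returned counterfactual answers the question "what is the smallest change to this input that would change the decision?", which is the core idea the course develops with more principled generation methods and evaluation metrics.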
Target Audience
This course is designed for master-level students who wish to enhance their understanding of model interpretability and apply explainable AI techniques in their work and studies.

Prerequisites
Participants should have a basic understanding of machine learning concepts and proficiency in Python
programming. Prior experience with machine learning libraries such as scikit-learn, TensorFlow, or
PyTorch is recommended but not required.

Delivery Mode
The course will be delivered through a combination of one-to-one guidance, hands-on lab sessions, and
interactive discussions. Participants will have access to project resources, including tutorials, code repositories,
and reading materials.

Assessment
Successful completion of the course will require active participation in lab sessions and a final presentation
of the project report. Research publications are encouraged but not mandatory.
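As a small illustration of the feature attribution methods named in the learning objectives, the sketch below uses permutation importance, one common attribution technique: each feature is shuffled in turn, and the resulting drop in model score measures that feature's impact on predictions. The dataset and model choices are illustrative assumptions, not course materials.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data: only the first two of four features carry signal.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 2 * X[:, 1] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Permutation importance: drop in held-out score when each feature is shuffled.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")
```

On this data the informative features should receive clearly larger importance scores than the noise features, which is the kind of per-feature impact analysis the course explores with a broader range of attribution tools.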