Go to Course: https://www.coursera.org/learn/explainable-machine-learning-xai
Visualize and explain neural network models using state-of-the-art (SOTA) techniques.
Describe emerging approaches to explainability in large language models (LLMs) and generative computer vision.
Model-Agnostic Explainability
In this module, you will be introduced to the concept of model-agnostic explainability and will explore techniques and approaches for local and global explanations. You will learn how to explain and implement local explainability techniques including LIME, SHAP, and ICE plots; global explainability techniques including functional decomposition, PDP, and ALE plots; and example-based explanations in Python. You will apply these learnings through discussions, guided programming labs, and a quiz assessment.
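As a taste of the global techniques named above, here is a minimal sketch of computing a partial dependence (PD) curve with scikit-learn. The synthetic dataset and gradient-boosting model are illustrative stand-ins, not the course's lab materials.

```python
# Sketch: partial dependence, a global model-agnostic explanation.
# Assumes scikit-learn is installed; the data and model are toy examples.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import partial_dependence

X, y = make_regression(n_samples=200, n_features=4, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Average model prediction as feature 0 sweeps a grid,
# marginalizing over the remaining features.
pd_result = partial_dependence(model, X, features=[0], kind="average")
print(pd_result["average"].shape)  # one averaged curve for feature 0
```

Setting `kind="individual"` instead would return one curve per sample, i.e. the ICE plots mentioned above; PDP is simply their average.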
Explainable Deep Learning
In this module, you will be introduced to the concept of explainable deep learning and will explore techniques and approaches for explaining neural networks. You will learn how to explain and implement neural network visualization techniques, demonstrate knowledge of activation vectors in Python, and recognize and critique interpretable attention and saliency methods. You will apply these learnings through discussions, guided programming labs and case studies, and a quiz assessment.
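The saliency methods mentioned above boil down to asking how sensitive a model's output is to each input feature. Here is a NumPy-only sketch using finite differences on a toy linear scorer; a real pipeline would backpropagate through a trained network with an autograd framework.

```python
# Sketch: gradient-based saliency on a toy model.
# The linear scorer stands in for a neural network; finite differences
# stand in for autograd. Both are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=10)             # toy "model": a fixed linear scorer

def score(x):
    return float(W @ x)

def saliency(x, eps=1e-4):
    """Magnitude of the score's gradient w.r.t. each input feature."""
    grads = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        grads[i] = (score(x + e) - score(x - e)) / (2 * eps)
    return np.abs(grads)

x = rng.normal(size=10)
sal = saliency(x)
print(sal.argmax())  # index of the most influential input feature
```

For this linear toy model the saliency is exactly `|W|`, which makes the sketch easy to sanity-check; for a deep network the gradient varies with the input, which is precisely why saliency maps are computed per example.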
Explainable Generative AI
In this module, you will be introduced to the concept of explainable generative AI. You will learn how to explain emerging approaches to explainability in LLMs, generative computer vision, and multimodal models. You will apply these learnings through discussions, guided programming labs, and a quiz assessment.
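One emerging, model-agnostic approach to LLM explainability is perturbation-based attribution: remove each token and measure how much the model's score changes. The toy lexicon "model" below is an assumption standing in for a real language model.

```python
# Sketch: occlusion (leave-one-out) attribution for token importance.
# SENTIMENT is a hypothetical toy lexicon, not a real LLM.
SENTIMENT = {"great": 1.0, "love": 0.8, "boring": -0.9}

def score(tokens):
    """Toy sentiment score: sum of per-token lexicon weights."""
    return sum(SENTIMENT.get(t, 0.0) for t in tokens)

def occlusion_attribution(tokens):
    """Attribution of each token = score drop when that token is removed."""
    base = score(tokens)
    return {t: base - score([u for u in tokens if u != t]) for t in tokens}

attr = occlusion_attribution(["i", "love", "this", "great", "course"])
print(max(attr, key=attr.get))  # token the toy model leans on most
```

With a real LLM the same loop applies, except `score` would be the model's probability for a target output, making each pass far more expensive; that cost is what motivates the approximation methods surveyed in this module.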
As Artificial Intelligence (AI) becomes integrated into high-risk domains like healthcare, finance, and criminal justice, it is critical that those responsible for building these systems think outside the black box and develop systems that are not only accurate, but also transparent and trustworthy. This course is a comprehensive, hands-on guide to Explainable Machine Learning (XAI), empowering you to develop AI solutions that are aligned with responsible AI principles. Through discussions, case studies, guided programming labs, and quiz assessments, you will build practical skills in explainable machine learning.