In an era where Artificial Intelligence (AI) is rapidly transforming high-risk domains like healthcare, finance, and criminal justice, the ability to develop AI systems that are not only accurate but also transparent and trustworthy is critical. The Explainable AI (XAI) Specialization is designed to empower AI professionals, data scientists, machine learning engineers, and product managers with the knowledge and skills needed to create AI solutions that meet the highest standards of ethical and responsible AI.
Taught by Dr. Brinnae Bent, an expert in bridging the gap between research and industry in machine learning, this course series leverages her extensive experience leading projects and developing impactful algorithms for some of the largest companies in the world. Dr. Bent's work, ranging from helping people walk to noninvasively monitoring glucose, underscores the meaningful applications of AI in real-world scenarios.
Throughout this series, learners will explore key topics including Explainable AI (XAI) concepts, interpretable machine learning, and advanced explainability techniques for large language models (LLMs) and generative computer vision models. Hands-on programming labs in Python, where learners implement local and global explainability techniques, and real-world case studies provide practical experience. This series is ideal for professionals with a basic to intermediate understanding of machine learning concepts such as supervised learning and neural networks.
Applied Learning Project
The Explainable AI (XAI) Specialization offers hands-on projects that deepen understanding of XAI and Interpretable Machine Learning through coding activities and real-world case studies.
Course 1 Projects: Explore ethical and bias considerations through moral machine reflections, case studies, and research analysis. Projects include visualizing embedding spaces using TensorFlow’s Embedding Projector (a sketch of the export step appears below) and evaluating XAI for diagnostics in healthcare and for security in autonomous driving.
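The Embedding Projector loads plain tab-separated files, so one illustrative way to prepare data for it looks like the sketch below; the random vectors and labels are placeholders, not course materials.

```python
# Minimal sketch: export embeddings for TensorFlow's Embedding Projector
# (projector.tensorflow.org). The random vectors and labels below are
# illustrative placeholders, not course data.
import numpy as np

rng = np.random.default_rng(seed=42)
embeddings = rng.normal(size=(100, 64))      # 100 items, 64-dimensional vectors
labels = [f"item_{i}" for i in range(100)]   # one metadata row per vector

# vectors.tsv: one embedding per line, dimensions separated by tabs.
np.savetxt("vectors.tsv", embeddings, delimiter="\t")

# metadata.tsv: one label per line, aligned row-for-row with vectors.tsv.
with open("metadata.tsv", "w") as f:
    f.write("\n".join(labels))
```

Both files can then be loaded in the projector's web interface and explored with PCA, t-SNE, or UMAP projections.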
Course 2 Projects: Python-based lab activities in Jupyter notebooks focus on implementing interpretable models such as generalized linear models (GLMs), generalized additive models (GAMs), decision trees, and RuleFit; a sketch of one such model follows.
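The course's own notebooks are not reproduced here, but as a flavor of this style of lab, the following minimal sketch trains a shallow decision tree with scikit-learn on a bundled toy dataset; the dataset and depth limit are illustrative assumptions.

```python
# Minimal sketch: a shallow decision tree whose learned rules are directly
# readable. Dataset and hyperparameters are illustrative, not course materials.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Load a small tabular dataset and hold out a test split.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Capping depth keeps the model human-readable: each prediction is a short
# rule path from root to leaf.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(f"test accuracy: {tree.score(X_test, y_test):.3f}")

# Print the learned rules -- here the model itself is the explanation.
print(export_text(tree, feature_names=list(X.columns)))
```

GLMs, GAMs, and RuleFit share the same spirit: the fitted coefficients or rules can be inspected directly rather than explained after the fact.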
Course 3 Projects: Advanced labs focus on local explanations using LIME, SHAP, and Anchors, along with visualizing saliency maps and Concept Activation Vectors, with free platforms such as Google Colab providing GPU resources; a LIME sketch closes this section.

The projects provided in this Specialization prepare learners to create transparent and ethical AI solutions for real-world challenges.
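As a flavor of the Course 3 labs, here is a minimal sketch of a LIME local explanation on tabular data, assuming the lime and scikit-learn packages are installed; the model and dataset are illustrative stand-ins, not course materials.

```python
# Minimal sketch: explain one prediction of a black-box model with LIME.
# The model and dataset are illustrative stand-ins, not course materials.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# LIME perturbs the instance and fits a simple local surrogate model around it.
explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=5
)
print(explanation.as_list())  # top 5 feature contributions for this instance
```

SHAP and Anchors follow a similar pattern: fit or load a model, wrap it in an explainer, and query explanations for individual predictions.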