This course is an introduction to sequential decision making and reinforcement learning. We start with a discussion of utility theory to learn how preferences can be represented and modeled for decision making. We first model simple decision problems as multi-armed bandit problems and discuss several approaches to evaluating feedback. We then model decision problems as finite Markov decision processes (MDPs) and discuss their solutions via dynamic programming algorithms. We touch on the notion of partial observability in real problems, modeled by POMDPs and solved by online planning methods. Finally, we introduce the reinforcement learning problem and discuss two paradigms: Monte Carlo methods and temporal difference learning. We conclude the course by noting how the two paradigms lie on a spectrum of n-step temporal difference methods. An emphasis on algorithms and examples is a key part of this course.
What you'll learn
Map between qualitative preferences and appropriate quantitative utilities.
Model non-associative and associative sequential decision problems with multi-armed bandit problems and Markov decision processes, respectively.
Implement dynamic programming algorithms to find optimal policies.
Implement basic reinforcement learning algorithms using Monte Carlo and temporal difference methods.
Details to know
Add to your LinkedIn profile
8 assignments
Earn a career certificate
Add this credential to your LinkedIn profile, resume, or CV
Share it on social media and in your performance review
There are 8 modules in this course
Welcome to Decision Making and Reinforcement Learning! During this week, Professor Tony Dear provides an overview of the course. You will also view guidelines to support your learning journey towards modeling sequential decision problems and implementing reinforcement learning algorithms.
What's included
6 videos, 6 readings, 1 assignment, 1 programming assignment, 3 discussion prompts, 1 plugin
Welcome to week 2! This week, we will learn about multi-armed bandit problems, a type of optimization problem in which an agent balances exploration and exploitation to maximize rewards. Topics include action values and sample-average estimation, 𝜀-greedy action selection, and the upper confidence bound. You can post in the discussion forum if you need assistance with the quiz and assignment.
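To make the exploration-exploitation trade-off concrete, here is a minimal Python sketch (not part of the course materials) of 𝜀-greedy action selection with incremental sample-average value estimates; the Gaussian reward model and the arm means are made-up assumptions for illustration only.

```python
import random

def run_bandit(true_means, steps=1000, epsilon=0.1, seed=0):
    """Epsilon-greedy action selection with sample-average value estimates."""
    rng = random.Random(seed)
    k = len(true_means)
    q = [0.0] * k   # estimated action values
    n = [0] * k     # pull counts per arm
    total_reward = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            a = rng.randrange(k)                       # explore: random arm
        else:
            a = max(range(k), key=lambda i: q[i])      # exploit: current greedy arm
        reward = rng.gauss(true_means[a], 1.0)         # noisy reward from the chosen arm (assumed Gaussian)
        n[a] += 1
        q[a] += (reward - q[a]) / n[a]                 # incremental sample-average update
        total_reward += reward
    return q, total_reward

estimates, total = run_bandit([0.2, 0.8, 0.5])
print(estimates, total)
```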
What's included
3 videos, 1 reading, 1 assignment, 1 programming assignment, 2 discussion prompts
Welcome to week 3! This week, we will focus on the basics of the Markov decision process, including rewards, utilities, discounting, policies, value functions, and Bellman equations. You will model sequential decision problems, understand the impact of rewards and discount factors on outcomes, define policies and value functions, and write Bellman equations for optimal solutions. You can post in the discussion forum if you need assistance with the quiz and assignment.
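As a rough illustration of what a Bellman expectation backup computes, here is a small sketch on a hypothetical two-state MDP; the states, actions, rewards, and policy below are invented for the example and do not come from the course.

```python
# One Bellman expectation backup:
#   V(s) = sum_a pi(a|s) * sum_s' P(s'|s,a) * [R(s,a,s') + gamma * V(s')]
gamma = 0.9

# transitions[state][action] = list of (probability, next_state, reward)  -- hypothetical MDP
transitions = {
    "s0": {"stay": [(1.0, "s0", 0.0)],
           "go":   [(0.8, "s1", 5.0), (0.2, "s0", 0.0)]},
    "s1": {"stay": [(1.0, "s1", 1.0)],
           "go":   [(1.0, "s0", 0.0)]},
}
policy = {"s0": {"stay": 0.5, "go": 0.5}, "s1": {"stay": 1.0, "go": 0.0}}
V = {"s0": 0.0, "s1": 0.0}   # initial value estimates

def bellman_backup(s, V):
    """Expected one-step return from s under the fixed policy, bootstrapping from V."""
    return sum(p_a * sum(p * (r + gamma * V[s2]) for p, s2, r in transitions[s][a])
               for a, p_a in policy[s].items())

print({s: bellman_backup(s, V) for s in V})
```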
What's included
6 videos, 1 reading, 1 assignment, 1 programming assignment, 3 discussion prompts
Welcome to week 4! This week, we will cover dynamic programming algorithms for solving Markov decision processes (MDPs). Topics include value iteration and policy iteration, nonlinear Bellman equations, complexity and convergence, and a comparison of the two approaches. You can post in the discussion forum if you need assistance with the quiz and assignment.
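A minimal sketch of value iteration, assuming the same kind of tabular transition model as the week 3 example above; the tiny MDP is hypothetical and purely illustrative, not one of the course's programming assignments.

```python
def value_iteration(transitions, gamma=0.9, tol=1e-6):
    """Repeatedly apply the Bellman optimality backup V(s) <- max_a E[R + gamma*V(s')]."""
    V = {s: 0.0 for s in transitions}
    while True:
        delta = 0.0
        for s in transitions:
            best = max(
                sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                for outcomes in transitions[s].values()
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            # extract a greedy policy from the converged values
            policy = {
                s: max(transitions[s],
                       key=lambda a: sum(p * (r + gamma * V[s2]) for p, s2, r in transitions[s][a]))
                for s in transitions
            }
            return V, policy

# transitions[state][action] = list of (probability, next_state, reward)  -- hypothetical MDP
tiny_mdp = {
    "s0": {"stay": [(1.0, "s0", 0.0)], "go": [(0.8, "s1", 5.0), (0.2, "s0", 0.0)]},
    "s1": {"stay": [(1.0, "s1", 1.0)], "go": [(1.0, "s0", 0.0)]},
}
values, greedy_policy = value_iteration(tiny_mdp)
print(values, greedy_policy)
```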
What's included
6 videos, 1 reading, 1 assignment, 2 programming assignments, 3 discussion prompts
Welcome to week 5! This week, we will go through partial observability and POMDPs, belief states, representation as belief MDPs, and online planning in MDPs and POMDPs. You will also apply your knowledge to update belief states and use a belief transition function to calculate state values. You can post in the discussion forum if you need assistance with the quiz and assignment.
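The belief update can be sketched as a single Bayes filter step; the two-state transition and observation models below are hypothetical assumptions, included only to show the prediction, correction, and normalization steps.

```python
def update_belief(belief, action, observation, T, O):
    """Bayesian belief update: b'(s') ∝ O(o|s',a) * sum_s T(s'|s,a) * b(s)."""
    states = list(belief)
    new_belief = {}
    for s2 in states:
        predicted = sum(T[s][action][s2] * belief[s] for s in states)   # prediction step
        new_belief[s2] = O[s2][action][observation] * predicted         # correction with the observation
    norm = sum(new_belief.values())
    return {s: v / norm for s, v in new_belief.items()}                 # normalize to a distribution

# T[s][a][s'] = transition probability, O[s'][a][o] = observation probability (made-up numbers)
T = {"good": {"wait": {"good": 0.9, "bad": 0.1}},
     "bad":  {"wait": {"good": 0.2, "bad": 0.8}}}
O = {"good": {"wait": {"signal": 0.7, "no_signal": 0.3}},
     "bad":  {"wait": {"signal": 0.1, "no_signal": 0.9}}}

print(update_belief({"good": 0.5, "bad": 0.5}, "wait", "signal", T, O))
```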
What's included
5 videos, 2 readings, 1 assignment, 1 programming assignment, 3 discussion prompts
Welcome to week 6! This week, we will introduce Monte Carlo methods and cover state value estimation using sample averaging and Monte Carlo prediction, state-action values and epsilon-greedy policies, and importance sampling for off-policy vs. on-policy Monte Carlo control. You will learn to estimate state values and state-action values, apply importance sampling, and implement off-policy Monte Carlo control for optimal policy learning. You can post in the discussion forum if you need assistance with the quiz and assignment.
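One possible sketch of first-visit Monte Carlo prediction with incremental sample averaging; the toy episodic process below is invented for the example (each reward is taken to be the one received after leaving that state) and is not a course assignment.

```python
import random

def first_visit_mc_prediction(sample_episode, num_episodes=5000, gamma=1.0, seed=0):
    """Estimate V(s) as the sample average of first-visit returns."""
    rng = random.Random(seed)
    values, counts = {}, {}
    for _ in range(num_episodes):
        episode = sample_episode(rng)                  # list of (state, reward) pairs
        G, returns = 0.0, []
        for state, reward in reversed(episode):        # accumulate returns backwards
            G = reward + gamma * G
            returns.append((state, G))
        returns.reverse()
        seen = set()
        for state, G in returns:                       # only the first visit to each state counts
            if state in seen:
                continue
            seen.add(state)
            counts[state] = counts.get(state, 0) + 1
            values[state] = values.get(state, 0.0) + (G - values.get(state, 0.0)) / counts[state]
    return values

def sample_episode(rng):
    """A hypothetical two-state episodic process used only to exercise the estimator."""
    episode, state = [], "A"
    while state != "terminal":
        reward = 1.0 if state == "B" else 0.0
        next_state = rng.choice(["B", "terminal"]) if state == "A" else "terminal"
        episode.append((state, reward))
        state = next_state
    return episode

print(first_visit_mc_prediction(sample_episode))
```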
What's included
6 videos, 2 readings, 1 assignment, 1 programming assignment, 2 discussion prompts
Welcome to week 7! This week, we will cover temporal difference learning for prediction, TD batch methods, SARSA for on-policy control, and Q-learning for off-policy control. You will learn to implement TD prediction, TD batch and offline methods, SARSA, and Q-learning, and to compare on-policy vs. off-policy TD learning. You will then apply your knowledge in a Tic-tac-toe programming assignment. You can post in the discussion forum if you need assistance with the quiz and assignment.
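A minimal sketch of tabular Q-learning with an 𝜀-greedy behavior policy; the small chain environment and its reward of 1 at the goal are made-up assumptions for illustration, not the course's Tic-tac-toe assignment.

```python
import random
from collections import defaultdict

def q_learning(env_step, env_reset, actions, episodes=500,
               alpha=0.1, gamma=0.99, epsilon=0.1, seed=0):
    """Tabular Q-learning: off-policy TD control with an epsilon-greedy behavior policy."""
    rng = random.Random(seed)
    Q = defaultdict(float)
    for _ in range(episodes):
        state, done = env_reset(), False
        while not done:
            if rng.random() < epsilon:
                action = rng.choice(actions)                          # explore
            else:
                action = max(actions, key=lambda a: Q[(state, a)])    # act greedily
            next_state, reward, done = env_step(state, action)
            # the TD target bootstraps from the greedy (max) value of the next state
            best_next = 0.0 if done else max(Q[(next_state, a)] for a in actions)
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state
    return Q

# Hypothetical 4-state chain: moving "right" eventually reaches the goal (reward 1).
def env_reset():
    return 0

def env_step(state, action):
    next_state = min(state + 1, 3) if action == "right" else max(state - 1, 0)
    done = next_state == 3
    return next_state, (1.0 if done else 0.0), done

Q = q_learning(env_step, env_reset, actions=["left", "right"])
print(max(["left", "right"], key=lambda a: Q[(0, a)]))  # expected greedy action: "right"
```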
What's included
5 videos, 2 readings, 1 assignment, 3 programming assignments, 2 discussion prompts
Welcome to week 8! This module covers n-step temporal difference prediction, n-step SARSA (on-policy and off-policy), model-based RL with Dyna-Q, and function approximation. You will be prepared to implement n-step TD learning, n-step SARSA, and Dyna-Q for model-based learning, and to use function approximation for reinforcement learning. You will apply your knowledge in the Frozen Lake programming environment. You can post in the discussion forum if you need assistance with the quiz and assignment.
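One way to sketch the n-step TD idea is to update state values toward the n-step return from a completed episode; the three-step toy episode and the convention that each reward follows its state are assumptions for illustration, and this is not the Frozen Lake environment used in the assignment.

```python
def n_step_td_from_episode(V, episode, n=3, alpha=0.1, gamma=0.9):
    """Update V toward the n-step return
       G = R_{t+1} + gamma*R_{t+2} + ... + gamma^(n-1)*R_{t+n} + gamma^n * V(S_{t+n}).
       episode: list of (state, reward) pairs, each reward received after leaving its state."""
    states = [s for s, _ in episode]
    rewards = [r for _, r in episode]
    T = len(episode)
    for t in range(T):
        end = min(t + n, T)
        # truncated n-step return from the observed rewards...
        G = sum(gamma ** (k - t) * rewards[k] for k in range(t, end))
        if end < T:
            # ...plus a bootstrapped estimate of everything beyond the n-step horizon
            G += gamma ** n * V.get(states[end], 0.0)
        V[states[t]] = V.get(states[t], 0.0) + alpha * (G - V.get(states[t], 0.0))
    return V

# Toy usage on a hypothetical 3-step episode that ends with a terminal reward of 1.
V = {}
for _ in range(200):
    V = n_step_td_from_episode(V, [("s0", 0.0), ("s1", 0.0), ("s2", 1.0)])
print(V)  # values approach {s0: 0.81, s1: 0.9, s2: 1.0}
```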
What's included
4 videos, 3 readings, 1 assignment, 1 programming assignment, 2 discussion prompts, 1 plugin
Instructor: Tony Dear
Offered by Columbia University
Learner reviews
19 reviews
- 5 stars: 57.89%
- 4 stars: 26.31%
- 3 stars: 0%
- 2 stars: 10.52%
- 1 star: 5.26%
Reviewed on Jan 20, 2024
Very good introductory and basic to Reinforcement Learning. But programming assignments need more careful compilation and more attention to detail!
Reviewed on Jul 9, 2023
Well-structured course that provides a great introduction to methodologies used in reinforcement learning. I am now eager to experiment more in my own time, to consolidate what I have learned.
Frequently asked questions
Access to lectures and assignments depends on your type of enrollment. If you take a course in audit mode, you will be able to see most course materials for free. To access graded assignments and to earn a Certificate, you will need to purchase the Certificate experience, during or after your audit. If you don't see the audit option:
The course may not offer an audit option. You can try a Free Trial instead, or apply for Financial Aid.
The course may offer 'Full Course, No Certificate' instead. This option lets you see all course materials, submit required assessments, and get a final grade. This also means that you will not be able to purchase a Certificate experience.
When you purchase a Certificate you get access to all course materials, including graded assignments. Upon completing the course, your electronic Certificate will be added to your Accomplishments page - from there, you can print your Certificate or add it to your LinkedIn profile. If you only want to read and view the course content, you can audit the course for free.
You will be eligible for a full refund until two weeks after your payment date, or (for courses that have just launched) until two weeks after the first session of the course begins, whichever is later. You cannot receive a refund once you’ve earned a Course Certificate, even if you complete the course within the two-week refund period. See our full refund policy.