University of Alberta

Prediction and Control with Function Approximation

Instructors: Martha White, Adam White

25,872 already enrolled

Included with Coursera Plus

Gain insight into a topic and learn the fundamentals.
4.8 (820 reviews)
Intermediate level
Flexible schedule: approx. 21 hours, learn at your own pace
90% of learners liked this course

Details to know

  • Shareable certificate: add to your LinkedIn profile
  • Assessments: 4 assignments
  • Taught in English

Build your subject-matter expertise

This course is part of the Reinforcement Learning Specialization
When you enroll in this course, you'll also be enrolled in this Specialization.
  • Learn new concepts from industry experts
  • Gain a foundational understanding of a subject or tool
  • Develop job-relevant skills with hands-on projects
  • Earn a shareable career certificate

Earn a career certificate

Add this credential to your LinkedIn profile, resume, or CV

Share it on social media and in your performance review

There are 5 modules in this course

Welcome to the third course in the Reinforcement Learning Specialization: Prediction and Control with Function Approximation, brought to you by the University of Alberta, Onlea, and Coursera. In this pre-course module, you'll be introduced to your instructors, and get a flavour of what the course has in store for you. Make sure to introduce yourself to your classmates in the "Meet and Greet" section!

What's included

2 videos, 2 readings, 1 discussion prompt

This week you will learn how to estimate a value function for a given policy when the number of states is much larger than the memory available to the agent. You will learn how to specify a parametric form of the value function, how to specify an objective function, and how gradient descent can be used to estimate values from interaction with the world; a minimal sketch of the resulting update appears below.

What's included

13 videos, 2 readings, 1 assignment, 1 programming assignment, 1 discussion prompt
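
To make the update concrete, here is a minimal sketch of semi-gradient TD(0) for prediction with linear function approximation. The 1000-state random walk, the state-aggregation features, and the constants are illustrative assumptions, not the course's assignment code:

```python
import numpy as np

N_STATES = 1000   # hypothetical 1000-state random walk
N_GROUPS = 10     # state aggregation: one feature per group of 100 states

def features(s):
    """One-hot feature vector from state aggregation."""
    x = np.zeros(N_GROUPS)
    x[min(s // (N_STATES // N_GROUPS), N_GROUPS - 1)] = 1.0
    return x

def semi_gradient_td0(num_episodes=500, alpha=0.1, gamma=1.0, seed=0):
    """Estimate v_pi(s) ~= w @ x(s) with semi-gradient TD(0)."""
    rng = np.random.default_rng(seed)
    w = np.zeros(N_GROUPS)
    for _ in range(num_episodes):
        s = N_STATES // 2                              # start in the middle
        terminal = False
        while not terminal:
            s_next = s + int(rng.integers(-100, 101))  # uniform random-walk policy
            terminal = s_next < 0 or s_next >= N_STATES
            r = 0.0 if not terminal else (-1.0 if s_next < 0 else 1.0)
            v_next = 0.0 if terminal else w @ features(s_next)
            delta = r + gamma * v_next - w @ features(s)   # TD error
            w += alpha * delta * features(s)               # semi-gradient step
            s = s_next
    return w

print(np.round(semi_gradient_td0(), 2))  # values roughly ramp from -1 to +1
```

The update is "semi-gradient" because it treats the bootstrapped target r + gamma * v_hat(s', w) as fixed, taking the gradient only through v_hat(s, w).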

The features used to construct the agent’s value estimates are perhaps the most crucial part of a successful learning system. In this module we discuss two basic strategies for constructing features: (1) a fixed basis that forms an exhaustive partition of the input, and (2) adapting the features while the agent interacts with the world, via Neural Networks and Backpropagation. A sketch of the fixed-basis strategy appears below. In this week’s graded assessment you will solve a simple but infinite-state prediction task with a Neural Network and TD learning.

What's included

11 videos, 2 readings, 1 assignment, 1 programming assignment, 1 discussion prompt
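
As a taste of the fixed-basis strategy, here is a minimal sketch of one-dimensional tile coding, in which several offset tilings each partition the input; the tiling counts, range, and offset scheme are illustrative assumptions:

```python
import numpy as np

def tile_features(s, n_tilings=4, n_tiles=8, lo=0.0, hi=1.0):
    """Binary features for scalar s: one active tile per tiling."""
    x = np.zeros(n_tilings * n_tiles)
    tile_width = (hi - lo) / n_tiles
    for t in range(n_tilings):
        offset = t * tile_width / n_tilings  # shift each tiling by a fraction of a tile
        idx = int((s - lo + offset) / tile_width)
        x[t * n_tiles + min(max(idx, 0), n_tiles - 1)] = 1.0
    return x

# Usage: nearby inputs activate many of the same tiles.
print(tile_features(0.52))
print(tile_features(0.55))
```

Because nearby inputs share active tiles, an update at one state generalizes to its neighbours, which is precisely what a good feature construction should buy you.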

This week, you will see that the concepts and tools introduced in modules two and three allow a straightforward extension of classic TD control methods to the function approximation setting. In particular, you will learn how to find the optimal policy in infinite-state MDPs by simply combining semi-gradient TD methods with generalized policy iteration, yielding classic control methods like Q-learning and Sarsa; a sketch of the resulting update appears below. We conclude with a discussion of a new problem formulation for RL, average reward, which will undoubtedly be used in many applications of RL in the future.

What's included

7 videos, 2 readings, 1 assignment, 1 programming assignment, 2 discussion prompts
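
Here is a minimal sketch of the episodic semi-gradient Sarsa update for linear action values; the per-action feature interface and the epsilon-greedy helper are illustrative assumptions:

```python
import numpy as np

def epsilon_greedy(w, x_all, epsilon, rng):
    """Choose an action index given per-action features x_all (n_actions x d)."""
    if rng.random() < epsilon:
        return int(rng.integers(len(x_all)))
    return int(np.argmax(x_all @ w))

def sarsa_update(w, x_sa, r, x_sa_next, alpha, gamma, terminal):
    """One semi-gradient Sarsa step for linear q_hat(s, a, w) = w @ x(s, a)."""
    target = r if terminal else r + gamma * (w @ x_sa_next)
    delta = target - w @ x_sa        # TD error for the action-value estimate
    return w + alpha * delta * x_sa  # gradient of q_hat w.r.t. w is x(s, a)
```

Replacing the Sarsa target with a max over the next state's action values gives the corresponding semi-gradient Q-learning update.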

Every algorithm you have learned about so far estimates a value function as an intermediate step towards the goal of finding an optimal policy. An alternative strategy is to directly learn the parameters of the policy. This week you will learn about these policy gradient methods and their advantages over value-function-based methods. You will also learn how policy gradient methods can be used to find the optimal policy in tasks with both continuous state and action spaces; a sketch of a Gaussian policy update appears below.

What's included

11 videos, 2 readings, 1 assignment, 1 programming assignment, 1 discussion prompt
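
For continuous actions, one standard choice is a Gaussian policy whose mean and log standard deviation are linear in the features. Below is a minimal sketch of the actor update from a one-step actor-critic, driven by the TD error delta; the linear parameterization and all names are illustrative assumptions:

```python
import numpy as np

def sample_action(theta_mu, theta_sigma, x, rng):
    """Sample a ~ N(mu, sigma^2) with mu = theta_mu @ x, sigma = exp(theta_sigma @ x)."""
    mu, sigma = theta_mu @ x, np.exp(theta_sigma @ x)
    return rng.normal(mu, sigma)

def actor_update(theta_mu, theta_sigma, x, a, delta, alpha):
    """Policy gradient step: theta += alpha * delta * grad log pi(a | s, theta)."""
    mu, sigma = theta_mu @ x, np.exp(theta_sigma @ x)
    grad_log_mu = ((a - mu) / sigma**2) * x               # d log pi / d theta_mu
    grad_log_sigma = ((a - mu)**2 / sigma**2 - 1.0) * x   # d log pi / d theta_sigma
    return (theta_mu + alpha * delta * grad_log_mu,
            theta_sigma + alpha * delta * grad_log_sigma)
```

Scaling the log-likelihood gradient by the critic's TD error pushes the policy toward actions that turned out better than the current value estimate predicted.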

Instructors

Instructor ratings: 4.8 (107 ratings)

Martha White, University of Alberta: 4 courses, 98,394 learners
Adam White, University of Alberta: 4 courses, 98,394 learners

Learner reviews

4.8 (820 reviews)

  • 5 stars: 84.54%
  • 4 stars: 12.40%
  • 3 stars: 1.94%
  • 2 stars: 0.72%
  • 1 star: 0.36%
