
Learner Reviews & Feedback for Transformer Models and BERT Model by Google Cloud

4.1 stars (80 ratings)

About the Course

This course introduces you to the Transformer architecture and the Bidirectional Encoder Representations from Transformers (BERT) model. You learn about the main components of the Transformer architecture, such as the self-attention mechanism, and how it is used to build the BERT model. You also learn about the different tasks that BERT can be used for, such as text classification, question answering, and natural language inference. This course is estimated to take approximately 45 minutes to complete.
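The description above centers on the self-attention mechanism. As a rough orientation for prospective learners, here is a minimal sketch of single-head scaled dot-product self-attention in plain NumPy; the dimensions, weight matrices, and toy data are illustrative assumptions, not code from the course.

    import numpy as np

    def scaled_dot_product_attention(X, W_q, W_k, W_v):
        """Single-head self-attention over a sequence of token embeddings X."""
        Q = X @ W_q                      # queries: (seq_len, d_k)
        K = X @ W_k                      # keys:    (seq_len, d_k)
        V = X @ W_v                      # values:  (seq_len, d_v)
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)  # pairwise similarity, scaled
        # softmax over each row so the attention weights sum to 1
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        return weights @ V               # weighted sum of value vectors

    # Toy example (assumed): 4 tokens with 8-dimensional embeddings
    rng = np.random.default_rng(0)
    X = rng.normal(size=(4, 8))
    W_q, W_k, W_v = (rng.normal(size=(8, 8)) for _ in range(3))
    out = scaled_dot_product_attention(X, W_q, W_k, W_v)
    print(out.shape)  # (4, 8): one contextualized vector per token

In a full Transformer, several such heads run in parallel and their outputs are concatenated and projected; BERT stacks these attention blocks with feed-forward layers in its encoder.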

Top reviews


1 - 15 of 15 Reviews for Transformer Models and BERT Model

By Swastik N • Jun 29, 2023

This course should never have been published to Coursera in such a state; it should have been a free YouTube video at best. What could be done better:
- Make it a full course, i.e., start by introducing sequence modelling, why to use an attention mechanism over something like an LSTM, etc.
- Introduce concepts such as tokenization, vectorization, etc.
- Describe the Transformer model in detail: take a simple 1-encoder/1-decoder block and explain how it works with a toy example dataset.
- Teach implementation of the toy example in code.
- Introduce BERT with a clear goal of what it aims to solve.
- Then cover the part that was discussed in the current course.

By Ankit M • Sep 13, 2023

The videos do not properly explain how to set up a Google Cloud account or help with running the lab. Since it's just a couple of videos, I expected some solid lab work, but Google makes no effort to help learners set up their lab environment and get things running.

By Naman A • Jul 17, 2023

The course was amazing and gave me a good overview of the BERT model and concepts like encoding and decoding, but it is not for beginners. :>

By Waheed A • Jun 26, 2024

The course's lab was highly interactive and effectively reinforced the material by providing hands-on experience with real-life datasets. The quiz was well-designed to test understanding and retention, making sure that key concepts were grasped thoroughly.

By Nabih S • Jun 25, 2023

Very clear and detailed explanation of Transformers, with a practical example of training a BERT model.

By Muneer H H • Nov 7, 2023

good

By Wingyan C • Mar 13, 2024

Excellent and concise presentation of Transformer and BERT models. The course designer may consider adding programming assignments to illustrate the concepts and to reinforce student learning.

By Kian M L • Oct 29, 2023

I needed to use my own GCP account to run the lab. Otherwise, a very good introduction to get going on Transformers.

By NAYEEM I • Sep 25, 2023

Personally, I don't think this qualifies as a short course; it is more like a superficial walkthrough video. It could have been a better short course. Maybe a better option would be to club all the introductory courses from Google Cloud together. Also, I wouldn't insist on a certification for this course.

By Arjun V • Sep 26, 2023

Very quick, not very detailed; gives an overall view, if that's what it's meant to do.

By Tianhao Z • Jan 16, 2024

The first lecture was OK, but it doesn't go into much detail about how the self-attention and FFN layers are constructed, or why. The second part is just hard to follow without any instructions on how to get the exact setup, and, in the end, getting everything from TF Hub is really just for illustration, without much to learn.
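For context, the lab step this review refers to (pulling a pretrained BERT encoder from TF Hub) typically looks something like the sketch below; the module handles and versions are assumed public defaults, not necessarily the ones the course pins.

    import tensorflow as tf
    import tensorflow_hub as hub
    import tensorflow_text  # noqa: F401 -- registers the ops used by the preprocessing model

    # Handles assumed for illustration; the course lab may use different versions.
    PREPROCESS_HANDLE = "https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3"
    ENCODER_HANDLE = "https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/4"

    preprocess = hub.KerasLayer(PREPROCESS_HANDLE)
    encoder = hub.KerasLayer(ENCODER_HANDLE, trainable=False)

    sentences = tf.constant(["Transformers use self-attention."])
    encoder_inputs = preprocess(sentences)   # token ids, input mask, segment ids
    outputs = encoder(encoder_inputs)
    print(outputs["pooled_output"].shape)    # (1, 768) sentence-level embedding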

By A B • Dec 21, 2023

The introduction is too quick and shallow, with no other material offered to make up for it. Also, it forces you to use Google Cloud.

By Gary S • Dec 15, 2024

This course uses far too much jargon, without defining terms and without providing simple examples that feature neural net architecture and matrix algebra. A complete waste of time.

By arsalan k • Oct 23, 2023

The course has a lot of bugs in its interface. In addition, there is no explanation of how to import code from GitHub to Google Cloud.

By Peter F • Aug 25, 2024

I'm none the wiser.