Another Perspective on Learning the fastai Course


August 22, 2020

I wrote this post a while ago and wanted to publish it, but I waited for the release of fastai2 and the 2020 fastai course so that my experience might help people joining the course. The 2020 course is now out, and I welcome everyone who is starting it to the fastai family.

Thank you, Jeremy Howard and Rachel Thomas, for your great contribution to the democratization of deep learning.

My experience with fastai

I have completed fastai part 1 (2019) and part 2, and I am very impressed by the course. It is one of the best courses available for learning deep learning, and it is free. What more does anyone need to get started?

This is just my experience and how I feel about the course. Don't get me wrong: I am not discouraging anyone from joining other courses on the market. My true intention in writing this blog is to encourage everyone to join and learn from the fast.ai course and, whatever comes your way, to complete it.

I first came across fastai a year ago, when a friend suggested the course to me. I went through a few lessons, but after lesson 3 everything went over my head, because I was used to the university-style bottom-up approach and tried to understand how every line of code worked on the first attempt. Jeremy had warned us to follow his advice carefully; I missed that. I want to summarize the mistakes I made when getting started so that, hopefully, you will avoid making the same ones when learning from the fast.ai course.

On my first attempt, I dropped the fastai course after lesson 3, finding it too difficult, and started learning deep learning from Coursera and Udacity instead. I completed a few of those courses. Then, about 7 months ago, I wanted to build a working model on my own: I could only manage a very basic one, and it was far from state-of-the-art results. At that point I remembered Jeremy's words about building state-of-the-art models and wanted to give the top-down approach another try. Surprisingly, the top-down approach suits me better than the bottom-up one. This time I wanted to complete the fastai course and finish what I had left incomplete.

Since I now had the experience of going through dry, theoretical courses, fastai seemed far more interesting, and Jeremy's humor was the best part of it. Even though I realized this was the best course on the internet and worked through it, I got stuck again after lesson 4. But this time I did not want to quit. So I went through the forums and every blog I could find to learn how people had completed the fastai course before me. I finally realized that the way to complete and understand the course is exactly the way Jeremy tells you to do it.

Haha, I know this seems confusing, but let me explain!!!

When Jeremy says we will learn something in a later lesson, he means exactly that: don't break your head on that concept or try to learn everything on day 1. Trust him, and you will understand it in a future lesson. You probably won't understand everything on the first attempt (at least I didn't), but after watching it 4 or 5 times, maybe you will. We are all so enthusiastic that we want to understand every line of code (which is not bad, but it didn't work for me), and we forget to listen to Jeremy's advice. So the best way to complete the course is to surrender your mind to Jeremy, listen to every word he says, and just follow it.

So what comes next after you complete the course? Don't worry, there is another part. This repeats every year: two parts of the course, with a 6-month gap in between.

6 months?! Don't worry, you need that time to digest the content of the course, explore the techniques on new datasets, and do some fun projects.

My suggestions:

  • Surrender your mind to Jeremy, listen to every word he says, and just follow it.
  • People will suggest other beginner-friendly courses, and you may feel tempted to try them, but don't quit the fastai course; be persistent and finish what you started.
  • People may tell you that you need to be able to read a lot of papers, and that's correct, but the approach matters. At first, read what you can in a paper and convert the math to code. You will get better as you go.
  • Watching the lectures DOES NOT EQUATE TO DOING deep learning. Write out all the code in each lesson and try to apply it to a different kind of dataset.
  • When you start, watch each lecture once. It's okay if you don't understand everything; you don't have to. The important thing is to finish.
  • Code as much as you can.

Crazy Perspective

Recently, while watching part 2 of the course for the third time, I found a new way of looking at the fastai course.

I have gone through part 1 of the course nearly 5 times and part 2 three times. Terms like epochs, learning rate, momentum, and fit_one_cycle apply not only to training on a dataset but also to the way you learn the fastai course.

I just wanted to connect technical concepts to the way we learn, so this is just for fun :)

Consider epochs as the number of times you need to watch and learn from the fastai course.

The learning rate is the size of the steps you take through the course. Discriminative learning rates can be thought of like this: when you start the course you take small steps, as you get comfortable you take larger steps, and when a concept becomes difficult you take small steps again.

Momentum: after you have completed the course once, you cover the easy concepts faster.

fit_one_cycle: complete the course fully once, then repeat for as many epochs as you need.
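
If you are new to these terms, here is a minimal fastai sketch of where they show up in code. The dataset and labelling rule (the Pets data, as in lesson 1) are just an assumed example; the point is to show epochs, discriminative learning rates (the slice), and momentum being passed to fit_one_cycle.

```python
from fastai.vision.all import *

# Example dataset, assumed here for illustration: the Pets images used in lesson 1
path = untar_data(URLs.PETS)/'images'

# In the Pets dataset, filenames starting with an uppercase letter are cats
dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2,
    label_func=lambda f: f.name[0].isupper(),
    item_tfms=Resize(224))

learn = cnn_learner(dls, resnet34, metrics=accuracy)

# 4 epochs, discriminative learning rates (small for early layers,
# larger for later ones), and the default momentum schedule
learn.fit_one_cycle(4, lr_max=slice(1e-5, 1e-3), moms=(0.95, 0.85, 0.95))
```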

Weights are your understanding of deep learning concepts before you start the course; they may be zero, or even pre-trained if you have done other courses.

As you work through the course, your weights are updated by backpropagation. As you understand the course more fully, your accuracy improves.

Augmentation and dropout are also needed to increase your understanding, i.e. your accuracy.

Dropout can be treated as the very tough concepts, the ones you don't want to get into at the beginning. These concepts get dropped out in the first iteration (even though that is not exactly what dropout does).

Augmentation can be treated as practising the same lesson concepts on different datasets.
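
And here is a second small sketch, using the same assumed Pets setup as above, showing where pretrained weights, data augmentation, and dropout appear in a fastai pipeline. It is only an illustration, not a prescription.

```python
from fastai.vision.all import *

# Same assumed Pets setup as the earlier sketch
path = untar_data(URLs.PETS)/'images'

# batch_tfms=aug_transforms() is the data augmentation:
# random flips, rotations, zooms, lighting changes, etc.
dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2,
    label_func=lambda f: f.name[0].isupper(),
    item_tfms=Resize(224), batch_tfms=aug_transforms())

# pretrained=True starts from ImageNet weights rather than from scratch,
# and the classification head fastai attaches already contains Dropout layers
learn = cnn_learner(dls, resnet34, pretrained=True, metrics=accuracy)
learn.fit_one_cycle(1)

# Inspect learn.model to see the head, including its nn.Dropout layers
```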

I did not figure out how to fit the loss function, the optimizer, and, most importantly, the help from the forums into this crazy perspective. If you can help me connect the dots, I would be grateful.

I would love to hear your feedback and your perspective. You can tweet @UKamath7 or comment on Kaggle, Reddit, or Medium.

Originally published at kirankamath.netlify.app.