Contrary to popular opinion, I find that Andrew Ng's Intro to ML (on Coursera/Stanford) focuses too heavily on basic math and theory - which doesn't detract from the course's quality, but does make it a drudgery to get through.
The programming exercises involve only a line or two of code, and in Octave at that - which was all the rage back when the course launched, but isn't so useful now.
Instead, start with this - https://www.fast.ai
It emphasizes practicality to the extreme - you're only taught theory/domain knowledge when needed. The instructor's amazing, the sheer amount of knowledge imparted boggles the mind, and you feel like you've accomplished something when you finish it.
Best of all, it's free. And you can move on to deep learning there too once you're done with ML, if you feel the need (or interest).
I'd second this. I've done both, and if you're brand new to machine learning I'd do fast.ai first. You gain a deeper understanding from Andrew's course, but if you're not going to become a practitioner, fast.ai is better.
There should also be some sort of poll for these sorts of questions, or at least for mapping goals. I've seen past polls, if any, in the ...yc.com/news feed. How do I go about it?
To learn a solid theoretical foundation: Caltech's "Learning from data" [1]. It is one of those rare courses where the professor is so good that he manages to make tricky concepts seem almost trivial (like Paar's "Understanding cryptography").
Look elsewhere if all you want is to learn tools and start practical projects ASAP without really understanding what you are doing. Tools come and go and will serve you for a while; concepts are timeless and will serve you for life. You need both, of course, but I wouldn't skip the theory, especially when such an amazing course is on tap.
The online Stanford courses--CS224, CS229, CS231--are an excellent introduction to modern AI. CS231 with Andrej Karpathy in particular was a game-changer for me. It has three very thorough, well-designed assignments that have you implement many of the basic algorithms discussed in the course.
The prerequisite linear algebra for these subjects can be learned through Gilbert Strang's MIT course. A basic grounding in statistics and probability theory, along with calculus, will also help.
sidkhanooja|7 years ago
Instead, start with this - https://www.fast.ai
k4ch0w|7 years ago
ihvck|7 years ago
philonoist|7 years ago
jackallis|7 years ago
charlysl|7 years ago
[1] https://work.caltech.edu/telecourse.html
ultrasounder|7 years ago
luhego|7 years ago
PROS
- It strikes a good balance between theory and practice.
- There are lectures covering both the theory and the practice.
- There are practical assignments you need to code in Python.
- It includes in-class Kaggle competitions.
- It includes a rating system so you can compare your progress with other students.
CONS
- It has some prerequisites: you need to know Python (at a basic level) and some basic math (calculus, linear algebra, etc.).
- It is a difficult course. You will need between 5 and 10 hours each week for assignments, and each week is usually harder than the previous one.
You can find more details in this post: https://www.kaggle.com/general/77771
Isamu|7 years ago
Facebook field guide to machine learning: https://research.fb.com/the-facebook-field-guide-to-machine-...
Training on Machine Learning with AWS: https://aws.amazon.com/training/learning-paths/machine-learn...
simple_phrases|7 years ago
nwsm|7 years ago
All lessons in R and Python. Wide range of content.
notoriousjpg|7 years ago
briga|7 years ago
solomatov|7 years ago
It's a recording of CS229 at Stanford. This course is much harder and more thorough than the one on Coursera.
UsernameTaken5|7 years ago
jackallis|7 years ago
cromiium|7 years ago
source99|7 years ago
priyanka2019|7 years ago
[deleted]