Firstly, while I think it's beneficial to learn multiple languages (Python, R, MATLAB, Julia), I'd suggest picking one to avoid overwhelming yourself and freaking out. I'd suggest Python because there are great tools and lots of learning resources out there, plus most of the cutting-edge neural network action is in Python.
Then for overall curriculum, I'd suggest:
1. Start with basic machine learning (not neural networks). In particular, read through the scikit-learn docs and watch a few tutorials on YouTube. Spend some time getting familiar with Jupyter notebooks and pandas, and tackle some real-world problems (Kaggle is great, or Google around for datasets that excite you). Make sure you can solve regression, classification, and clustering problems, and understand how to measure the accuracy of your solution (understand things like precision, recall, MSE, overfitting, and train/test/validation splits).
2. Once you're comfortable with traditional machine learning, get stuck into neural networks by doing the fast.ai course. It's seriously good and will give you confidence in building near cutting-edge solutions to problems
3. Pick a specific problem area and watch a Stanford course on it (e.g. CS231n for computer vision or CS224n for NLP)
4. Start reading papers. I recommend Mendeley for keeping and organizing notes. The Stanford courses will mention papers; read those papers and the papers they cite.
5. Start trying out your own ideas and implementations.
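To make step 1 concrete, here's a minimal sketch of that workflow (the dataset and model are just placeholders; use whatever excites you): split off a test set, fit a classifier, then check precision and recall on the held-out data.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score

# A built-in binary classification dataset, purely for illustration
X, y = load_breast_cancer(return_X_y=True)

# Hold out a test set so accuracy is measured on data the model never saw
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Scale features, then fit a simple classifier
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

y_pred = model.predict(X_test)
print("precision:", precision_score(y_test, y_pred))
print("recall:", recall_score(y_test, y_pred))
```

Once this pattern feels natural, swapping in other models (random forests, gradient boosting) and other metrics is mostly a one-line change.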
While you do the above, supplement with:
* Talking Machines and O'Reilly Data Show podcasts
* Follow people like Richard Socher, Andrej Karpathy and other top researchers on Twitter
For those who like videos, I would highly recommend utilizing Andrew Ng's Coursera ML videos for step one. I found his lectures to be good high level overviews of those topics.
The course in general lacks rigor, but I thought it was a very good first step.
* Book: Hands-On Machine Learning with Scikit-Learn & TensorFlow (http://amzn.to/2vPG3Ur). Theory & code, starting from "shallow" learning (e.g. linear regression) with scikit-learn, pandas, and NumPy, then moving to deep learning with TF.
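For a taste of the "shallow" end that book starts from, here's a minimal linear regression sketch with pandas, NumPy, and scikit-learn (the synthetic data is mine, not an example from the book):

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Synthetic data: y = 3x + 2 plus a little noise
rng = np.random.default_rng(0)
df = pd.DataFrame({"x": rng.uniform(0, 10, 200)})
df["y"] = 3 * df["x"] + 2 + rng.normal(0, 0.5, 200)

# Fit ordinary least squares; the learned slope and intercept
# should land close to the true values of 3 and 2
model = LinearRegression()
model.fit(df[["x"]], df["y"])
print(model.coef_[0], model.intercept_)
```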
Online courses recommended in this thread are great resources to get your feet wet. If you want to actually be able to build ML-powered applications, or contribute to an MLE team, we've written a blog post that is a distillation of conversations with over 50 top teams (big and small) in the Bay Area. Hope you find it helpful!
Andrew Ng's tutorials[1] on Coursera are very good.
If you're into Python programming, then tutorials by sentdex[2] are also pretty good and cover things like scikit-learn, TensorFlow, etc. (more practical, less theory)
This doesn't actually answer the question, but I always think that people who want to study neural nets should read Marvin Minsky and Seymour Papert's Perceptrons. It's an academic work. It's short. It's incredibly well written and easy to understand. It shaped the history of neural net research for decades (err... stopped it, unfortunately :-) ). You should be able to find it at any university library.
Although this recommendation doesn't really fit the requirements of the poster, I think it is easy to reach first for modern, repackaged explanations and ignore the scientific literature. I think there is a great danger in that. Sometimes I think people are a bit scared to look at primary sources, so this is a great place to start if you are curious.
"Learn AI the Hard Way". It's actually just reading a bunch of papers and trying to implement them, and any time you don't understand something, spending as much time as needed until you get it.
I created a blog (http://ai.bskog.com) to have as a notepad and study backlog. There I keep track of what free courses I am currently taking and which one I will take next.
p.s.
Although video courses are good, everyday life sometimes makes it difficult to watch videos on YouTube while, for instance, doing chores around the house or working out, because you often need to (a) see the slides/code examples and (b) put them into practice right away. Podcasts, therefore, are good for giving you a flow of information.
Linear Digressions, Data Skeptic, and (thanks to this thread I now discovered) Machine Learning Guide
Don't be discouraged if there is stuff you don't understand, or if you feel like you can never remember these terms or that algorithm. Just immerse yourself in the information and things will fall into place. Later, when you hear about something again, it will make more sense. I tend to use a breadth-first approach to learning, where I get exposed to everything before digging into details, thus getting an overview of what I need to learn and where to start.
Just Q&A - no presentations. Study from whatever books (http://amlbook.com/ and http://www.deeplearningbook.org/ are popular in our group) or courses (Andrew Ng's are also popular) you like throughout the week and then show up with any questions you have. We've been meeting for a couple of months now and new folks are always welcome no matter where you are in your studies!
I did the "early years" of both statistics and tiny neural networks/perceptrons in college a long time ago. It also helps that I use math at work (anything from simulated 3D physics to DSP).
Since then, I've used Wikipedia and MathWorld when work needed it. Regression, random forests, simulated annealing, clustering, boosting, and gradient ascent are all on the statistics/ML spectrum.
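For a flavor of one of those building blocks, here's a toy gradient ascent in Python (my own illustration, not from those references): it climbs f(x) = -(x - 3)^2 toward its maximum at x = 3.

```python
# Maximize f(x) = -(x - 3)**2 by repeatedly stepping along the gradient.
def gradient_ascent(grad, x0, lr=0.1, steps=100):
    x = x0
    for _ in range(steps):
        x += lr * grad(x)  # move uphill, in the direction of the gradient
    return x

# f'(x) = -2 * (x - 3); the maximum is at x = 3
x_max = gradient_ascent(lambda x: -2 * (x - 3), x0=0.0)
print(x_max)  # converges toward 3
```

Gradient descent for training neural nets is the same idea with the sign flipped and the gradient computed over model parameters instead of a single scalar.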
But the best resource was running NVIDIA DIGITS, training some of the stock models, and really looking deeply at the visualizations available. You could do this on your own computer, or these days, rent a spot GPU instance on EC2 for cheap.
I highly recommend going through the DIGITS tutorials if you want a crash course in deep learning, and make sure to visualize all the steps! Try a few different network topologies and different depths to get a feel for how it works.
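DIGITS itself is a GUI tool, but the "try a few different network topologies and depths" advice can be sketched in plain scikit-learn (the toy dataset and layer sizes here are just illustrative):

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# A small nonlinear dataset that a linear model can't separate well
X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Compare a few topologies: narrow vs. wide vs. deeper hidden layers
for hidden in [(8,), (32,), (32, 32)]:
    clf = MLPClassifier(hidden_layer_sizes=hidden, max_iter=2000,
                        random_state=0)
    clf.fit(X_train, y_train)
    print(hidden, clf.score(X_test, y_test))
```

Watching how the test score changes as you vary width and depth gives you the same kind of intuition the DIGITS visualizations do, just without the pictures.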
Geoff Hinton's Coursera course was what got me into it. It's not for the faint of heart. I might recommend Andrej Karpathy's CS231n as a more up-to-date source today.
For the math: MIT OCW Scholar and maybe Klein's Coding the Matrix.
For AI specifically, MOOCS on Coursera, edx, and Udacity will give you plenty of options. The ones by big names like Thrun, Norvig, and Ng are great places to start.
It really helps to already be comfortable with algorithms. Princeton's Algorithms MOOCs by Bob Sedgewick on Coursera are a great way to build that foundation.
If you were to spend a year or so going through many of the resources presented here, and came to know your stuff pretty well (or at least as well as you could after a year), would anyone actually give you a job?
Nobody is "given" a job; you "earn" a job by convincing the hiring manager that you can do what they need done.
If you're any good, and have good results to show and talk about, yes, you could totally be employed.
If you show that you're extra willing to do all the heavy data preparation and labeling work yourself as well as the infrastructure that runs the models, you'll have an even easier time. Most people just want to play with models, and believe data preparation is "beneath" them, but that's actually where the meat is and where the success of the model is made or destroyed.
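As a hypothetical example of what that preparation work looks like in practice (the column names and data here are made up), a typical pandas cleaning pass handles missing values, inconsistent labels, and categorical columns before any model ever sees the data:

```python
import numpy as np
import pandas as pd

# Hypothetical raw data with the usual problems: missing values,
# inconsistently cased labels, and a text column a model can't use directly
raw = pd.DataFrame({
    "age": [34, np.nan, 29, 41],
    "city": ["NY", "ny", "SF", "SF"],
    "label": ["spam", "ham", "ham", "spam"],
})

df = raw.copy()
df["age"] = df["age"].fillna(df["age"].median())   # impute missing values
df["city"] = df["city"].str.upper()                # normalize label casing
df = pd.get_dummies(df, columns=["city"])          # one-hot encode categories
df["label"] = (df["label"] == "spam").astype(int)  # numeric target
print(df)
```

Multiply this by dozens of messy sources and you have a realistic picture of where the time goes.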
It depends what sort of a job you have in mind. If you wanted the sort of job where you spend all day, every day doing ML/DL/AI stuff, then no; that's a pure research job and probably needs a PhD. But the life of an ordinary working data scientist isn't like that: you would spend 75% of your time acquiring and cleaning/pre-processing data (including all the organizational tasks of finding it and persuading people to give it to you), 20% of your time trying to shepherd what you had created/discovered into a real, working production system, and maybe 5%, if you are lucky, on this sort of thing. You absolutely can learn everything you need to get to this level through MOOCs. The rest is down to your interview skills.
There are too many resources from which to choose. It would be thoughtful of anyone to share AI learning pathways, like a syllabus, using those resources.
I've always thought that ML/AI for me was about learning the languages that could express my idea of how it could work. In order to do that myself, I started reading about algorithm types.
There was one particular study piece that I remember reading, which I believe was written in the late 70s or early 80s, but I can't remember its name. It was an unformatted HTML university coursework document, and the guy who wrote it said he'd just keep changing it as required. I really wish I could remember his name.
I have a slightly different bent on what is discussed here, because my particular implementation reflects what I think is important. There are an infinite number of variations. It depends on what you think it might be good for.
You could do it in those languages. But it would be uphill all the way and you'd look back in a year and realise you'd expended 10x the effort trying to hammer a square peg into a round hole than it would have taken to just learn R, Python or MATLAB upfront.
I'm with the others on this. Never mind the cringe; he's all show, so much so that I think he's bluffing (doesn't know ML). He amps up the "character" so much that you're excited for the knowledge drop, but when it comes, it's so fast and technical that there's nothing to gain from it. The adage "if you can't explain something simply, you don't understand it" applies. I was hoping he understood ML well enough to boil things down; instead he spews equations and jargon so fast that (1) you can't catch it, and (2) I think he's just reading from a source. He doesn't go for essence, he goes for speed, and that's not helpful.
Again, the cringe isn't the problem directly; the problem is that it's a cover for his bluff. The result is a not-newbie-friendly resource.
I was watching a Twitch livestream where he was coding an RL thing. His code was just wrong (I paused it and looked through it), but it compiled anyway and started outputting stats, so he declared "I'm such a baller! It's learning!" and then quickly concluded the program. It's one thing to find his style annoying, but he is neither a strong thinker nor a strong coder.
I've personally found him to be more of a "showman" and a YouTube "star" than someone technically adept with data science. He is good at what he does, which is building cool things using cool tools/APIs.
But I wouldn't recommend him as a good resource for learning core ML or figuring out how stuff works internally.
He just pipes input through a bunch of libraries that are available off the shelf. Does that produce useful output? Sure. Could he write any of them himself, or explain how any of them work beyond a superficial overview? I doubt it.
alexcnwy|8 years ago
Good luck and enjoy!
petrbela|8 years ago
* https://www.udacity.com/course/intro-to-artificial-intellige...
* https://www.udacity.com/course/machine-learning--ud262
Deep Learning:
* Jeremy Howard's incredibly practical DL course http://course.fast.ai/
* Andrew Ng's new deep learning specialization (5 courses in total) on Coursera https://www.deeplearning.ai/
* Free online "book" http://neuralnetworksanddeeplearning.com/
* The first official deep learning book by Goodfellow, Bengio, Courville is also available online for free http://www.deeplearningbook.org/
larrydag|8 years ago
Introduction to Statistical Learning http://www-bcf.usc.edu/~gareth/ISL/
Elements of Statistical Learning https://web.stanford.edu/~hastie/ElemStatLearn/
lefnire|8 years ago
* Podcast: Machine Learning Guide (http://ocdevel.com/podcasts/machine-learning). Commute/exercise backdrop to solidify theory. Provides curriculum & resources.
e_ameisen|8 years ago
https://blog.insightdatascience.com/preparing-for-the-transi...
Disclaimer: I work for Insight
superasn|8 years ago
[1] https://www.coursera.org/learn/machine-learning
[2] https://pythonprogramming.net/data-analysis-tutorials/
iamkeyur|8 years ago
https://github.com/ChristosChristofidis/awesome-deep-learnin...
https://github.com/josephmisiti/awesome-machine-learning
melonkernel|8 years ago
1. Deep Learning Summer School Montreal 2016 https://sites.google.com/site/deeplearningsummerschool2016/h...
2. selfdrivingcars.mit.edu + youtube playlist "MIT 6.S094: Deep Learning for Self-Driving Cars" (https://youtu.be/1L0TKZQcUtA?list=PLrAXtmErZgOeiKm4sgNOknGvN...)
3. Coursera: Machine Learning with Andrew Ng
4. Stanford CS231n (https://www.youtube.com/watch?v=g-PvXUjD6qg&list=PLlJy-eBtNF...)
5. Deep Learning School 2016 (https://www.youtube.com/playlist?list=PLrAXtmErZgOfMuxkACrYn...)
6. Udacity: Deep Learning (https://www.udacity.com/course/deep-learning--ud730)
cs702|8 years ago
https://unsupervisedmethods.com/my-curated-list-of-ai-and-ma...
HN thread: https://news.ycombinator.com/item?id=14764700
mongodude|8 years ago
http://blog.paralleldots.com/data-scientist/list-must-read-b...
Toast_|8 years ago
https://gallery.cortanaintelligence.com/
Frogolocalypse|8 years ago
http://machinelearningmastery.com/a-tour-of-machine-learning...
palerdot|8 years ago
It is quirky and funny, and above all very short and crisp, and gives you a quick overview of things. Most of his videos are related to AI/ML.