There's a guy who makes YouTube videos like this a lot. He's a good, smart guy, but he seems to either not care or not realize he isn't helping. I've seen videos like "Computer Vision in 5 Lines" where the first line is "import helperclass.py" or something like that, and that helper class is some 1,500 lines of code he wrote himself. Sure, people need to be aware of what they're learning, but if someone with no programming experience finds that video, they're just going to think it's magic.
You always have to start somewhere. Do you want to write the whole OS from the ground up, or do you have minimal expectations about the environment your software will run on?
Do you want to look at all the tiny details, or do you wish to focus on a specific aspect of the problem? The article author could also give an explanation of linear algebra, but I guess he expects the reader to be familiar with this part of the problem.
I guess that there's always a software layer which may be considered "backbone". For people designing networks, all the tedious work of building the network is just plumbing, and they probably expect it to be automated from a formal description of the network.
Almost obligatory, but Stanford cs231n [1] lets you implement a complete (deep) neural network from scratch [2] before venturing into PyTorch/TF. It's super fun.
I can highly recommend Joel Grus’ live coding video. He creates a deep learning library only using numpy in an hour and it’s really fun to watch it all come together. https://youtu.be/o64FV-ez6Gw
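In the same spirit, here is a toy two-layer network in plain numpy. This is my own sketch, not Grus' code or the article's; the architecture (8 hidden sigmoid units, squared-error loss, XOR as the dataset) is just an assumption chosen to keep it small:

```python
import numpy as np

# Toy two-layer network trained on XOR with nothing but numpy.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass for L = 0.5 * sum((out - y)^2)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # gradient step
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # should approach [0, 1, 1, 0]
```

The whole "library" is the forward pass, the chain rule written out by hand, and a gradient step, which is roughly what the from-scratch exercises above have you build.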
Towards Data Science is definitely one of the best Medium publications. I hope they don't start putting things behind Medium's member-only paywall; it's such a drawback to Medium.
Sure! Optimizing for number of lines (instead of number of characters):
* Remove 'import numpy as n' and use __import__('numpy') in place of it everywhere
* Remove s and d functions; inline them where they're called
* Get rid of 'class N', as it's unnecessary. If adding globals is cheating, then you can do '__import__('math').x = x' instead of 'self.x = x' (yes this will work and persist).
* Technically, you don't have to print the result at the end
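To make the first and third tricks concrete, here is a small demonstration (the variable names are mine, not from the golfed gist):

```python
# Trick 1: __import__ returns the module object, so 'import numpy as n'
# can be dropped and the call inlined wherever it's needed.
a = __import__('numpy').array([1, 2, 3])

# Trick 3: modules are singletons, so stashing an attribute on one
# (here the math module) persists across later __import__ calls --
# a way to smuggle state around without globals or a class.
__import__('math').x = a
b = __import__('math').x

print(b.sum())  # 6
```

Terrible style, of course, but that is rather the point of code golf.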
Honest question: what are the reasons to code this up using OOP, creating a neural network object plus methods, instead of data structures and functions that operate on them?
If it’s just personal preference, I’m fine with that, I’m not trying to start a flame war.
Python, both the language and community, are very strong proponents of OOP. While you can do a lot of more functional stuff, esp. w/ functools, the community at large tends to discourage that. "Never use map/filter" is a weirdly common phrase among pythonistas. So this, like most python-driven examples, is doing things in a pythonic way.
Coming from the R side, I tend to prefer structures & functions as well, but if I tried to write Python that way I'd be wary about showing that code to anyone more entrenched in the Pythonic way of thinking.
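For what it's worth, the structures-and-functions version is perfectly workable in Python too. A sketch (all names are mine, purely illustrative): a "network" is just a list of parameter dicts, and free functions operate on it.

```python
import numpy as np

def dense(rng, n_in, n_out):
    """A layer is a plain dict of parameters, not an object."""
    return {"W": rng.normal(size=(n_in, n_out)) * 0.1, "b": np.zeros(n_out)}

def forward(layers, x):
    """A pure function over the data structure: no self, no mutation."""
    for layer in layers:
        x = np.tanh(x @ layer["W"] + layer["b"])
    return x

rng = np.random.default_rng(0)
net = [dense(rng, 4, 8), dense(rng, 8, 2)]
out = forward(net, np.ones((3, 4)))
print(out.shape)  # (3, 2)
```

The trade-off is mostly about where state lives: the class version bundles parameters with the methods that use them, while this version keeps parameters as inert data you can inspect, copy, or serialize trivially.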
Thanks for submitting; this is going to come in handy for my education! I have an assignment next week that requires creating a neural network, then substituting various optimizers in place of backpropagation and comparing performance over iterations. I've found that simple numpy-based NNs make it easier to examine the code and connect the changes with the theory, so this guide looks like it will help further with that understanding!
That sounds really fun! Do you get to pick the optimizers? Obviously direct gradient descent is a thing (calculate the partial derivative for every parameter), but particle swarm optimization might also be cool to profile.
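"Calculate the partial for every parameter" can even be done numerically, which makes a handy gradient-free baseline to compare other optimizers against. A central-difference sketch (my own helper, not from the article):

```python
import numpy as np

def numerical_grad(loss, params, eps=1e-5):
    """Central-difference estimate of d(loss)/d(params), one entry at a time.
    Costs two loss evaluations per parameter, so it's slow but dead simple."""
    grad = np.zeros_like(params)
    for i in range(params.size):
        p_plus, p_minus = params.copy(), params.copy()
        p_plus.flat[i] += eps
        p_minus.flat[i] -= eps
        grad.flat[i] = (loss(p_plus) - loss(p_minus)) / (2 * eps)
    return grad

# Sanity check on a quadratic, where the true gradient is 2 * p.
p = np.array([1.0, -2.0, 3.0])
g = numerical_grad(lambda q: float(np.sum(q ** 2)), p)
print(np.round(g, 4))  # close to [2, -4, 6]
```

For tiny assignment-sized networks the per-parameter cost is tolerable, and it gives you a ground truth to check backprop against.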
I suggest a remarkably useful device: four circular disks which, when attached to a cart, make it easy to move the cart from one location to another. Furthermore, you could put goods in that cart and move them too. A lot easier than carrying the goods on your back or on an animal. A brief search of the internet finds this is a new idea.
Another thing to try is calculating backpropagation by hand, on paper, with a small NN. This is what my mentor said his final exam on NN in college involved.
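For a net small enough to do on paper, each step of the pencil-and-paper chain rule can be written as one line of code and then checked against a finite difference. This is my own worked example (one sigmoid neuron, one sample, squared-error loss), not the exam in question:

```python
import numpy as np

# One sigmoid neuron, one sample. Each line is one chain-rule step
# exactly as you'd write it by hand.
x, w, b, t = 0.5, 0.8, 0.1, 1.0      # input, weight, bias, target

z = w * x + b                         # pre-activation
a = 1 / (1 + np.exp(-z))              # sigmoid activation
loss = 0.5 * (a - t) ** 2             # squared error

dloss_da = a - t                      # dL/da
da_dz = a * (1 - a)                   # sigmoid derivative da/dz
dz_dw = x                             # dz/dw
dloss_dw = dloss_da * da_dz * dz_dw   # chain rule: dL/dw

# Check the hand derivation against a finite-difference estimate.
eps = 1e-6
def L(w_):
    a_ = 1 / (1 + np.exp(-(w_ * x + b)))
    return 0.5 * (a_ - t) ** 2
numeric = (L(w + eps) - L(w - eps)) / (2 * eps)
print(dloss_dw, numeric)  # the two should agree closely
```

Doing this once by hand, then watching the numeric check agree, is a good way to convince yourself backprop is nothing more than the chain rule applied layer by layer.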
This is not as glorified as it sounds. Hundreds of thousands of students around the world build neural nets from scratch as homework. People, please move on and spend your time on something more productive. I coded a neural network in C++ in 2009 as homework, but I never thought about showing it to people.
a_bonobo | 7 years ago
$ import a_whole_bunch_of_stuff
Good to see that this is not the case here :)
The fast.ai course has a similar exercise in the beginning, but you'll still import the weights from somewhere else.
Their fast.ai v1 library has a very short implementation too (loading the MNIST example dataset and then using ResNet18):
Done! Source: http://docs.fast.ai/
sprobertson | 7 years ago
partycoder | 7 years ago
First, you get cake. Then you make it for 20 minutes. Then you have cake.
zodPod | 7 years ago
ShorsHammer | 7 years ago
It has around 100k LOC.
MrUnderBridje | 7 years ago
maurits | 7 years ago
[1] http://cs231n.stanford.edu/
[2] http://cs231n.github.io/assignments2017/assignment1/
drej | 7 years ago
lunchladydoris | 7 years ago
tomglynch | 7 years ago
vesche | 7 years ago
I've got a beer for anyone who can golf it under 5 lines.
applecrazy | 7 years ago
https://gist.github.com/applecrazy/deda2fac6e83c07b93e001731...
Edit: I literally took newlines, converted to \n in a string, and then exec()ed the whole thing. Here's a repl of it working: https://repl.it/@applecrazy/Code-Golfing-a-Neural-Net
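The trick generalizes: any multi-line program collapses to one "line" by joining its lines with '\n' and handing the string to exec(). A minimal toy version (mine, not the gist):

```python
# A three-line program squashed into a single exec() call.
exec('import numpy\nx = numpy.arange(4)\ntotal = int(x.sum())')
print(total)  # 6
```

At module scope exec() shares the caller's globals, so names defined inside the string (like total here) survive afterwards.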
earenndil | 7 years ago
Where do I go for my beer?
anonytrary | 7 years ago
plg | 7 years ago
claytonjy | 7 years ago
derekmcloughlin | 7 years ago
https://makeyourownneuralnetwork.blogspot.com/
https://www.amazon.com/dp/1530826608/ref=cm_sw_r_tw_dp_U_x_x...
First part goes into what a NN is and how backprop works, second part is an implementation in Python.
bluemania | 7 years ago
dnautics | 7 years ago
peter303 | 7 years ago
qbrass | 7 years ago
amasad | 7 years ago
https://repl.it/@turbio/neural-network
starpilot | 7 years ago
anonytrary | 7 years ago
wheresvic1 | 7 years ago
master_yoda_1 | 7 years ago
tobr | 7 years ago
Strange reaction to a “beginner’s guide”. How can someone move on before they've learned the basics?
deytempo | 7 years ago