Imho, we should let people experiment as much as they want. Having more examples is better than fewer. Still, thanks for the link to the course; it's a top-notch one.
I'm very sorry, I should have phrased my original post in a kinder, less dismissive way, and kudos to you for not reacting badly to my rudeness. It is a cool repo and a great accomplishment. Implementing autograd is a great learning exercise, but in my opinion you're not going to get the performance or functionality of one of the large, mainstream autograd libraries. Karpathy, for example, throws micrograd away after implementing it and uses PyTorch in his later exercises. So it's great that you did this, but for others wanting to learn how autograd works, Karpathy's course is usually the better route, because the concepts are built up one by one and explained thoroughly.
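For anyone wondering what "implementing autograd" actually involves here: the core of the micrograd idea is a scalar value that remembers how it was produced, plus a backward pass that replays the chain rule over that graph. The sketch below is illustrative plain Python in that spirit, not micrograd's exact code or API.

    class Value:
        """A scalar that records its parents so gradients can flow back to them."""
        def __init__(self, data, _children=()):
            self.data = data
            self.grad = 0.0
            self._backward = lambda: None  # closure that pushes grad to the inputs
            self._prev = set(_children)

        def __add__(self, other):
            other = other if isinstance(other, Value) else Value(other)
            out = Value(self.data + other.data, (self, other))
            def _backward():
                self.grad += out.grad   # d(a+b)/da = 1
                other.grad += out.grad  # d(a+b)/db = 1
            out._backward = _backward
            return out

        def __mul__(self, other):
            other = other if isinstance(other, Value) else Value(other)
            out = Value(self.data * other.data, (self, other))
            def _backward():
                self.grad += other.data * out.grad  # d(a*b)/da = b
                other.grad += self.data * out.grad  # d(a*b)/db = a
            out._backward = _backward
            return out

        def backward(self):
            # Topologically sort the graph, then apply the chain rule in reverse.
            topo, visited = [], set()
            def build(v):
                if v not in visited:
                    visited.add(v)
                    for child in v._prev:
                        build(child)
                    topo.append(v)
            build(self)
            self.grad = 1.0
            for v in reversed(topo):
                v._backward()

    # d(x*y + x)/dx = y + 1 = 4, d(x*y + x)/dy = x = 2
    x, y = Value(2.0), Value(3.0)
    z = x * y + x
    z.backward()
    print(x.grad, y.grad)  # 4.0 2.0

That's the whole trick; the course then spends its time on the part that matters for understanding, i.e. building this up operation by operation and connecting it to real networks.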
Cleaner, more straightforward, more compact code, and complete within its stated scope (i.e. implement backpropagation with a PyTorch-y API and train a neural network with it). MyTorch, by contrast, appears to be the author's self-experiment without a concrete vision or plan. That's better for the author but worse for outside readers.
P.S. The course goes far beyond micrograd, to makemore (transformers), minbpe (tokenization), and nanoGPT (LLM training/loading).