Well, neither Linux nor LLVM loudly proclaimed that they would be the next Internet or GUI. So I'm inclined to believe this one won't be either, and that the person doing the proclaiming might be a little full of himself.
TinyGrad is GeoHot's system/compiler for mapping neural networks onto hardware. He consistently makes one point: because the exact number of cycles is known in advance, everything can be scheduled statically; there's no need for branch prediction or that kind of CPU machinery.
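To illustrate the scheduling point (toy op names and cycle counts, nothing from tinygrad itself): if the compute graph is a DAG with fixed per-op latencies, both the execution order and the total runtime are known before anything runs.

    # Toy static scheduler: op latencies are invented for illustration.
    from graphlib import TopologicalSorter

    # op -> (cycles, dependencies)
    graph = {
        "load_a": (4, []),
        "load_b": (4, []),
        "matmul": (128, ["load_a", "load_b"]),
        "relu":   (8, ["matmul"]),
        "store":  (4, ["relu"]),
    }

    order = list(TopologicalSorter({op: deps for op, (_, deps) in graph.items()}).static_order())
    total = sum(graph[op][0] for op in order)
    print(order)  # fixed execution order, decided ahead of time
    print(total)  # 148 cycles (serially) -- known before execution, every run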
mikewarot | 1 year ago
Essentially, he wants to be able to express programs, and even an operating system, as a directed acyclic graph of logical binary operations, so that you can have consistent and deterministic runtime behavior.
The bit about LLMs is a distraction, in my opinion.
zevv | 1 year ago
> he wants to be able to express programs, and even an operating system, as a directed acyclic graph of logical binary operations, so that you can have consistent and deterministic runtime behavior.
So how is this different from the digital logic synthesis we've been doing for CPLD/FPGA and chip design over the past few decades?
WoodenChair | 1 year ago
> While there may be a legacy Linux running in a VM to manage all your cloud phoning spyware, the core functionality of the lifelike device is boot to neural network.
No, I do not think future devices will be "boot to neural network." Traditional algorithms still have a place. Your robot vacuum cleaner (his example) may still use A* to plan routes, and quicksort to sort your cleaning runs by energy usage.
> Without CPUs, we can be freed from the tyranny of the halting problem.
Not sure what this means, but I think it still makes sense to have a CPU directing things, as in current architectures. You don't just have your neural engine; you also have your GPU, audio system, input devices, etc., and those all need a controller. Something needs to coordinate.
Think of it as unwinding a program all the way until it's just a list of instructions. You can know exactly how long that program will take, and it will always take that same time.
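A toy version of that unwinding (my own illustration, not from the article): a loop with a fixed trip count can be flattened into straight-line code whose instruction count, and hence runtime on simple hardware, is a constant.

    # "Unwinding" a fixed-trip-count loop into a flat instruction list.
    # With no branches left, the length of the program is a compile-time
    # constant, so on hardware with fixed per-op latency, so is its runtime.
    N = 4
    # original: for i in range(N): acc += data[i]
    unrolled = ["acc = 0"] + [f"acc += data[{i}]" for i in range(N)]
    print("\n".join(unrolled))
    print(len(unrolled), "instructions, every single run")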
He's got the kernel of a good idea. Deterministic data flows are a good thing. We keep almost getting there, with things like dataflow architectures, FPGAs, etc. But there's always a premature optimization for the silicon instead of the whole system, and that leads to failure over and over.
mikewarot | 1 year ago
He's wrong about using an LLM for general-purpose compute. Using math instead of logic isn't a good thing for many use cases. You don't want a database, or an FFT in a radar system, to hallucinate, for example.
My personal focus is on homogeneous, clocked, bit-level systolic arrays.[2] I'm starting to get the feeling the idea is really close to being a born secret[1], though, as it might enable anyone to make genuinely high-performance chips on any fab node.
[1] https://en.wikipedia.org/wiki/Born_secret
[2] https://github.com/mikewarot/Bitgrid
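To give the flavor of such an array (a toy of my own, not the actual Bitgrid cell definition): every cell is the same lookup table, and all cells update together on each clock tick, so any computation's timing is exact and identical on every run.

    # Toy clocked, homogeneous bit-level cell array (illustrative only).
    def tick(cells, lut):
        # Each cell reads (left neighbour, self); edges read 0. One call = one clock.
        return [lut[(cells[i - 1] if i > 0 else 0, cells[i])] for i in range(len(cells))]

    # LUT meaning "copy my left neighbour": a 1-bit shift register.
    shift = {(0, 0): 0, (0, 1): 0, (1, 0): 1, (1, 1): 1}

    cells = [1, 0, 0, 0]
    for t in range(4):
        print(t, cells)
        cells = tick(cells, shift)
    # The 1 marches one cell per tick: latency is exactly len(cells) ticks.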
You could still build an FFT in tinygrad and it would be as deterministic as its matmuls (so not bitwise deterministic, due to the non-associativity of floating-point math and the fact that GPUs don't guarantee execution order, but we are okay with that). The matmuls in the NNs don't hallucinate.
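Both halves of that are easy to demonstrate in plain numpy (my example, nothing tinygrad-specific): float addition isn't associative, and a DFT really is just a fixed matrix multiply.

    import numpy as np

    # 1) Floating-point addition is not associative, so reduction order matters:
    a, b, c = 1e16, -1e16, 1.0
    print((a + b) + c, a + (b + c))  # 1.0 0.0

    # 2) A DFT is a matmul with a constant matrix -- deterministic linear algebra:
    n = 8
    k = np.arange(n)
    F = np.exp(-2j * np.pi * np.outer(k, k) / n)  # the DFT matrix
    x = np.random.rand(n)
    print(np.allclose(F @ x, np.fft.fft(x)))  # True, up to rounding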
As a researcher and practitioner, I don't know why I should switch from PyTorch to Tinygrad. For kernel fusion, there is torch.compile. Not to mention there is a large ecosystem behind PyTorch, and almost every paper today is published with a PyTorch implementation. Probably where Tinygrad shines is bare-metal platforms?
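For reference, the torch.compile route looks like this (standard PyTorch 2.x usage; fusing elementwise chains is the documented behavior of its inductor backend):

    import torch

    def f(x):
        # Three elementwise ops that the compiler can fuse into one kernel
        # instead of three separate memory round-trips.
        return torch.relu(x * 2.0 + 1.0).sin()

    compiled = torch.compile(f)
    x = torch.randn(1024, 1024)
    print(torch.allclose(compiled(x), f(x), atol=1e-6))  # same math, fewer kernels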
nah not like he's talking about - TF and PT definitely punt all that down to tensorrt or hip or whatever. doesn't mean there's anything novel here - just that TF and PyTorch don't do it.
I generally don't read anything by gh, but I think he is cryptically just referring to something like XLA, whereby your NN architecture gets compiled straight to hardware, say to a custom ASIC or an FPGA bitstream, etc.
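You can already watch that lowering happen with JAX/XLA (ordinary jax.jit ahead-of-time inspection; the ASIC/FPGA step is the hypothetical part):

    import jax
    import jax.numpy as jnp

    def forward(w, x):
        return jax.nn.relu(w @ x)  # a one-layer "network"

    w, x = jnp.ones((4, 4)), jnp.ones((4,))
    # Lower the jitted function to HLO text -- the IR a backend compiles
    # for TPU/GPU today, or in principle for custom silicon.
    print(jax.jit(forward).lower(w, x).as_text()[:400])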
tmitchel2 | 1 year ago
It's definitely going to happen, but I don't think it will replace CPUs, much like human brains can't quite replace CPUs and what they are optimised for.
Trying to make out that TinyGrad is leading the charge in this is quite self-indulgent.
The only reason neural networks don't have control flow is that they are not very good. They are incredibly inefficient, and the only way to properly solve that is to introduce control flow, for example: https://arxiv.org/abs/2311.10770
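A toy version of that kind of conditional computation (my simplification of the idea, not the linked paper's code): route each input through only one of two expert layers, so most weights are never touched per sample.

    import torch

    gate = torch.nn.Linear(16, 1)
    experts = torch.nn.ModuleList([torch.nn.Linear(16, 16) for _ in range(2)])

    def forward(x):  # x: (16,) -- one sample, for clarity
        which = int(torch.sigmoid(gate(x)) > 0.5)  # data-dependent branch
        return experts[which](x)                   # only one expert runs

    print(forward(torch.randn(16)).shape)  # torch.Size([16])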
bryanlarsen | 1 year ago
"just a hobby, won't be big and professional like gnu"
TimSchumann | 1 year ago
Can someone please explain to me what this even means in this context?
Serious question.
orra | 1 year ago
https://news.ycombinator.com/item?id=36074287
You could say he had a history of using big words to talk shit.
WithinReason | 1 year ago
https://news.ycombinator.com/item?id=41623474
melodyogonna | 1 year ago
If I understand him correctly: if everything becomes a neural network, then he expects most of those networks to run on Tinygrad.
krackers | 1 year ago
Doesn't every ML framework have that?
akoboldfrying | 1 year ago
In the same way that we can be freed of the tyranny of being able to write a for loop.
almostgotcaught | 1 year ago
https://github.com/tinygrad/tinygrad/blob/master/extra/hip_g...
it is definitely not shippable
georgehotz | 1 year ago
We wrote entire NVIDIA, AMD, and QCOM drivers in that style.
https://github.com/tinygrad/tinygrad/blob/master/tinygrad/ru...
https://github.com/tinygrad/tinygrad/blob/master/tinygrad/ru...
https://github.com/tinygrad/tinygrad/blob/master/tinygrad/ru...