top | item 41622588

Tinygrad will be the next Linux and LLVM

29 points | alvivar | 1 year ago | twitter.com

37 comments

[+] saagarjha|1 year ago|reply
Well, neither Linux nor LLVM loudly proclaimed that they would be the next Internet or GUI. So I am inclined to believe that this will not be the case and the person doing the proclamation might be a little full of himself.
[+] bryanlarsen|1 year ago|reply
Interesting contrast to how Linux itself was first introduced:

"just a hobby, won't be big and professional like gnu"

[+] mikewarot|1 year ago|reply
TinyGrad is GeoHot's system/compiler to map neural networks onto hardware. He consistently points out this one point: because the exact number of cycles is known in advance, everything can be statically scheduled; there's no need for branch prediction or that sort of machinery in a CPU.

Essentially, he wants to be able to express programs, and even an operating system, as a directed acyclic graph of logical binary operations, so that you can have consistent and deterministic runtime behavior.
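To make that concrete, here's a toy sketch (not tinygrad code; the node format and evaluator are illustrative) of a computation expressed as a DAG of binary logical operations, in this case a 1-bit full adder. Evaluation walks a fixed node list, so the same work happens in the same order on every run, regardless of the input values:

```python
# Each op is a pure binary function on bits.
OPS = {"xor": lambda a, b: a ^ b, "and": lambda a, b: a & b, "or": lambda a, b: a | b}

# Each node: (name, op, input_a, input_b), listed in topological order.
FULL_ADDER = [
    ("s1", "xor", "a", "b"),
    ("c1", "and", "a", "b"),
    ("sum", "xor", "s1", "cin"),
    ("c2", "and", "s1", "cin"),
    ("cout", "or", "c1", "c2"),
]

def evaluate(dag, inputs):
    vals = dict(inputs)
    for name, op, x, y in dag:  # fixed order, no data-dependent branches
        vals[name] = OPS[op](vals[x], vals[y])
    return vals

out = evaluate(FULL_ADDER, {"a": 1, "b": 1, "cin": 1})
print(out["sum"], out["cout"])  # 1 + 1 + 1 = 0b11 -> sum=1, cout=1
```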

The bit about LLMs is a distraction, in my opinion.

[+] zevv|1 year ago|reply
> he wants to be able to express programs, and even an operating system, as a directed acyclic graph of logical binary operations, so that you can have consistent and deterministic runtime behavior.

So how is this different from digital logic synthesis for CPLD/FPGA or chip design we have been doing over the last decades?

[+] WoodenChair|1 year ago|reply
> While there may be a legacy Linux running in a VM to manage all your cloud phoning spyware, the core functionality of the lifelike device is boot to neural network.

No, I do not think future devices will be "boot to neural network." Traditional algorithms still have a place. Your robot vacuum cleaner (his example) may still use A* to plan routes, and Quicksort to sort your cleanings by energy usage.

> Without CPUs, we can be freed from the tyranny of the halting problem.

Not sure what this means but I think it still makes sense to have a CPU directing things as in current architectures. You don't just have your neural engine, you also have your GPU, Audio system, input devices, etc. and those need a controller. Something needs to coordinate.

[+] TimSchumann|1 year ago|reply
> Without CPUs, we can be freed from the tyranny of the halting problem.

Can someone please explain to me what this even means in this context?

Serious question.

[+] mikewarot|1 year ago|reply
Think of it as unwinding a program all the way until it's just a list of instructions. You can know exactly how long that program will take, and it will always take that same time.
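A toy illustration of that unwinding (not tinygrad code, just the idea): once the bounds are known, a loop can be flattened into a fixed, branch-free list of operations, and a static schedule with a knowable cycle count falls out:

```python
def dot_with_loop(xs, ys):
    # Conventional control flow: a branch per iteration.
    acc = 0
    for x, y in zip(xs, ys):
        acc += x * y
    return acc

def dot_unrolled_4(xs, ys):
    # The same computation, fully unrolled for a fixed size of 4.
    # No branches remain: a straight-line "list of instructions"
    # that does identical work on every run.
    return xs[0]*ys[0] + xs[1]*ys[1] + xs[2]*ys[2] + xs[3]*ys[3]

xs, ys = [1, 2, 3, 4], [5, 6, 7, 8]
print(dot_with_loop(xs, ys), dot_unrolled_4(xs, ys))  # both 70
```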
[+] mikewarot|1 year ago|reply
He's got the kernel of a good idea. Deterministic data flows are a good thing. We keep almost getting there, with things like data flow architectures, FPGAs, etc. But there's always a premature optimization for the silicon, instead of the whole system. This leads to failure, over and over.

He's wrong in the idea of using an LLM for general purpose compute. Using math instead of logic isn't a good thing for many use cases. You don't want a database, or an FFT in a Radar System to hallucinate, for example.

My personal focus is on homogeneous, clocked, bit level systolic arrays.[2] I'm starting to get the feeling the idea is really close to being a born secret[1] though, as it might enable anyone to really make high performance chips on any fab node.

[1] https://en.wikipedia.org/wiki/Born_secret

[2] https://github.com/mikewarot/Bitgrid

[+] KeplerBoy|1 year ago|reply
You could still build an FFT in tinygrad and it would be as deterministic as its matmuls (so not bitwise deterministic, due to the non-associativity of floating point math and the way GPUs don't guarantee execution order, but we are okay with that). The matmuls in the NNs don't hallucinate.
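The non-associativity point is easy to see without a GPU. Floating-point addition depends on grouping order, which is why parallel reductions whose accumulation order can vary need not be bitwise reproducible, even though each individual operation is exact and deterministic:

```python
# Classic demonstration: (a + b) + c != a + (b + c) in binary64.
a, b, c = 1e16, -1e16, 1.0

left = (a + b) + c   # a and b cancel first, then c survives: 1.0
right = a + (b + c)  # c is absorbed into b (rounded away), then cancellation: 0.0

print(left, right)  # 1.0 0.0
```

This is exactly why two runs of the same GPU reduction can differ in the last bits while both being "correct".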
[+] chenzhekl|1 year ago|reply
I don't know why I should switch from PyTorch to Tinygrad as a researcher and practitioner. For kernel fusion, there is torch.compile. Not to mention there is a large ecosystem behind PyTorch, and almost every paper today is published with a PyTorch implementation. Probably where Tinygrad shines is bare-metal platforms?
[+] skybrian|1 year ago|reply
I don’t understand the LLVM comparison. Is it somehow a compiler backend for conventional programming languages? Can you run C or Rust code?
[+] fuhsnn|1 year ago|reply
Me neither, it's like saying AI-dependency is the next freedom.
[+] melodyogonna|1 year ago|reply
Makes me wonder if he knows what LLVM does.

If I understand him correctly, if everything becomes a neural network then he expects most neural networks to use Tinygrad

[+] krackers|1 year ago|reply
> tinygrad has a hardware abstraction layer, a scheduler, and memory management. It's an operating system

Doesn't every ML framework have that?

[+] almostgotcaught|1 year ago|reply
nah not like he's talking about - TF and PT definitely punt all that down to tensorrt or hip or whatever. doesn't mean there's anything novel here - just that TF and PyTorch don't do it.
[+] tmitchel2|1 year ago|reply
I generally don't read anything by gh but I think he is cryptically just referring to something like XLA, whereby your NN architecture gets compiled straight to hardware, say to a custom asic, or to an FPGA bit stream, etc.

It's definitely going to happen, but I don't think it will replace CPUs, much like human brains can't quite replace CPUs and what they are optimised for.

Trying to make out that TinyGrad is leading the charge in this is quite self indulgent.

[+] akoboldfrying|1 year ago|reply
>Without CPUs, we can be freed from the tyranny of the halting problem.

In the same way that we can be freed of the tyranny of being able to write a for loop.

[+] WithinReason|1 year ago|reply
The only reason neural networks don't have control flow is because they are not very good. They are incredibly inefficient and the only way to properly solve that is to introduce control flow, for example: https://arxiv.org/abs/2311.10770
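The linked paper is about conditional computation (feedforward networks that activate only a fraction of their neurons per input). A toy sketch of the idea, with illustrative names not taken from the paper: a "dense" layer runs every expert on every input, while a "routed" layer uses control flow to run just one, trading a data-dependent branch for a large cut in compute:

```python
# Three tiny "experts"; in a real network these would be sub-layers.
experts = [lambda x: 2 * x, lambda x: x + 10, lambda x: x * x]

def dense(x):
    # No control flow: every expert runs, outputs are combined.
    return sum(f(x) for f in experts) / len(experts)

def routed(x):
    # Control flow: a cheap routing decision picks one expert, so only
    # 1/len(experts) of the work runs. This is what breaks the fixed,
    # branch-free schedule that pure matmul pipelines enjoy.
    return experts[x % len(experts)](x)

print(dense(4), routed(4))  # routed picks expert 1: 4 + 10 = 14
```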
[+] FloatArtifact|1 year ago|reply
Great... Does this mean my pc will hallucinate kernel panics when it doesn't even have a kernel?
[+] almostgotcaught|1 year ago|reply
no it won't, because while hitting ioctls in python is cute

https://github.com/tinygrad/tinygrad/blob/master/extra/hip_g...

it is definitely not shippable

[+] timkq|1 year ago|reply
I can't say anything on the performance, but inline assembly in Python is crazy
[+] carrja99|1 year ago|reply
Isn’t this the guy who joined Twitter as an intern to “fix” search?
[+] djaouen|1 year ago|reply
Yeah, good luck with that, lol