polyglotfacto | 24 days ago
Let me explain why:
> the resulting compiled output is over 60kb, far exceeding the 32k code limit enforced by Linux
Seems like a failure to me.
> I tried (hard!) to fix several of the above limitations but wasn’t fully successful. New features and bugfixes frequently broke existing functionality.
This has code smell written all over it.
----
Conclusion: this cost 20k to build, not taking into account the money spent on training the model. How much would you pay for this software? Zero.
The reality is that LLMs are up there with SQL and RoR (or above) in terms of changing how people write software and interact with data. That's a big deal, but not enough to support trillion-dollar valuations.
So you get things like this project, which are just about driving a certain narrative.
conception | 24 days ago
polyglotfacto | 23 days ago
It's like writing a novel in a week that no one wants to read. If in six months you can do it in an hour, there is still zero value.
Agents are useful but very limited tools: I treat them as little machines that can translate high-level instructions into detailed code, but I still need to review the output to make sure they understood what I meant; that's it. Zero autonomy; parallelism just means I can't keep up with the output, and quality goes down.
I think the point of this project, like the fastrender slop thing, is to push the parallel agent narrative and have the financial markets believe this will create a lot more demand for inference on these models in the short term.
Example of someone falling for it: https://x.com/DKThomp/status/2019484169915572452