But the model doesn't need to read node_modules to write a React app; it just needs to write the React code (which it is heavily post-trained to use). So a fair counterexample is something like:
function Hello() {
  return <button>Hello</button>;
}
Fair challenge to the idea. But what I am saying is that every line of boilerplate, every import statement, and every configuration file consumes precious tokens.
anditherobot|1 month ago
The more code, the more surface area the LLM needs to cover before it can understand or implement anything correctly.
Right now the workaround for expensive token limits is to reach for the most token-efficient technology. Let's reframe the question: was React made to help humans organize code, or machines?
Is the high code-to-functionality ratio (3 lines that do real work versus 50 lines of setup) really necessary?
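The ratio in question can be made concrete with a hypothetical zero-dependency take on the Hello button from earlier in the thread; `helloButton` is an illustrative name, not something from the comment:

```javascript
// A sketch of the "3 lines that do real work" side of the ratio:
// plain JavaScript, no imports, no build step, no configuration
// files for a model to read or reproduce before it can be correct.
function helloButton() {
  return "<button>Hello</button>";
}
```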
lucid-dev|1 month ago
At current prices you can pretty much get away with murder, even with the most expensive models out there: say, $14 per million output tokens. 10k output tokens is 14 cents, which is roughly 40k characters of output.
The way to use LLMs for development is through the API.