(no title)
jocoda | 11 months ago
For more esoteric, fast-changing languages/frameworks it has me chasing my tail in a chain of code updates where each fix breaks something in the n-1th or n-2th version. Sometimes it's deprecated code, or it hallucinates functions that would be valid if you were using a different language or framework. And sometimes it's simple coding errors.
But it will get better, a lot better.
The main benefit is that it will let an invested non-programmer client build a functional framework prototype, then combine that with a list of missing features that a more skilled programmer can flesh out into a first-cut solution.
For the first time we 'might' get better requirements, with an actual working model, instead of the implementor doing most of the requirements as a first pass from a high-level, hand-wavy spec. I think we're going to see some amazing tools for this.
What I don't see it doing is creating original algorithms to solve problems being tackled for the first time.
falkensmaize | 11 months ago
I see statements like this a lot when talking about AI in general. People seem to think it is a foregone conclusion that no limit to LLM model improvement and capability exists. What causes you to believe this and what evidence do you have to back it up?
tokioyoyo | 11 months ago
jocoda | 11 months ago
Now, about your comprehension skills: where is there any mention on my part of there being 'no limit'? In fact, I go so far as to speculate on at least one.