Would be ridiculously inefficient, while also being nondeterministic and opaque. Impossible to debug, verify, or test anything, and thus would be unwise to use for almost any kind of important task.
But maybe for a very forgiving task you can reduce developer hours.
As soon as you need to start doing any kind of custom training of the model, then you are reintroducing all developer costs and then some, while the other downsides still remain.
And if you allow users of your API to train the model, that introduces a lot of issues. see: Microsoft's Tay chatbot
Also you would need to worry about "prompt injection" attacks.
> Would be ridiculously inefficient, while also being nondeterministic and opaque. Impossible to debug, verify, or test anything, and thus would be unwise to use for almost any kind of important task.
Not to defend a joke app, but I have worked in “serious” production systems that for all intents and purposes were impossible to recreate bugs in to debug. They took data from so many outside sources that the “state” of the software could not be easily replicated at a later time. Random microservice failures littered the logs and you could never tell if one of them was responsible for the final error.
Again, not saying GPT backend is better but I can definitely see use-cases where it could power DB search as a fall-through condition. Kind of like the standard 404 error - did you mean…?
Absolutely this. It's a solution looking for a problem.
If the developer task is really so trivial why not just have a human write actual code?
And even if it is actual code instead of a Rube Goldberg-esque restricted query service, I still don't think there's ever any time saved using AI for anything. Unless you also plan on assigning the code review to the AI, a human must be involved. To say that the reviews would be tedious is an understatement. Even the most junior developer is far more likely to comprehend their bug and fix it correctly. The AI is just going to keep hallucinating non-existent APIs, haphazardly breaking linter rules, and writing in plagiarized anti-patterns.
Thank you for this human-generated connection [I can still safely and statistically presume].
I already know personally how incredible and what GPT-like systems are capable, and I've only "accepted" this future for about six weeks. Definitely having to process multitudes (beyond technical) and start accepting that prompt engineering is real and that there are about to be more jobless than just losing the trucking industry to AI [largest employer of males in USA] — this is endemic.
The sky is falling. The sky is also blue (this is the stupidest common question GPT is getting right now; instead ask "Why do people care that XYZ is blue/green/red/white/bad/unethical?"
nwah1|3 years ago
But maybe for a very forgiving task you can reduce developer hours.
As soon as you need to start doing any kind of custom training of the model, then you are reintroducing all developer costs and then some, while the other downsides still remain.
And if you allow users of your API to train the model, that introduces a lot of issues. see: Microsoft's Tay chatbot
Also you would need to worry about "prompt injection" attacks.
chime|3 years ago
Not to defend a joke app, but I have worked in “serious” production systems that for all intents and purposes were impossible to recreate bugs in to debug. They took data from so many outside sources that the “state” of the software could not be easily replicated at a later time. Random microservice failures littered the logs and you could never tell if one of them was responsible for the final error.
Again, not saying GPT backend is better but I can definitely see use-cases where it could power DB search as a fall-through condition. Kind of like the standard 404 error - did you mean…?
sublinear|3 years ago
If the developer task is really so trivial why not just have a human write actual code?
And even if it is actual code instead of a Rube Goldberg-esque restricted query service, I still don't think there's ever any time saved using AI for anything. Unless you also plan on assigning the code review to the AI, a human must be involved. To say that the reviews would be tedious is an understatement. Even the most junior developer is far more likely to comprehend their bug and fix it correctly. The AI is just going to keep hallucinating non-existent APIs, haphazardly breaking linter rules, and writing in plagiarized anti-patterns.
herculity275|3 years ago
ProllyInfamous|3 years ago
I already know personally how incredible and what GPT-like systems are capable, and I've only "accepted" this future for about six weeks. Definitely having to process multitudes (beyond technical) and start accepting that prompt engineering is real and that there are about to be more jobless than just losing the trucking industry to AI [largest employer of males in USA] — this is endemic.
The sky is falling. The sky is also blue (this is the stupidest common question GPT is getting right now; instead ask "Why do people care that XYZ is blue/green/red/white/bad/unethical?"