loneboat | 2 years ago
But when I see people say things like "Look! I used GPT to write a functioning webapp!", I worry that they get a false sense of "It works!" from pasting GPT's code into their compiler and seeing roughly the results they expect. That's great, but GPT in its current form spends exactly zero time "thinking" about corner cases; it's just a black box that repeatedly spits out the most likely next token. So maybe that app works 90% of the time. Or 95%. Or 99%. But you don't have much of a way to tell the difference without rigorous testing with thorough, well-articulated test cases. And to do that, you need to understand the problem you're solving in detail, and how your code behaves on it. And to do that, you need to... know how to write the program.
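To make that concrete, here's a hypothetical sketch (my own illustration, not actual model output): a plausible-looking function of the kind an LLM might emit, which passes a casual spot check but breaks on a corner case that only an explicit, well-articulated test would catch.

```python
# Hypothetical illustration: a "mostly works" median function.
# It is correct for odd-length input but silently wrong for even-length input.
def median(xs):
    xs = sorted(xs)
    return xs[len(xs) // 2]  # correct only when len(xs) is odd

# A casual spot check passes, so the code "looks right":
assert median([3, 1, 2]) == 2

# A corner-case test exposes the bug: for even-length input,
# the median should average the two middle values (2.5 here).
try:
    assert median([1, 2, 3, 4]) == 2.5
    print("even-length case: pass")
except AssertionError:
    print("even-length case: FAIL")  # this branch is taken
```

The point being: the spot check and the corner-case test are both "tests", but only the second one required actually understanding what a median is.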
I think this latest wave of LLMs and generative AI is really awesome tech, and I play with it every day, because it's just so cool. But seeing people trust programs written with them worries me. Someday someone is gonna copy/paste some LLM-generated code into mission-critical software, trust it implicitly, and cause a tragedy.