top | item 38597869

loneboat | 2 years ago

Even at runtime I don't put much trust in the results. A large part of the act of programming is identifying and handling corner cases. You never manage to handle EVERY corner case, and the missed ones result in frustrating debugging sessions. But a competent programmer can cover enough cases up front that the time spent debugging is manageable.

But when I see people say things like "Look! I used GPT to write a functioning webapp!" - I worry that they get a false sense of "It works!" from pasting GPT's code into their compiler and seeing roughly the results they expect. That's great, but GPT in its current form spends exactly zero time "thinking" about corner cases - it's just a black box that repeatedly spits out the most likely next token. So maybe that app works 90% of the time. Or 95%. Or 99%. But you don't have much of a way to tell the difference without rigorous testing with thorough, well-articulated test cases. To do that, you need to understand the problem you're solving in a very detailed way, and how your code reacts to it. And to do that, you need to... know how to write the program.
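To make the failure mode concrete, here's a hypothetical sketch in Python: a helper that passes the obvious happy-path check but still hides a corner case. The `chunk` function and its inputs are invented for illustration, not taken from any actual GPT output.

```python
def chunk(items, size):
    """Split a list into consecutive chunks of length `size`."""
    # Looks correct, and works for the inputs you're likely to try first.
    return [items[i:i + size] for i in range(0, len(items), size)]

# Happy-path check - the kind of "it works!" result the comment describes:
assert chunk([1, 2, 3, 4], 2) == [[1, 2], [3, 4]]

# Corner cases that only deliberate, well-articulated tests would probe:
assert chunk([], 3) == []                  # empty input happens to be fine
assert chunk([1, 2, 3], 5) == [[1, 2, 3]]  # size > len(items): also fine

# ...but size == 0 was never considered, and blows up at runtime:
try:
    chunk([1, 2, 3], 0)
except ValueError:
    pass  # range() rejects a step of 0 - a missed corner case
```

Whether that counts as "works 95% of the time" or "works 99% of the time" depends entirely on how often `size == 0` shows up in real inputs - which is exactly the point.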

I think this latest wave of LLMs and generative AI is really awesome tech, and I play with it every day, because it's just so cool. But seeing people trust programs written with them worries me. Some day someone is gonna copy/pasta some LLM generated code into mission critical software, trusting it implicitly, and cause a tragedy.

fragmede | 2 years ago

So tell the LLM you want it to handle corner cases and it will add code to handle them. It can also generate unit tests for those corner cases. LLMs have fundamentally changed programming. There's still skill required to do it well, but we're a long way from Borland Turbo C on DOS.
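For a sense of what "unit tests for those corner cases" might look like, here's a minimal sketch using Python's stdlib `unittest`. `parse_port` is an invented example function, not anything from the thread; the tests deliberately hit the boundaries rather than the happy path.

```python
import unittest

def parse_port(s):
    """Parse a TCP port number from a string, rejecting out-of-range values."""
    n = int(s.strip())  # raises ValueError on non-numeric input
    if not 0 < n < 65536:
        raise ValueError(f"port out of range: {n}")
    return n

class PortCornerCases(unittest.TestCase):
    def test_happy_path(self):
        self.assertEqual(parse_port("8080"), 8080)

    def test_surrounding_whitespace(self):
        self.assertEqual(parse_port(" 443\n"), 443)

    def test_zero_rejected(self):
        with self.assertRaises(ValueError):
            parse_port("0")

    def test_overflow_rejected(self):
        with self.assertRaises(ValueError):
            parse_port("70000")

    def test_garbage_rejected(self):
        with self.assertRaises(ValueError):
            parse_port("eighty")
```

Run with `python -m unittest`. Of course, whether the LLM's generated tests actually cover the boundaries that matter is the disputed question here.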

habinero | 2 years ago

That doesn't work. It can't think through corner cases, because LLMs don't think. They aren't actually synthesizing or revising anything.