0xFACEFEED | 1 year ago
It reminds me of a time (long ago) when the trend/fad was building applications visually. You would drag and drop UI elements and define logic using GUIs. Behind the scenes the IDE would generate code that linked everything together. One of the selling points was that underneath the hood it's just code so if someone didn't have access to the IDE (or whatever) then they could just open the source and make edits themselves.
It obviously didn't work out. But not because of scope or scale (something AI code generation solves); rather because, it turns out, writing maintainable, secure software takes a lot of careful thought.
I'm not talking about asking an AI to vomit out a CRUD UI. For that I'm sure it's well suited and the risk is pretty low. But as soon as you introduce domain-specific logic or non-trivial things connected to the real world, it requires thought. Oftentimes you need to spend more time thinking about the problem than writing the code.
I just don't see how "guidance" of an LLM gets anywhere near writing good software outside of trivial stuff.
Aeolun | 1 year ago
That’s not a failure of the AI writing that 100 line monstrosity, it’s a failure of you deciding to actually use the thing.
If you know what 20 lines are necessary and the AI doesn’t output that, why would you use it?
pama | 1 year ago
If the function is fast to evaluate and you have thorough test coverage, you could iterate with an LLM that aims to compress it down to a simpler/shorter version that behaves identically to the original function. Of course, brevity for the sake of brevity can backfire: less code is not always clearer or simpler to understand than the original. LLMs are very good at mimicking code style, so show them a lot of your own code, ask them to mimic it, and you may be surprised.
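The iteration loop described above can be sketched roughly like this. Here `propose_shorter` is a hypothetical stand-in for the LLM call (it returns a hand-written candidate so the sketch is runnable); the point is that a candidate is only accepted if it passes the same test cases as the original:

```python
def original(xs):
    # Verbose reference implementation: sum of squares of even numbers.
    total = 0
    for x in xs:
        if x % 2 == 0:
            total = total + x * x
    return total

def propose_shorter():
    # Hypothetical stand-in for "ask the LLM for a shorter equivalent".
    return lambda xs: sum(x * x for x in xs if x % 2 == 0)

# The thorough test coverage that gates acceptance of any rewrite.
TEST_CASES = [[], [1, 3], [2], [1, 2, 3, 4], list(range(10))]

def behaves_identically(f, g):
    return all(f(case) == g(case) for case in TEST_CASES)

# Accept the candidate only if it matches the original on every test case.
candidate = propose_shorter()
best = candidate if behaves_identically(original, candidate) else original

print(best([1, 2, 3, 4]))  # 4 + 16 = 20
```

In a real loop you would feed failing cases back into the next prompt and keep iterating until the tests pass or you give up.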
fullstackchris | 1 year ago
At the end of the day, we're engineers that write complex symbols on a 2d canvas, for something that is (ultimately, even if the code being written is machine to machine or something) used for some human purpose.
Now, if those complex symbols are readable, fully covered by tests, and meet the requirements/specifications, I don't see why I should care whether a human, an AI, or a monkey generated them. If it meets the spec, it meets the spec.
Seems like most people in these threads are making arguments against others who are describing usage of these tools in a grossly incorrect manner from the get go.
I've said it before in other AI threads: I think at least half of the noise and disagreement around AI-generated code comes from a bunch of people trying to use a hammer when they needed a screwdriver, then complaining that the hammer didn't work like a screwdriver! I just don't get it. When you're dealing with complex systems, i.e., reality, these tools (or any tool, for that matter) will never work like a magic wand.
nprateem | 1 year ago
The real problem with AI coding is not knowing in advance when it's going to spin its wheels and cycle through stupid answers. I lose 20 minutes at a time that way, because it always seems like it needs just one more prompt, but in the end I have to step in, either with code or by telling it specifically where the bug is.
canadianfella | 1 year ago
[deleted]