I had a related episode at work when my coworker asked me why his seemingly trivial 10-line piece of code was misbehaving inexplicably. It turned out he had two variables, `file_name` and `filename`, and had used one in place of the other. When I asked him how he ended up with such code, he said he had used Copilot to create it. Using code from a generative AI without understanding what it does is never a good idea.
delusional|1 year ago
if [ -z "${Var}+x" ]
I can see what the author was trying to do, but the code is just wrong.
I don't mind people not knowing stuff, especially when it's essentially Bash trivia. But what broke my heart was when I pointed out the problem and linked to the documentation, only to receive the response "I don't know what it means, I just used Copilot", followed by him simply removing the code.
What a waste of a learning opportunity.
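For anyone curious, the working idiom (as I understand it) puts the `+x` inside the braces, so the expansion itself is empty only when the variable is unset; this is a sketch using the `Var` name from the snippet above:

```shell
# "${Var+x}" expands to "x" when Var is set (even to ""), and to nothing
# when Var is unset, so -z on it tests "is Var unset?".
# The broken form "${Var}+x" always contains the literal "+x", so its
# -z test can never succeed.
unset Var
if [ -z "${Var+x}" ]; then
    echo "Var is unset"
fi

Var=""
if [ -z "${Var+x}" ]; then
    echo "Var is unset"
else
    echo "Var is set (to an empty string)"
fi
```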
falcor84|1 year ago
There were many times in my career when I had what I expected to be a one-off issue needing a quick solution, and I would look for a quick and simple fix with a tool I'm unfamiliar with. I'd say that 70% of the time the thing "just works" well enough after testing; 10% of the time it doesn't quite work, but I feel it's a promising approach and I'm motivated to learn more in order to get it to work; and the remaining 20% of the time I discover that it's just significantly more complex than I thought it would be and prefer to abandon the approach in favor of something else. I've never regretted that last case.
I obviously lose a lot of learning opportunities this way, but I'm also sure I saved myself from going down many very deep rabbit holes. For example, I accepted that I'm not going to try and master sed&awk - if I see it doesn't work with a simple invocation, I drop into Python.
imtringued|1 year ago
It is not nonsense. You use that expression when you want to check whether a variable exists at all (as opposed to being set to an empty string), which is an extremely common problem.
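A quick sketch of the distinction (the variable name `v` is made up): plain `-z "$v"` can't tell an unset variable from one set to the empty string, while `-z "${v+x}"` is true only when the variable is genuinely unset:

```shell
unset v
[ -z "$v" ]     && echo '-z: looks empty'        # unset looks empty
[ -z "${v+x}" ] && echo '+x: really unset'       # v is truly unset

v=""
[ -z "$v" ]     && echo '-z: still looks empty'  # -z cannot tell the difference
[ -z "${v+x}" ] || echo '+x: set, just empty'    # expansion is "x", so v is set
```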
f6v|1 year ago
I think the problem with "AI" code is that many people hold an almost religious belief in it. There are weirdos on the internet who say that AGI is a couple of years away. And by extension, current AI models are seen as incapable of making a mistake when writing code.
jazz9k|1 year ago
His clients were usually older small business owners that just wanted a web presence. His rate was $5000/site.
Within a few years, business dried up and he had to do something completely different.
He also hosted his own SMTP server for clients. It was an old server on his cable modem in a dusty garage. I helped him prevent spoofing/relaying a few times, but he kept tinkering with the settings and it would happen all over again.
falcor84|1 year ago
Arguably the term for a bad idea that works is "good idea"
yawnxyz|1 year ago
Asking it to refactor / fix it made things worse, because it'd get confused and merge them into a single variable. The problem was they had slightly different uses, which broke everything.
I had to step through the code line by line to fix it.
Using Claude is still faster for me, as it'd probably take me a week to write the code in the first place.
BUT there are probably a lot of traps like this hidden everywhere, and those will rear their ugly heads at some point. I wish there were a good test generation tool to go with the code generation tool...
danenania|1 year ago
Having mistakes in context seems to 'contaminate' the results and you keep getting more problems even when you're specifically asking for a fix.
It does make some sense as LLMs are generally known to respond much better to positive examples than negative examples. If an LLM sees the wrong way, it can't help being influenced by it, even if your prompt says very sternly not to do it that way. So you're usually better off re-framing what you want in positive terms.
I actually built an AI coding tool to help enable the workflow of backing up and re-prompting: https://github.com/plandex-ai/plandex
davidthewatson|1 year ago
https://news.ycombinator.com/item?id=40922090
LSS: metaprogramming tests is not trivial but straightforward, given that you can see the code, the AST, and associated metadata, such as generating test input. I've done it myself, more than a decade ago.
I've referred to this as a mix of literate programming (noting the traps you referred to, and how anachronistic they feel relative to both the generated tests and the generated code under test) and human-computer sensemaking: what the AI sees is often, at best, a gap in its symbolic representation, imaginary rather than real, and so it requires iterative correction to hit its user's target, just like a real test team interacting with a dev team.
In my estimation, it's actually harder to explain than it is to do.
ben_w|1 year ago
True, but the anecdote doesn't prove the point.
It's easy to miss that kind of difference even if you wrote the code yourself.
latexr|1 year ago
The developer in the story had no idea what the code did, hence they would not have written it themselves, making it impossible for them to “miss” anything.
bckr|1 year ago
Yes.
AI as a faster way to type: Great!
AI as a way to discover capabilities: OK.
Faster way to think and solve problems: Actively harmful.
berniedurfee|1 year ago
Artificial Incompetence indeed!
mooreds|1 year ago
Hear hear!
I feel like genAI is turning devs from authors to editors. Anyone who thinks the latter is lesser than the former has not performed both functions. Editing properly, to elevate the meaning of the author, is a worthy and difficult endeavor.