top | item 43498911

(no title)

nonchalantsui | 11 months ago

Your use case is in fact in the top whatever percentile for AI usefulness. Short simple scripting that won't have to be relied on due to never being widely deployed. No large codebase it has to comb through, no need for thorough maintenance and update management, no need for efficient (and potentially rare) solutions.

The only use case that would beat yours is the type of office worker that cannot write professional sounding emails but has to send them out regularly manually.


MostlyStable|11 months ago

I fully believe it's far better at the kind of coding/scripting that I do than the kind that real SWEs do, if for no other reason than that the coding I do is far, far simpler, so of course it's going to do better at it. However, I don't believe coding is the only use case. There's a whole universe of other use cases that probably also get a lot of value from LLMs.

I think HN has a lot of people working on large software projects that are incredibly complex, with a huge number of interdependencies, etc., and LLMs aren't quite to the point that they can usefully contribute to that except around the edges.

But I don't think that generalizing from that failure is very useful either. Most things humans do aren't that hard. There is a reason that SWE is one of the best paid jobs in the country.

mattmanser|11 months ago

Even a one-month project with one good senior engineer working on it will grow to 20+ different files and 5,000+ lines of code.

Real programming is on a totally different scale than what you're describing.

I think that's true for most jobs. Superficially, an AI looks like it can do the job well.

But LLMs:

1. Hallucinate all the time. If they were human we'd call them compulsive liars

2. They are consistently inconsistent, so they're useless for automation

3. Are only good at things they can copy from their training data. They can't create, only regurgitate other people's work

4. AI influencing hasn't happened yet, but it will soon start making LLMs useless, much like SEO has ruined search. You can bet people are already seeding the internet with advertising and misinformation aimed solely at AIs and AI reinforcement

throwaway290|11 months ago

It's not about the size; it's more about whether the task is trivial.

kerkeslager|11 months ago

And... I know people who now use AI to write their professional-sounding emails, and they often don't sound as professional as they think. If you aren't careful, it's easy to skim what an AI generates and decide it's okay to send. But the people you send those emails to actually have to read and try to understand what was written, and doing that surfaces things a brief skim doesn't catch.

It's actually extremely irritating that I'm only half talking to the person when I email with these people.

skydhash|11 months ago

It's kind of like machine-translated novels. You have to be really passionate about the novel to endure those kinds of translations. That's when you realize how much work novel translators do to get a coherent result.