item 46614281


schnitzelstoat | 1 month ago

I agree that all the AI doomerism is silly (by which I mean concern about some Terminator-style machine uprising; the economic issues are quite real).

But it's clear LLMs have some real value. Even if we always need a human in the loop to prevent hallucinations, they can still massively reduce the amount of human labour required for many tasks.

NFTs felt like a con, and in retrospect were a con. LLMs are clearly useful for many things.


latexr|1 month ago

Those aren’t mutually exclusive; something can be both useful and a con.

When a con man sells you a cheap watch for a high price, what you get is still useful—a watch that tells the time—but you were also still conned, because what you paid for is not what was advertised. You overpaid because you were tricked about what you were buying.

LLMs are useful for many things, but they’re also not nearly as beneficial and powerful as they’re being sold as. Sam Altman, while entirely ignoring the societal issues raised by the technology (such as the spread of misinformation and unhealthy dependencies), repeatedly claims it will cure all cancers and other diseases, eradicate poverty, solve the housing crisis, democracy… Those claims are bullshit, so the con description applies.

https://youtu.be/l0K4XPu3Qhg?t=60

BoxOfRain|1 month ago

I think the following things can both be true at the same time:

* LLMs are a useful tool in a variety of circumstances.

* Sam Altman is personally incentivised to spout a great deal of hyped-up rubbish about both what LLMs are capable of now and what they could become capable of.

dgxyz|1 month ago

I disagree with this perspective. Human labour is mostly inefficiency born of habitual repetition. LLMs tend not to improve that. They look like they do, but instead they train the user to replace human repetition with machine repetition.

We had an "essential" reporting function in the business which was done in Excel. All SMEs seem to have little pockets of this. Hours were spent automating the task with VBA to no avail. Then LLMs came in after the CTO became obsessed with them, and the task got hit with that hammer too. That's four iterations of the same job: manual, Excel, Excel+VBA, Excel+Copilot. 15 years this went on.

No one actually bothered to understand the reason the work was being done, and the LLM did not have any context. The report was being emailed weekly to a distribution list with no subscribers, as the last one had left the company 14 years ago. No one knew, cared, or even thought about it.

And I see the same in all areas LLMs are used. They are merely papering over incompetence, bad engineering designs, poor abstractions and low-knowledge situations. Literally no one cares about this as long as the work gets done and the world keeps spinning. No one really wants to make anything better, just do the bad stuff faster. If that's where something is useful, then we have fucked up.

Another one. I need to make a form to store some stuff in a database so I can do some analytics on it later. The discussion starts with how we can approach it with ReactJS+microservices+kubernetes. That isn't the problem I need solving. People have been completely blinded to what a problem is and how to get rid of it efficiently.
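To illustrate the mismatch, here is a hypothetical minimal sketch of the stated problem (store form submissions, query them later); the table and column names are made up for illustration, not taken from the commenter's actual system:

```python
# Hypothetical sketch: the whole "form + database + analytics" problem
# in stdlib Python, no ReactJS/microservices/Kubernetes required.
import sqlite3

conn = sqlite3.connect(":memory:")  # a file path in real use
conn.execute("CREATE TABLE submissions (ts TEXT, field TEXT, value TEXT)")

def save_submission(ts, field, value):
    """What the 'form' actually needs to do: persist one row."""
    conn.execute("INSERT INTO submissions VALUES (?, ?, ?)", (ts, field, value))
    conn.commit()

save_submission("2024-01-01", "region", "EMEA")
save_submission("2024-01-02", "region", "EMEA")

# The 'analytics' part: a plain aggregate query.
count = conn.execute(
    "SELECT COUNT(*) FROM submissions WHERE value = 'EMEA'"
).fetchone()[0]
print(count)  # → 2
```

The point being that the solution's complexity should be driven by the problem, not by the fashionable stack.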

ACCount37|1 month ago

LLMs of today advance in incremental improvements.

There is a finite number of incremental improvements left between the performance of today's LLMs and the limits of human performance.

This alone should give you second thoughts on "AI doomerism".

latexr|1 month ago

That is not necessarily true. That would be like arguing there is a finite number of improvements between the rockets of today and Star Trek ships. To get warp technology you can’t simply improve combustion engines, eventually you need to switch to something else.

The same could apply to LLMs: there may be a hard wall that the current approach can’t breach.

112233|1 month ago

Pole-vaulting records improve incrementally too, and there is a finite distance left to the moon. Without deep understanding, experience, and numbers to back up the opinion, any progress seems about to reach arbitrary goals.

falloutx|1 month ago

AI doomerism was sold by the AI companies as a sort of "learn it or you'll fall behind". But they didn't think it through: AI is now widely seen as a bad thing by the general public (except programmers who think they can deliver slop faster). Who will be buying a $200/month sub when they get laid off? I am not sure the strategy of spreading fear was worth it. I also don't think this tech can ever be profitable. I hope it burns more money at this rate.

immibis|1 month ago

You can't get to the moon by learning to climb taller and taller trees.

runarberg|1 month ago

> it can still massively reduce the amount of human labour required for many tasks.

I want to see some numbers before I believe this. So far my feeling is that the best-case scenario is that it reduces the time needed for bureaucratic tasks, tasks that were not needed anyway and could have just been removed for an even greater boost in productivity. It also seems to be automating junior engineers' tasks, the very tasks they need to perform in order to gain experience and develop their expertise. Though I need to see the numbers before I believe even that.

I have a suspicion that AI is not increasing productivity by any meaningful metric which couldn’t be increased by much much much cheaper and easier means.

bodge5000|1 month ago

> The LLM's are clearly useful for many things

I don't think that's in any doubt. Even beyond programming, imo especially beyond programming, there are a great many things they're useful for. The question is: is that worth the enormous cost of running them?

NFTs were cheap enough to produce, and that cost didn't really scale with the "quality" of the NFT. With an LLM, if you want to operate at the same scale as OpenAI or Anthropic, the amount of money you need just to run it is staggering.

This has always been the problem: LLMs (as we currently know them) being a "pretty useful tool" is frankly not good enough for the investment put into them.

falloutx|1 month ago

All of the professions it's trying to replace are very much at the bottom end of the tree: programmers, designers, artists, support, lawyers, etc. Meanwhile you could easily replace management and execs with it already and save 50% of the costs, but no one is talking about that.

At this point the "trick" is to scare white collar knowledge workers into submission with low pay and high workload with the assumption that AI can do some of the work.

And do you know a better way to increase your output without giving OpenAI/Claude thousands of dollars? It's morale; improving morale would increase output in a much more holistic way. Scare the workers and you end up with a spaghetti of everyone merging their crappy LLM-enhanced code.

ACCount37|1 month ago

Yeah. Obviously. Duh. That's why we keep doing it.

Opus 4.5 saved me about 10 hours of debugging stupid issues in an old build system recently - by slicing through the files like a grep ninja and eventually narrowing down onto a thing I surely would have missed myself.

If I were to pay for the tokens I used at API pricing, I'd pay about $3 for that feat. Now, come up with your best estimate: what's the hourly wage of a developer capable of debugging an old build system?
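The arithmetic the comment invites, as a back-of-the-envelope sketch (the $3 token cost and 10 hours saved come from the comment; the $75/hour developer rate is an assumed figure for illustration):

```python
# Value-to-cost comparison for the debugging session described above.
token_cost_usd = 3.00    # API-priced cost of the tokens used (from the comment)
hours_saved = 10         # debugging time avoided (from the comment)
hourly_rate_usd = 75.00  # assumption; senior rates are often higher

labour_value = hours_saved * hourly_rate_usd  # 750.0
ratio = labour_value / token_cost_usd         # 250.0
print(f"Value-to-cost ratio: {ratio:.0f}x")   # prints "Value-to-cost ratio: 250x"
```

Any plausible developer rate leaves the ratio in the hundreds, which is the comment's point.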

For reference: by now, the lifetime compute use of frontier models is inference-dominated, at a ratio of 1:10 or more. And API prices at all major providers sell the model at a healthy profit margin.