
ivanstojic|1 month ago

> If you asked me six months ago what I thought of generative AI, I would have said

It’s always this tired argument: “But it’s so much better than it was six months ago; if you aren’t using it today, you’re just missing out.”

I’m tired of the hype, boss.

libraryofbabel|1 month ago

This point, that people were already saying six months ago that it was better than it had been six months before that, is regularly trotted out in threads like this as if it were some sort of trump card proving AI is just hype. It doesn't make sense to me. What else do you expect people to say about a rapidly improving technology? How does it help you distinguish technologies that are hype from those that are not?

I'm sure people were saying similar things about, say, aviation all through the first decades of the 20th century, "wow, those planes are getting better every few years"... "Until recently planes were just gimmicks, but now they can fly across the English channel!"... "I wouldn't have got in one of those death traps 5 years ago, but now I might consider it!" And different people were saying things like that at different times, because they had different views of the technology, different definitions of usefulness, different appetites for risk. It's just a wide range of voices talking in similar-sounding terms about a rapidly-developing technology over a span of time.

This is just how people are going to talk about rapidly-improving technologies for which different people have different levels of adoption at different times. It's not a terribly interesting point. You have to engage with the specifics, I'm afraid.

deweller|1 month ago

The second half of that argument was not in this article. The author was just relating his experience.

For what it is worth, I have also gone from "this looks interesting" to "this is a regular part of my daily workflow" in the same 6 month time period.

jofla_net|1 month ago

"The challenge isn’t choosing “AI or not AI” - that ship has sailed."

Aurornis|1 month ago

I’m a light LLM user, and I still write most of the important code myself.

Even I can see there has been a clear advancement in performance in the past six months. There will probably be another incremental step 6 months from now.

I use LLMs in a project that gives suggestions for a previously manual data entry job. Six months ago the LLM suggestions were hit or miss; using a recent model, they're over 90% accurate. Everything is still reviewed by humans, but having a recent model handle the grunt work has been game-changing.
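A minimal sketch of what such a suggest-then-review pipeline can look like. The field names, the `suggest_fields` stub, and the 0.9 confidence threshold are all illustrative assumptions, not details from the project described above; a real version would replace the stub with an actual model call.

```python
def suggest_fields(raw_text: str) -> dict:
    """Stub standing in for an LLM call that extracts structured fields.

    A real implementation would send raw_text to a model API and parse
    the response; here we return a fixed example suggestion.
    """
    return {"invoice_no": "A-1042", "total": "99.50", "confidence": 0.93}

def process_record(raw_text: str, review_queue: list) -> dict:
    """Attach a model suggestion to a record and queue it for human review."""
    suggestion = suggest_fields(raw_text)
    # Every record still goes to a human; low-confidence ones are flagged
    # so reviewers can prioritize them.
    review_queue.append({
        "raw": raw_text,
        "suggested": suggestion,
        "flagged": suggestion["confidence"] < 0.9,
    })
    return suggestion

queue: list = []
process_record("Invoice A-1042, total $99.50", queue)
print(queue[0]["flagged"])  # → False (confidence 0.93 clears the threshold)
```

The key design point matches the comment: the model never writes directly to the system of record, it only populates a review queue.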

If people are drinking from a firehose of LinkedIn-style influencer hype posts, I can see why it's tiresome. I ignore those, and I think everyone else should too. There is real progress being made, though.

candiddevmike|1 month ago

I think the rapid iteration and lack of consistency from the model providers is really killing the hype here. You see HN stories all the time about how things are getting worse, and it seems folks' success with the major models is starting to vary widely.

The model providers should really start having LTS (at least 2 years) offerings that deliver consistent results regardless of load, IMO. Folks are tired of the treadmill and just want some stability here, and if the providers aren't going to offer it, llama.cpp will...

KptMarchewa|1 month ago

There is a difference between quantizing a SOTA model and serving old models. People want non-quantized SOTA models, rather than old models.
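For readers unfamiliar with the term, here is a toy sketch of what quantization means: mapping float weights down to small integers plus a scale factor, trading precision for memory and speed. This whole-tensor symmetric int8 scheme is an illustrative simplification; real inference stacks such as llama.cpp use block-wise formats, not this.

```python
def quantize_int8(weights):
    """Map floats to int8 range [-127, 127] using one scale per tensor."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats; error is bounded by half a step."""
    return [x * scale for x in q]

w = [0.5, -1.27, 0.01]
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
err = max(abs(a - b) for a, b in zip(w, w_hat))
```

The point of the comment is that a quantized SOTA model is a degraded version of the latest weights, which is a different trade-off from simply pinning an older, un-quantized model.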

aspenmartin|1 month ago

Yeah, I hear this a lot. Do people genuinely dismiss the step-change progress over a 6-12 month timescale? It's night and day; look at the benchmark numbers. "Yeah, I don't buy it"... OK, but then don't pretend you're objective.

benrutter|1 month ago

I think I'd be in the "don't buy it" camp, so maybe I can explain my thinking at least.

I don't deny that there have been huge improvements in LLMs over the last 6-12 months. I'm skeptical that the last 6 months have suddenly produced a 'category shift' in the problems LLMs can solve (I'm happy to be proved wrong!).

It seems to me like LLMs are better at solving the same problems that they could solve 6 months ago, and the same could be said comparing 6 months to 12 months ago.

The argument I'd dismiss isn't the improvement, it's that there's a whole load of sudden economic factors, or use cases, that have been unlocked in the last 6 months because of the improvements in LLMs.

That's kind of a fuzzier point, and a hard one to know until we all have hindsight. But I think OP is right that people have been claiming "LLMs are fundamentally in a different category to where they were 6 months ago" for the last 2 years, and as yet none of those big improvements has unlocked a whole new category of use cases for LLMs.

To be honest, it's a very tricky thing to weigh in on, because the claims being made about LLMs vary wildly, from "we're 2 months away from all disease being solved" to "LLMs are basically just a bit better than old-school Markov chains". I'd argue that clearly neither of those is true, but it's hard to get your bearings when both sides are being claimed at the same time.