dreadnip | 5 months ago

I don’t agree. HN is full of technical people, and technical people see LLMs for what they truly are: pattern matching text machines. We just don’t buy into the AGI hype because we’ve seen nothing to support it.

I’m not concerned for my job; in fact, I’d be very happy if real AGI were achieved. It would probably be the crowning tech achievement of the human race so far. Not only would I not have to work anymore, neither would most of the world. We’d suddenly be living in a completely different world.

But I don’t believe that’s where we’re headed. I don’t believe LLMs in their current state can get us there. This is exactly like the web3 hype when the blockchain was the new hip tech on the block. We invent something moderately useful with niche applications, and grifters find a way to sell it to non-technical people for major profit. It’s a bubble, and anyone who spends enough time in the space knows that.

DonHopkins | 5 months ago

>This is exactly like the web3 hype when the blockchain was the new hip tech on the block. We invent something moderately useful with niche applications, and grifters find a way to sell it to non-technical people for major profit.

LLMs are not anything like Web3, not "exactly like". Web3 is in no way whatsoever "something moderately useful", and if you ever thought it was, you were fooled by the same grifters who were yapping about Web3 then and have now switched to yapping about LLMs.

The fact that those exact same grifters who fooled you about Web3 have moved on to AI has nothing to do with how useful what they're yapping about actually is. Do you actually think those same people wouldn't be yapping about AI if there were something to it? Yappers gonna yap.

But Web3 is 100% useless bullshit, and AI isn't: they're not "exactly alike".

Please don't make false equivalences between them like claiming they're "exactly like" each other, or parrot the grifters by calling Web3 "moderately useful".

atleastoptimal | 5 months ago

Calling LLMs "pattern matching text machines" is a catchy, thought-terminating cliche, which amounts to calling a human brain a "blob of fats, salts, and chemicals". It technically makes sense, but it misses the forest for the trees, and ignores the fact that this mere pattern matching text machine is doing things people said were impossible a few years ago. The simplicity and seeming mundanity of a technology have no bearing on its potential or emergent properties. A single termite, observed by itself, could never reveal what it could build when assembled together with its brethren.

I agree that there are lots of limitations to current LLMs, but it seems somewhat naive to ignore the rapid pace of improvement over the last 5 years and the emergent properties of AI at scale, especially in doing things claimed to be impossible only years prior (remember when people said LLMs could never do math, or that image models could never get hands or text right?).

Nobody understands the limitations of current LLMs with greater clarity or specificity than the people working in labs right now to make them better. The AGI prognostications aren't suppositions pulled out of the realm of wishful thinking; they exist because of fundamental revelations that have occurred in the development of AI as it has scaled up over the past decade.

I know I claimed that HN's hatred of AI is an emotional one, but there is an element to the reasoning too that leads people down the wrong path. By seeing more flaws than the average person in these AI systems, and seeing the tactics companies use to make their AI offerings seem more impressive (currently) than they are, you extrapolate that sense of "figuring things out" into a robust model of how AI really is and must be. In doing so, you pattern match AI hype to web3 hype and assume that since the hype is similar in certain ways, it must also be a bubble/scam just waiting to pop, with all the lies about to be revealed. This is the same pattern-matching trap people accuse AI of falling into when they see through the flaws of an LLM's output even as it claims to have solved a problem correctly.

neffy | 5 months ago

No, it's really not - it's exactly what they are: multi-dimensional pattern matching machines, using massive databases put together from resources like Stack Overflow and Chegg (every cheater's go-to for assignment answers, massive copyright theft, etc.). If that wasn't the case, there wouldn't be jobs right now writing answers to feed into the databases.

And that's actually quite useful - given that most of this material is paywalled or blocked from search engines. It's less useful when you look at code examples that mix different versions of Python and have comments referring to figures on the previous page. I'm afraid it becomes very obvious, when you look under the hood at the training sets themselves, just how this is all being achieved.
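
As a purely hypothetical sketch of the kind of version-mixing described above (illustrative only, not an actual training example or model output), imagine a generated snippet that blends Python 2-era idioms with Python 3 syntax and drags along a stray comment from whatever page it was scraped from:

    # Hypothetical sketch only: old %-style string formatting from the Python 2 era...
    name = "report"
    print("Writing %s.csv" % name)

    # ...mixed with an f-string a few lines later, plus a comment that only made
    # sense in the original scraped document.
    print(f"Done with {name}.csv")  # see the figure on the previous page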