top | item 38625746


AngaraliTurk | 2 years ago

> And while LLMs are certainly an exciting new technology, it's not at all clear that they're really more than a glorified autocorrect.

Are we sure things like biology, or heck, even the universe as a whole and its parts, aren't "glorified x thing"? Can't we apply this argument to just about anything?


rf15 | 2 years ago

I feel like comparing it to biological systems glosses over about half the points the parent makes. And of course it's a sophisticated autocorrect: there's no system for agency, which is something we'd also expect from a proper AGI.

AngaraliTurk | 2 years ago

I wanted to put the focus on the overused "glorified x thing" sentence, which to me seems to be applicable to just about anything. I didn't want to liken/compare AI to biological systems per se.

HDThoreaun | 2 years ago

Free will is an illusion. No one has agency; we're all just following our baked-in incentives.

0x0203 | 2 years ago

I have a theory (with no data to back it up; I'd be curious to hear people's thoughts): people with a religious or spiritual world-view, who believe there is such a thing as a soul and that the mind is more than just a collection of neurons in the brain, are much less inclined to think that "AI" will ever reach a sort of singularity or true "human-like" intelligence. Likewise, those who are more atheist/agnostic, or inclined to believe that human consciousness is nothing more than patterns of neurons firing in response to various stimuli, are more convinced that a human-like machine/programmed intelligence is not only possible but inevitable, given enough resources.

I could be wildly off base, but seeing many of the (often heated) arguments made about what AI is or isn't or should or could be, it makes me wonder.

mcv | 2 years ago

As it happens, I am indeed Christian. But I see the soul as the software that runs on the hardware of our brain (although those aren't as neatly separated in our brain as they are in computers), and I suspect it should be possible to simulate it, in theory. I just think we're nowhere near that. We still don't agree on what the many aspects of intelligence are or how they work together to form the human mind. And then there's consciousness: we have no clue what it is. Maybe it's emergent? Maybe it's an illusion? Or is it something real? I don't think we'll be able to create a truly human-like intelligence until we figure that one out.

Although we're certainly making a lot of progress on other aspects of intelligence.

And then there's all the talk of a singularity in innovation or progress that to me betrays a lack of understanding of what the word singularity means, and a lack of understanding of the limits of knowledge and progress.

marcosdumay | 2 years ago

Of course, when you don't define your X, you can use that phrase for anything. That's trivial logic.

AngaraliTurk | 2 years ago

I don't define X because it's highly variable.

It seems to me that at one extreme there are people eagerly anthropomorphising advanced computing, and at the other extreme there are people trivializing it with phrases like "glorified x thing". This time around it's "glorified autocorrect" and its derivations. The X is always some other artificial thing, and I suspect that if and when we have recreated the human brain, or heck, another human, it will still be a "glorified x thing".

As 0x0203 said, maybe it comes down to a religious substrate that takes offence at anything that arrogantly tries to resemble the living creatures made by God, or God himself.