item 35273490

headsoup | 2 years ago

I think the argument is more that they only work from past inputs; they interpret the world the way they are told to. It's not a claim that 'AI' can do things humans can't (otherwise the argument would fail for many existing technologies, like a car at speed).

If your bet is that they only work from past inputs, how does such a system create an entirely new, irrational thought?

pixl97 | 2 years ago

Again, this seems like a weird argument. Not that long ago I was told AI would 'never' be able to perform some of the actions that LLMs are performing now. I have about zero faith in anyone who says anything along the lines of "AI won't be able to perform this human-like action because..."

The AIs we are using now are nearly one-dimensional when it comes to information. We are pretraining them on text, and we're getting "human-like" behavior out of them. They have tiny context windows when working on new problems. They have no connection to reality via other sensory information. They have no means of continuous learning. And yet we're already getting rather insane emergent behaviors from them.

What does a multi-modal AI that can interact with the world and use that interaction for training look like? What does a continuous-learning AI look like? What does a digital mind look like with a context window far larger than the human mind could ever hold? One that can input into a calculator faster than we can realize we've had a thought in the first place? One that's connected to sensory systems that span the globe?

circuit10 | 2 years ago

But even if the first AGI does end up perfectly simulating a human (which seems somewhat unlikely), a human given the ability to think really fast and direct access to huge amounts of data, without being slowed down by actually using their eyes to read and hands to type, would still be dangerously powerful.

AstralStorm | 2 years ago

Assuming they don't drown in the information overload and don't take in the garbage we also put out there.

We also have some pharmaceutical tricks to tweak the processing capabilities of the mind, so there's potentially no need to simulate. The capabilities of the big ball of sentient goop have not been fully plumbed yet.

Now imagine a technology that could obviate the need for sleep or maybe make it useful and productive.

Tao3300 | 2 years ago

As Cicero said of Caesar, "the wariness and energy of that bogeyman are terrifying."

RobotToaster | 2 years ago

>I think the argument is more that they only work from past inputs, they interpret the world the way they are told to

Arguably humans are the same, being the product of genetics, epigenetics, and lived experience.

PaulDavisThe1st | 2 years ago

Almost certainly true, but there's a huge difference. We're the result of forces that have played out within an evolutionary process that has lasted for millions of years.

Current "machine learning"-style AI (even when it uses self-driven iteration, like the game-playing systems) is the result of a few ideas spanning not much more than 100 years, and for the most part is far too heavily influenced by existing human conceptions of what is possible and how to do things.

rowanG077 | 2 years ago

That argument is totally defeated by AI destroying human players, even world champions, at countless games.

headsoup | 2 years ago

Refer to my point on past inputs. If a human said to the machine, "change of rules, now you have to play by these new rules," the AI suddenly gets immensely dumber and applies useless solutions.