top | item 47167782


visarga | 3 days ago

I. J. Good's quote is pretty myopic: it assumes machines make better machines by virtue of being "ultraintelligent" rather than by learning from an environment-action-outcome loop.

It's the difference between "compute is all you need" and "compute plus explorative feedback is all you need." As if science and engineering came from genius brains rather than from careful experiments.
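
The loop being described can be sketched in a few lines. This is a hypothetical toy example (a hill-climbing agent guessing a hidden target), not any real system; the environment, step size, and target value are all made up for illustration:

```python
import random

def act(estimate, step):
    # Propose an action by perturbing the current estimate.
    return estimate + random.uniform(-step, step)

def environment(action, target=7.3):
    # Outcome: feedback is the negative distance from a hidden target.
    return -abs(action - target)

def explore(episodes=2000):
    # Environment-action-outcome loop: act, observe, keep what works.
    estimate, best_outcome = 0.0, environment(0.0)
    for _ in range(episodes):
        candidate = act(estimate, step=1.0)
        outcome = environment(candidate)
        if outcome > best_outcome:  # keep what the feedback favors
            estimate, best_outcome = candidate, outcome
    return estimate

print(explore())  # typically lands near the hidden target
```

The agent never "knows" the target; it only gets outcome feedback, which is the point of the compute-plus-feedback framing: no amount of pure computation over the estimate alone would find the target without the environment's answers.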


observationist | 3 days ago

There's an implicit assumption there: anything a computer as intelligent as a human does will be exactly what a human would do, only faster or more intelligently. If the process is part of the intelligent way of doing things, like the scientific method and careful experimentation, then that's what the ultraintelligent machine will do.

There's no implication that it's going to do it all magically in its head from first principles; it's become very clear in AI that embodiment and interaction with the real world are necessary. It might be practical for a world model at sufficient levels of compute to simulate engineering processes at a sufficient level of resolution that it can do all sorts of first-principles simulated physical development and problem solving "in its head", but for the most part, real ultraintelligent development will happen with real-world iterations, robots, and research labs doing physical things. They'll just be far more efficient and fast than us meatsacks.

ACCount37 | 3 days ago

At sufficient levels of intelligence, one can increasingly substitute intelligence for those other things.

Intelligence can be the difference between having to build 20 prototypes and building one that works on the first try, or having to run a series of 50 experiments and nailing it down with 5.

The upper limit of human intelligence doesn't go high enough for something like "a man designed an entire 5th-gen fighter jet in his mind and then built it on the first try" to be possible. The limits of AI might go higher than that.

kilpikaarna | 3 days ago

Exceedingly elaborate, internally consistent mind constructs, untested against the real world, sound like a good definition of schizophrenia. May or may not correlate with high intelligence.

econ | 3 days ago

I like the substitution concept. What humans can do depends on the abstractions and the tools. One could picture just the shape of the jet and have a few ideas about how to improve it further. If that is enough info for the tool, it could be worthy of the label "designed by Jim".

tjoff | 3 days ago

Have you gotten any indication that machines won't have sensors?!

gopher_space | 3 days ago

From what I can see, we're working as hard as we can to build them. You can watch the "let's put this on a Raspberry Pi and see what happens" seeds of Skynet develop in real time.

There's something compelling about helping assemble the machine. Science fiction was completely wrong about motivation. It's fun.

Eldt | 3 days ago

Maybe ultraintelligence is having an improved environment-action-outcome loop. Maybe that's all intelligence really is.

goodmythical | 3 days ago

I've noticed this core philosophical difference in certain geographically associated peoples.

There is a group of people who think AI is going to ruin the world because they think they themselves (or their superiors) would ruin the world.

There is a group of people who think AI is going to save the world because they think they themselves (or their superiors) would save the world.

Kind of funny to me that the former group is typically democratic (those who are supposed to decide their own futures are afraid of the future they've chosen), while the latter is often "less free" and unafraid of the future that's been chosen for them.

inigyou | 3 days ago

In that case, it can't be improved with bigger computers.