top | item 37726525

2358452 | 2 years ago

Yes, there is a lot of bunk AI safety discussion. But there are legitimate concerns as well. AI is close to human level. It follows that such systems become dangerous, especially if given autonomy and bad goals. Many accredited researchers recognize this.

There is some level at which you can discuss AI safety without AI expertise (especially as of a few years ago, when everything was so uncertain), but I think currently you need a lot of awareness of physical and computational limits. Taking those limits into account, we're clearly very close to human-level intelligences that can scale in unpredictable ways (probably not "grey goo" ways), but in potentially dangerous ways under various scenarios, including manipulating our digital lives if humongous AI systems end up controlling everything, as we as a society are in danger of letting happen.

I think there's also a lot of implied elitism toward the humanities that you should try to get past, too. The humanities have a lot of insight into human nature, even if not all of it is reliable. See philosophers like Derek Parfit.

(In case you're wondering, I've implemented a few AIs, mostly RL algorithms.)

jazzyjackson | 2 years ago

The thing I always get caught up on, when making comparisons between computers and humans w.r.t. autonomy, is that the computer reaches the output state from the input by a clock cranking the CPU forward; i.e., it's a function that runs only when the environment around it forces it to run. To put it in the LLM context: between tokens, and after a stop token, the "intelligence" is dead, frozen, suspended until the next function call.
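That "frozen between calls" picture can be made concrete with a toy sketch. Everything here is made up for illustration: `toy_step` stands in for a real model's forward pass, and `<STOP>` for a real stop token. The point is that nothing happens except inside a call the environment makes.

```python
# Sketch: a "model" as a pure step function. Between calls, no state
# evolves on its own; the process only advances when the caller cranks it.
def generate(step, prompt, stop_token="<STOP>", max_steps=100):
    tokens = list(prompt)
    for _ in range(max_steps):
        nxt = step(tokens)          # the only moment anything "happens"
        if nxt == stop_token:
            break                   # after this, the system is inert
        tokens.append(nxt)
    return tokens

# Toy step function: echoes the first token once, then stops.
def toy_step(tokens):
    return tokens[-1] if len(tokens) == 1 else "<STOP>"

print(generate(toy_step, ["hello"]))  # ['hello', 'hello']
```

Between any two calls to `step`, and after the stop token, the "agent" has no thread of execution at all; whatever autonomy it has exists only while someone else turns the crank.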

How can a machine, then, possess anything like self-directed behavior, when it never has a sense of self-preservation? Basically this is my axiom: a sense of self requires fear/awareness of mortality and the good sense to avoid the things that end you.

Perhaps you could concoct a machine that runs in an infinite loop with no off switch. I guess my question for you is: in what way can a machine have autonomy?

And my distinction between living and dead might be: a living system acts out of self-preservation, consuming or modifying its environment to survive/thrive, while a dead system is simply acted upon by the environment it's embedded in, like a crystal growing due to molecular forces and a temperature gradient, or an adding machine being cranked by a higher being.
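The living/dead distinction above can be sketched as two toy update rules (both invented here purely for illustration, with `env` as a crude stand-in for an environment holding consumable resources): one process acts to keep an internal variable above zero, the other just accumulates whatever the environment deposits.

```python
# Toy illustration: an "alive" loop acts on its environment to keep an
# internal quantity (energy) above zero; a "dead" process is crystal-like,
# purely acted upon, with no goal of its own.

def alive_step(energy, environment):
    """Agent senses its own state and acts to preserve itself."""
    energy -= 1                      # merely existing costs something
    if energy < 5 and environment:   # low energy: consume a resource
        energy += environment.pop()
    return energy

def dead_step(mass, environment):
    """Crystal-like process: growth happens to it, not by it."""
    if environment:
        mass += environment.pop()
    return mass

env = [3] * 10      # environment with ten 3-unit resources
energy = 10
for _ in range(20):
    energy = alive_step(energy, env)

print(energy)       # still above zero: the loop acted to survive
```

Of course, this just relocates the question: the `if energy < 5` rule is something a designer wrote in, not something the loop came to fear on its own, which is roughly the original objection.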