2358452 | 2 years ago
There is some level at which you can discuss AI safety without AI expertise (especially as of a few years ago, when everything was so uncertain), but I think you currently need a lot of awareness of physical and computational limits. Taking those limits into account, we're clearly very close to human-level intelligences that can scale in unpredictable ways (probably not "grey goo" ways), but potentially dangerous ones under various scenarios, including manipulating our digital lives if humongous AI systems end up controlling everything, as we as a society are in danger of letting happen.
I think there's also a lot of implied elitism toward the humanities that you should try to get past. The humanities offer a lot of insight into human nature, even if not all of it is reliable. See philosophers like Derek Parfit.
(In case you're wondering, I've implemented a few AIs, mostly RL algorithms.)
jazzyjackson|2 years ago
How, then, can a machine possess anything like self-directed behavior when it has no sense of self-preservation? This is basically my axiom: a sense of self requires fear/awareness of mortality and the good sense to avoid the things that end you.
Perhaps you could concoct a machine that runs in an infinite loop with no off switch, but I guess my question for you is: in what way can a machine have autonomy?
And my distinction between living and dead might be this: a living system acts out of self-preservation, consuming or modifying its environment to survive and thrive, while a dead system is simply acted upon by the environment it's embedded in, like a crystal growing under molecular forces and a temperature gradient, or an adding machine being cranked by a higher being.
arisAlexis|2 years ago