For me, I'm not worried about artificial beings being smarter than us or able to reason about things better than we can.
I am far, far more worried about what a corporation would do. Look at the TikTok algorithm and how good it is at hijacking our thought process. Do I need to fear it in and of itself? No. I fear the company using it to drive behaviors a certain way because it is profitable.
I think Westworld had a great version of this. They create a super intelligent computer, and they could have just as easily asked it to solve climate change as to manipulate stocks. And ultimately the machine wasn't after anything, just working on a directive. And when asked to shut down, it did. (season 3)
That's the point here: it's all about who gets to direct it, and if it is Elon Musk, I fear for our future.
I can see a situation where multiple AI systems will be connected to simulate human intelligence so you can ask it any question and it will have an answer. It will also be able to outline a whole project into a plan that people can follow. In that sense, it will be smarter. That's a 100% chance. It's a way to easily access the world's knowledge. Search engines are on their way out. Google search will eventually be a relic.
But will it be AGI? No. That's a close-to-zero chance in the next 20 years; not zero, but close.
Could it be possible that nothing is truly conscious? Maybe we're all cogs in the wheel of a giant chemical reaction. In the future, someone may build a robot with a ChatGPT brain, and maybe that will be what passes for AGI. I haven't found any human endeavor that adequately explains consciousness. Consciousness and intelligence are probably two different things. AI is not conscious, and its intelligence varies, sometimes being very stupid and sometimes being very smart. It's a system inside of another system.
coldtea|1 year ago
Justsignedup|1 year ago
chmaynard|1 year ago
WheelsAtLarge|1 year ago
whateverevetahw|1 year ago
grantcas|1 year ago
[deleted]
hcrean|1 year ago
Humanity is possibly about to breed home appliances that pathfind our way to greater knowledge than we could ever have gotten to on our own.
AI might hit a wall and not progress for many years. Only time will tell...
RecycledEle|1 year ago
Find anyone, anywhere, who can answer as broad a range of questions, as quickly, and as well as ChatGPT does.
No human can match any one of those 3 dimensions of performance.
AI won several years ago.
Bilal_io|1 year ago
mytailorisrich|1 year ago
I think that if we leave out any deadlines then the probability is 100%.
Havoc|1 year ago
soist|1 year ago
fallingfrog|1 year ago
theemachinist|1 year ago
grantcas|1 year ago
[deleted]