tikkun | 1 year ago
> Within the next three years, robotics should be completely solved [wrong, unsolved 7 years later], AI should solve a long-standing unproven theorem [wrong, unsolved 7 years later], programming competitions should be won consistently by AIs [wrong, not true 7 years later, seems close though], and there should be convincing chatbots (though no one should pass the Turing test) [correct, GPT-3 was released by then, and I think with a good prompt it was a convincing chatbot]. In as little as four years, each overnight experiment will feasibly use so much compute capacity that there’s an actual chance of waking up to AGI [didn't happen], given the right algorithm — and figuring out the algorithm will actually happen within 2–4 further years of experimenting with this compute in a competitive multiagent simulation [didn't happen].
Being exceptionally smart in one field doesn't make you exceptionally smart at making predictions about that field. Like AI models, human intelligence often doesn't generalize very well.
padolsey | 1 year ago
Is anyone though? Genuine question. I don't have much faith in predictions anymore.
qeternity | 1 year ago
Most of it is survivorship bias: if you have a million people all making predictions with coin flip accuracy, somebody is going to get a seemingly improbable number correct.
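The coin-flip point can be made concrete with a quick back-of-the-envelope calculation (the numbers below are illustrative assumptions, not from the comment):

```python
# Illustrative survivorship-bias arithmetic (forecaster count and prediction
# count are assumptions, not from the thread): a million forecasters each make
# 20 independent yes/no predictions with coin-flip (50%) accuracy.
n_people = 1_000_000
n_predictions = 20

# Chance a single forecaster gets every prediction right: 2^-20, about 1 in a million.
p_perfect = 0.5 ** n_predictions

# Chance that at least one of the million ends up with a perfect record.
p_at_least_one = 1 - (1 - p_perfect) ** n_people

print(f"per person: {p_perfect:.2e}, at least one perfect: {p_at_least_one:.2f}")
```

So even though a perfect record is roughly a one-in-a-million event for any individual, there is about a 61% chance that someone in the crowd achieves one by luck alone — and that person will look like a prophet in hindsight.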
_giorgio_ | 1 year ago
https://openai.com/index/elon-musk-wanted-an-openai-for-prof...
> 2/3/4 will ultimately require large amounts of capital. If we can secure the funding, we have a real chance at setting the initial conditions under which AGI is born.