silveraxe93 | 6 months ago
It's basically what every major AI lab head has been saying from the start. It's the peanut gallery that keeps saying they are lying to get funding.
JeremyNT | 6 months ago
Not to detract from what has been done here in any way, but it all seems entirely consistent with the types of progress we have seen.
It's also no surprise to me that it's from Google, who I suspect is better situated than any of its AI competitors, even if it is sometimes slow to show progress publicly.
westbrookt | 6 months ago
I think this was the first mention of world models I've seen, circa 2018.
This is based on VAEs though.
kranke155 | 6 months ago
Hard to fault them as the process towards ASI now appears to be runaway and uncontrollable.
glenstein | 6 months ago
I suppose it depends what you count as "the start". The idea of AI as a real research project has been around since at least the 1950s. And I'm not a programmer or computer scientist, but I'm a philosophy nerd and I know debates about what computers can or can't do started around then. One side of the debate was that it awaited new conceptual and architectural breakthroughs.
I also think you can look at, say, TED Talks on the topic, with people like Jeff Hawkins presenting the problem as one of searching for conceptual breakthroughs, and I think a similar idea of such a search has been at the center of Douglas Hofstadter's career.
I think in all those cases, they would have treated "more is different" as an absence of nuance, because there was supposed to be a puzzle to solve (and in a sense there is, and there has been, in terms of vector spaces and backpropagation and so on, but it wasn't necessarily clear that physics could "pop out" emergently from such a foundation).
jonas21 | 6 months ago
[1] http://www.incompleteideas.net/IncIdeas/BitterLesson.html
satvikpendem | 6 months ago
ivape | 6 months ago
pantalaimon | 6 months ago
We don't inherit any software, so cognitive function must bootstrap itself from its underlying structure alone.
https://media.ccc.de/v/38c3-self-models-of-loving-grace
silveraxe93 | 6 months ago
We had one breakthrough a couple of years ago with GPT-3, where we found that neural networks / transformers + scale do wonders. Everything else has been smooth, continuous improvement. Compare today's announcement to the Genie 2 [1] release less than a year ago.
The speed is insane, but not surprising if you put it in the context of how fast AI is advancing. Again, nothing _new_. Just absurdly fast continuous progress.
[1] - https://deepmind.google/discover/blog/genie-2-a-large-scale-...