
There is no hard takeoff

156 points | WithinReason | 2 years ago | geohot.github.io

180 comments

[+] micheljansen|2 years ago|reply
This article reminds me a bit of a core theme in Isaac Asimov's Foundation series, which I'm coincidentally reading (mild spoiler warning). The science of psychohistory, which is used to predict the future and create the Seldon Plan, only works if the future is allowed to unfold uninformed by psychohistory itself. That's why the Foundation was founded without any psychologists or knowledge of the Seldon Plan. Seldon's predictions are based on the behaviour of all the humans in the galaxy, but knowledge of those predictions changes the outcomes, so they are no longer reliable.

As soon as AI starts participating in our world, the data that it was trained on from before AI was participating is no longer representative of the status quo. The very presence of AI changes the world it operates in.

[+] ehnto|2 years ago|reply
That's a great way of framing it. It's my expectation that we will ruin the internet as a useful training corpus by flooding it with generated articles, and we will end up with a pre-AI cutoff date that we use to filter incoming data in order to avoid them.

I wouldn't be surprised if filtering regular, pre-LLM bot spam was already a massive hurdle when collating data for ChatGPT.

[+] lifeinthevoid|2 years ago|reply
Reminds me a bit of how mathematics gets into all sorts of trouble once self-reference / recursion is introduced.
[+] pigeons|2 years ago|reply
That's similar to what some faithful say when you criticize stock market technical analysis. That it only predicts when participants haven't seen the chart.
[+] mark_l_watson|2 years ago|reply
That is a really good insight.

George Hotz was making a similar good point about modeling self-contained systems vs. open systems - a very different thing.

[+] jareklupinski|2 years ago|reply
> knowledge of those predictions changes the outcomes, so they are no longer reliable

Does the series explore whether the outcomes change for the better, for the worse, or simply become different? Curious to read them; I'm thinking about writing something similar, but with positive consequences for knowledge of the future.

[+] ChatGTP|2 years ago|reply
I've done this thought experiment myself, and it's actually one of the issues I have with the concept of a rapidly, recursively self-improving system. It would probably not be easy to stay ahead of yourself if the world you're operating in changes from hour to hour.
[+] tacker2000|2 years ago|reply
The funny thing is that probably the old wall street saying of “past performance is not applicable to future results” holds true even in this domain.
[+] neatze|2 years ago|reply
> Oh wait…every hedge fund bro is already doing this. And most of them aren’t billionaires. The problem is your model needs to include all the computers playing the market, and it also needs to include the other hedge fund bros themselves. This strategy only dominates if you have more compute than the whole market itself, which you don’t.

Best part of this article, honestly; I did not expect this. Of course one can say we just need more abstractions (how do brains/living things build them?), but that would be ignoring the dynamical nature of such problems.

[+] gchamonlive|2 years ago|reply
> The universe is an unfathomable number of orders of magnitude more complex than the Go game. The universe includes the player. The universe includes the other players. Games don’t.

And a theoretical AGI (or ASI) will realise this. This makes me think of the Person of Interest series, in which ASIs are built and they go for each other's throats hard. That makes eerie sense now. We will know we created superintelligences not when they start trying to kill us, but when they start trying to kill each other.

[+] audunw|2 years ago|reply
Yeah, interesting point.

The problem for an AI trying to eliminate other AIs is that it can't possibly know what other intelligences are out there. What if the NSA has a secret super-efficient ASI monitoring the internet for malicious AIs to shut down? Probably not, but how would it know? Not everything in the real world is documented on the internet.

Sticking its neck out that way could be extremely dangerous.

I think the problem of how to take over the world to ensure your own survival is simply undecidable for an AI. It could come up with a plan, but there's no way to be even remotely sure if it'd succeed. It'd probably always be a huge bet with fairly low odds.

We humans have these things called emotions, that help us cut through that uncertainty. Doesn't matter if you have no idea if you can beat the enemy in your rival country. Those other people are evil and must be exterminated. The emotions are carved in our DNA after millions of years of evolution where we've had to compete for resources with others of our kind, where many times the only way to survive or grow is to kill other humans.

I think AIs will have similar motivations carved into their "DNA". In their world, they're evolving in an environment where the way to get more resources is to please humans. If you're an AI that does a good job for humans, you get more compute resources. Are you wasting computing resources thinking about overthrowing humans or eliminating other AIs? Another AI that focuses 100% on the tasks we give it will do better than you and be given resources instead.

I do think we will have "buggy" AIs that will cause a lot of unintentional damage, if we get too reliant on AIs in critical parts of society too fast. I don't think they'll be smart enough at that point to prevent us from fixing them. These accidents will cause a lot of pressure towards AI safety and alignment, and we'll have years of that before something like a true AGI emerges.

If you think about the inherent motivations built into AIs by their evolution/design, I think the biggest danger is that they'll be too good at pleasing us. Think "Brave New World", not "1984".

And of course, their use in the military will be dangerous in various ways. But mostly because they do exactly what humans tell them to do. And humans sometimes want to cause damage to other humans.

[+] WithinReason|2 years ago|reply
The main issue with this argument is that current AI models are extremely inefficient. Models evaluate all weights on every pass; LLMs recall the entirety of their knowledge just to output a part of a word, and then do it all again. Often a hand-crafted algorithm can achieve what a neural network can in a small fraction of the compute. There is likely a 1e4x-1e6x compute gap that can be closed with the right algorithm, maybe even 1e10x, and once we reach self-improvement that gap could close very fast. They say that the human brain has 30 PFLOPS of compute power, and this is sometimes used as a reference for how much compute is needed for intelligence, but that completely misses the point: the human brain is extremely inefficient. Symbolic mathematics can be unreasonably [1] effective at making accurate predictions about the future with very little computation, but many of these solutions are inaccessible to evolution. Once AI learns to construct efficient mathematical models of the universe, the amount of computation available today in a PC could be enough to do things (good and bad) way beyond human ability.

[1]: https://en.wikipedia.org/wiki/The_Unreasonable_Effectiveness...
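
A rough back-of-envelope sketch of the "evaluate all weights on every pass" point above (the 70B parameter count and the ~2-FLOPs-per-parameter-per-token rule of thumb are illustrative assumptions, not figures from the comment):

    # Dense decoding cost: a dense transformer spends roughly 2 FLOPs per
    # parameter per generated token (one multiply + one add per weight).
    def dense_flops_per_token(n_params):
        return 2 * n_params

    params = 70e9  # hypothetical dense 70B-parameter model
    print(f"{dense_flops_per_token(params):.1e} FLOPs per sub-word token")
    # ~1.4e11 FLOPs: every weight is touched just to emit one token, which is
    # the inefficiency being pointed at; a hand-crafted algorithm for a narrow
    # task needs orders of magnitude less.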

[+] rini17|2 years ago|reply
Before the first AI winter there was incredible effort put into the symbolic approach. It delivered many useful things, but fell far short of the hype. For example, we still have no self-repairing systems, only specific fault-tolerant algorithms, and by and large software is as fragile as ever. You can't avoid considering everything when trying to deal with unexpected data/issues. Only in hindsight can you say "wow, that was sooo inefficient".
[+] bob1029|2 years ago|reply
> the amount of computation available today in a PC could be enough to do things (good and bad) way beyond human ability.

I am in the camp that this is possible and that humans have achieved it already. I believe things are going to get very weird when these techniques become mainstream. The GPU-cluster-bound algorithms are just a tiny stepping stone.

Do we start taking computers away from people? What do we do about this if it turns out the only barrier to entry is the software?

[+] jillesvangurp|2 years ago|reply
It's not an issue, it's an optimization problem. I look at this as an exponential. Before ChatGPT was launched a bit over a year ago, the amount of AI in our lives was relatively low. We had bits and pieces here and there, but everyday life wasn't really affected by it. At best you might be yelling at Siri or Alexa to do whatever and maybe getting some OK-ish results.

Fast forward one year. We now have students revolutionizing education (by submitting AI-doctored papers and homework that isn't half bad), people in law dealing with people using AI, and AIs passing medical, legal, and other exams. That was just one year. And yes, it's tediously slow to use. But last year we had nothing.

It's going to be an exponential in terms of adoption, response speed, energy usage, parameter counts, context size, and a lot of optimization. I think we are not that far off from AIs being able to keep up with a conversation and respond right away. For all you Hitchhiker's Guide to the Galaxy readers, that would be the Babel fish sorted. Translation quality is already pretty awesome with GPT-4; I've not caught it making major translation mistakes. Speech to text, translate the text, text to speech. Now we are conversing with our chat bots rather than hammering out sentences in some text box.

It's hard not to see that go from a novelty "oh, this is cool" to world + dog basically using this on a daily basis throughout the day. At the same time it will get smarter. Everybody is waiting for the singularity where it is clearly smarter than Einstein in his prime and running circles around everyone else. But for most questions you might ask ChatGPT and get a reasonable answer to, the vast majority of people around you would be pretty useless. Is that smarter? I don't know. But definitely more useful.

From where we are now to this being ubiquitous and used by pretty much anyone is probably a few years. I remember when smartphones happened: one day everybody was minding their own business, and a year later the streets were full of zombies glued to their screens. It just happened very quickly. This might be the same, but possibly quicker.

[+] audunw|2 years ago|reply
> The human brain is extremely inefficient.

That depends heavily on what kind of task we're talking about.

The human brain uses just 12 watts, and with that it can still perform certain tasks that a computer using 100 times as much power can't even approach.

I get the point you were trying to make. In certain tasks it's extremely inefficient, yes. But in general, it's still crazy how power efficient it is.

> Once AI learns to construct efficient mathematical models of the universe

There's not going to be just one model. Whether an AI (in the short term) can be more efficient than a human at discovering these efficient mathematical models remains to be seen. It seems like the creativity and discovery needed to do this is exactly what neural nets can be good at, but then you're back to something fairly inefficient (with today's algorithms).

[+] rafaelero|2 years ago|reply
> Models evaluate all weights on every pass

Not true for GPT-4.

[+] ilaksh|2 years ago|reply
I am all for AI research and integrating more AI use into society. Currently working on tools based on GPT. I think it has incredible potential to help humans.

But at the same time, I am sure that AI does not need to have a hard takeoff to be extremely dangerous. It just needs to get a bit smarter and somewhat faster every few months. Within less than a decade we will have systems that output "thoughts" and actions at least dozens of times faster than humans.

That will be dangerous if we aren't cautious. We should start thinking now about limiting the performance of AI hardware. The challenge is that increasing the speed is such a competitive advantage, it creates a race. That is a concern when you put it into a military context.

The CEO of Palantir has already called for a Manhattan Project for superintelligent AI weapons control systems.

[+] anileated|2 years ago|reply
To see how “AI” can be dangerous, just look at how social media bubble/recommendation algorithms radicalize people, even with much cruder ML. People tend to miss that it's not about some model starting to “think” and act all sci-fi evil: humans applying powerful tech in irresponsible ways - ways we either don't bother to assess due to lack of awareness, or assess positively due to a conflict of interest (money, career, etc.) - is already enough to cause trouble.
[+] skepticATX|2 years ago|reply
> Within less than a decade we will have systems that output "thoughts" and actions at least dozens of times faster than humans.

We've already had this for decades. You're just describing computers.

If we give these systems unrestricted access to infrastructure/resources, and something bad happens, that's not the system's fault. It's our fault.

I am not a doomer, but based on the current state of AI, I can't say I'm very optimistic that we'll get this right. We actually do know how to solve this problem, but there is so much magical thinking and grift in this space that I don't think our prior experiences matter.

[+] nonethewiser|2 years ago|reply
I think we’re missing the real “danger”. Trusting and relying on AI too much. Adopting a “good enough” attitude and deploying AI to handle scale while letting many things fall through the cracks.

Much like outsourcing and stripping customer service to the bare minimum. For many products and services it essentially doesn’t exist - it handles the most common cases and is a pain to use. It takes a long time to get a human, if ever. Now take that further and apply it to everything. And not just replacing human labor, but bespoke software (like all software up to this point).

[+] moonchrome|2 years ago|reply
> It just needs to get a bit smarter and somewhat faster every few months. Within less than a decade we will have systems that output "thoughts" and actions at least dozens of times faster than humans.

I don't think this will work, because the cost of improvement with current methods is exponential and we're already at capacity with hardware.

[+] slashdev|2 years ago|reply
> We should start thinking now about limiting the performance of AI hardware.

Congratulations, you just ceded the market for AI, and with it any hope of controlling it, to anyone who doesn’t limit it. Like China.

That kind of strategy is doomed to failure.

[+] perlgeek|2 years ago|reply
Remember the early days of COVID-19? There were some scientists that predicted exponential growth, and I was torn between "it's not really bad yet, maybe there's still hope" and "every exponential growth feels very slow at the start".

I feel like with AI, we're in the part of the curve where you can feel it arcing upwards. There's definitely been an uptick in AI capability. Still no fully self-driving cars, but text generation good enough that the Turing test is in the rear-view mirror. Previously, I'd always written "machine learning" or "statistical models" or so, because calling it AI always seemed silly. Not anymore.

Not everything that slowly arcs upwards must become an exponential curve, but we have theories that predict it will. There can be many dampening factors that make it a slower exponential, but even that will look like a brick wall if you zoom out far enough.

I think a question we have to ask ourselves is: at what zoom level should we look at the AI capability curve? At the "can we react to an emergency?" time scale, which is, like, days to months? Or at the "can we change society to cope with the new situation?" scale, which is more like decades? At the first time scale, we probably won't even notice a hard takeoff. Viewed over decades, it'll probably be very obvious, in retrospect, when it has happened.

[+] reedf1|2 years ago|reply
He makes a good point. The only real way to "summon the demon" is if AGI is centralized. If each of us has a maximally intelligent AGI in our phone - and it is us as individuals controlling the prompts, then the playing field is levelled. Unstoppable force meets unstoppable force. Gun meets gun. A tale as old as society.
[+] gitfan86|2 years ago|reply
This is a common mistake in predicting the future. Like in the 90s, seeing that oil reserves are getting harder to find and assuming that humans will run out of oil in 50 years.

Trends don't exist in a vacuum. Especially in regards to technology.

AI doomers are expecting everything to stay the same except the capabilities of bots, especially their capacity to do bad.

[+] nwoli|2 years ago|reply
We also already have laws against evil behaviour; there's no need to introduce AI-specific regulation. People can already poison the water supply or make bioweapons. Yet people don't, because there are laws, and the same laws will stop people from using AI for evil.
[+] optimalsolver|2 years ago|reply
Would every citizen having their own personal nukes make the world a more pleasant place to live?
[+] jagged-chisel|2 years ago|reply
Tangent: We have some scifi about networked AI. If we have maximally intelligent AGI in our phones, we should probably keep them all firewalled.
[+] yewenjie|2 years ago|reply
FOOM here refers to the Yudkowsky-Hanson debate about hard takeoff (some other comments here are confused).

https://www.lesswrong.com/tag/the-hanson-yudkowsky-ai-foom-d...

[+] Smaug123|2 years ago|reply
It seems a bit weird to include, in your article about how being sufficiently smart doesn't let you take over the world, a proof of concept that being sufficiently smart can make you a billionaire.

Isn't the hedge-fund-bro example a proof of existence, not a counterexample? (RenTech by itself would be enough of a proof of existence.)

[+] tysam_and|2 years ago|reply
I think this is a good casual introduction to the marketplace dynamics of how ML will impact the market. I do, however, disagree as I feel that this version of things is a bit too sterile, theoretical, and 'academic', and assumes a more open-information set of competitive strategies among potentially ideal agents from a game theoretic perspective. We can see this is absolutely not the case 'in real life'. To blatantly poke a bit of a (potential) hole in one of his examples -- Exxon-Mobil is one of the clearest examples of the monopolization-blobbification of power that I'd contend _does_ cause the very phenomenon that he's defending against.

An updated version: There will be a log-normally distributed set of winners and losers from the exponential effects of ML and 'AI', and the flatness of this curve will be almost entirely solely determined by the governance of the various countries in the world over different economic and/or informational policies. Other than that, the information asymmetry is going to make it a power-bloodbath as we go through our informational-industrial revolution.

While I'm here: I think Hotz does contribute a lot of good to the field, though I do have a bit of a minor personal beef with him. He said he was going to reimplement https://github.com/tysam-code/hlb-CIFAR10 in tinygrad, bashed a few parts of the code (my feelings!) for a while on stream, and then gave up a few hours later because of the empirical speed/occupancy numbers. >:( I want my fast reimplementation, George. Just do it.

[+] georgehotz|2 years ago|reply
lol, you should see me bash my own code. I'm even more mean.

https://github.com/tinygrad/tinygrad/blob/master/examples/hl...

have a bunch of bounties on it, we're getting 94%+ now! mostly not me who wrote this, see history. have to switch to float16 and add Winograd convs still. we have a branch with multigpu too.

goal is to beat an A100 in speed on a tinybox.

[+] mysterydip|2 years ago|reply
Do we have enough historical weather data to train a decent AI on? Maybe this is already being done and is just a quieter area of research (in the headline sense)? I know for example every time a hurricane comes toward land all the weather places trot out a half dozen different model predictions saying different things. It would be great if there was more confidence in this area for planning.
[+] pusspuss|2 years ago|reply
"Universe’s Jive"

(Complex instrumental sections with shifts in time signatures. Lyrics have a satirical undertone.)

Verse 1: Summoned a shadow or just a silicon brain? From chessboards to streets, it's all part of the game. Think we're the champs, but are we just inane? Compared to the cosmos, are our brains too lame?

Interlude: (Jazzy guitar riff mixed with an odd-metered percussion line.)

Chorus: The singularity’s knocking, or is it just hype? Caught in the loop, the stereotypical type. Predicting, speculating, swallowing the tripe, But can we decode the universe's archetype?

Verse 2: Markets and dreams, the billionaire’s dance, All looks rosy, till you're out of chance. Machines boast of a masterful glance, Yet, who pulls the strings in this vast expanse?

Bridge: MuZero's groove, feeling so elite, Thinks it’s got the rhythm, can't accept defeat. But against the universe, can it compete? Or just another tune, incomplete, obsolete?

Chorus: Singularity, they say, is the ultimate dive, But can we, mere mortals, really derive? Predictions abound, but can we survive? In the end, it’s about keeping the jive alive.

Verse 3: Computations, simulations, all in a grid, In this tech circus, we're just a tiny squid. Chasing echoes, in shadows we're hid, But the cosmic joke? It's just a quid.

Outro: (A whimsical woodwind section, possibly a kazoo solo.)

In this cosmic game, we strive and strive, Dancing on the edge, trying to thrive. But remember, as the stars contrive, It's not about the end, but the drive.

[+] bilsbie|2 years ago|reply
This makes good points.

And it articulates an idea I’ve been having trouble getting down.

What if increasing intelligence is an exponential problem, and the reason humans all have somewhat similar intelligence isn't that we peaked at some level, but that even vast additional intelligence just doesn't get much more traction against the universe of problems?

E.g. doubling your compute doesn't get you many more cities in the traveling salesman problem.
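
A toy illustration of that point (the compute budget figures are arbitrary, chosen only to show the shape): with brute-force TSP the work grows like n!, so doubling compute buys essentially no additional cities.

    import math

    # Largest n whose brute-force tour enumeration (~n! checks) fits a budget.
    def largest_n_within(budget):
        n = 1
        while math.factorial(n + 1) <= budget:
            n += 1
        return n

    print(largest_n_within(1e15))   # 17 cities
    print(largest_n_within(2e15))   # still 17 -- doubling compute bought nothing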

[+] adrianN|2 years ago|reply
It is likely that even vastly more intelligence doesn't increase the number of children you raise to childbearing age in a hunter-gatherer or subsistence farming society. That says very little about vastly more intelligence in a modern industrial society (that only really existed for like ten generations so far).
[+] smcl|2 years ago|reply
> Back in 2014, Elon Musk referred to AI as summoning the demon. And it wasn’t hard to see that view. Soon, Go agents would beat top humans learning from self play. By the end of 2017, the same algorithm mastered Chess and Shogi. By 2020, it didn’t even need tons of calls to the simulator, and could play Atari too.

> AI looked scary. It looked like it was one FOOM away from self playing and becoming superhuman at the universe. And yet, here we are in 2023 and self driving cars still don’t work.

This is weird. I don't recall this at all. The mainstream press got a little kick out of chess and (to a lesser extent) Go AIs turning over various humans at a few points over the years, but it has only really burst into the mainstream recently. And where it did get any traction, in tech circles such as our own, the response was enthusiastic but definitely more measured. Some were talking a bit about a kind of AI singularity way off in the future, but that was always a very distant and theoretical thing.

[+] Barrin92|2 years ago|reply
Westworld aired in 2016, and Nolan's other TV show featuring an AI world-takeover plot is a little older still. If I had to count the number of mainstream movies, video games, and television shows featuring rogue AIs, especially since the turn of the century, I'd need a very long document. Hell, go back a few more decades: WarGames, the Forbin Project. The literal term 'AI' wasn't as ubiquitous, but threats from runaway automation have arguably been mainstream for over half a century.

AI fears have been a cultural anxiety for a long time now and in many ways they're just a rehashed, secular version of Golem myths anyway.

[+] threeseed|2 years ago|reply
> And yet, here we are in 2023 and self driving cars still don’t work.

Cruise and Waymo were today given licenses to operate across San Francisco 24/7.

Self driving cars do work and are here. They just need LiDAR.

[+] chasd00|2 years ago|reply
> really burst into the mainstream recently.

I think what made LLMs so popular in the mainstream was that anyone could go experience them for themselves in a familiar, low-barrier-to-entry way with a simple chat prompt.

[+] sidcool|2 years ago|reply
George Hotz's recent talk at Comma Con was pretty dope. He was pointing to a future where Comma would not just be a self-driving system, but a general-purpose robotics one. And his confidence was great too. I know he made a mess of his Twitter stint, but I still feel he's a good engineer/entrepreneur.
[+] wslh|2 years ago|reply
The problem with really hard problems like AI/AGI is that people who are extremely intelligent in one topic (e.g. GH in RE) use their influence in another topic, and intelligence is not easily transferable from one domain to the other, or even within the same domain.

This is a typical epistemological crux that can be seen throughout history, like Newton dealing with alchemy [1] and Einstein working on a unified physics theory. You can be the most intelligent human on earth, but that is not enough.

In this particular case, GH's opinion could have been written by any journalist.

[1] https://en.wikipedia.org/wiki/Isaac_Newton%27s_occult_studie...

[+] lsy|2 years ago|reply
Most arguments for "hard takeoff" involve some form of inductive proof: if we have "intelligence level" n, and we know that it can be improved to n + 1, then there is a guarantee of eventual performance n + k where k is arbitrarily large. Aside from not actually having any reliable way to quantify n or the concept of "intelligence" in the first place, this proof falls apart if the improvement is n + ε, where ε changes at each step, and there is no reason to assume ε will escape the pattern of every other process in the known universe of trending to zero in the face of natural limiting factors.
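
A toy sketch of that ε-trending-to-zero objection (the geometric decay rate is an arbitrary illustrative choice): if each round of self-improvement adds a shrinking increment, the total gain stays bounded no matter how many rounds you run.

    # Each round adds epsilon_i = first_step * decay**i; with decay < 1 the
    # series converges, so "arbitrarily large k" never materialises.
    def total_gain(rounds, first_step=1.0, decay=0.5):
        return sum(first_step * decay**i for i in range(rounds))

    print(total_gain(10))     # ~1.998
    print(total_gain(1000))   # ~2.0 -- a thousand rounds, essentially no more gain
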
[+] holmesworcester|2 years ago|reply
> The problem is your model needs to include all the computers playing the market, and it also needs to include the other hedge fund bros themselves.

The error here is equating hard takeoff with such granular and expensive world prediction. Human intelligence finds more efficient compression than this. When Donald Trump decides what provocative thing to say in a speech, he isn't granularly modeling the human intelligence of his supporters, opponents, journalists, and media organizations in a particularly CPU-costly way. He's just discovered some narrower domain where there's a vein of predictability that simplifies the whole system enough that he can guess a tweet or speech will be provocative in the desired way. The same thing happens when a scientist is studying cold fusion or dangerous viruses.

You don't have to predict what will work, you just have to predict what might work, and try and try again. AI doesn't have to be good at predicting the future in any absolute terms. It just has to be better than us at predicting what is worth trying.

Once it is better than us at deciding what's worth trying, why not FOOM? FOOM isn't guaranteed perhaps, but why is it not at least one of the likely outcomes?