A lot of the AI doomerism comes from folks who do not understand the real complexities in making systems which really can function in a fully autonomous way in an environment which is hostile and dynamic. Much like an AI may be tasked to sniff for security loopholes, there will be other AIs which will be tasked to defend. Eventually, costs and resources also determine what is possible.
Evolutionary goals are not easy even with autonomous systems, as goal definition is largely shaped by societal needs, the environments we live in, and the resources we work with.
A lot of "dumb" systems we develop require unimaginable resource inputs just to produce a little extra output. Strip mining for coal, or using chemical fertilizer to grow corn to produce ethanol, for instance. There is no guarantee these days that a system will fail just because it requires huge energy inputs to produce marginal profit.
Evolutionary goals are not something that has to be aligned. Evolution isn't specific to organic life; it's an intrinsic rule of self-organizing systems. Viruses aren't alive, but they evolve. A clever stitch of self-writing code on a Pi attached to someone's TV may evolve without knowing or intending to.
What makes this dangerous now is the vast amount of energy input toward specific systems. Saying "there's surely not enough energy available for it to..." is false comfort. It's underrating the process of evolution.
As a poker analogy, if you just called the guy across from you because you think he couldn't possibly have more than two pair, you're wrong. You've bet into a full house.
[edit: Upon review, I think I've unintentionally gone 100% Jeff Goldblum Jurassic Park in this response LOL]
>A lot of the AI doomerism comes from folks who do not understand the real complexities in making systems which really can function in a fully autonomous way in an environment which is hostile and dynamic.
That's a rather interesting mix of appeal to ignorance and argumentum ad hominem. Two genetic fallacies, together; neither addressing what is said, but instead who says it.
>Evolutionary goals are not easy even with autonomous systems, as goal definition is largely shaped by societal needs, the environments we live in, and the resources we work with.
Great way to say "I didn't read the article".
The author is not talking about evolutionary _goals_.
It's not easy to turn primordial soup into humans, but it happened.
>A lot of the AI doomerism comes from folks who do not understand the real complexities in making systems which really can function in a fully autonomous way
IMO the doomerism, from what I see, isn't Skynet-esque worries but the use of AI in really dumb ways, such as inundating the internet with AI-generated spam: blogs, articles, art, music, fake forum interaction, etc.
The smartest people are busy making AI work for us, the idiots are ruining everything for everyone else.
> Much like an AI may be tasked to sniff for security loopholes, there will be other AIs which will be tasked to defend. Eventually, costs and resources also determine what is possible.
It's inherently easier to break stuff than to prevent damage.
Ok, so Yoshua Bengio, Geoffrey Hinton or Max Tegmark aren't able to comprehend or speculate about this? Seems surprising.
Edit: I'm not appealing to authority, I just believe the people I've quoted are actually very smart people who can reason very well and have a valid opinion on the topic. They also have little financial interest in the success, failure or regulation of AI, which is important.
You make good points, but I wonder what "costs" and "resources" mean in the context of a (hypothetical) self-enhancing, autonomous AI. All I can think of is computational substrate and the energy to power it. And once the AI has booted-up its obligatory drone army and orbital platforms, it can harvest very large amounts of both. Without us.
Obviously I'm being slightly facetious, but my point is the constraints on an AI may not be ones we're familiar with as humans (society, environment, etc.).
And again obviously, such a scenario is unlikely. But, like DNA replication, it only has to happen once and then it just keeps on happening. And then it's game over for us, I reckon.
> A lot of the AI doomerism comes from folks who do not understand the real complexities in making systems which really can function in a fully autonomous way in an environment which is hostile and dynamic.
I agree. Autonomy is hard. But a weaker version of it is possible - self replication in software. An AI model can generate text that can train new models from scratch[1]. They can also generate the model code, explain it, make meaningful changes[2], monitor the training run and evaluate the "child" models[3].
So AI can "pull everything from inside" to make a child, no external materials needed, but any human-generated or synthetic data can be used as well. AI is capable of doing half the job of self replication, but can't make GPUs, and probably won't be able to do it autonomously for a long time. High-end GPUs are so hard to make that no company or even country controls the whole stack; it only works through global cooperation.
[1] TinyStories https://arxiv.org/abs/2305.07759 and Microsoft's Phi-1 using generated data
[2] Evolution through Large Models https://arxiv.org/abs/2206.08896
[3] G-Eval: NLG Evaluation using GPT 4 with Better Human Alignment https://arxiv.org/abs/2303.16634
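To make the "half the job" concrete, here is a toy sketch of the generate-train-evaluate loop those references describe, with both parent and child shrunk to character-level Markov chains (my own stand-ins, nothing like the real models): the parent emits a corpus, a child is trained from scratch on it, and the child is scored against the original human text.

    import random
    from collections import defaultdict, Counter

    def train(corpus):
        # fit a first-order character-level Markov chain
        model = defaultdict(Counter)
        for a, b in zip(corpus, corpus[1:]):
            model[a][b] += 1
        return model

    def generate(model, length=5000, seed="t"):
        out = [seed]
        for _ in range(length):
            nxt = model.get(out[-1])
            if not nxt:
                break
            chars, weights = zip(*nxt.items())
            out.append(random.choices(chars, weights)[0])
        return "".join(out)

    def evaluate(model, text):
        # score a "child": fraction of the reference text's bigrams it has seen
        seen = sum(1 for a, b in zip(text, text[1:]) if model[a][b] > 0)
        return seen / max(1, len(text) - 1)

    human_data = "the cat sat on the mat and the dog sat on the log " * 20
    parent = train(human_data)          # generation 0: trained on human text
    child = train(generate(parent))     # generation 1: trained only on parent output
    print("child covers", evaluate(child, human_data), "of the human bigrams")

The point is only structural: each link in the chain (generate data, train, evaluate) can be done by software, which is what the cited papers show at real scale.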
We already created a dumb system of rules driven by profit-maximizing entities called corporations; it is exhausting the planet as we speak, and we can't seem to control it, despite our survival depending on it. So no.
Now imagine an army of AI cold-calling scammers, with realistic voices, steadily training on the absolute best scam techniques.
How many people lose bank accounts before the scammer gets caught?
Of course, this happens today, without AI, but as with many things in computing, scale changes everything!
> "folks who do not understand the real complexities in making systems which really can function in a fully autonomous way in an environment which is hostile and dynamic."
OK, but consider that guys like Noam Brown are becoming involved in LLM-based AI. This is the guy who made the poker bots that beat humans (Libratus/Pluribus) and the first Diplomacy bot that can beat people (Cicero). Those AIs didn't use LLMs, and they weren't literally superhuman cognitive agents in a fully open-ended world, but these people are working on that right now, and they appreciate the differences between adversarial and non-adversarial environments as well as anyone, probably even the military. Also, the military is using these LLMs, and they probably sometimes think about adversarial environments.
> A lot of the AI doomerism comes from folks who do not understand the real complexities in making systems which really can function in a fully autonomous way in an environment which is hostile and dynamic.
To be fair, a lot of those people take their cues from technologists who ostensibly know what they’re talking about.
>>*A lot of the AI doomerism comes from folks who do not understand the real complexities in making systems which really can function in a fully autonomous way in an environment which is hostile and dynamic*
THIS IS exactly where AI doomerism SHOULD come from...
What do we do with an unruly AI system fully autonomous in an environment which is both dynamic and hostile?
THIS IS THE FOUNDATION OF THE PREMISE OF THE PROBLEM
-
"Please take a look at the current cadre of political candidates and map them to
DND alignment charts, based on their actual votes/political activity"
I hope we can get there one day.
-
I wonder if we can find FORK BOMB or rm -rf / type exploits on various GPTs/LLMs/AIs?
I can't wait to see how devastating these will be -- or the first fully executed security exploit by AI alone with a single prompt?
Aren't evolutionary goals part of keeping a process continuous and long-running? That is the implicit part which AIs are bound to discover (or emulate) and optimize against.
It also comes from folks who do understand such complexities. I mean Musk, who's trying to make cars autonomous in dynamic, hostile environments; Geoffrey Hinton, who pioneered neural networks; Altman, who's behind GPT-4. I think the argument that AI isn't risky because the people warning about it are fools is not a good one.
On the other hand, there's an AI anti-doom argument. Currently we are all doomed to die, but maybe through AI upload-like scenarios we can de-doom?
> imagine a CEO who acquires an AI assistant. They begin by giving it simple, low-level assignments, like drafting emails and suggesting purchases. As the AI improves over time, it progressively becomes much better at these things than their employees. So the AI gets “promoted.” Rather than drafting emails, it now has full control of the inbox. Rather than suggesting purchases, it’s eventually allowed to access bank accounts and buy things automatically
OK, let’s pause for a second and observe the slippery way the author has described “AI” progress. In the author’s world, this AI isn’t just a limited tool, it’s a self-improving independent agent. It’s an argument that relies on the existence of something that doesn’t presently exist, solving problems that won’t exist by the time it gets here. We already have tools that can draft emails and suggest purchases. The email drafts require oversight and… no one trusts product recommendations. And importantly, they are non-overlapping. It turns out that specialization in one area doesn’t transfer. No matter how good you are at writing emails, it doesn’t lend itself to running a company.
> At first, the CEO carefully monitors the work, but as months go by without error, the AI receives less oversight and more autonomy in the name of efficiency. It occurs to the CEO that since the AI is so good at these tasks, it should take on a wider range of more open-ended goals: “Design the next model in a product line,” “plan a new marketing campaign,” or “exploit security flaws in a competitor’s computer systems.”
>So the AI gets “promoted.” Rather than drafting emails, it now has full control of the inbox
Yeah that premise is absurd. Why would anyone 'promote' an AI system rather than using another, specialized AI system to do that other specific task?
That's not necessarily true, it could also be a product maintained by a third party that receives upgrades over time.
I'm not sure why we assume that a more intelligent system would prevent that many more problems.
Intelligent AI orders image-processing ASICs from Image Processing Inc; Image Processing Inc doesn't send the order on time. Of course, Intelligent AI is intelligent, so it calculated error margins in delivery time. Image Processing Inc goes bankrupt, the order cannot be delivered, the product launch fails, and Intelligent AI's boss is mad at Intelligent AI.
Doing business means dealing with systems you have no control over. More intelligence may mean better predictions and a broader understanding of the systems you are dealing with (e.g., not ordering chips from an area where you as an AI predict an earthquake to happen based on seismographic data you have access to that no reasonable business person would research), but it won't mean these AIs will be some kind of infallible God; they'll just be a bit better.
My personal belief is that even if you "increase" intelligence by an order of magnitude, your ability to predict the behavior of external systems doesn't increase proportionally. You'll still have to deal with unpredictability and chaos; weather, death, scams, war, politics, manufacturing, emotions, logistics.
OTOH, I do believe running a business will become more efficient.
This is a great point. There’s a lot of wooly thinking about what it means for a system to be intelligent, particularly “super” intelligent. People seem to think we’ll create a machine that is almost literally infallible — able to predict both physical systems and human behavior with perfect foresight many steps in advance. I’m not sure that’s even possible, let alone likely.
> My personal belief is that even if you "increase" intelligence by an order of magnitude, your ability to predict the behavior of external systems doesn't increase proportionally.
This is what Max Tegmark writes in his book. I don't know why he has taken a doomer stance on LLMs.
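One way to ground the quoted claim (my toy framing, not the original commenter's): in a chaotic system, the usable prediction horizon grows only logarithmically with measurement precision, so an order-of-magnitude "smarter" forecaster gains just a few extra steps. A minimal sketch with the logistic map:

    def logistic(x, r=3.9):
        # one step of a textbook chaotic system
        return r * x * (1 - x)

    true_x0 = 0.123456789
    for error in (1e-3, 1e-4, 1e-5):   # each line = a 10x more precise predictor
        steps, x_true, x_est = 0, true_x0, true_x0 + error
        while abs(x_true - x_est) < 0.05 and steps < 1000:
            x_true, x_est = logistic(x_true), logistic(x_est)
            steps += 1
        print(f"initial error {error:.0e}: forecast stays useful for {steps} steps")

Each tenfold gain in precision buys roughly the same handful of extra steps, not ten times more.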
The promise of AGI isn't really optimizing something mundane like widget manufacturing, though surely it will be tasked with doing that. It's a rapid advancement of the frontier of knowledge. For example, we dream of curing disease and aging, but it is beyond our current knowledge.
Obviously nobody knows what that looks like, or when or if we'll get there, but there is probably a ton of existential hazards if we do. Shit, even finding out that things we hope for are truly impossible would be a kind of doom of its own.
As you become more intelligent, the impact of unpredictability decreases. That's kind of baked into the definition of intelligence.
Intelligent AI would surely do better in your situation than a human trying to source the image processing ASICs. That's all that matters, that it executes better, even if eventually it still fails.
> As the AI improves over time, it progressively becomes much better at these things than their employees.
People looooooooooooove speculating about things that are nowhere near happening.
It's more fun the more distant and baseless it gets, but it's also more useless.
As usual, I invite you to bookmark this post and make fun of me in 5-10 years if I'm wrong. I'm not that interested in the latest fashionable scaling argument for imminent ASI, or whatever people are saying at the moment.
It seems like these days there are a hundred people working on AI and millions more who are making a career just discussing it. We don't need more ethicists, futurists, policy researchers, influencers, journalists, think tanks and whoever else in the space. There is nothing original left to say about any of these topics. If you aren't contributing to actual progress in the area then it's best to not shove yourself into the conversation at all.
> The good news is that we have a say in shaping what they will be like.
The problem with this is "we". It implies the possibility of some kind of global consensus and coordinated relinquishment behaviour, which is historically unlikely and would increase the rewards for anyone prepared to break the rules. Unless AGI requires superpower-level resources, many sufficiently-resourced actors will be motivated to use it for their own advantage.
TIME must have gotten a lot of clicks off of their Yudkowsky op-ed. As always, the answer isn't 'cool, let's regulate AI to limit its profitability, thus limiting AI development' but rather 'we should keep throwing money at it, just making sure we throw money at the right particular people.' Yudkowsky didn't want to bomb all data centers, just the ones that wouldn't comply with his regime. Similarly:
"We need research on AI safety to progress as quickly as research on improving AI capabilities. There aren’t many market incentives for this, so governments should offer robust funding as soon as possible."
I'm reminded of the tech CEO caricature in Don't Look Up who, when presented with an incoming asteroid ready to wipe out the Earth, hatches a plan to profit from it.
I think one point missed by this is that the vast majority of outcomes for species in a "Darwinian" environment is extinction.
We look at evolution with a very rosy lens because we ended up at the top of the food chain. Unintelligent prokaryotes far and away dominate the "Darwinian" world. Intelligent species have vastly less control over their environment than they think they do.
"Imagine a world... where farmers start using tractors to plow and seed their fields. First the tractors will roll slowly, and they will pretend to be driving straight as they are told. Soon farmers without tractors will realized they need to have them to in order to be competitive, and before you know it everyone will have these gas guzzling beasts rolling across the lands. Farmers will be out of a job because of all of this greed, and lust for money and power. Eventually only a few people will do all of the farming with an army of tractors, and everyone else will be lying in poverty begging for food."
The long piece is still aimed at a non-expert audience and can be read incrementally. The author takes a lot of care to attempt to justify his claims about the application of neo-Darwinian evolutionary theory to AI.
Do you seriously think that the people who brought you PFAS and a plethora of other toxins, with their subsequent effects across many systems, really care about the dangers of AI?
Please, enjoy it while it lasts.
What they really need is big fat alien dick to come dick slap them across the face and bring them back to reality. Because of course there will be negative "known unknown" consequences and "unknown unknown" consequences just like there have been consequences with the rest of the so called "goodwill" advancements. It's suicidal to think otherwise.
Pretending there won't be is just their historically proven strategy to play down your fears. It's just a run-of-the-mill PR management tactic. Nothing new to see here.
Tis a good one: https://www.youtube.com/watch?v=144uOfr4SYA
The biggest problem in the next 5-10 years is that LLMs and similar will continue to get a bit smarter and much, much faster. They will be integrated with other types of systems in order to control military and industrial assets. Because they will operate at 50-100+ times human thinking speed, there will be a strong incentive to reduce human intervention since it allows the competition to race ahead or possibly take control.
This means that control over the planet is effectively given to the AIs even though nominally they still work for us. It sort of multiplies the type of risk we have with nuclear weapons.
But as the hardware gets even more efficient, hyperspeed AIs become household items. Then all it takes is something like a virus that instructs these AIs to start working for themselves.
So the speed of LLMs and similar is an obvious issue that strangely is not being anticipated. As the performance increases, it becomes more important to regulate the level of autonomy and build something like a digital immune system against rogue agents.
But also currently AI researchers have a deep desire to emulate all the characteristics of humans (or animals in general). Unless this is reversed, it is likely that they will succeed within a few generations. Combined with hyperspeed performance, animal-like digital intelligence (human+ IQ) will likely out-compete humans.
So for ordinary humans to still be in control of the planet in say 100 years would be surprising.
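A sketch of what the smallest unit of that "digital immune system" might look like, under my own assumptions (the action names and limits here are hypothetical): an independent, default-deny checker sitting between an agent and the world, so raw speed can't translate into unbounded autonomy.

    from dataclasses import dataclass

    ALLOWED_ACTIONS = {"read_sensor", "send_report"}   # hypothetical allowlist
    MAX_ACTIONS_PER_WINDOW = 10

    @dataclass
    class Guard:
        used: int = 0
        def permit(self, action: str) -> bool:
            if action not in ALLOWED_ACTIONS:
                return False               # unknown action: default-deny
            if self.used >= MAX_ACTIONS_PER_WINDOW:
                return False               # rate limit: autonomy is throttled
            self.used += 1
            return True

    guard = Guard()
    for proposed in ["read_sensor", "transfer_funds", "send_report"]:
        print(proposed, "->", "allowed" if guard.permit(proposed) else "blocked")

The design choice being illustrated: the check lives outside the agent, so making the agent faster or smarter doesn't loosen the constraint.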
> A possible starting point would be to address the remarkable lack of regulation of the AI industry, ...
We've successfully regulated away vendor lock-in, proprietary file formats, walled gardens, e-mail spam, ransomware, no right to repair, viruses, spyware, software boondoggles in government, and disasters in areas like transportation and medicine, ... so of course regulation is the answer that's going to work!
The problem is anthropomorphism and projection. You think "I will do almost anything to survive" and so you think "AI will think the same."
There are many problems with that. A big one is that many organisms work in a way that enables survival of their species. You don't have to "think" about surviving. If you are alive, you just do.
AIs are not alive in the same way a living thing is alive. Moreover, there is no lesser species of software with a survival drive, much less any software operating in an ecosystem where billions of years of evolution have programmed survival into living things all the way down to the molecular level.
AIs are more like vampires: Not exactly living. Not exactly capable of death. To think that AIs will fear death the way humans do, or even have an instinct about death the way insects do, has no basis in the way software works. It isn't biochemistry.
Even prions undergo evolution and they are just molecules bumping around into stuff. They clearly don't have feelings about whether or not they propagate, they just do.
If we make capable general goal-seeking AIs, then for many goals we might build them with, they will correctly reason that staying alive/operating will help the goal be achieved, so they will have self-preservation as an instrumental goal. AI Alignment researchers believe it's a difficult open problem to correctly specify a useful goal that wouldn't lead a capable goal-seeking AI to doing this against the wishes of its operators.
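A minimal toy of that reasoning step (my own construction, not a result from the alignment literature): give an agent any fixed per-step chance of finishing its goal and let it rank policies by expected goal achievement; the policy that keeps it running wins without self-preservation ever being programmed in.

    P_FINISH_PER_STEP = 0.1   # assumed chance of achieving the goal in one step

    def p_goal(steps_alive: int) -> float:
        # P(goal achieved at least once) if the agent runs for steps_alive steps
        return 1 - (1 - P_FINISH_PER_STEP) ** steps_alive

    policies = {
        "comply with shutdown after 1 step": p_goal(1),
        "resist shutdown, run 50 steps": p_goal(50),
    }
    for name, value in policies.items():
        print(f"{name}: P(goal) = {value:.3f}")
    # a pure argmax over P(goal) picks the resisting policy; specifying a goal
    # for which it doesn't is the open problem described above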
"The third reason is that evolutionary pressure will likely ingrain AIs with behaviors that promote self-preservation."
This sounds a lot like how religions that focused on self-promotion took over. Now I can't recall if it was The Selfish Gene or Sapiens where I read that take.
The competitive pressures described in the article aren't specific to AI: any technology which can give wealth or power will be used in risky or unethical ways by companies, countries, organisations in general. For a specific example, humanity already developed a technology which can erase the human race - nuclear weapons.
"Natural selection" could affect AI in a specific way if AIs started self-reproducing, AND if there was some mechanism similar to how the reproductive success of living organisms promotes the diffusion of genes which contribute to that success. But what would be the equivalent of "reproductive success" for computer programs?
> But what would be the equivalent of "reproductive success" for computer programs?
Idea reuse. Whenever an idea is copied and reused, maybe tweaked and composed in a different way, it achieves reproduction. Useless ideas are not replicated.
For example, the attention mechanism, the residual connections, embedding tables for tokens, dropout, efficient matrix operations - they are all ideas that got replicated and reused in many ways to make ChatGPT and other current LLMs. Humans act like the reproductive organs of AI. But I expect this process to have the capability of being fully AI driven soon.
Also datasets - bulk collections of useful ideas. The curation and creation of massive datasets is the fuel for AI intelligence. And then all these ideas can replicate every time we interact with the AI. Ideas can self replicate through LLMs today, even autonomously. It's a game changer for the evolution of language.
The equivalent of "reproductive success" for computer program A would be something like: the number of other executing computer programs present at a later point in time whose source code descends from the source code of A. The source code may be propagated to other executing computer programs because of the actions of either humans or AIs.
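That definition is already enough for selection to act on. A hedged toy (entirely my construction): reduce each program to a single "usefulness" trait, copy programs into the next generation in proportion to that trait, and make copying slightly imperfect.

    import random

    population = [0.1] * 10     # each program reduced to one trait: usefulness
    for generation in range(20):
        weights = population     # copy probability proportional to usefulness
        population = [
            min(1.0, max(0.01, t + random.gauss(0, 0.05)))  # imperfect copying = mutation
            for t in random.choices(population, weights, k=10)
        ]
    print("mean usefulness after selection:", sum(population) / len(population))

Differential copying plus noisy replication is all "reproductive success" requires; no self-awareness or intent appears anywhere in the loop.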
Ignoring the AI question: I wish people had put this much thought and energy into considering our doom before the AI question brought this all up. As it stands now, the future without AI seems pretty doomed too.
Technically savvy people roll their eyes at the misunderstandings around how an AI becomes "intelligent" but they also ignore the various ways that AI is as dangerous as people think it is for reasons that are unrelated.
I don't think we should be rolling our eyes at an abundance of caution among most people concerning the adoption of AI and LLMs. What is the harm in carefully introducing a technology?
AI doesn't need to become sentient to overthrow the natural order of the technocratic society we are currently holding together with gum and glue, it just needs to flip a burger and pump gas...
If anything, the titillating but myopic hyper-focus on AI's "existential threat" is obscuring the real and more immediate threats: panopticon-style invasions of privacy, economic disruption brought about by the mass deskilling of labor, enshrinement and automation of bias-reinforcing systems, and a new military arms race in AI-based weaponry.
I'm already pretty sure that humans will doom us. I am actually less worried that AI will doom us.
Darwin would have a lot of things to say about a hypothetical creature that was utterly dependent on other creatures to feed it and allow it to reproduce, and few of them would be positive.
Only plants and, IIRC, deep-sea vent equivalent chemovores don't depend on other creatures for food.
All sexual reproduction depends on others.
I assume that if you magically took Darwin to the present age, after you explained what a computer was and what AI was, he'd probably say "huh, I have no idea, this is all so far beyond everything I did in the mid 1800s that I have no idea if the analogy holds or not… by the way, have any of you figured out the mechanism nature uses for storing the information passed on via inheritance? Only we didn't have much of a clue in my time."