janalsncm|7 months ago
> As insurers accurately assess risk through technical testing
If that’s not “the rest of the owl” I don’t know what is.
Let’s swap out superintelligence for something more tangible. Say, a financial crash due to systemic instability. How would you insure against such a thing? I see a few problems, which are even more of an issue for AI.
1. The premium one should pay depends on the expected loss: the damage from the event multiplied by the probability of it occurring. Quantifying the damage term is basically impossible. If you bring down the US financial system, no insurance company can cover that risk. With AI, the damage might be the destruction of all of humanity, if we believe the doomers.
2. Similarly, the probability is basically impossible to quantify. What is the chance of an event that has never happened before? In fact, having “insurance” against such a thing will likely create a moral hazard, encouraging companies to take even bigger risks.
3. On a related point, trying to frame existential losses in financial terms doesn’t make sense. This is like trying to take out an insurance policy that will protect you from Russian roulette. No sum of cash can correct that kind of damage.
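The expected-loss arithmetic in point 1 can be made concrete with a small sketch. Everything here is an illustrative assumption — the probabilities, damage figures, and loading factor are made up, not actuarial data:

```python
# Minimal sketch of expected-loss pricing (illustrative numbers only).

def pure_premium(p_event: float, damage: float) -> float:
    """Expected loss: probability of the event times the damage it causes."""
    return p_event * damage

def loaded_premium(p_event: float, damage: float, loading: float = 0.3) -> float:
    """What an insurer would actually charge: expected loss plus a
    loading for expenses, profit, and parameter uncertainty."""
    return pure_premium(p_event, damage) * (1 + loading)

# A conventional insurable risk: well-estimated probability, bounded damage.
print(loaded_premium(p_event=0.01, damage=1_000_000))

# A systemic risk: both inputs are guesswork spanning orders of magnitude,
# so the "correct" premium spans orders of magnitude too.
for p in (1e-5, 1e-4, 1e-3):
    print(p, loaded_premium(p, damage=1e13))
```

The comment's point survives the sketch: when neither input can be estimated, the output is meaningless, and when the damage exceeds the industry's total capital, no premium makes the risk insurable.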
brdd|7 months ago
1. Someone is always carrying the risk; the question is who should carry it. We suggest private markets price and carry the first $10B+ before the government backstop kicks in. That incentivizes them to price and manage the risk.
2. Insurance has plenty of ways to manage moral hazard (e.g. copays). Pricing any new risk is hard, but at least with AI you can run simulations, red-team, review existing data, etc.
3. We agree on existential losses, but catastrophic events can be priced and covered. Insurers enforcing compliance with audits and standards would help reduce catastrophes, in turn reducing many existential risks.
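The moral-hazard tools mentioned above (copays), together with the idea of leaving the developer on the hook for a portion of any loss, can be sketched as a simple loss-sharing formula. The deductible, copay rate, and cap below are hypothetical parameters, not figures from the article:

```python
# Sketch of a loss-sharing structure: deductible + copay + coverage cap.
# All parameters are hypothetical; the point is that the insured party
# always retains part of the loss, which blunts moral hazard.

def split_loss(loss: float,
               deductible: float = 1e6,
               copay_rate: float = 0.2,
               cap: float = 1e10) -> tuple[float, float]:
    """Return (insured_pays, insurer_pays) for a given loss."""
    covered_layer = max(0.0, min(loss, cap) - deductible)
    insurer_pays = covered_layer * (1 - copay_rate)
    insured_pays = loss - insurer_pays
    return insured_pays, insurer_pays

# Mid-sized incident: the insurer absorbs most of it, but the insured keeps
# the deductible plus a 20% copay on the covered layer.
print(split_loss(5_000_000))

# Loss far above the cap: everything past the cap stays with the insured,
# which is where the proposed government backstop would take over.
print(split_loss(2e10))
```

Under this kind of structure the developer is exposed at both ends — the first dollars via the deductible and copay, and the tail above the cap — so "insured" never means "indifferent to outcomes."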
bvan|7 months ago
xmprt|7 months ago
This only works if there are negative consequences faced by the insured parties when things go wrong. If all the negative consequences are faced by society, and there are no regulations that impose that burden on the companies building AI, then we'll have unchecked development.
brdd|7 months ago
We agree! Unchecked development could lead to disaster. Insurers can insist on adherence to best practices to incentivize safe behavior. They can also clarify liability and cover most (but not all) of the risk, leaving the developer on the hook for a portion of it.
evertedsphere|7 months ago
> But we don’t want medical device manufacturers or nuclear power plant operators to move fast and break things. AI will quickly get baked into critical infrastructure and could enable dangerous misuse.
nobody will put a language model in a pacemaker or a nuclear reactor, because the people who would be in a position to do such things are actual doctors or engineers aware both of their responsibilities and of the long jail term that awaits them if they neglect them
this inevitabilism, to borrow a word from another submission earlier today, about "AI" ending up in critical infrastructure and the important thing being to figure out how to do it right is really quite repugnant
sure, yes, i know about the shitty kinda-explainable statistical models that already control my insurance premiums or likelihood of getting policed or whatever
but why is it a foregone conclusion that people are going to (implicitly rightly so given the framing lets it pass unquestioned!) put llms into things that materially affect my life on the level of it ending due to a stopped heart or a lethal dose of radiation
blibble|7 months ago
I never understood this argument
as a non-USian: I'd prefer to be under the Chinese boot rather than having all of humanity under the boot of an AI
and it is certainly no reason to try to do everything we possibly can to try and summon a machine god
socalgal2|7 months ago
> I'd rather be under the Chinese boot than having all of humanity under the boot of an AI
Those are not the options being offered. The options are the boot of a Western AI or the boot of a Chinese AI. Maybe you'd prefer the Chinese AI boot to the Western AI boot?
> certainly no reason to try to increase the chance of summoning a machine god
The argument is that this is inevitable. If it's possible to make AGI someone will eventually do it. Does it matter who does it first? I don't know. Yes, making it happen faster might be bad. Waiting until someone else does it first might be worse.
gwintrob|7 months ago
I'm biased because my company (Newfront) is in insurance, but there are a lot of great points here. This jumped out: "By 2030, global AI data centers alone are projected to require $5 trillion of investment, while enterprise AI spend is forecast to reach $500 billion."
There's a mega trend of value concentrating in AI (and all the companies that touch/integrate it). Makes a ton of sense that insurance premiums will flow that direction as well.
blibble|7 months ago
> This jumped out: "By 2030, global AI data centers alone are projected to require $5 trillion of investment, while enterprise AI spend is forecast to reach $500 billion."
and by 2040 it will be $5000 trillion!
and by 2050 it will be $5000000 quadrillion!
yahoozoo|7 months ago
With no skin in the game, either it will be cool if superintelligence happens, or it doesn't happen and I just get to enjoy some schadenfreude. Either all of these people are geniuses or they're Jonestown members.
Animats|7 months ago
For this to work, large class actions are needed. If companies are liable for large judgments, they will insure against them. If not, companies will not try to avoid harms for which they need not pay.
lowsong|7 months ago
This article is a bizarre mix of center-right economic ideas and completely unfounded assumptions about the nature of AI technology, to the point where I'm genuinely not sure whether it's intended as parody.
> We’re navigating a tightrope as Superintelligence nears.
There is no evidence we're anywhere near "superintelligence" or AGI. There is no evidence any AI tools are intelligent in any sense, let alone "superintelligent". The only reference given for this, much later, is https://ai-2027.com/ which is no more than fan fiction. You might as well have cited Terminator or The Matrix as evidence.
The only people actually claiming any advancement towards "superintelligence" or "AGI" directly financially gain from people thinking that it's right around the corner.
> If the West slows down unilaterally, China could dominate the 21st century.
Is this casual sinophobia intended to appeal to a particular audience? I can't see what purpose this statement, and others like it, serves other than to try to frame this as "it's us or them".
> Faster than regulation: major pieces of regulation, created by bureaucrats without technical expertise, move at glacial pace.
This is a very common right-wing viewpoint: that regulation, government oversight, and "red tape" are unacceptable burdens on business. It forgets that building codes, public safety regulations, and workers' rights all stem directly from government regulation. This article goes out of its way to frame the claim as obvious, like a simple fact unworthy of introspection.
> Enterprises must adopt AI agents to maintain competitiveness domestically and internationally.
There is no evidence this is the case, and no citation is even attempted.
janalsncm|7 months ago
> The only reference for this, given much later, is to https://ai-2027.com/ which is no more than fan fiction.
There are certainly pretty gaping holes in its logic but it’s more than a fanfic. I’m a bit confused about the incentive of its authors to add their names to it, since it seems if they’re wrong they lose credibility and if they’re right I’m not sure they’ll be able to cash in on the upside.
choeger|7 months ago
Is there any indication whatsoever that there's even a glimpse of artificial intelligence out there?
So far, I have seen language models that, quite impressively, translate between different languages, including between programming languages and natural-language specs. Yes, these models draw on a vast (compressed) store of knowledge from pretty much all of the internet.
There are also chain of thought models, yes, but what kind of actual intelligence can they achieve? Can they formulate novel algorithms? Can they formulate new physics hypotheses? Can they write a novel work of fiction?
Or aren't they actually limited by the confines of what we as a species already know?
roenxi|7 months ago
You seem to be part of a trend that defines most humans as unintelligent - there are remarkably few people capable of formulating novel algorithms or physics hypotheses. Novelists are a little more common, if we count the unreadable slop produced by people who really should choose careers other than writing. It speaks to the progress machines have made that traditional tests of intelligence, like holding a conversation or doing well on an undergraduate-level university exam, apparently no longer measure anything important about intelligence.
If we admit that even relatively stupid humans show some levels of intelligence, as far as I can tell we've already achieved artificial intelligence.
yahoozoo|7 months ago
no
derelicta|7 months ago
brdd|7 months ago
bwfan123|7 months ago
> We’re navigating a tightrope as Superintelligence nears. If the West slows down unilaterally, China could dominate the 21st century
I stopped reading after this. There is no evidence that superintelligence is nearing, nor even a clear definition of what "superintelligence nearing" would mean. This is a classic "assume the sale" gambit, with fear-mongering as its appeal.
muskmusk|7 months ago
Finally some clear thinking on a very important topic.