Something tells me aspects of living in the next few decades, driven by technology acceleration, will feel like being lobotomized while conscious, watching oneself the whole time. Like yes, we are able to think of thousands of hypothetical ways technology (even technology inferior to full AGI) could go off the rails catastrophically, and post and discuss these scenarios endlessly... and yet it doesn't result in any slowing or stopping of the progress leading there. All it takes is a single group with enough collective intelligence and breakthroughs, and the next AI will be delivered to our doorstep whether or not we asked for it.
It reminds me of the books I read in my youth: only 20 years later did I realize the authors of some of those books were trying to deliver important life messages to a teenager undergoing crucial changes, all of which would be painfully relevant to the adult I am now... and yet the whole time they fell on deaf ears. The message was right there, but for too long I did not have the emotional/perceptive intelligence to pick up on it and internalize it.
AI isn't like nuclear fission. You can't remotely detect that somebody is training an AI. It's far too late to sequester all the information related to AI like what was done with uranium enrichment. The equipment needed to train AI is cheap and ubiquitous.
These "safety declarations" are toothless and impossible to enforce. You can't stop AI, you need to adapt. Video and pictures will soon have no evidentiary value. Real life relationships must be valued over online relationships because you know the other person is real. It's unfortunate, but nothing AI is "disrupting" existed 200 years ago and people will learn to adapt like they always have.
To quote the fictional comic book villain Toyo Harada, "none of you can stop me. Not any one of you individually nor the whole of you collectively."
> Video and pictures will soon have no evidentiary value.
I think we may eventually get camera authentication as a result of this, probably legally enforced in the same way and for similar reasons as Japan enforced that digital camera shutters have to make a noise.
> but nothing AI is "disrupting" existed 200 years ago
200 years ago there were about 1 billion people on earth; now there are about 8 billion. Anarchoprimitivists and degrowth people make a similar handwave about the advances of the last 200 years, but those advances are important to holding up the systems that keep a lot of people alive.
> Video and pictures will soon have no evidentiary value
We still accept eyewitness testimony in courts. Video and pictures will be fine, their context is what will matter. Where we'll have a generation of chaos is in the public sphere, as everyone born before somewhere between 1975 and now fails to think critically when presented with an image they'd like to believe is true.
> You can't remotely detect that somebody is training an AI.
There are training runs in progress that will use billions of dollars of electricity and GPUs. Quite detectable -- and stoppable by any government that wants to stop such things from happening on territory it controls.
And certainly we can reduce the economic incentive to invest in such a run by banning AI-based services like ChatGPT.
> You can't remotely detect that somebody is training an AI.
Energy use is energy use: training is still incredibly energy intensive, GPU heat signatures differ from non-GPU ones, and it's fairly trivial to detect large-scale GPU usage (see the sketch below for a sense of scale).
Enforcement is a different problem, and it is not specific to AI; if you cannot enforce an agreement, it doesn't matter whether it's AI, nuclear weapons, or sarin gas.
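A back-of-envelope sketch of that scale, in Python; every figure below is an assumption for illustration, not a measurement:

    # Back-of-envelope footprint of a hypothetical frontier training run.
    # All numbers are assumptions: cluster size, board power, overhead, duration.
    gpus = 100_000          # hypothetical cluster size
    watts_per_gpu = 700     # roughly H100-class board power
    overhead = 1.3          # assumed multiplier for cooling and networking
    days = 90               # assumed run length

    megawatts = gpus * watts_per_gpu * overhead / 1e6
    gigawatt_hours = megawatts * 24 * days / 1000
    print(f"~{megawatts:.0f} MW continuous, ~{gigawatt_hours:.0f} GWh total")
    # -> ~91 MW continuous, ~197 GWh total: grid-visible, not garage-scale.

Loads of that order have to go through utility interconnection long before training starts, which is part of why they are hard to hide.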
The US worked with all printer manufacturers to add watermarking. In theory they could work with fabs or service providers to embed instruction detection, similar to how hosting providers detect cryptocurrency-mining workloads.
You really can. The government often knows when an individual makes a bomb in their garage: they know the recipes and they monitor the ingredients. When someone buys tens of thousands of GPUs, people notice. When someone builds a new foundry, people notice. These are enormous changes.
Videos and pictures are not the evidence. The declaration that a video or photo is an accurate depiction of events is the evidence.
The law was one step ahead the whole time.
> It's far too late to sequester all the information related to AI like what was done with uranium enrichment.
I think this presumes that Sam Altman is correct to claim that they can scale their way to, in the practical sense of the word, AGI.
If he is right about that, you are right that it's too late to hide it; if he's wrong, I think the AI architecture and/or training methods we have yet to invent are in the set of things we could usefully sequester.
> The equipment needed to train AI is cheap and ubiquitous.
Again, possibly:
If we were already close even before DeepSeek's models, yes, the hardware is too cheap and too ubiquitous.
If we're still not close even despite DeepSeek's cost reductions, then the hardware isn't cheap enough — and Yudkowsky's call for a global treaty on maximum size of data centre to be enforced by cruise missiles when governments can't or won't use police action, still makes sense.
> These "safety declarations" are toothless and impossible to enforce. You can't stop AI, you need to adapt.
Deepfakes are a distraction from more important things here. The point of AI safety is "it doesn't matter who builds unaligned AGI, if someone builds it we all die".
If you agree that unaligned AGI is a death sentence for humanity, then it's worth trying to stop it.
If you think AGI is unlikely to come about at all, then it should be a no-op to say "don't build it, take steps to avoid building it".
If you think AGI is going to come about and magically be aligned and not be a death sentence for humanity, pay close attention to the very large number of AI experts saying otherwise. https://en.wikipedia.org/wiki/P(doom)
If your argument is "but some experts don't believe that", ask yourself whether it's reasonable to say "well, experts disagree about whether this will kill us all, so we shouldn't do anything".
Half moat-building, half marketing. The need for "safety" implies some awesome power.
Don't get me wrong, they are impressive. I can see LLMs eventually enabling people to be 10x more productive in jobs that involve interacting with a computer all day.
> All this "AI safety" is purely moat-building for the likes of OpenAI et. al. to prevent upstarts like DeepSeek.
Modern AI safety originated with people like Eliezer Yudkowsky, Nick Bostrom, the LessWrong/rationality movement etc.
They very much were not just talking about it only to build moats for OpenAI. For one thing, OpenAI didn't exist at the time, AI was not anywhere close to where it is today, and almost everyone thought their arguments were ridiculous.
You might not agree with them, but you can't simply dismiss their arguments as only being there to prop up the existing AI players, that's wrong and disingenuous.
> LLMs will not get us to AGI. Not even close. Altman talking about this danger is like Musk talking about driverless taxis.
AGI is a meaningless term. The LLM architecture has shown promise in every single domain where perceptron neural networks were once used. By all accounts, on the things that fit their 'senses', LLMs are significantly smarter than the average human being.
When you are the dominant world power, you just don't let others determine your strategy, as simple as that.
Attempts at curbing AI will come from those who are losing the race. There's an interview where Edward Teller recalls how the USSR used a moratorium on nuclear testing to catch up with the US on the hydrogen bomb, and how he was the one telling the idealist scientists that that was going to happen.
I missed the boat on the 80s but as a “hacker” who made it through the 90s and 00s there’s something deeply sad and disturbing about how the conversation around AI is trending.
Imagine telling hackers from the past that people on a website called “hacker news” would be arguing about how important it is that the government criminalize running code on your own computer. It’s so astoundingly, ethically, philosophically opposed to everything that inspired me to get into computers in the first place. I can only wonder whether people really believe this, or whether it’s a sophisticated narrative that’s convenient to certain corporations and politicians.
Most likely the countries that have unconstrained AGIs will advance technologically by leaps and bounds, and those that constrain it will remain in the "stone age" by comparison.
Assuming AGI doesn't lead to an instant apocalyptic scenario, it is more likely to lead to a form of resource curse[1] than to anything that benefits the majority. In general, countries where the elite depend on the labor of the people for their income have better outcomes for the majority than countries where they don't (see for example developing countries with rich oil reserves).
What would AGI lead to? Most knowledge work would be replaced in the same way manufacturing work has been, and AGI would be in the control of the existing elite. It would be used to suppress any revolt for eternity, because surveillance could be perfectly automated and omnipresent.
Really not something to aspire to.
[1]: https://en.wikipedia.org/wiki/Resource_curse
Perhaps, but meanwhile making it legal to put racial-profiling AI tech in the hands of government and corporations does a great disservice to your freedom and privacy. Do not buy the narrative: EU regulations are not about forbidding AGI, they're about ensuring a minimum of decency in how the tech is allowed to exist. Something Americans seem deathly allergic to.
Or maybe those countries' economies will collapse once they let AGIs control institutions instead of human bureaucrats, because the AGIs are doing their own thing and trick the government by alignment faking and in-context scheming.
Or it will be viewed like nuclear weapons and those who have it will be bombed by those who don't.
These are all Silicon Valley "neck thoughts." They're entirely uninformed by the current state of the world and any travels through it. They're fantasies brought about by people with purely monetary desires.
It'd be funny if there weren't billions of dollars being burned to market this crap.
Am I right in understanding that this "declaration" is not a commitment to do anything specific? I don't really understand why it matters who does or does not sign it.
These kind of "don't be evil" declarations are typically meaningless gestures by which non-players who weren't going to be participating anyway can posture as morally superior, while having no meaningful impact on the course of things. See also, the Ottawa Treaty; non-signatories include the US, China, Russia, Pakistan and India, Egypt, Israel, Iran, Cuba, North and South Korea... In other words all the countries from which landmine use is expected in the first place. And when push comes to shove, signatories like Ukraine will use landmines anyway because national defense is worth more than feeling morally superior for adhering to a piece of paper.
The fundamental issue with these AI safety declarations is that they completely ignore game theory. The technology has already proliferated (see: DeepSeek, Qwen) and trying to control it through international agreements is like trying to control cryptography in the 90s.
I've spent enough time building with these models to see their transformative potential. The productivity gains aren't marginal - they're exponential. And this is just with current-gen models.
China's approach is particularly telling. While they lack the massive compute infrastructure of US tech giants, their research output is impressive. Their models may be smaller, but they're remarkably efficient. Look at DeepSeek's performance-to-parameter ratio.
The upside potential is simply too large to ignore. We're seeing breakthroughs in protein folding that would take traditional methods decades. Education is being personalized at scale. The translation capabilities alone are revolutionary.
The reality is that AI development will continue regardless of these declarations. The optimal strategy isn't to slow down - it's to maintain the lead while developing safety measures in parallel. Everything else is just security theater.
(And yes, I've read the usual arguments about x-risk. The bottleneck isn't safety frameworks - it's compute and data quality.)
Europe is hopeless so it does not make a difference. China can sign and ignore it so it does not make a difference.
But it would not be wise for the USA to have its hands tied so early. I suppose the UK wants to go its usual route of lighter-touch regulation than the EU to attract investment. Plus they are obviously trying hard to make friends with the new US administration.
Given what is potentially at stake if you're not the first nation to achieve ASI, it's a little late to start imposing restrictions or adding distractions.
Similarly, whoever first gains the most training and fine-tuning data, from whatever source and via whatever means, will likely be at an advantage.
Hard to see how that toothpaste goes back in the tube now.
What benefit do these AI regulations provide to progressing AI/AGI development? Do they slow down progress? If so, how do the countries that intend to enforce these regulations plan to compete on AI/AGI with countries that don’t have these regulations?
What exactly is the letter declaring? There are so many interpretations of "AI safety", most of which don't actually have anything to do with maximizing societal and ecosystem prosperity or minimizing the likelihood of destruction or suffering. In fact some concepts of AI safety I have seen are doublespeak for rules that are more likely to lead to AI-imposed tyranny.
Where is the nuanced discussion of what we want and don't want AI to do as a society?
These details matter, and working through them collectively is progress, in stark contrast to getting dragged into identity politics arguments.
- I want AI to increase my freedom to do more and spend more time doing things I find meaningful and rewarding.
- I want AI to help us repair damage we have done to ecosystems and reverse species diversity collapse.
- I want AI to allow me to consume more in a completely sustainable way for me and the environment.
- I want AI that is an excellent and honest curator of truth, both in terms of accurate descriptions of the past and nuanced explanations of how reality works.
- I want AI that elegantly supports a diversity of values, so I can live how I want and others can live how they want.
- I don't want AI that forcefully and arbitrarily limits my freedoms.
- I don't want AI that forcefully imposes other people's values on me (or imposes my values on others).
- I don't want AI war that destroys our civilization and creates chaos.
- I don't want AI that causes unnecessary suffering.
- I don't want other people to use AI to tyrannize me or anyone else.
How about instead of being so broadly generic about "AI safety" declarations we get specific, and then ask people to make specific commitments in kind. Then it would be a lot more meaningful when they refuse, or when they oblige and then break them.
Hard to know how significant this is because it's impossible to know what the political class (and many others) mean by "AI" (and thus its potential risks). This is not new; there were similar charades a few years ago around "blockchain" etc.
But ignoring the signaling going on on various sides would be a mistake. "AI" is for all practical purposes a synonym for algorithmic decision making, with potential direct implications for people's lives. Without accountability, transparency, recourse etc., the unchecked expansion of "AI" into various use cases represents a significant regression for historically established rights. In this respect the direction of travel is clear: the US is dismantling the CFPB, even more deregulation (if that is at all possible) is coming, big tech will be trusted to continue "self-regulating" etc.
The interesting part is the UK stance. Somewhere in between the US and the EU in terms of citizen/consumer protections, but despite Brexit probably closer to the latter, the UK siding with dog-eat-dog deregulation might signal an anxiety about being left behind.
For those wondering, here is the meat of the declaration:
Promoting AI accessibility to reduce digital divides;
Ensuring AI is open, inclusive, transparent, ethical, safe, secure and trustworthy, taking into account international frameworks for all;
Making innovation in AI thrive by enabling conditions for its development and avoiding market concentration, driving industrial recovery and development;
Encouraging AI deployment that positively shapes the future of work and labour markets and delivers opportunity for sustainable growth;
Making AI sustainable for people and the planet;
Reinforcing international cooperation to promote coordination in international governance.
Source: https://www.elysee.fr/en/emmanuel-macron/2025/02/11/statemen...
The world is the world. Today is today. Tomorrow is tomorrow.
You cannot face the world with how you want it to be, but only as it is.
What we know today is that a relatively straightforward series of matrix multiplications leads to what is perceived to be intelligence. This is simply true no matter how many declarations one signs.
Given that this is the case, there is nothing left to be done unless we want to go full Butlerian Jihad.
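To make the "matrix multiplications" point concrete, here is a toy single-head self-attention block, the core transformer operation, in numpy; sizes and weights are arbitrary:

    # Toy single-head self-attention: a few matrix products plus a softmax.
    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    seq_len, d = 4, 8                      # arbitrary toy sizes
    x = np.random.randn(seq_len, d)        # token embeddings
    Wq, Wk, Wv = (np.random.randn(d, d) for _ in range(3))

    q, k, v = x @ Wq, x @ Wk, x @ Wv       # three matmuls
    attn = softmax(q @ k.T / np.sqrt(d))   # one more, then normalize
    out = attn @ v                         # and one more
    print(out.shape)                       # -> (4, 8)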
Yet, the international agreements on non-use of chemical weapons have held up remarkably well.
Camera authentication is one bit that has a technological solution. Canon's had some version of this since the early 2000s: https://www.bhphotovideo.com/c/product/319787-REG/Canon_9314...
A more recent initiative: https://c2pa.org/
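To sketch the underlying idea (an illustration of device-signed provenance in general, not the actual C2PA spec; the payload and key handling below are made up), a camera could sign a hash of the sensor data with a per-device key:

    # Illustrative sketch of device-signed image provenance -- not the C2PA spec.
    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import ed25519

    # In a real camera the private key would live in a secure element,
    # with its public key certified by the manufacturer.
    device_key = ed25519.Ed25519PrivateKey.generate()
    device_pub = device_key.public_key()

    image_bytes = b"raw sensor data goes here"      # stand-in for a capture
    digest = hashlib.sha256(image_bytes).digest()   # hash what the sensor saw
    signature = device_key.sign(digest)             # signed at capture time

    # Later, anyone can check the bytes against the device's public key.
    try:
        device_pub.verify(signature, digest)
        print("valid: bytes match what this device captured")
    except InvalidSignature:
        print("invalid: altered, or not from this device")

The hard parts are keeping keys from being extracted from the hardware and the analog hole (photographing a screen), which is why schemes like this establish provenance rather than truth.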
Until we get replicants
That's a very interesting point.
Probably not the same way you can detect working centrifuges in Iran... but you definitely can.
So that's easy.
Nothing to actually worry about.
Other than Sam Altman and Elon Musk's pending ego fight.
Technically both are real people, one is just not human. At least by the person/people definition that would include sentient aliens and such.
Could it exist some day? Certainly. But current 'AI' will never become an AGI; there's no path forward.
Here we are, a couple of years later, truly musing about sectors of the world embracing AI and others not.
That sort of piecemeal adoption was predictable, but not that we would be here having this debate this soon!
That was never going to fly with the current U.S. administration. Not only is the word "inclusive" in there, but "ethical" and "trustworthy" as well.
Joking aside, I genuinely don’t understand the “job creation” claims of JD Vance in his dinner speech in Paris.
Long-term I just can’t imagine what a United States will look like when 75% of the population are both superfluous and a burden to society.
If this happens fast, society will crumble. Sheep are best kept busy grazing.
If an enemy state gives AI autonomous control and gains massive combat effectiveness, it puts pressure on other countries to do the same.
No one wants Skynet. But if we continue down the current path, painting the world as us vs. them, I'm fearful Skynet is what we'll get.