danpalmer|4 months ago
If you write policy about AI, you're doing it wrong. AI is an implementation; policy must be written for outcomes.
Discrimination by law enforcement, exclusion from loan approval, bad moderation on social networks, cheating on exams, creating fake news or media about people, swallowing up user data... all the negative social impact of AI can be achieved without it, and much of it is already illegal anyway.
Legislation that is predicated on AI will fail in the long run. Legislation that focuses on the actual negative outcomes will stand the test of time far better.
lm28469|4 months ago
> all the negative social impact of AI can be achieved without it,
With the big differences being massive automation, a huge reduction in cost, and no one to blame when things go wrong... It's like saying a nuke and a knife are the same because they both kill.
andy99|4 months ago
Agreed. I think it's more a lens (one of many) that helps show what's possible with technology and what may require legal protections.
For example, privacy and surveillance laws obviously need updating in the face of advances in networking, data collection at scale, etc. Same with copyright in the face of plentiful copying.
But good laws will, as you say, address what is now possible or dangerous, as opposed to any specific implementation or general-purpose technology involved. The tech just sets the context for what protections are needed.
khafra|4 months ago
One outcome which is not unique to AI, but fairly exclusive to it: the value of human cognitive labor eventually drops below subsistence income. This isn't here yet, but it's a hard problem, so we should be devoting substantial resources to solutions before it hits.
Cheer2171|4 months ago
Oh, so we can't address any specific problems with any technology, because we should actually be fixing all of society at the root of those problems. So while you wait for our broken political system to solve those root causes, enjoy feeling smug about not having implemented any imperfect, temporary bandaids to stop some bleeding.
Are you working on fixing those root problems? Or, after dismissing short-term policy bandaids, are you going back to work in an industry where you will probably make more money if governments don't do any tech regulation in the short run?
Your commitment to the long run will lead to paralysis and accomplish nothing in the long run.
robbrown451|4 months ago
I'm having trouble understanding what they want to "upskill" those people to do.
What skills won't be replaced? The only ones I can think of either have a large physical component or are only doable by a tiny fraction of the current workforce.
As for the ones with a physical component (plumbers being the most cited), the cognitive parts of the job (the "skilled" part of skilled labor) can be replaced by having the person simply follow directions demonstrated onscreen. And of course the robots aren't far behind, since the main hard part of making a capable robot is the AI part.
visarga|4 months ago
I don't think "replaced" is the right word here; "augmented" and "expanded" fit better. With AI we are expanding our activities: users expect more, and competition forces companies to do more.
But AI can't be held liable for its actions, and that is one human role. It has no direct access to the context it is working in, so it needs humans as a bridge. In the end, AI produces outcomes in the user's local context, so from intent to guidance to outcomes, everything is user-based, costs and risks included.
I find it pessimistic to take such a static view of work, as if "that's it, everything we needed has been invented" and now we are fighting for positions like musical chairs.
DaveZale|4 months ago
Agreed for almost all jobs, but some, like my father's, involved crawling inside huge metal pieces to do precision machining. For unique piecework, it might not be economical to train AI. Surely equivalents to this exist elsewhere.
SpicyLemonZest|4 months ago
If Volkswagen's competitors ran around saying that cars aren't dangerous and there's no need to regulate them, and their critics insisted that you're a mark if you accept the premise that cars are a useful transportation method at all, I don't suppose I'd have a choice but to take it seriously. If you know of a similar analysis from a less conflicted group, I'd love to read it!
notatoad|4 months ago
I see the comments here are pretty cynical about this post, and probably for good reason, especially "you might have to start taxing consumption instead of income because people won't have income anymore."
But at least a couple of these proposals seem to boil down to needing to tax the absolute crap out of the AI companies. That seems pretty obviously true, and it's interesting that the AI companies are already saying it.
varispeed|4 months ago
> its interesting that the ai companies are already saying that.
This is just cheap PR to launder legitimacy and urgency, and to create a false equivalence between an AI agent and an employee.
I think it's a sign of weakness. Having seen AI rolled out in many companies, it already shows signs of being an absolute disaster: summaries changing meaning and losing important details, so tasks go in the wrong direction and take time to correct; developers creating unprecedented amounts of tech debt with their vibe-coded features; massive amounts of content that sounds important but is just the equivalent of spam; managers spending hours with an LLM "researching" strategy, feeding the FOMO; and so on.
cudgy|4 months ago
AI companies are in a difficult position right now. Anthropic is taking the lead by looking like they care about, and are concerned by, the effects of the technology they're feverishly building.
I don't trust them. Their strategy is to say: "Don't worry about all your jobs being taken by our technology. We (the AI companies) are going to be taxed so much that you will live a wealthy and fruitful life making meme photos and looking at AI porn. Don't be concerned about how you'll pay your bills. We'll work it all out. Trust us."
eucyclos|4 months ago
I've found large entrenched players tend to prefer slightly more than a reasonable amount of taxation and regulation in any industry; governments are easier to predict and handle than scrappy competitors.
mrshadowgoose|4 months ago
I used to think that "AI operating in meatspace" was going to remain a tough problem for a long time, but seeing the dramatic developments in robotics over the last two years, it's pretty clear that won't be the case.
As the masses fade into permanent unemployment, this will likely coincide with (and be partially caused by) a corresponding proliferation of intelligent humanoid robots.
At a certain point, "turning on them" becomes physically impossible.
wmf|4 months ago
Much like the end of history wasn't the end of history.
AndrewKemendo|4 months ago
LLM/attention-centric AI isn't the end of AI development.
So if they are successful at locking in, it will be to their own demise, because lock-in doesn't cover the infinitely many pathways for AI to continue down, specifically intersections with robotics and physical manipulation, which are ultimately far more impactful on society.
Until a plurality of humans on earth understand that human exceptionalism is no longer something to be taken for granted (and never should have been), there will never be effective global governance of technology.
blind_tomato|4 months ago
> Until the plurality of humans on the earth understand that human exceptionalism is no longer something to be taking for granted (and shouldn’t have been) there’s never going to be effective global governance of technology.
Could you elaborate on this? For what it's worth, I fully agree with the earlier sentences.
Nasrudith|4 months ago
I highly doubt there is ever going to be "effective governance" of technology, for multiple reasons. It would require impossible foreknowledge of the impacts of every possible technology, and dystopian levels of control to prevent new ideas from coming into play. We cannot even get the direction of impact right a priori. And even with that foreknowledge, the dystopia would have to remain both stable and unmutated over generations, all the while its control creates its own counterforce, incentivized to invent tech outside of its control to topple the forces of stagnation.
throw-10-13|4 months ago
What other option would you propose? This article [1] says the approaches preferred by economists are retraining, regulation, or social insurance, and for most of the people surveyed, retraining was the preferred one.
sateesh|4 months ago
I'm not sure MOOCs can be taken as a useful proxy for measuring the success of upskilling. Most employers won't honor MOOC certificates, and people take MOOCs while working. Completing a MOOC doesn't inherently ensure that the learner has mastered the course, so there is less incentive to finish one, too.
cudgy|4 months ago
They want to speed up the process of getting laws written to protect them and AI. One way to do that is to appear to be looking for a solution while repeatedly mentioning how urgent things are and how quickly we need to pass laws. You can guess how those laws will be focused; my guess is they will benefit the AI companies and the companies that plan to build their businesses on top of them.
The reasons will be framed as protecting citizens from the evil other country that's building AI: "Without strong AI, we can't build weapons to defend the country," and "without strong AI, our companies won't be able to compete in the world marketplace."
tintor|4 months ago
Robots are far behind.
Mechanical hands with human-equivalent performance are as hard a problem as the AI part: strong, fast, durable, tough, touch- and temperature-sensitive, dexterous, light, waterproof, energy-efficient, and non-overheating.
Muscles and tendons in human hands and forearms self-heal and grow stronger with use. Mechanical tendons stretch and break, and small motors have plenty of issues of their own.
MatekCopatek|4 months ago
How seriously would you take a proposal on car pollution regulation and traffic law updates written by Volkswagen?
blibble|4 months ago
They more or less wrote the EU emissions regulations, which is the only reason diesel cars were sold in huge numbers in the EU.
intended|4 months ago
Without any modifications, MOOCs have single-digit completion rates. And that's for high-quality, free, publicly available educational material.
The vast majority of people simply do not have the time, money, or undivided attention to get a new domain under their belt.
This is "help miners learn to code" territory.
1. https://www.foreignaffairs.com/united-states/coming-ai-backl...
musicale|4 months ago
Financial circularity could also lead to instability.
frozenseven|4 months ago
I hope people will eventually revisit these predictions and admit they were wrong.
mjbale116|4 months ago
Incredible stuff...
watwut|4 months ago
A proposal written by billionaires trying to shift taxation even further away from themselves and onto everyone else.
> Accelerate permits and approvals for AI infrastructure
Oh, they want that? Who would have guessed.
remarkEon|4 months ago
Not serious, not worth reading.
vkou|4 months ago
Anyone with anxieties over immigration should have those same concerns over AI, many times over.
Skilled immigrants just got a $100,000/year head tax in the US. Where is such a tax for AI?