
Preparing for AI's economic impact: exploring policy responses

75 points | grantpitt | 4 months ago | anthropic.com

80 comments


danpalmer|4 months ago

If you write policy about AI you're doing it wrong. AI is an implementation, but policies must be written for outcomes.

Discrimination by law enforcement, exclusion from loan approval, bad moderation on social networking, cheating on exams, creating fake news or media about people, swallowing up user data... all the negative social impact of AI can be achieved without it, and much of it is already illegal anyway.

Legislation that is predicated on AI will fail in the long run. Legislation that focuses on the actual negative outcomes will stand the test of time much more.

lm28469|4 months ago

> all the negative social impact of AI can be achieved without it,

With the big differences being massive automation, a huge reduction in cost, and no one to blame when things go wrong... It's like saying a nuke and a knife are the same because they both kill.

andy99|4 months ago

Agreed, I think it’s more a lens (one of many) that helps show what’s possible with technology and what may require legal protections.

For example things like privacy and surveillance laws obviously need updating in the face of advances in networking, data collection at scale, etc. Same with copyright in the face of plentiful copying.

But good laws will as you say address what is now possible or dangerous, as opposed to any specific implementation or general purpose technology involved. The tech just sets the context for what protections are needed.

khafra|4 months ago

One outcome which is not unique to AI, but fairly exclusive to it: the value of human cognitive labor eventually drops below subsistence income. This isn't here yet, but it's a hard problem, so we should be devoting substantial resources to solutions before it hits.

Cheer2171|4 months ago

Oh, so we can't address any specific problems with any technology, because we should actually be fixing all of society at the root of all those problems. So while you wait for our broken political system to solve those root causes, enjoy feeling smug about not having implemented any imperfect, temporary bandaids to stop some bleeding.

Are you working on fixing those root problems? Or, after dismissing short-term policy bandaids, are you going to go back to working in an industry where you will probably make more money if governments don't do any tech regulation in the short run?

Your commitment to the long run will lead to paralysis and do nothing in the long run.

robbrown451|4 months ago

I'm having trouble understanding what they want to "upskill" those people to do.

What skills won't be replaced? The only ones I can think of either have a large physical component, or are only doable by a tiny fraction of the current workforce.

As for the ones with a physical component (plumbers being the most cited), the cognitive parts of the job (the "skilled" part of skilled labor) can be replaced by having the person just follow directions demonstrated onscreen. And of course, the robots aren't far behind, since the main hard part of making a capable robot is the AI part.

tintor|4 months ago

'main hard part of making a capable robot is the AI part'

Robots are far behind.

Mechanical hands with human-equivalent performance are as hard as the AI part.

Strong, fast, durable, tough, touch and temp sensitive, dexterous, light, water-proof, energy efficient, non-overheating.

Muscles and tendons in human hands and forearms self-heal and grow stronger with more use.

Mechanical tendons stretch and break. Small motors have plenty of issues of their own.

visarga|4 months ago

I don't think "replaced" is a good word here... "augmented" and "expanded" fit better. With AI we are expanding our activities: users expect more, and competition forces companies to do more.

But AI can't be held liable for its actions; that is one role left for humans. It has no direct access to the context it is working in, so it needs humans as a bridge. In the end, AI produces outcomes in the same local context, which is the user's. So from intent to guidance to outcomes, they are all user-based, and the costs and risks are too.

I find it pessimistic to take that static view of work, as if "that's it, all we needed has been invented" and we are now fighting for positions like musical chairs.

DaveZale|4 months ago

Agree for almost all jobs, but some, like my father's, were about crawling inside huge metal pieces to do precision machining. For unique piecework, it might not be economical to train AI. Surely equivalents to this exist elsewhere.

MatekCopatek|4 months ago

It's hard to read this without being cynical.

How seriously would you take a proposal on car pollution regulation and traffic law updates written by Volkswagen?

SpicyLemonZest|4 months ago

If Volkswagen's competitors ran around saying that cars aren't dangerous and there's no need to regulate them, and their critics insisted that you're a mark if you accept the premise that cars are a useful transportation method at all, I don't suppose I'd have a choice but to take it seriously. If you know of a similar analysis from a less conflicted group I'd love to read it!

protocolture|4 months ago

Am I the US Government in this scenario?

blibble|4 months ago

> How seriously would you take a proposal on car pollution regulation and traffic law updates written by Volkswagen?

they more or less wrote the EU emission regulations

that's the only reason diesel cars were sold in huge numbers in the EU

notatoad|4 months ago

I see the comments here are pretty cynical about this post, and probably for good reason. Especially "you might have to start taxing consumption instead of income because people won't have income anymore".

But at least a couple of these proposals seem to boil down to needing to tax the absolute crap out of the AI companies. Which seems pretty obviously true, and it's interesting that the AI companies are already saying that.

varispeed|4 months ago

> its interesting that the ai companies are already saying that.

This is just cheap PR to launder legitimacy and urgency, and to create a false equivalence between an AI agent and an employee.

I think this is a sign of weakness, having seen AI rolled out in many companies where it already shows signs of being an absolute disaster: summaries changing meaning and losing important details, so tasks go in the wrong direction and take time to correct; developers creating an unprecedented amount of tech debt with their vibe-coded features; a massive amount of content that sounds important but is just the equivalent of spam; managers spending hours with an LLM "researching" strategy, feeding the FOMO; and so on.

cudgy|4 months ago

AI companies are in a difficult position right now. Anthropic is taking the lead by looking like they care and are concerned about the effects of the technology that they're feverishly building.

I don’t trust them. Their strategy is to say “don’t worry about all your jobs being taken by our technology. We (AI companies) are going to be taxed so much that you are going to be living a wealthy and fruitful life making meme photos and looking at AI porn. Don’t be concerned about how you’ll pay your bills. We’ll work it all out. Trust us.”

eucyclos|4 months ago

I've found large entrenched players tend to prefer slightly more than a reasonable amount of taxation and regulation in any industry; governments are easier to predict and handle than scrappy competitors.

blibble|4 months ago

they seem to have omitted the scenarios where the newly unemployable electorate turn on them

mrshadowgoose|4 months ago

I used to think that "AI operating in meatspace" was going to remain a tough problem for a long time, but seeing the dramatic developments in robotics over the last 2 years, it's pretty clear that's not going to be the case.

As the masses fade into permanent unemployment, this will likely coincide with (and be partially caused by) a corresponding proliferation in intelligent humanoid robots.

At a certain point, "turning on them" becomes physically impossible.

wmf|4 months ago

The higher taxes proposed here could be used to buy off the electorate.

thawawaycold|4 months ago

They just know it's not going to happen.

AndrewKemendo|4 months ago

Much like the end of history wasn’t the end of history

LLM/attention-centric AI isn't the end of AI development.

So if they are successful at locking it in, it will be at their own demise, because it doesn't cover the infinitely many pathways for AI to continue down - specifically intersections with robotics and physical manipulation, which are ultimately far more impactful on society.

Until the plurality of humans on the earth understand that human exceptionalism is no longer something to be taken for granted (and shouldn't have been), there's never going to be effective global governance of technology.

blind_tomato|4 months ago

> Until the plurality of humans on the earth understand that human exceptionalism is no longer something to be taken for granted (and shouldn't have been), there's never going to be effective global governance of technology.

Could you elaborate on this? To be clear, I fully agree with the preceding sentences.

Nasrudith|4 months ago

I highly doubt there is ever going to be "effective governance" of technology, for multiple reasons. It would require impossible foreknowledge of the impacts of every possible technology, and dystopian levels of control to prevent new ideas from coming into play. We cannot even get the direction of impact right a priori. Even if they had that, this dystopia would have to remain both stable and unmutated over generations. All the while, their control creates its own counterforce, incentivized to invent tech outside of their control to topple the forces of stagnation.

throw-10-13|4 months ago

Seems like the crypto grifters and moon boys have found a new home.

intended|4 months ago

If upskilling is your hope - you have no hope.

Without any modifications - MOOCs have single digit completion rates. This is high quality, free, publicly available educational material.

The vast majority of people simply do not have the time, money, or undivided attention to get a new domain under their belt.

This is “teach miners to code” territory.

sateesh|4 months ago

What other option can you propose? This article [1] says the approaches preferred by economists are retraining, regulation, or social insurance, and for most of the people surveyed, retraining was the preferred approach.

Not sure MOOCs can be taken as a useful proxy for measuring the success of upskilling. Most employers won't honor MOOC certs, and people do MOOCs while working. Taking a MOOC doesn't inherently ensure that the learner has mastered the course, hence there is less incentive to complete it, too.

1. https://www.foreignaffairs.com/united-states/coming-ai-backl...

musicale|4 months ago

The most immediate impact might be the bursting of the AI bubble and a dotcom-like crash of tech stocks and businesses like Anthropic.

Financial circularity could also lead to instability.

Mistletoe|4 months ago

This is actually the most likely outcome but it's the quiet part an AI company isn't going to say out loud.

frozenseven|4 months ago

>bursting of the AI bubble

I hope people will eventually revisit these predictions and admit they were wrong.

mjbale116|4 months ago

Anthropic is so sure about the incoming economic impact of their AI that they want to start talking about policy - for our sake.

Incredible stuff...

watwut|4 months ago

> "you might have to start taxing consumption instead of income because people won't have income anymore"

Proposal written by billionaire trying to shift taxation even more away from themselves and even more to everyone else.

> Accelerate permits and approvals for AI infrastructure

Oh, they want that? Who would have guessed.

cudgy|4 months ago

They want to speed up the process of getting laws written to protect them and AI. One way to do that is to appear to be looking for a solution while simultaneously stressing how urgent things are and how we need to pass laws quickly. You can guess how those laws will be focused, but my guess is they will benefit the AI companies and the companies that plan to build their businesses on AI.

The reasons for that will be proposed as protecting the citizens from the evil other country that’s building AI. “Without strong AI, we can’t build weapons to defend the country.” and “without strong AI, our companies won’t be able to compete in the world marketplace.”

swoorup|4 months ago

This smells of regulatory capture.

remarkEon|4 months ago

ctrl + f for "immigration" returns nothing.

Not serious, not worth reading.

vkou|4 months ago

This is a point that must be harped on, frequently and loudly.

Anyone with anxieties over immigration should have those same concerns over AI, many times over.

Skilled immigrants just got a $100,000/year head tax in the US. Where is such a tax for AI?