I think I've internalized these stories enough to comfortably say (without giving anything away) that AI is incompatible with capitalism and probably with money itself. That's why I consider it the last problem in computer science: once we've solved problem solving, the (artificial) scarcity of modern capitalism and the social Darwinism it relies upon can simply be opted out of. Unless we collectively decide to subjugate ourselves under a Star Wars empire or a Star Trek Borg dystopia.
The catch is that I have yet to see a billionaire speak out against the dangers of performative economics once machines surpass human productivity, or take any meaningful action to implement UBI before it's too late. So on the current timeline, subjugation under an Iron Heel in the style of Jack London feels inevitable.
Isn’t that the one where corporate structures become intelligent, self-executing agents and cause a lot of problems? Yet here IRL, the current tech billionaires think it’s a roadmap to follow?
Talk about getting the wrong message. No one show those guys a copy of 1984! Wow, then…
Please ELI5 for me: How are AI agents different from traditional workflow engines, which orchestrate a set of tasks by interacting with both humans and other software systems?
Have you built stuff with LLMs before? Genuine question, because nondeterministic and deterministic workflows are leagues apart in what they can accomplish.
The human is no longer in the loop. The agentic system is capable of generating quality synthetic data over time to train on; it becomes self-improving, and that synthetic data can also be used to train weaker models to perform better.
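The distinction being debated here can be sketched in a few lines. Everything below is hypothetical scaffolding, not any real framework: `call_llm` stands in for an actual model call, and the tools are whatever functions you hand the loop.

```python
def run_workflow(doc, steps):
    # Workflow engine: the task sequence is predefined in code; every
    # run executes the same steps in the same order.
    for step in steps:
        doc = step(doc)
    return doc

def run_agent(goal, tools, call_llm, max_steps=10):
    # Agent loop: at each step the model reads the transcript so far
    # and chooses the next tool, or decides it is done.
    transcript = [f"goal: {goal}"]
    for _ in range(max_steps):
        action, arg = call_llm(transcript, sorted(tools))
        if action == "finish":
            return arg
        result = tools[action](arg)  # execute the model's chosen tool
        transcript.append(f"{action}({arg}) -> {result}")
    return None  # step budget exhausted without finishing
```

The workflow's behavior is fixed at authoring time; the agent's control flow is decided at runtime by the model, which is both the power and the nondeterminism people are pointing at.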
Which has become largely true? People flip-flop between the hottest AI model of the day. After a flagship model ships, distillations appear that offer slightly degraded performance at a fraction of the cost.
For inference, the difference between expensive data-center hardware and home GPU setups largely comes down to RAM. That limitation is being actively worked around (though unfortunately the well-funded orgs are not that interested in this).
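A back-of-envelope sketch of why RAM is the gating factor. This counts weights only, ignoring KV cache and activations, and the model size is illustrative:

```python
def weight_memory_gb(n_params_billion, bits_per_weight):
    # Rough lower bound on memory needed just to hold the weights.
    bytes_total = n_params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9  # decimal GB

# A 70B-parameter model: ~140 GB at fp16, ~35 GB quantized to 4 bits --
# roughly the gap between data-center hardware and a high-end home rig.
fp16_gb = weight_memory_gb(70, 16)  # 140.0
q4_gb = weight_memory_gb(70, 4)     # 35.0
```

Quantization is exactly the kind of workaround mentioned above: it shrinks the RAM requirement at some cost in quality.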
Okay, so how does an economy of AI companies doing business selling services related to hyperintelligent AI tech to each other differ from Nvidia, Oracle, and OpenAI sending money to each other to buy each other's stuff?
Is this what will be tried as a fix for the potential fallout from continuously decreasing fertility rates (resulting in population decline, which in turn hits a consumption-based economy)?
Nope. This is just greed: making the most of the moment without any thought for tomorrow. Nobody knows or cares where it takes us, but everybody knows there is money to be made today. So you need a model that analyzes the economy with greed as the only driving force and no foresight. Add some parameters to account for monopolistic forces, the human desire to be lazy and dumb while thinking it is progress, and the loss of all biological senses to devices. That may give a better prediction.
Read the paper and not sure I buy all of it. I just keep buying domains with "agent" in them and haven't shipped anything (agentify.sh, for example). Wish I could know what to build (and stop buying domains).
Well, for starters: if some incredible change to capitalism doesn't occur, we are going to have to come up with never-before-seen cooperative software tools for the general populace to assess and avoid the most egregious companies that stop hiring people.
Tools for: mass harassment campaigns against rich people/companies that don't support human life anymore, and for dynamically calculating the most damage you can do without crossing into illegality.
Automatically suggesting local human-run alternatives to the bigevils, or gathering like-minded groups of people to start up new competition. Tracking individual rich people and which of their new companies and decisions are doing ongoing damage; somehow recognizing and categorizing big tech's trend of "doing the same old illegal shit, except through an app now" before the legal system can catch up.
Capitalism sure turns out to be real fucking dumb if it can't even come up with proper market-analysis tools for workers to have some kind of knowledge about where they can best leverage their skills, while companies get away with breaking all the rules and creating coercion hierarchies everywhere.
I hate to say it (because the legal system has never worked, ever), but the only workable future to me seems to be forcing agents/robots to be tied to humans. If a company wants 100 robots, it must be paying a human for every robot it utilizes, somehow. Maybe a dynamic ratio: if the government decided most people are getting enough resources to survive, then maybe 2 robots per human paid.
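The proposed rule is simple enough to state as code. The ratio values here are the commenter's hypotheticals, not anything from the paper:

```python
def max_robots(humans_paid, robots_per_human):
    # The proposed cap: a company may field at most
    # floor(humans_paid * ratio) robots, with the ratio set by
    # policy and adjustable as conditions change.
    return int(humans_paid * robots_per_human)
```

At a 1:1 ratio, fielding 100 robots means paying 100 humans; relax the ratio to 2:1 and 50 humans suffice. The hard part, of course, is not the arithmetic but the enforcement.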
> If a company wants 100 robots, they must be paying a human for every robot they utilize somehow
This is so bonkers and absurd I don't know what to say.
> Automatically suggesting alternatives of local human businesses vs the bigevils, or collecting like minded groups of people to start up new competition
I think you are correct on the competition part.
I think we are going to see an avalanche of millions of small businesses taking market share away from big businesses (B2B SaaS is the first casualty, but others will follow as technology advances).
However, I think it'll happen due to AI/LLMs themselves more than through regulation.
“…the only workable future to me seems like forcing agents/robots to be tied to humans.”
This is what I’ve been thinking lately as well. Couple that with legal responsibility for any repercussions, and you might have a way society can thrive alongside AI and robotics.
I think any AI or robotic system acting upon the world in some way (even LLM chatbots) should require a human “co-signer” who takes legal responsibility for anything the system does, as if they had performed the action themselves.
I dunno, I think social media of the past years has certainly demonstrated that whoever controls (social) media can do a pretty good job of creating or bleeding out social movements, by amplifying or dampening certain avenues of social discourse.
AI systems cannot be economic agents, in the sense of participating in a relevant way in economic transactions. An economic transaction is an exchange between people with needs (preferences, etc.) who can die -- and so, fundamentally, are engaged in exchanges of (productive) time via making promises and meeting them. Time is the underlying variable of all economics; it's what everything ends up in ratio to -- the marginal minute of life.
There isn't any sense in which an AI agent gives rise to economic value, because it wants nothing, promises nothing, and has nothing to exchange. An AI agent can only 'enable' economic transactions as a means of production (etc.) -- the price of a good cannot derive from a system that has no subjective desire and no final ends.
Replace "AI system" with "corporation" in the above and reread it.
There's no fundamental reason why AI systems can't become corporate-type legal persons. With offshoring and multiple jurisdictions, it's probably legally possible now. There have been a few blockchain-based organizations where voting was anonymous and based on token ownership. If an AI was operating in that space, would anyone be able to stop it? Or even notice?
The paper starts to address this issue at "4.3 Rethinking the legal boundaries of the corporation.", but doesn't get very far.
Sooner or later, probably sooner, there will be a collision between the powers AIs can have, and the limited responsibilities corporations do have. Go re-read this famous op-ed from Milton Friedman, "The Social Responsibility of Business Is to Increase Its Profits".[1] This is the founding document of the modern conservative movement. Do AIs get to benefit from that interpretation?
> Time is the underlying variable of all economics
Not quite. It's scarcity, not time: scarcity of economic inputs (land, labor, capital, and technology). By "time" you mean labor, and that's just one input.
Economics is like a constrained optimization problem: how to allocate scarce resources given unlimited desires.
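That framing can be made literal with a toy brute-force allocator. The two goods, their prices, and utilities below are made up for illustration, and the linear utilities are a simplification (real models use diminishing returns):

```python
from itertools import product

def best_allocation(prices, utilities, budget, max_units=5):
    # Textbook consumer problem, brute-forced: choose quantities of
    # each good to maximize total utility subject to a budget.
    best, best_u = None, -1
    for qty in product(range(max_units + 1), repeat=len(prices)):
        cost = sum(p * q for p, q in zip(prices, qty))
        u = sum(w * q for w, q in zip(utilities, qty))
        if cost <= budget and u > best_u:
            best, best_u = qty, u
    return best, best_u

# Good A costs 2 with utility 3; good B costs 3 with utility 4.
# With a budget of 6, the best choice is three units of A (utility 9).
```

The constraint (the budget) is doing all the work: without scarcity there is nothing to optimize, which is the parent comment's point.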
Yeah, that’s well articulated and well reasoned. Unfortunately, so long as these agents are in some way able to make money for their owner, the argument is totally moot. You cannot expect capitalists to think of anything other than profit in the next quarter or the quarter after that.
vermilingua|3 months ago
https://en.wikipedia.org/wiki/Accelerando
OgsyedIE|3 months ago
In Accelerando the VO (Vile Offspring) are a species of trillions of AI beings, sort of descended from us. They have a civilization of their own.
rf15|3 months ago
Also, what a shortsighted sci-fi book; yet techies readily invest in that particular fantasy because it's not your usual spaceship fare.
zackmorris|3 months ago
https://marshallbrain.com/manna1
alberth|3 months ago
There’s a level of autonomy in AI agents (each determines its own next step) that is not predefined.
Agreed, though, that there are lots of similarities.
praccu|3 months ago
Here's one from DeepMind:
https://arxiv.org/abs/2509.10147
david_shi|3 months ago
1. https://www.x402.org/ - micropayments for AI agents to access resources without needing to sign up for an API key
2. https://8004.org/ - an open AI agent registry standard
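For context, x402 builds on the long-reserved HTTP status 402 ("Payment Required"). Here is a rough sketch of the quote-pay-retry pattern; the field names (`price`, `payment_receipt`) and the `pay` callback are made up for illustration, not the spec's actual wire format:

```python
def fetch_with_payment(request, server, pay):
    # First attempt may be refused with 402 plus a quoted price.
    status, headers, body = server(request)
    if status == 402:
        receipt = pay(headers["price"])          # settle the quoted price
        request = dict(request, payment_receipt=receipt)
        status, headers, body = server(request)  # retry with proof
    return status, body
```

The point of the scheme is that an agent can pay per request, machine-to-machine, instead of a human signing up for an API key.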
ur-whale|3 months ago
https://en.wikipedia.org/wiki/Decentralized_autonomous_organ...
Esophagus4|3 months ago
I feel like co-ops were awful anyway even without the blockchain.
[1] https://www.nytimes.com/1970/09/13/archives/a-friedman-doctr...
baq|3 months ago
You’ll need to give a citation for this if you want to be taken seriously.
Aperocky|3 months ago
However, that seems completely tangential to the current AI tech trajectory and will probably arise entirely separately.