Chance-Device | 1 month ago
There’s a lot of fear around what will happen with AI, not so much of extinction but rather of two things: fear of losing income, and arguably more importantly, fear of losing identity.
People are often so invested in what they do that it becomes who they are. Having that replaced or eliminated might be a bigger psychological threat than the loss of income, at least for those of us fortunate enough to be well off right now.
However, these threats are outweighed by the benefits that AI can eventually bring: medical advances, power generation, manufacturing capability. Our systems for running society have a lot of problems, economic, political, and epistemological, and these too can be improved with AI assistance.
The real problem is the transition: it's such a huge shift, and it will happen all at once to everyone, uprooting our idea of the world and our place in it.
What we need is to embrace AI and find a way to make sure that the transition and benefits of AI are distributed instead of concentrated.
For me this looks like the following: companies must commit to retaining some minimum number of employees in every currently existing function, determined in proportion to the profits they take. This sets a floor on the job losses that can come later when AI really comes on stream.
The justification for this is threefold. First, it's a safety mechanism: it ensures that regardless of an AI system's capabilities, there are multiple humans working with it to verify its results. If they aren't verifying diligently, then they're not doing their jobs.
Second, jobs aren't just a way of making income; they're wrapped up in identity and meaning for at least some people, and this helps maintain that existing identity structure across a meaningful cross-section of society.
Third, it keeps the economy running and money circulating. You can't have a market economy without consumers. UBI is one component of this too, but this is more direct, more useful, and more meaningful than UBI alone.
Llamamoe|1 month ago
Benefits come to those who have the means to access it, and wealth is a measure of the ability to direct and influence human effort and society.
How exactly do you propose that AI will serve the wellbeing of the worker/middle classes after they've been made obsolete by it?
Goodwill of the corporations working on them? Of their shareholders, well-known to always put welfare first and profit second? Government action increasingly directed by their lobbying?
> What we need is to embrace AI and find a way to make sure that the transition and benefits of AI are distributed instead of concentrated.
Sure. How? We've not done it with any other technological advances so far, and I don't see how shifting the power balance further away from the worker/middle class will help matters.
There's a reason why the era of techno-optimism has already faded as quickly as it's begun.
Chance-Device|1 month ago
Let me be clearer: where I said "companies must commit to", the stronger phrasing is "companies are forced to by legislation". But to begin with this might be done voluntarily by some number of companies.
Also, in this vision of society the AI companies (OpenAI, Anthropic, Google, etc.) are taxed heavily. The tax revenue is redistributed: there is UBI for some fraction of the population, maybe the majority, while others still work in companies mandated to keep employees, as I outlined above.
Importantly, we as a society specifically aim to bring about these benefits of AI by using the redistributed funds in part to invest in them.
Part of this is the free market, part is planned government investment. If one fails, maybe the other succeeds. Either way, we try to spread the benefits and, importantly, to ensure those benefits actually exist in the first place.
inatreecrown2|1 month ago