top256 | 5 months ago
That being said, it doesn't change my point about agency. Your prediction #2 (LLMs impacting the economy) seems "almost inevitable" precisely because thousands of people are actively working to make it true. If everyone stopped tomorrow - if OpenAI, Anthropic, Google, etc., all pivoted to other projects - would it still be inevitable?
The appearance of inevitability comes from observing massive coordinated human effort toward a goal, then mistaking that effort for natural law. It's like watching a thousand people pushing a boulder uphill and concluding "that boulder inevitably goes up."
GMoromisato | 5 months ago
It reminds me of the time a Scientific American columnist ran a contest. Readers were asked to submit a number on a postcard, and whoever submitted the highest number would win $1 million divided by that number.
The publishers almost nixed the contest: they didn't actually have $1 million to give away, and they were worried. But the columnist assured them there was no chance the prize would amount to even one dollar. As you might expect, the winning number was so ridiculously large that no money had to be sent.
As for LLMs affecting the economy, I think my only uncertainty is in how long it will take for them to be integrated. I think we all agree that even in their current state, they are valuable for certain tasks (e.g., language translation). I'm pretty sure they are economically viable for 90% of support calls. Are they going to displace software developers? Lawyers? Doctors? Even if they don't, the effects will still be large.
If we agree that they are economically viable, then I don't see how they don't have an impact. If a technology saves businesses money, it is going to get deployed. The only wildcard is regulatory blocks or legislative bans. But you know China is not going to give up on the technology, so any bans will be reversed or circumvented (we'll import products/services from China).
Now, if you don't think LLMs in their current state are economically viable/valuable, then none of that will happen. But (a) that's a different argument, and (b) then people will inevitably choose NOT to deploy LLMs.
I get the desire for agency. I certainly wish I had more control over the course of the future. But people's behavior (in large groups) is almost always downstream of external events. Fukushima has an accident and Germany chooses to give up nuclear power. Inflation in the US goes up and people vote for Donald fraking Trump.
Maybe there will be some event--a Fukushima for AI--that causes a mass change. But absent that, I think LLMs are here to stay.