addisonj | 4 months ago
> I was discussing with a friend that my biggest concern with AI right now is not that it isn't capable of doing things... but that we switched from research/academic mode to full value extraction so fast that we are way out over our skis in terms of what is being promised. In the realm of an exciting new field of academic research, that's pretty low-stakes all things considered... it becomes terrifying when we bet policy and economics on it.
That isn't overly prescient or anything... it feels like the alarm bells started a while ago... but wow, the absolute "all in" nature of the bet is really starting to feel like there is no backup. With the end of the EV tax credits, the slowdown in infrastructure spending, the healthcare subsidies, etc., the portfolio of investment feels much less diverse...
Especially compared to China, which has bets in so many verticals: battery tech, EVs, solar, and of course all the AI/chips/fabs. That isn't to say I don't think there are huge risks for China... but geez, it does feel like the setup for a big shift in economic power, especially with the change in US foreign policy.
matthewaveryusa | 4 months ago
For AI, the pivot to profitability was indeed quick, but I don't think it's as bad as you may think. We're building the software infrastructure to accommodate LLMs into our workstreams, which makes everyone more efficient and productive. As foundational models progress, the infrastructure will reap the benefits à la Moore's law.
I acknowledge that this is a bullish thesis, but I'll tell you why I'm bullish: I'm basically a high-tech Luddite -- the last piece of technology I adopted was Google in 1998. I converted from vim to VS Code + Copilot (and now Cursor) because of LLMs -- that's how transformative this technology is.
spaceman_2020 | 4 months ago
There is something bizarre about an economic system that pursues productivity for the sake of productivity even as it lays off the actual participants in the economic system
An echo of another commenter who said that it's amazing that AI is now writing comments on the internet
Which is great, but it actively makes the internet a worse place for everyone and eventually causes people to simply stop using your site
Somewhat similar to AI making companies more productive - you can produce more than ever, but because you’re more productive, you don’t hire enough and ultimately there aren’t enough people to consume what you produce
Galanwe | 4 months ago
I think this is covered in a number of papers from think tanks related to the current administration.
The overall plan, as I understood it, is to devalue the dollar while keeping its reserve-currency status. A weaker dollar makes it competitive for foreign companies to manufacture in the US. The problem is that if the dollar weakens, investors will flee. But the AI boom offsets that.
For now it seems to be working: the dollar has lost more than 10% year to date, but the AI boom has kept investors in the US stock market. The trade agreements will protect the US for a couple of years as well. But ultimately it's a time bomb for the population, which will wake up in 10 years with half its present purchasing power in non-dollar terms.
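To put rough numbers on that claim (purely illustrative compounding arithmetic, not a forecast), "half the purchasing power in 10 years" implies a particular annual depreciation rate, and the current ~10% yearly pace, naively extrapolated, is actually worse:

```python
# Illustrative compounding arithmetic for the "half their purchasing
# power in 10 years" claim. All figures are hypothetical, not a forecast.

# Annual depreciation rate r that halves value over 10 years:
# (1 - r)^10 = 0.5  =>  r = 1 - 0.5**(1/10)
implied_rate = 1 - 0.5 ** (1 / 10)
print(f"Implied annual depreciation: {implied_rate:.1%}")  # ~6.7%

# Conversely, if the ~10% year-to-date decline repeated every
# year for a decade:
remaining = 0.90 ** 10
print(f"Value left after 10 years at -10%/yr: {remaining:.1%}")  # ~34.9%
```

So "half in 10 years" is the milder scenario; sustaining this year's pace would leave closer to a third of today's purchasing power.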
hakfoo | 4 months ago
If we removed "modern search" (Google) and had to go back to say 1995-era AltaVista search performance, we'd probably see major productivity drops across huge parts of the economy, and significant business failures.
If we removed the LLMs, developers would go back to Less Spicy Autocomplete and it might take a few hours longer to deliver some projects. Trolls might have to hand-photoshop Joe Biden's face onto an opossum's body like their forefathers did. But the world would keep spinning.
It's not just that we've had 20 years longer to grow accustomed to Google than to LLMs; it's that a low-confidence answer or an excessively florid summary of a document is not really that useful.
827a | 4 months ago
I, too, don't understand the OP's point about quickly pivoting to value extraction. Every technology we've ever invented was immediately followed by capitalists asking "how can I use this to make more money?" LLMs are an extremely valuable technology. I'm not going to sit here and pretend that anyone can correctly guess exactly how much we should be investing right now in order to properly price how much value they'll be generating in five years.
That said, it's critical to point out that the "data center capex" numbers everyone keeps quoting are, in a very real (and, sure, potentially scary) sense, quadruple-counting the same hundred billion dollars. We're not actually spending $400B on new data centers; Oracle is spending $nnB on Nvidia, who is spending $nnB to invest in OpenAI, who is spending $nnB to invest in AMD, who Coreweave will also be spending $nnB with, who Nvidia has an $nnB investment in... and so forth. There's a ton of duplicate accounting going on when people report these numbers.
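To make the double-counting concrete, here's a toy sketch with entirely made-up $B amounts (the real deal sizes are elided above): summing announced deal values along a circular chain of deals overstates the net new money entering the system.

```python
# Toy model of circular AI-deal accounting. All amounts are
# hypothetical ($B) and only illustrate the counting problem.
from collections import defaultdict

deals = [
    ("Oracle", "Nvidia", 100),     # chips purchase
    ("Nvidia", "OpenAI", 100),     # equity investment
    ("OpenAI", "AMD", 100),        # chips/equity deal
    ("AMD", "Coreweave", 100),     # capacity commitment
]

# Headline number: every deal counted at face value.
headline = sum(amount for _, _, amount in deals)

# Net flow per entity: money in minus money out. Entities in the
# middle of the chain mostly pass the same dollars along.
net = defaultdict(int)
for payer, payee, amount in deals:
    net[payer] -= amount
    net[payee] += amount

print(f"Headline 'capex': ${headline}B")  # $400B of announcements
print(dict(net))  # only the chain's endpoints have nonzero net flow
```

In this toy chain, $400B of headlines corresponds to only $100B of net outside money; every entity in the middle nets to zero.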
It doesn't grab the same headlines, but I'm very strongly of the opinion that there will be more market corrections in the next 24 months, overall stock market growth will be pretty flat, and by the end of 2027 people will still be opining on whether OpenAI's $400B annual revenue justifies a trillion dollars in capex on new graphics cards. There's no catastrophic bubble burst. AGI is still only a few years away. But AI eats the world nonetheless.
vivalahn | 4 months ago
For example?
utopiah | 4 months ago
>> I was discussing with a friend that my biggest concern with AI right now is not that it isn't capable of doing things... but that we switched from research/academic mode to full value extraction so fast
lol, I read this a few hours ago, maybe without enough caffeine, but I read it as "my comment from 70 *years* ago" because I thought you had somehow been at the Dartmouth Summer Research Project on Artificial Intelligence workshop in 1956!
I somehow thought, "Damn... already back there, at the birth of the field, they thought it was too fast." I was entirely wrong, and yet in some convoluted way maybe it made sense.
klooney | 4 months ago
It happened ten years ago, it's just that perceptions haven't changed yet.