top | item 39553743

Money bubble

224 points | headalgorithm | 2 years ago | tbray.org

256 comments

[+] marcinzm|2 years ago|reply
> I kind of flipped out, and was motivated to finish this blog piece, when I saw this: “UK government wants to use AI to cut civil service jobs: Yes, you read that right.” The idea — to have citizen input processed and responded to by an LLM — is hideously toxic and broken; and usefully reveals the kind of thinking that makes morally crippled leaders all across our system love this technology.

As someone who actually had to deal with the government recently in the US, I disagree. It was impossible to reach a human or otherwise get an answer to my likely not-too-unusual question. If they had an even half-decent LLM then I'd probably have had my answer, and action items for me to do, within 30 seconds. Instead I wasted days on various attempts to get some type of answer.

I recently needed to fix some issues in something I filed with the government. Email support used to exist but was probably cut due to budgets. Chat support used to exist but was probably cut due to budgets. Phone support has no waiting queue and requires a minute of entering numbers just to hit the disconnect point (no agents available). Physical mail seems to be an option, but I don't know the format or address. Etc.

[+] sangnoir|2 years ago|reply
> I recently needed to fix some issues in something I filed with the government. Email support used to exist but was probably cut due to budgets. Chat support used to exist but was probably cut due to budgets.

With all that said, what makes you hope any government LLM will escape the whims of budget cuts? I'd rather walk in and wait for hours than share a 500 token/sec chatbot with thousands of other users and never get a resolution.

[+] bombcar|2 years ago|reply
The only thing I've seen that even gets close to working is physically going to the office in person, but good luck finding out what or where that is.

And you can't even do that with Social Security anymore.

If it is something you could be legally liable for, I'd at least send a certified letter to whatever address you can find, so that if it becomes a problem later you can at least show you tried.

[+] euroderf|2 years ago|reply
And if the LLM cannot solve the case, maybe it can prepare a well-structured ticket that a hoomin can handle quickly & efficiently, without waiting for users to um and ah and gripe and forget things and have to look things up.
[+] ConorSheehan1|2 years ago|reply
What about when the LLM inevitably hallucinates a plausible but incorrect answer?
[+] rldjbpin|2 years ago|reply
this idea is at odds with the way public services work in societies with safety nets.

it has been, and remains, the case that the main purpose of certain parts of public services is to give people employment. there is rarely any meritocracy at scale once you get the job.

the reason why we get poor service cannot be completely put down to understaffing or lack of budget. while the UK govt has a better online public service experience than many developed countries, I feel this approach is missing the forest for the trees.

[+] zoogeny|2 years ago|reply
I disagree with most of this article. My own anecdotal experience is that dozens of non-tech friends, coworkers, etc. tell me that they are using ChatGPT every day. These are people who are telling me how they use it to draft emails, create marketing material, create sales support material, create education material, etc.

During every other hype boom I have been through that ultimately failed, those regular Joe types either hadn't even heard of the tech, simply didn't care, or were actively hostile to it. Comparatively, with the new generative AI, people are talking about how much they love it, how they use it every day, etc.

Even the Internet had a bubble that popped (back in the Pets.com days, circa 2001 [1]) and this short-term AI bubble will pop too. I expect the same pattern as the early Internet: an early pop followed by a recovery that leads to massive growth.

1. https://en.wikipedia.org/wiki/Dot-com_bubble

[+] atomicUpdate|2 years ago|reply
> My own anecdotal experience is that dozens of non-tech friends, coworkers, etc. tell me that they are using ChatGPT every day. These are people who are telling me how they use it to draft emails, create marketing material, create sales support material, create education material, etc.

My reaction when I hear this is that those people are being paid entirely too much money if an LLM can do their job. I think this is where the real economic impact will come from: when managers realize it's just LLMs generating emails to be summarized by LLMs and it's just bots spamming each other with busy work all day. At some point companies will realize it's all pointless and start trimming these pointless jobs, leaving a lot of people without any actual skills.

[+] __loam|2 years ago|reply
Am I the only one who is aghast at people using these things to write emails, or worse, educational materials? It feels impersonal and shitty to send someone an AI generated email. Furthermore, they lie all the time. Unless you know what you're talking about, there's a large risk to using it as an educational resource.
[+] netsharc|2 years ago|reply
I think the pets.com analogy is apt. Everyone's running around making "AI-powered" companies (just like everyone was making online shops in the 90s) and clueless investors are throwing money at them because, "AI!"...

Nvidia is lucky though: a lot of big companies will want their GPTs in-house to ensure their secrets won't be used to train someone else's GPT, and that means buying a lot of hardware (it could be in a cloud data center too, but same result for Nvidia).

[+] dartos|2 years ago|reply
What in the article did you disagree with?

The author said that AI was and has been obviously useful, but there’s a lot of dumb money flying around in AI land (to try building things like AGI)

[+] Nemi|2 years ago|reply
I think it is great that you know many people that use it daily. Can I ask how many PAY for it monthly? Is it as many people as, say, pay for Netflix? Outside the tech world, of course. Do you see your friends and family paying for AI services directly? Because the amount of money it takes to provide AI services is at least as great as it is to provide Netflix. Probably way more, actually.

What companies are making a ton of money on AI? Nvidia? Nvidia makes money selling chips to large, massively profitable companies that are in an arms race to capture as much market share as possible. Or they are selling to smaller companies trying to make a name. But none of those companies are making any money from the AI services they sell. All of them are spending massive amounts of capital.

What happens when we reach an equilibrium point where AI services are ‘good enough’? I’ll tell you what will happen, it will become another cost center for the big companies and all further development will cease once they have eliminated the competition.

Want an example? Smart home speakers. Do you think that Alexa and Google home are the best that they could do? Do you ever wonder why Alexa came out and made a splash and then Google frantically made one of their own, but once market share was evenly split between both companies all new development ground to a halt? It is because they were only going to spend enough to keep the other company from dominating then stop spending. Because there is no way to monetize it. Not really. You can charge for the hardware, but that is a pittance to them. Can you charge for the service? I use my google home all the time for some things, but if they told me they were going to start charging me would I continue using it? Probably not. There is a reason Amazon recently RIF’d a bunch of people on the alexa team.

People say that this is not like pets.com because profits are real, but are they? Or is it some crazy ponzi-like thing where the amazing profits being had by companies like Nvidia are going to dry up eventually? I'd say it's more like Cisco in 2000, when they were making tons of money selling hardware to all those pets.com companies. Follow the money: once you get to the person/company paying for the service, there is none. Not on this scale at least. I think there will definitely be companies that pay for an AI service, but the total market spend will be somewhat less than the total spend for something like cell phone service or streaming services. You know, those things that everyone you know, from technical to luddite, from rich to poor, all pay for. I don't see AI reaching that level of ubiquity. Do you pay for email service? I know that people on this site do, to some extent, but email providers charging for their services are a niche market. AI is destined to be the same.

Just to be clear, I agree with you that this will be like the internet was. It will change the world. But it is without a doubt a bubble, just like the early internet. And for the reasons that you say - everyone can easily see that it is ‘something’ just by using it. The barrier to entry is very low, just like opening a browser and going to a website was. It does not take a genius to see the potential. Which is all the more reason that dumb money is flowing like water into this bubble.

[+] rglover|2 years ago|reply
> I kind of flipped out, and was motivated to finish this blog piece, when I saw this: “UK government wants to use AI to cut civil service jobs: Yes, you read that right.” The idea — to have citizen input processed and responded to by an LLM — is hideously toxic and broken; and usefully reveals the kind of thinking that makes morally crippled leaders all across our system love this technology.

And this will not just be in government, it will be everywhere. The scariest part is that as people start to spend less time developing a skill set, and instead deferring to AI answers, you will cross a point where this problem can't be fixed (because nobody has the skills to fix it and the AI is trained on the outputs of previous generations of humans).

For the "olds" who already have a skillset, this will be incredibly lucrative (those who can afford to have it fixed will pay handsomely). But the potential for this to, at best, plateau humanity and, at worst, make it regress is significant.

The dark humor in all this: we thought AI would get us the Terminator, but instead it's going to get us rapid degeneration.

---

Edit: an addendum, the overall point I'm making is well encapsulated in this talk https://www.youtube.com/watch?v=ZSRHeXYDLko

[+] PheonixPharts|2 years ago|reply
Anyone who works in AI, especially very closely with models, will tell you that it's not capable of really replacing any jobs yet.

All these jobs being "replaced by AI" are simply being eliminated, with the consequences of eliminating them ignored. Customer service jobs aren't being replaced by AI; companies like Klarna are just giving up on customer service and using AI to increase their perceived value rather than reduce it.

[+] lp4vn|2 years ago|reply
There is something very wrong with the current dynamic of the world. Work is deeply devalued while capital has been reaping all the benefits of the increased productivity: if you doubt me, ask anyone who has made any kind of investment if they are making more money from the investment or from their regular job.

This is creating a ridiculous wealth disparity and disincentivizing a whole generation from getting good at a skillset. I've already heard from a lot of young people that working is not worth it; it's hard to disagree with them when even a basic thing like a piece of land or a house looks out of reach for a regular person.

But, as you put it, unsustainable things are not sustainable; society will regress until the equilibrium is found again. But things didn't need to be like this.

[+] bluedino|2 years ago|reply
On the other hand, in my country we have the stereotypical lazy government employee who is overpaid and doesn't work hard.

The government could then save money and provide better service for menial tasks such as "what permit do I need to do such and such".

[+] MyFirstSass|2 years ago|reply
Christ this is dark! Imagine a world where everything and everyone will be judged, ranked, evaluated, hired, fired, maybe even picked as a partner or friend, by these AIs.

Unfathomably grim even if the alternative is rigid low skill bureaucrats.

I find LLMs extremely fascinating, but if this is the end game I really hope AI-free zones will emerge.

You can already see Gen Z being obsessed with face ranking filters, "looksmaxing" from data points and using filters day to day. It's dark.

[+] caseysoftware|2 years ago|reply
> The scariest part is that as people start to spend less time developing a skill set, and instead deferring to AI answers, you will cross a point where this problem can't be fixed (because nobody has the skills to fix it and the AI is trained on the outputs of previous generations of humans).

The loss of knowledge/skills was a key bit of Foundation which itself was a retelling of the fall of the Roman Empire.

As key skills become rarer, the price goes up... until you can't hire for those skills at any price.

[+] willsmith72|2 years ago|reply
Doesn't that assume that people will forever be better learners than AI?

If the "olds" learnt a skillset at some point, the data they used to learn the skill is presumably available to the AI too. Why can't the AI learn it too?

(Not talking about physical labour which clearly has way less potential to be replaced than knowledge work)

[+] __loam|2 years ago|reply
I'm really worried about the implications of this technology for programming, art, and literacy in general. The skills required to be a good programmer or artist are not what's being sampled by these models, merely the output. There's a real danger here of losing these skills if they can no longer be developed professionally, and that even means no more human training data for newer and better models to be trained on. We'd be stuck with whatever trash current models are putting out.
[+] golergka|2 years ago|reply
I kind of flipped out, and was motivated to finish this blog piece, when I saw this: “UK government wants to use computers to cut civil service jobs: Yes, you read that right.” The idea — to have citizen data might be put into and processed by a computer — is hideously toxic and broken; and usefully reveals the kind of thinking that makes morally crippled leaders all across our system love this technology.
[+] carlosjobim|2 years ago|reply
Dealing with an AI will be much better than dealing with a civil servant, in most cases. There is a certain kind of person who becomes a civil servant and many of them will not only have a hostile attitude, but also make it their life's mission to try to do as much damage as possible in the pettiest ways possible to the people they "serve". Especially if you're of a sex, ethnicity or age group that they hate. Sometimes the same as theirs.

Letting citizens deal with their bureaucratic errands with an online form or portal instead of with a civil servant in their office has been an enormous benefit, in the places that offer this. An AI will fuck things up, being an AI, but it will not necessarily treat people with a hostile attitude and lie to clients to spite them. Unless it's programmed by civil servants, that is.

[+] ildjarn|2 years ago|reply
The future was predicted in The Machine Stops and Mockingbird
[+] ben_w|2 years ago|reply
> And this will not just be in government, it will be everywhere. The scariest part is that as people start to spend less time developing a skill set, and instead deferring to AI answers, you will cross a point where this problem can't be fixed (because nobody has the skills to fix it and the AI is trained on the outputs of previous generations of humans).

I think that would require AI development to approximately halt at close to the current level for over a lifetime.

Conditional on development halting, I'd agree with you. By analogy, there's this single, very useful, very powerful, set of "hidden methods that can be used to win all games, get rich, find love, determine the limits of thought itself!" — mathematics[0]. Do people like learning it? They do not. Calculator much easier. What a calculator does is none of that, calculators are merely arithmetic, but most people can't tell the difference between mathematics and arithmetic.

I think LLMs have the same effect on anything that can be expressed in words, and all the various image generator models have this effect on graphical arts. One must be extremely motivated to get past the "but the computer is better than me" hump.

However, I don't expect AI development to even approximately halt at anything close to the current level. There's a lot of room for self-play in domains like maths and computing where the proofs can be verified, and probably a lot of room for anything that can be RLHF'd, too. And that's also assuming we don't get any brain uploads; regardless of the question of whether such an upload of a human is capable of consciousness, which absolutely matters, uploads may still be relevant to the economics of AI, depending on the cost of running one, which in turn depends on details I can't even begin to guess at this point (last I heard, https://openworm.org was not actually measuring synaptic weights directly, but rather neural activity? I may be out of date; not my field).

Whatever happens, however good it does or doesn't get, I do expect something to go very weird before I reach the current state pension age — close enough that, if that something is "the machines break" or "society breaks", then there will still be plenty who remember the before times.

[0] https://www.smbc-comics.com/comic/secrets-2

[+] happytiger|2 years ago|reply
lol. This is how you end up with that scene in Idiocracy where you find out you get your college degrees at Costco.
[+] prewett|2 years ago|reply
I normally think the entire market is overpriced, which I mostly still do, but I'm not convinced that tech stocks are overpriced, at least the ones he referred to. Nvidia has a forward P/E of 32, which is in line with Microsoft (35), Apple (28), Intel (31). Compare to KO (Coca-Cola) which is 22, which gives the 33% tech premium that one of his linked articles notes. But KO is going to grow at the rate of global growth (3 - 5%); I think it is not unreasonable that all of those companies are going to grow at least 33% more than KO. So I don't think there is a bubble in major tech stocks. It is possible that Nvidia will not retain its current sales over the long term, but given that Nvidia cannot satisfy current demand, that seems unlikely in the next couple of years. Training the future models isn't likely to require less compute, and I think there is a reasonable case to be made that people will need domain-specific training for domain-specific ChatGPTs (or whatever the future is). Which means more training.

Yes, I think it is way overhyped, but on the other hand, actual people are using ChatGPT. I've used it for simple code to get started with an unfamiliar (but popular) library. I talked with a non-technical friend recently who was using it for relationship advice (with predictably unhelpful responses; it can't tell you about the issues you're unaware of, but still).

If there's an AI bubble, it's in the early stages. In my mind the overpriced aspect of the market is the complete denial that stock prices at 5% interest rates should be lower than at 1%, all things being equal. At least if value = profit / costOfCapital, as it is supposed to be.
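
The rate sensitivity is easy to see with a toy perpetuity calculation. The figures here are made up; this is only the arithmetic behind value = profit / costOfCapital, not a real valuation model:

```python
# Toy illustration of the value = profit / costOfCapital claim above.
# All figures are hypothetical; this is just the arithmetic, not a valuation model.

def value(annual_profit: float, cost_of_capital: float) -> float:
    """Present value of a flat, perpetual profit stream (a perpetuity)."""
    return annual_profit / cost_of_capital

profit = 100.0                   # some fixed annual profit
v_at_1pct = value(profit, 0.01)  # what that stream is worth at 1% rates (~10,000)
v_at_5pct = value(profit, 0.05)  # the same stream at 5% rates (~2,000)
print(v_at_1pct / v_at_5pct)     # the low-rate valuation is 5x higher
```

By this rough measure, the same profit stream is worth five times as much at 1% rates as at 5%, which is why a repricing from low to high rates should, all things being equal, hit stock prices hard.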

[+] HDThoreaun|2 years ago|reply
Nvidia's margins are currently 75%. I dont see any way they can possibly keep that up for more than a few years. As soon as a legitimate competitor shows up those margins will be cut in half if not more. Coca cola on the other hand has a very predictable business, with profits extremely unlikely to decline anytime soon.
[+] gmm1990|2 years ago|reply
After this last run-up in stocks despite high interest rates, I've completely given up on trying to value anything based on metrics.
[+] bloqs|2 years ago|reply
For younger people who may not know: Tim Bray was one of the creators of XML. He's also a nice guy on Twitter (at least he was a few years ago), and he's probably active here too.
[+] timeagain|2 years ago|reply
His blog posts are one of the main things that keep me coming back here. Insightful, and he often is able to put into words things about the industry that I can only feel in the periphery of my heart.
[+] hn_throwaway_99|2 years ago|reply
While I agreed with a lot in this post, I'm also pretty wary of the underlying, unstated idea that average investors can avoid bubbles popping while also somehow taking advantage when things go up - that is, he doesn't really say it in so many words, but he's essentially talking about timing the market.

I started my tech career around the turn of the century, and made the mistake of putting a ton of money (at least for me, at the time) into Global Crossing. My thought was that while there were all these "fluffy" doomed dot coms at the time, Global Crossing had billions in real, physical infrastructure they built. Obviously I didn't quite understand debt at the time, never mind the actual fraud that Global Crossing committed (I remember thinking "Wow, stocks really can go to zero and never come back.")

Sure, you could argue I made every newbie investor mistake in the book, but the worse consequence for me was that it "spooked" me early in my investing career, such that I became very reluctant to invest in things when I felt they were overvalued. E.g. I was one of those people who thought there was a giant tech bubble when Facebook bought Instagram for a billion dollars - in 2012...

So sure, you may think I'm an idiot, but I can pretty much guarantee I was far from alone. It was only at the point where I really, truly believed "I'm definitely not smarter than anyone else in the market" (and hardly anyone is) that I just put my money in index funds, did regular rebalancing, and otherwise forgot about it.

We may be in an AI bubble, we may not, but I've seen way too many "vastly overvalued" companies continue to be "vastly overvalued" for over a decade (and then only briefly coming down before shooting back up again) to think that Tim Bray has any special insight here.

[+] mattgreenrocks|2 years ago|reply
Absent GenAI, do the BigTechs of the world have enough growth going on elsewhere to appease Wall St volcano gods sufficiently?

They all seem to be hyping GenAI a ridiculous amount, prompting this question. And it makes sense for them to ride the hype train and get something out of it. But it also makes me wonder if that only makes the eventual drop even larger.

[+] sgt101|2 years ago|reply
As ever the truth is in the middle.

- LLMs provide functionality that was very difficult to implement until 2 years ago.
- We can decode natural language statements relatively well and relatively easily.
- We have an approximate common-sense knowledge base.
- We can encode statements into human-readable text flexibly. (This was never as much of a problem as the first two, but it's still useful.)

But, these are not magic boxes that can tell our fortunes.

So we can do good things if we engineer things well, and there is a lot of synergy with other AI tech that's been evolving in the last ten years. STT and object recognition are both very useful, end to end differentiable reasoners are coming in now as well. ML was becoming important in 2019, 2023 created an inflection and some hysteria, but there's substantial value to be had.

[+] neilk|2 years ago|reply
I mostly agree but I have a few quibbles with some arguments.

For instance, Bray considers the adage "The CIO is the last to know". From the 90s until now, developers have always snuck new technology in without management approval. You put Apache on a forgotten Linux box in the corner because it's easy and fun, and a few months later the whole company relies on it. Developers are not rushing to deploy skunkworks generative AI solutions, so, the argument goes, probably generative AI isn't that good.

There's a couple of problems with this.

1. Not everything that is good can be deployed skunkworks-style.

It might be that AI is only really good with incredibly high up-front costs and extremely specialized developers. Like launching a satellite. You can't do it yourself with stuff you have lying around, and even if you had the money to do it you probably don't have the expertise to do it safely. But it's still extremely valuable!

2. Sometimes we are using this technology to hack up solutions to personal problems!

I had a video which I wanted my hearing-impaired father to watch. I could have paid a human or AI-powered service to generate subtitles, but I found that I could do it myself with OpenAI's Whisper, on an old laptop, and then munging text files together in the usual way. I was a little shocked that this worked offline. I could have done it on a plane. This absolutely fits into a hacker workflow.
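
For what it's worth, the text-munging half of that workflow is mostly converting Whisper's timestamped segments into an .srt file. A minimal sketch follows; the segment shape (dicts with start/end/text) matches what the open-source openai-whisper package returns from transcribe(), but treat the details as an assumption:

```python
def fmt_ts(seconds: float) -> str:
    """Format a time offset as an SRT timestamp: HH:MM:SS,mmm."""
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def segments_to_srt(segments) -> str:
    """Render Whisper-style segments ({'start', 'end', 'text'}) as SRT."""
    blocks = []
    for i, seg in enumerate(segments, start=1):
        blocks.append(
            f"{i}\n{fmt_ts(seg['start'])} --> {fmt_ts(seg['end'])}\n"
            f"{seg['text'].strip()}\n"
        )
    return "\n".join(blocks)

# Hypothetical segments, shaped like transcribe() output:
segs = [
    {"start": 0.0, "end": 2.5, "text": " Hello there."},
    {"start": 2.5, "end": 5.0, "text": " This line is made up."},
]
print(segments_to_srt(segs))
```

With the real package this would be fed from something like whisper.load_model("small").transcribe("video.mp4")["segments"]; the bundled CLI can also write SRT directly, so this is only the do-it-by-hand route.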

[+] IanCal|2 years ago|reply
> I kind of flipped out, and was motivated to finish this blog piece, when I saw this: “UK government wants to use AI to cut civil service jobs: Yes, you read that right.” The idea — to have citizen input processed and responded to by an LLM —

That's not what the article says; it's about processing responses, not responding to people. I don't think there's anything about responding to citizens.

And it doesn't say LLM, it says AI.

[+] debacle|2 years ago|reply
Is there a resource I can use for understanding how "dumb money" impacts the markets?
[+] lbotos|2 years ago|reply
Lots of articles talking about "index investing/passive investing market impacts" should get you what you're looking for.
[+] glitchc|2 years ago|reply
> Produce plausible output

That's basically art. So AI's only really good at producing art. So we're safe... but now I feel bad for the artists.

[+] andsoitis|2 years ago|reply
> but there are just way more ways for things to go wrong than right in the immediate future

That is always the case, so I wouldn't over-index on that.

[+] jgalt212|2 years ago|reply
With the socialization of risk, there is no good reason not to participate in bubbles, as long as you're not the first one rolled.
[+] benced|2 years ago|reply
> As bad as 2008? Nobody knows, but it wouldn’t surprise me.

The recency of 2008 has really warped people's brains. 2008 was the 2nd worst financial crisis of all time (maybe it would have been the worst if our fiscal and monetary tools were still at 1929 levels of sophistication). You should be extremely hesitant to declare that anything will even come close to it.

[+] elijahbenizzy|2 years ago|reply
There's an argument that the AI tech bubble is just a continuation of the value-add of automating everything and the productivity/GDP increase that causes. It's just that investors are so skittish that they'll only open the floodgates if it's "sexy", so AI comes in and restarts the money machine.
[+] nkohari|2 years ago|reply
I think most people (including many working in AI!) would agree that AI is currently at the peak of the hype cycle and there will be a bloodletting at some point.

But I don't really understand how AI being hyped, and NVIDIA's stock being overvalued by extension, could result in a 2008-like market crash.