No. Undoubtedly no. There are some applications that will benefit directly from this technological improvement, but it is not as widely applicable as everyone is making it out to be.
In the areas where it is broadly applicable, it will improve efficiency but also cost a few jobs, which might make it mostly neutral from an economic point of view.
No. It will descend into scamming sweatshop bot offices in the dusty reaches of the world thinking they can make a faster buck by using fewer employees than before AI (think along the lines of PeoplePerHour, Mechanical Turk, call centres, help centres, IT software support).
In a perfect world, consumers would not tolerate bad support/service now that anyone (literally anyone) can learn to provide good service with an AI. The bad companies would go out of business, and the few struggling with the consequences of incompetent support would learn that courts now define insufficient support as gross negligence.
Ironically, more production does not always lead to growth. A thought experiment: if you gave everyone a machine to produce miniature bombs, would there be economic growth? There would be a huge increase in the number of bombs produced (creating GDP growth), but then a huge loss of productivity due to destruction and the extra steps needed to protect things from the copious number of bombs in use. Overall, this would lead to a loss of wealth and productivity.
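The thought experiment can be put in toy numbers (all values below are invented for illustration): GDP counts both the bombs produced and the defensive spending as economic activity, while actual wealth nets out the destruction.

```python
# Toy arithmetic for the bomb thought experiment above.
# All numbers are invented; the point is the sign of each term.
bomb_output = 100      # value of bombs produced, counted in GDP
destruction = 120      # value of property destroyed by their use (not subtracted from GDP)
protection_cost = 30   # extra spending just to guard against bombs (also counted in GDP)

gdp_change = bomb_output + protection_cost              # GDP counts both as activity
wealth_change = bomb_output - destruction - protection_cost

print(gdp_change, wealth_change)  # GDP rises (130) while net wealth falls (-50)
```

The protection spending illustrates the "extra steps" in the comment: it raises measured GDP even though it only restores what the bombs threaten, a broken-window-style distortion.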
I could see this happening with AI as well, but with fewer moral barriers to using it than there are to using a bomb.
Current LLMs are a "blurry JPEG of the Internet", essentially a fuzzy index instead of the character-accurate Google Search index.[1] Both have their utility, with a lot of overlap.
LLMs can't replace most human jobs in the same way that Google didn't replace most human jobs. However, many people become more productive thanks to modern web search and a few people did lose their jobs or were downsized. Nobody hires a research librarian in a private company these days because employees are expected to do their own searches!
The same thing will happen with LLMs. It'll be an alternative to Google Searches and perform much the same function, extending the capability to fuzzy searches and contextual searches. It'll be integrated with character-accurate indexes, and then there will be one "ask the Internet" product. It'll be useful. It'll make everyone more productive. I don't think it'll replace any of us any time soon. Maybe in 15+ years, but not next year.
[1] Most of the criticism I've seen of LLMs stems from a misunderstanding of what they do and how they work. People expect character-accurate output, such as URLs and references. It's not an index, it doesn't work that way!
Not sure why everyone is saying no; the answer is obviously yes. Having infinitely patient and accurate brains that can spin up on a dime will be the greatest economic boon we have ever seen.
Absolutely nothing about our current generation of AI is accurate at all.
67%, or even 87%, on synthetic benchmarks does not intelligence make.
It’s all statistics based, it’s not infinitely accurate, nor do we have any reason to think any AI system would exhibit anything resembling patience. They don’t exist outside of inference time, let alone have a sense of the passage of time.
Depends. The issue is that LLMs effectively “centralize” the functionality into a small set of models, which means that if the cost of the service drops too much relative to the increase in demand, then GDP may perversely decrease. It all depends on whether AI actually increases net demand.
Economic activity benefits the people who, in the end, consume the products and services. Today, there are long and opaque supply chains, but I think they mostly serve this goal. Economic growth, however, does not only require better products and services in the pipeline. It also needs consumers who are ready to adopt these innovative products. A significant part of the world has aging populations (China, Japan, western Europe). There, a large fraction of the population might not adopt new and innovative lifestyles.
I think this is the reason why every innovation takes time until it has fully phased in and the entire society benefits from it. So, neither AI nor other innovations will usher in an era of explosive growth.
I can honestly say that using Copilot and ChatGPT semi-regularly as an aid has increased my productivity by a nonzero amount. If that is the case for a large portion of people who work with computers regularly, then I think it could very well put an upward trend on economic growth, just not a super explosive one. Yet what percentage increase in productivity would be the threshold for "explosive" growth? 50%, 10%, or even 1%? With things like Copilot for Office, or a ChatGPT button built into your laptop, making the use of AI easy and seamless, many computer users may simultaneously experience a boost in productivity. It may just be less noticeable than one would expect.
To a first approximation GDP is what households consume (and household consumption is kind of the point of GDP).
How will AI increase aggregate household income explosively? Creating a few more billionaires is just measurement noise, not even visible in the trend line.
I've been thinking of this recently from the perspective of AI as the new mechanization. Not a brilliant idea, but the reason the Luddites destroyed the looms was that they saw themselves as mechanical beings. Most of the work done in society was mechanical. We made things. Very few people were responsible for thinking.
Now, society is information based, and we see ourselves as the thinking machines.
Just as the industrial revolution didn't remove humans from all mechanical work, AI won't remove us from all knowledge work, but I believe it will uncover the next level of humanity. If we're not only mechanical, and we're not only cerebral, what are we?
The luddites did not destroy the looms because they objected to the idea of mechanized looms, but because they objected to the politics and exploitation pushed by those introducing the looms: https://www.kirkusreviews.com/book-reviews/brian-merchant/bl...
Yes, but not in the way they think... The tech giants capable of running these AIs at scale will soon expand their business into selling "certified human created content" back to us.
I think it will further enable corporate growth. Current "AI" is essentially corporate lubrication. In condensing 30 years of internet content into vectors, it enables the creation and maintenance of systems close to the mean. It will make it very easy for boards to get rid of middlemen and run skeleton crews. The fine tuning left to do is all about variance and liability. It won't enable new things, it will enable more and cheaper common things.
The problem is, the AI needs to stop getting better (or only ever get slightly better) after a very specific point. It needs to be good enough to avoid getting tangled up in bureaucracy, but not be good enough to take over/wreck the current economic and political systems (at the least).
It should, if most of what the "AI experts" say lines up. Though the growth will ultimately only be felt by the corporate overlords, the government, and the owners of AI server farms.
I have a (non-serious) conspiracy theory that the reality of AI is exactly counter to what we think. Once AI takes hold, we will no longer see such extremes of income inequality.
With that being the case, there are efforts underway to stifle AI. It looks like big business hasn't been the quickest to adopt. It's been full steam ahead on things like self-driving cars, even though at times the level of safety has been exaggerated (at least in the early years).
P.S. This is probably a load of nonsense, as evidenced by the many people working on AI and all the money going into it, but it seems like business hasn't been the most enthusiastic. It's never because they truly care.
P.P.S. I also don't know how that would work exactly, but I could see things looking different with everything working fine and "employees" now having free time. No money to give them, and time to think "hey, why does that guy get all the stuff while we starve? Maybe we should find a way to fix that".
There's also the reality that while proprietary AI models could be bad for workers, AI could also be bad for big business. Highly disruptive if this can't be controlled. It's not always material costs; sometimes the issue is just that you could never staff teams of engineers to work on the problem. Or you have a staff of engineers and need artists... here it seems the artists could actually have the upper hand, which is nice to see :)
> Once AI takes hold, we will no longer see such extremes of income inequality.
That's a very optimistic view. From where I'm sitting, it seems like the rich people control all three of the important AI companies (OpenAI, Google DeepMind, Anthropic) and all one (Nvidia) of the important chipmakers, so they will likely get even richer and many comparatively poor people will lose their jobs.
Yes. I believe so. Maybe it won't happen this year. Maybe not next year. But in 3 years, I think the world will radically change.
Here's an example use case we found for our business:
Our sales people request invoices from a potential customer. On those invoices are our competitor's services and prices. We have matching services and our own prices. The goal is to find similar services where we charge less. In the past, our sales people would spend hours combing through those invoices. We wrote a prompt for GPT4, fed in our services and prices, and asked it to find services we could potentially replace, as well as our profit margin. It took us a day to write this prompt. The results were outstanding and GPT4 gave accurate results. We even asked it to package it up in a PDF for us.
This will save our company hundreds of thousands each year and we can get back to the potential customer much faster than before - increasing the likelihood of a sale.
If we had to program this like normal software, it'd probably take months to get it right. Chances are, engineering would never even prioritize this feature for our sales people.
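The workflow described above can be sketched roughly as follows. Everything here is hypothetical: the service names, prices, and prompt wording are invented, and in the actual workflow the fuzzy matching of a competitor's free-text line items to the company's catalog is precisely the part GPT4 handled; the deterministic price comparison below only covers the easy part.

```python
# Hypothetical sketch of the invoice-comparison workflow (names and
# prices invented for illustration).
import json

# Our own service catalog: service name -> monthly price.
OUR_PRICES = {"web hosting": 40.0, "email filtering": 12.0, "backup": 25.0}

def build_prompt(invoice_text):
    """Assemble the kind of instruction that would be sent to the model,
    which does the fuzzy matching of the competitor's line items."""
    return (
        "Here is our service price list as JSON:\n"
        + json.dumps(OUR_PRICES)
        + "\n\nBelow is a competitor's invoice. For each line item, find the"
        " closest matching service in our list and report the competitor's"
        " price, our price, and our margin if we undercut them.\n\n"
        + invoice_text
    )

def undercut_candidates(competitor_items):
    """Deterministic core of the comparison: given already-matched
    competitor items (service -> their price), return the services
    where we charge less, as (service, their_price, our_price)."""
    out = []
    for service, their_price in competitor_items.items():
        ours = OUR_PRICES.get(service)
        if ours is not None and ours < their_price:
            out.append((service, their_price, ours))
    return out

print(undercut_candidates({"web hosting": 55.0, "backup": 20.0}))
# -> [('web hosting', 55.0, 40.0)]  ("backup" is excluded: we charge more)
```

The division of labor is the interesting design point: the model resolves messy, free-text invoice lines to catalog entries, while the price arithmetic stays in ordinary code where it is cheap to verify.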
GPT6 with much higher context and much cheaper inference cost? Yes please. I think people can't imagine how it's going to change everything.
Totally agree. What is hard to see right now in business is something Charlie Munger talks about in Poor Charlie's Almanack.
What you describe will save your company money because you are an early adopter, but in the long run everyone is going to do these kinds of things and the savings will be passed on to the consumer.
Munger mentions this when talking about a textile business they owned. The new, more efficient machine wasn't going to make the business better but would just end up passing savings on to the consumer, so they actually sold the business.
Management wouldn't have prioritized that project for engineering because it would have cost too much and had uncertain benefits given the cost.
This is all massively deflationary, and certain highly prized skills that cost $120k a year right now will be $20 a month in 2024 dollars someday.
I'm not sure why the submitter went with that headline instead of the actual headline of the article: "How AI could explode the economy and how it could fizzle." The piece itself discusses both the "AI will be a huge economic boon" and the "AI will have little economic effect" stances.
My prediction is that it will not cause explosive economic growth, but it will have noticeable economic effects that will benefit some at the expense of others.
So imagine a gigantic bomb of diarrhea fireworks; that is what AI will be like. I would type that into DALL-E, but I'm afraid OpenAI would ban me and/or make my entire history public on LinkedIn.
You know we do need a customer base of actual human beings to sell things to. AI buying AI products AIn't gonna cut it.
(Betteridge's law of headlines, but also true in this case)