The difference is that crypto as a sector is a dead end, while there will absolutely be multiple trillion+ dollar "AI" companies at some point in the future. It may not happen anytime soon, and it may not be any of the companies in existence today, but the overall bet (and its associated hype) is a valid - even necessary - one.
In that sense the current state of AI is less like crypto and more like the dotcom bubble of the early 00s. No one really understands the underlying tech well enough, but everyone wants to be involved. There are companies surging in valuation just by adding AI to their name. While all of this will correct itself, the underlying tech (the web in 2000, AI today) will eventually prove to be world changing.
The marketers from crypto have transitioned to AI. With their outlandish hype and embellishment of its capabilities, they are flooding the internet with SEO spam about AI while it really isn't ready for prime time. This is the new reality: it will cause burnout and backlash against innovation, and it bilks investors and fuels rampant, harmful overvaluation and inflationary bubbles in IT that ruin trustworthiness in tech.
Now replace “crypto” with “social media” and “AI” with “crypto”, then read your comment again and tell me how this is not what a crypto fanboy would write two years ago?
People need to remember what year Bitcoin first became popular and what year it is today. Comparing crypto of today to AI of today is unfair. You need to compare crypto when it first came out:
- “digital cash”, “buy anything with minimal fees”
- “banking for the unbanked”
- “digital gold”, “not subject to inflation”
- “trade cash for crypto with anyone on the street”
- “no merchant fees”
All of these statements were true when Bitcoin first came out, and all of them provided value to some, albeit not everyone. Most of these statements are not true today. 6-7 years later we can barely buy anything with crypto, the fees are most definitely not minimal and the “merchant fees” pale in comparison, and the market is fragmented beyond imagination. Given taxation and KYC requirements, crypto today is anything but “banking for the unbanked” or “digital cash”, and you most definitely can’t trade it with random strangers on the street unless you want to get yourself arrested. Due to its volatility and correlation with the market, crypto also does not appear to be a safe haven today, despite what theory would have us believe.
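To make the “merchant fees pale in comparison” point concrete, here is a back-of-the-envelope comparison. The specific figures are illustrative assumptions, not quoted rates: a 2.9% + $0.30 card processing fee versus a roughly flat $5 on-chain fee.

```python
# Back-of-the-envelope fee comparison (all numbers are illustrative assumptions).

def card_fee(amount):
    """Typical card processing fee: 2.9% + $0.30 (assumed)."""
    return amount * 0.029 + 0.30

def onchain_fee(amount, flat_fee=5.00):
    """On-chain fees are roughly flat per transaction regardless of
    amount ($5 assumed here); they dwarf card fees for small payments."""
    return flat_fee

for amount in [10, 100, 1000]:
    print(f"${amount:>5}: card ${card_fee(amount):6.2f}   on-chain ${onchain_fee(amount):6.2f}")
```

The percentage-based card fee loses to the flat on-chain fee only for large transfers, which is why "no merchant fees" stopped being a selling point for everyday purchases.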
All that to say, there was great hope for crypto when it first came out, but over time its utility diminished. I can draw some parallels with the AI of today, but whether the future will turn out the same we can only guess.
Other criticisms aside, fees are <10 cents for many transactions on Eth rollups right now, and will drop another 10x in the next six months if not more. (This has only been true for about the last month.) USDC is nice for international money transfers, though still niche. I don’t think anyone cares if you trade crypto on the streets; just don’t get rubber-hosed.
The space has suffered a lot from adverse selection towards grifters. UST alone captured many of the earnest attempts towards payments and took them down with it.
I can be sympathetic to many folks who have publicly dismissed LLMs as vapid hype in past months -- tech has claimed many fundamental breakthroughs before, it was reasonable to expect this would be another illusory one too -- but come on, even science fiction authors can't tell the difference between GPT-4 and autocomplete? It just feels willfully unimaginative of them, and perhaps motivated by insecurity.
Use some of the creativity we've all seen you exhibit to ask GPT-4 questions that require complex reasoning inside a world model, rather than regurgitating its training set. It's not going to get it right all of the time, but the fact that it can get it right any of the time is astonishing, and it's going to get quickly better, perhaps even purely through compute spend increase rather than algorithmic breakthrough.
Maybe it is important to ask whether that creativity is truly unique, or whether in those cases it is already encoded in the training data. Humans in general might not actually be that creative; perhaps our variations on existing work are something that can be broken down and reproduced.
Meanwhile, one of the two has an efficient, greener method of achieving consensus: one blockchain (Ethereum) switched from wasteful proof-of-work to the greener proof-of-stake, making that possible.
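The efficiency gain comes from how proof-of-stake selects who produces the next block. A minimal sketch of the core idea, with made-up validator names and stake amounts, might look like this:

```python
import random
from collections import Counter

# Toy sketch of stake-weighted proposer selection, the core idea behind
# proof-of-stake: the chance of proposing the next block is proportional
# to stake, so no energy-burning hash race is needed. All names and
# numbers are hypothetical.
stakes = {"alice": 32.0, "bob": 64.0, "carol": 4.0}

def pick_proposer(stakes, rng):
    """Pick one validator, with probability proportional to its stake."""
    validators = list(stakes)
    weights = [stakes[v] for v in validators]
    return rng.choices(validators, weights=weights, k=1)[0]

# Over many slots, selection frequency tracks stake share.
rng = random.Random(0)
counts = Counter(pick_proposer(stakes, rng) for _ in range(10_000))
print(counts)  # roughly proportional to stake: bob > alice > carol
```

Real protocols layer attestations, slashing, and randomness beacons on top of this, but the energy argument reduces to exactly this substitution: a weighted draw instead of brute-force hashing.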
On the other hand, after years of deep learning there are still no viable, efficient methods of training, fine-tuning, or inference that don't require more data centers as the data and the models scale, and the hype is currently contributing to increased waste of water and resources, all for the sake of incinerating the planet to produce broken, hallucinating black-box AI models.
Being ‘useful’ isn’t an excuse for not finding greener, more efficient alternatives that significantly lower emissions, rather than continuing to burn the planet over the new ‘hype’.
Right. The blockchain at least has semi-novel ideas around hashing and distributed trust, AI is just spitting out the same words we've seen for years!
I'm not a crypto or AI advocate, but it's easy to see how "technology with overestimated social consequences" could get tiring for everyone who isn't an investor. The "fundamental difference" between cryptocurrency and AI is that one is built on reproducible and accountable logging and the other is guesswork. Trying to champion one or the other feels like pointless virtue signalling really.
I also have to admit I really don't like Cory Doctorow's other works (I especially despise the reductive theory of "enshittification") but his thesis is 100% the truth. People need new buzzwords to cling to, and AI is the one with the most popular demo. The usefulness is an accessory to the marketing and positioning of AI-powered products.
I do think “AI” is a hype-bubble, but machine learning and specifically the sort of LLMs we’re seeing now are definitively not a hype-bubble.
We don’t really need general AI, nor are we going to be able to make a true flawless general AI anytime soon. AI was always a really vague marketing term, and can be applied to anything for instant relevancy boost. Bubble territory.
The LLMs we have now are extremely effective though. There’s a real use case here for automating writing, and some menial tasks that human beings were unfairly burdened with, but were impossible to automate till now.
That’ll stick around. (Considering that ML concepts have been around for decades, you could even say they already have!)
> The LLMs we have now are extremely effective though. There’s a real use case here for automating writing, and some menial tasks that human beings were unfairly burdened with, but were impossible to automate till now.
Some people take issue with that assertion. The biggest problem with LLMs is you can’t trust them and have to verify everything they output - and often writing it yourself and verifying is easier than verifying someone/something else’s work.
Right now, no sane corporation will even let an LLM run the helplines they outsourced to India. Imagine an LLM hallucinating a dangerous “solution” to a customer problem, resulting in loss of property, injuries, or even death. It’s a massive lawsuit waiting to happen.
A bursting bubble? Not in my opinion. I'm already able to capture value in ways that were thought too difficult just 6 months ago. That's not vaporware.
WeWork also captured unique value for thousands of people before their company collapsed and everything fell apart. Your product doesn't need to be vaporware to be a bubble.
This is a terribly ignorant take and disappointing to see coming from a science fiction author.
> AI isn’t “artificial” and it’s not “intelligent.” “Machine learning” doesn’t learn. On this week’s Trashfuture podcast, they made an excellent (and profane and hilarious) case that ChatGPT is best understood as a sophisticated form of autocomplete — not our new robot overlord.
This is just argument by assertion. We have no good definition of intelligence, so I have no clue how he can be so confident. "Machine learning doesn't learn" is a crazy take, since "backprop + gradient descent does learn" is close to the most well-supported thing you can say about the past few years of algorithmic progress.
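As a toy illustration of the "gradient descent does learn" point (a minimal sketch, nothing like a real LLM training run): a single parameter, initialized wrong, recovers the underlying relationship purely by following the error gradient.

```python
# Minimal sketch: gradient descent "learning" the slope of y = 3x.
# This illustrates only that iteratively following the gradient of the
# error recovers the underlying parameter; production training differs
# in every respect except this core mechanism.

data = [(x, 3.0 * x) for x in range(1, 11)]  # ground-truth slope is 3
w = 0.0      # initial guess: knows nothing
lr = 0.001   # learning rate

for step in range(1000):
    # gradient of mean squared error (w*x - y)^2 with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

print(round(w, 3))  # converges to ~3.0
```

Whether this counts as "learning" in a philosophical sense is exactly the definitional gap the parent comment points at; what is not in dispute is that the parameter moves from ignorance to an accurate model of the data.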
> sophisticated autocomplete
Aside from this being an incredibly reductive sneer that clearly isn't true if you've honestly tried using ChatGPT, etc., his citation for this is a podcast, which I'm positive Doctorow would not accept as sufficient for basically any other technical topic.
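For contrast, here is what genuinely unsophisticated autocomplete looks like: a bigram model that predicts the next word from raw counts. This is a deliberately crude sketch with a made-up corpus, shown only to make vivid how much distance the word "sophisticated" is doing in that framing.

```python
from collections import Counter, defaultdict

# Crude bigram "autocomplete": predict the most frequent follower of a word.
# A hypothetical ten-word corpus stands in for training data.
corpus = "the cat sat on the mat and the cat ran".split()

followers = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    followers[a][b] += 1

def autocomplete(word):
    """Return the most common word seen immediately after `word`, or None."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(autocomplete("the"))  # "cat" follows "the" twice vs "mat" once
```

A model at this level cannot answer a question it has never seen; the debate is over how far beyond count-based completion transformer LLMs actually go.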
I love Ted Chiang's stories, and some of his takes on AI progress are cited here. However, I also found his extensive conversation with the Financial Times (earlier this month, so published after this) disappointing along similar lines. The thread running through both of these is a complete lack of a positive vision for the future, replaced by an almost smug cynicism that asserts any more technological progress is simply hype, a grift, and bad. Are there any current science fiction authors with a positive vision of the future?
> This is just argument by assertion. We have no good definition of intelligence, so I have no clue how he can be so confident.
Without concrete definitions your assertions are just as correct as theirs. But they have the evidence of absurd tech-bro hype of past technologies to draw on.
> I love Ted Chiang's stories, and some of his takes on AI progress are cited here. However, I also found his extensive conversation with the Financial Times (earlier this month, so published after this) disappointing along similar lines.
"I love Ted Chiang's stories because they jibe with my preconceived notions, but I like him less when he says things that I don't believe"
> The thread running through both of these is a complete lack of a positive vision for the future, replaced by an almost smug cynicism that asserts any more technological progress is simply hype, a grift, and bad. Are there any current science fiction authors with a positive vision of the future?
Plenty. They talked about flying cars and living on the moon. Instead we got stagnant wages and a social-media Skinner box. All of those wonderfully positive predictions didn't pan out.
The current "AI" is probably somewhat useful in many cases, mostly for producing content that does not need to be perfect, or even correct. And it can already do that: you could already get text from one model and a few images from another, and spam the resulting content online.
Now the valuation hype is the big question. Will there be moats, or will it be commodity technology? Maybe the server farms will make some money and everyone else a marginal profit?
Mark my words, this week new data will come out showing an uptick in all things inflation related, and by a larger-than-expected margin, the Fed will raise rates, and a lot of the hopium that’s been filling balloons for the last six months will rapidly dissipate back into gloomy despair.
AI companies and related stocks (NVDA) will be first in line when people realize we’re not going back to the free-money world of 2018 that fueled a number of hilarious pump-and-dumps over the last 5 years.
Other than that, OP is correct in just about everything in the article - AI isn’t “artificial” or “intelligent”, it’s just a hype word to put on top of a relatively simple ecosystem of applications that may someday prove valuable, but for today are comically bad at everything that isn’t strictly a language exercise (e.g. producing soundalike copy, translating bits of code between apps, etc.). It’s presently a tool that can be no smarter than the user whose hands it occupies, and in fact requires a significant amount of investment to actually “learn” or take direction to be good at some specific tasks. Textbook hype case as far as I can see.
Reading all the comments here tells me that we have not experienced a true crash in the market.
AI has created a lot of euphoria pumped by VC money once again which can only result in disappointment of expectations since this is the 'pump' part of the hype cycle.
Sooner or later we will see if any of these so-called AI companies are actually making any money or will survive at all.
In 2010, almost everyone was a 'tech company'. Now almost everyone is an 'AI company'. This bullshit needs to stop.
There isn't any kind of distinction - when people say AI in 2023 they mean ML, LLMs, image models etc.
The mobile hype bubble was the new portal hype bubble.
The portal hype bubble was the new dot com hype bubble.
I swear there's a pattern here... Maybe I'm not seeing it...
>AI isn’t “artificial” and it’s not “intelligent.” “Machine learning” doesn’t learn.
That is just an insane thing to write.