Investors who do nothing but follow the crowd dumped a tonne of money into cryptocurrency / blockchain startups regardless of merit. Now they are doing the same with AI. But AI actually has clear, immediate, and lasting value. Articles like this seem to be written by people who don’t understand the basic concepts about people who don’t understand the basic concepts. Both sets of people just seem to identify the trends and mistake the trend for the overall value. AI only looks like cryptocurrency / blockchain if you don’t know anything at all about either one beyond “they are trendy and attract people who like investing in trendy things”.
I think you are being harsh on the critics. A technology can have tremendous value while still attracting grifters and being an early investors' graveyard.
The best example is the dot-com bubble of 1996-2001. The web had truly world-changing potential, but grifting was rife, and practically all early investors lost their money unless they were among the lucky ones who invested in Amazon, Yahoo, Google, or eBay out of the 10,000 companies being shilled back then (and even then you had to be patient enough to hold your stock for decades to make the really big gains; see the history of Amazon and Apple stock during this era).
There are many such examples in the history of technology concerning grifters and the fate of early investors; see also automobiles and the printing press (even with the printing press, Gutenberg lost a lot of money and faced many grifters/copycats).
Couldn’t agree more. Crypto was a monetizable solution in search of a problem.
AI solves real problems, and has been doing so for many years. Generative AI is just the latest and most accessible instantiation of it.
My first company was a dialup ISP in the 90s. I look at GPT4 and I see the 56K modem of AI. There is so much upside to be discovered, we have barely started.
None of that "knowing why others are stupid" and "AI is much, much more useful" matters if the topic is trampled and distorted by clueless piles of wealth into momentary apparent fame instead of being developed sensibly. Crypto and blockchain could have had much more merit if their reputation had not been destroyed by a careless, forced-through mania. Society will mess AI up into something of a freak, maybe even more quickly given its powerful and hence scary nature, if maniacs push it into something that causes serious damage or catastrophe, triggering hard regulations and bans; at least for our lifetime, that would slow its much-needed introduction into everyday life to a timespan of several generations. AI could, and probably will, be castrated rapidly by the frenzy of huge piles of clueless money.
The distance between "sort of works" and "works" for AI is considerable. Not infinite.
Look at self-driving cars. The first tries were in the late 1950s, with GM's Firebird 3, guided by wires in the road. By the 1980s, the first self-driving vehicles were moving around CMU, very slowly. By the early 1990s, experimental highway driving had been demoed. In the early 2000s, we had the DARPA Grand Challenge, which had off-road driving on empty roads working. Then there were a few experimental self-driving cars that sort of worked on general roads. Many startups, most went bust.
Today you can take a driverless cab in San Francisco. 64 years since GM's Firebird III.
(Which still exists, in driveable condition, in GM's in-house collection.)
It may take a while to get from GPT-3 to Microsoft Middle Manager 2.0. But the path is clear now.
> The distance between "sort of works" and "works" for AI is considerable. Not infinite.
> Today you can take a driverless cab in San Francisco [...]
From the outside, it sure does look like driverless is still firmly at "sort of works":
"After California regulators approved the expansion of driverless taxi services in San Francisco earlier this month, it took only a little more than 24 hours for a series of events to begin that seemed to justify the taxis’ detractors.
The day after the vote, 10 autonomous vehicles operated by Cruise, a subsidiary of General Motors, abruptly stopped functioning in the middle of a busy street in the North Beach neighborhood of San Francisco. Posts to social media showed the cars jammed up, their hazard lights flashing, blocking traffic for 15 minutes.
A few days later, another Cruise vehicle drove into a paving project in the Western Addition and got stuck in freshly poured concrete.
And then last week, a Cruise car collided with a fire truck in the city, injuring a passenger in the car.
So it was that last Friday Cruise agreed to a request from the California Department of Motor Vehicles to cut in half the number of vehicles it operated in San Francisco, even though regulatory approval for more remained in place. The company, which has had 400 driverless vehicles operating in the city, will now have no more than 50 cars running during the day and 150 at night."[0]
Does Microsoft Middle Manager 2.0 stop working in the presence of traffic cones?
More seriously, the distance between "sort of works" and "works" might not be infinite, but it most likely involves fundamentally unpredictable future developments of the current technology. There is no straight line of incremental improvements that gets us there.
It's fairly straightforward to imagine that if you have a 4.77 MHz CPU and 64 KB of RAM, you will soon have a 3 GHz CPU and 64 GB of RAM.
Bigger numbers going brrr are no guarantee of anything here, so much so that models using a fraction of GPT-4's resources are somewhat competitive.
By all means continue developing the technology, but claims that we are within arms' reach of X for disparate values of X are not exactly supported by anything.
I think we've found the common thread between AI and crypto: the dumping of externalities and increase in energy usage.
Do people really want to work for Microsoft Middle Manager? Have we not seen enough horror stories about metric-slavery in Amazon warehouses and the gig economy? It might be cheaper, but it's also worse, for a class of people who don't get any input in the decision. Similarly self-driving unleashes a new class of poorly behaved "learner" drivers on the road, who may be less aggressive but are also capable of causing problems from that very timidity and lack of general competence.
That's true for driverless cars but there's many areas where you don't need 100% to be useful. Stable diffusion and LLMs are 90% there but still very useful and cool.
Self-driving is never going to happen, and it's also sitting in some sort of informational blind spot for the people working on it; I have no idea why.
There is no such thing as "driving": there is no physical force or particle.
There is no force preventing you from driving in the opposite direction of traffic, or through glass panes.
"Driving" is entirely a social phenomenon, the confluence of societal self-impositions and engineering.
If you have a car on fire in front of you, you will need to reverse in the wrong direction of traffic.
In many countries - you have to regularly deal with drivers going full tilt, on the wrong side of the road.
Or you have to deal with theft, and people trying to rob you at every red light.
I'm underscoring that this is a social issue. You would need to create models for each country and region to truly improve self-driving.
Self driving assumes a far narrower problem space than reality gives a fig for.
Self driving theory currently works in the same way any theory that assumes spherical cows works.
>It may take a while to get from GPT-3 to Microsoft Middle Manager 2.0. But the path is clear now.
I was intrigued by the announced Business Chat feature for Microsoft Teams <https://www.reddit.com/r/singularity/comments/11swyeu/introd...>, but learned that it is just summaries of conversations. That's not quite what I'd imagined, which is something like this:
----
A: ... and that is why I think we should go with option 1.
B: No, the points you mentioned support my case for option 2.
C: Nothing you guys have said changes my mind about option 3 being best.
D: Business Chat, what do you think?
BC: Based on this discussion, and my research, option 1 seems more realistic but option 2 would be more profitable if possible. My reasons are ...
C: Business Chat and you guys all don't understand point N, which is the main reason why option 3 is best.
B: Higher profit is exactly why I think option 2 is the way to go.
A: No, our rival is going to hit the market next month. We need to get something out there ASAP. Option 1 can do this.
D: You've all given me things to think about. Thank you for coming. Business Chat, email me a summary of the meeting, and set up a followup meeting for Tuesday 3pm.
----
That is, AI used as a colleague/assistant, not necessarily subordinate but not seen as omniscient, either; another viewpoint to consider. Like you said, Middle Manager 2.0. When do you think the above will be feasible? This year? Five years?
> It may take a while to get from GPT-3 to Microsoft Middle Manager 2.0. But the path is clear now.
Is there a way you could make the path clear to me too? I just don't get it. Sure, I can imagine in principle it's possible. But I don't see how you can already see it's certain. Is there something you can share that will allow me to see why it's certain?
Transformative tech: cars. If you invested in the early movers you very likely lost your money.
Transformative tech: personal computers. If you invested in early movers you missed microsoft and you very likely lost your money.
Transformative tech: Internet/web, dot-com boom. Google, Facebook, and Twitter were not investable, or did not even exist, up to when it popped.
Whether you believe AI is going to be massively transformative to the modern economy on a scale with cars or personal computing is actually not sufficient to start investing in the 'sector' (for want of a better term).
So this is useful as a reverse indicator. Anyone investing buckets in AI is worth betting against in general. Everyone? Maybe not everyone. Maybe.
Likewise, even if the blockchain sector is dead (is it? No clue), that does not necessarily make the technology dead (even if you would like it to be). The web came back, reinvented as a massive distributed surveillance machine. Who would have predicted that in the crash? Who would have wanted it? Well, we got it anyway.
For cars, the inventor was (Mercedes-)Benz, and the first to mass-produce them was Ford. Both seem like good investments.
PC: Who is this early mover you talk about? I guess not Apple in your mind, since that would've been a fantastic investment. Hard to think of any better.
Plus, there are hundreds if not thousands of companies along the way you could've invested in that were swallowed up by the bigger fish. That's still a good investment since you either get a good exit or shares in the bigger fish.
You can always find some earlier example of a tech that lost. But if you invested in the first company to mass produce a car (Ford) you'd have done well. If you invested in the first of its kind smartphone (Apple) even better. Likewise there's been a lot of early failed AI startups in the past 20 years that you could point to, and now OpenAI is hitting it out of the park, expecting a billion in revenue next year on a pure AI consumer product. I think that shows this time is different.
Isn't a lot of the cash coming from funds? Funds that diversify but still can pump billions into this sector without breaking a sweat. Individual investors don't have a lot of opportunity to buy in anyway, and they usually know the risks. But funds can blindly pump memes alongside modest strategies and still return on investment overall; and they aren't left standing when the music stops.
I guess the time to invest in dot coms would have been around 2002. After the hype wave collapsed and you can see which businesses are hanging in and providing value.
What I find interesting about this wave of tech is not just how useful, but how accessible it is. Image generators like SD pretty much work out of the box on a lot of consumer-grade hardware, LLMs might be a bit more of a stretch but still doable (haven't tried that yet, though). It's quite unusual, compare that to how inaccessible the first computers were.
Not sure I would love this fact if I was an AI investor, but for the rest of us, it's just a blessing. Let's hope it stays that way by supporting researchers/companies who do share their weights, and being mindful of the CEOs telling lawmakers that only they should be allowed to do matrix multiplications (not saying we don't need any regulation though). Those tools undeniably do create value, maybe not for every investor, but for countless users. And the investors should understand the risks, my guess is that if you'd invested in "cars" around 1900, chances that you'd have lost your money would have been quite high, even though your idea might have been right in principle.
I find them superfluous and a symptom of the human tendency to never sit down and reflect about stuff.
Grifts prey on the ignorant, duping them through an informational gap.
As such, grifts flourish in new and uncertain environments. Environments where expert opinion is still forming and is badly communicated, and scammers can sound knowledgeable to the average person without actually knowing anything.
Now that experts have pushed against crypto and the whole field has become clearer in what can be done with it and what can't, it's obvious that grifters are moving to the new uncharted hotness: "AI". Writing articles like this, always talking about the specific events and never the global trend, does not help the wheel of grifting stop spinning.
I was a complete believer in AI when ChatGPT first dropped. The tech seemed revolutionary. GPT-4 even helped me write a ton of code for an app.
But if you ask me now, I feel that the AI revolution is a little overstated. The tech, while incredibly good, is not really ready for large scale adoption. Individuals and hobbyists might benefit from it, but for large enterprises and serious applications, it's too inconsistent and unreliable.
All I can see it accomplishing is pushing out the lowest end of the content/code creation totem pole. That's nice, but it's not nearly the "intelligence revolution" the promoters have been promising.
Agree, and would add that I’m sensing the bigger value will be in analysis of huge, complex, noisy data sets. Most people don’t have those, so it’s not a widely accessible benefit.
Sorting through legal discovery documents is a good example. A team of smart, trained, observant JDs will do a pretty good job given a few weeks to plough through it all; a reasonably tuned AI should be able to produce similar value (even if not identical results) for a fraction of the cost and do it overnight.
IMO AI is more underrated than overhyped. The scale of value that AI can bring may be larger than what the internet brought. But product design and engineering haven't caught up with the science yet. I think we are looking at AI too narrowly. LLMs are cool, but investors should look beyond them; ChatGPT wrappers aren't the next big thing. The fact that LLMs and image generation models work as well as they do now should give investors a signal that the science of AI is approaching a tipping point where it's finally good enough to be incorporated into products. I see potential in 10 years' time for a new FAANG: 5 trillion dollar companies with heavy reliance on AI that bring automation to various aspects of our lives.
I agree. LLMs have a ton of unrealised applications in business. Imagine training one on your company wiki and chat history.
Barely any companies have done that yet because of legal and security concerns and because it isn't easy to do yet, but that will change.
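In practice, the "train it on your wiki" idea is usually done with retrieval rather than training: embed the wiki pages, fetch the chunks most relevant to a question, and prepend them to the LLM prompt. Here's a toy sketch of that pattern; the bag-of-words "embedding" is a made-up stand-in (a real system would call an embedding model and use a vector store such as Chroma), but the retrieval logic is the same shape:

```python
import math

def embed(text: str) -> dict:
    """Stand-in embedding: a word-count vector. A real system would
    call an embedding model here instead."""
    vec = {}
    for word in text.lower().split():
        vec[word] = vec.get(word, 0) + 1
    return vec

def cosine(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse vectors."""
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list, k: int = 1) -> list:
    """Return the k document chunks most similar to the query."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

# Pretend these are chunks of a company wiki.
docs = [
    "Expense reports are due on the first Friday of each month.",
    "The VPN config lives in the IT wiki under Remote Access.",
]

context = retrieve("how do I set up the vpn", docs)
# The retrieved chunk gets prepended to the LLM prompt:
prompt = "Answer using this context:\n" + "\n".join(context) + \
         "\n\nQ: how do I set up the vpn"
print(context[0])  # the VPN chunk ranks highest
```

The point is that none of this requires training a model on company data; the chat history and wiki stay in a store you control, which is also why the legal and security concerns are more tractable than they first appear.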
It's not going to be long before someone makes an end-to-end speech-to-speech model: a single model that incorporates speech recognition, an LLM, and speech synthesis. In fact, I'm really surprised it hasn't happened already, because it's such an obvious thing to try. That's going to blow people's minds.
Yes, that's probably true. The article draws a parallel between the current AI hype and crypto, but there's a huge difference: crypto didn't bring any benefit to anyone and didn't do anything one couldn't already do before, with orders of magnitude better efficiency and security.
The current situation is more like the dot-com boom of the late 1990s; Webvan, pets.com, and AltaVista were ill-executed, but they weren't stupid ideas. It was then that Amazon and Google were founded.
I'm not a skeptic per se (I'm impressed by generative AI and LLMs), but I can't help noticing that the coding AI of these last months hasn't led to an explosion of software. AFAIK, I haven't seen any non-trivial software that has been programmed either completely by AI or with the majority of the work done by AI.
Firmly in the skeptic camp, here to say people are love-bombing AI to make money, not because LLMs are a gateway to AGI, and AGI doesn't have to appear for people to make money.
Belief in AGI is useful to sell stock and IPO. Few serious researchers in academia see AGI in what's going on, or even roadsigns. Look at the language Hinton uses more closely.
He compared the latest AI advances to the invention of the wheel or the discovery of fire. That sounds pretty cut and dried to me. This isn't a nutjob; this is basically the father of AI saying even he's afraid of what might be wrought.
The skeptics will be wrong this time. LLMs will be the biggest tech revolution since smartphones. It's something that literally every single person can find a use for.
I don't think LLMs alone will be; LLMs are entry-level. The real revolution is embodied multimodal systems, even if the body is just virtual or a simulation: LLMs, Stable Diffusion, text-to-speech and vice versa, image recognition, tactile understanding, and other "senses" we could imbue them with. LLMs are definitely amazing, but they're only a piece of the real revolution coming.
Regardless of whether it's the right idea right now, I am convinced that AI is at least not the complete vapourware that blockchain was. That really was some useless hype. AI has something to show for itself, and real applications, from self-driving cars to other smart systems. Classifiers are everywhere.
I interact with ChatGPT regularly. AI is in my smartphone classifying my photos. I don't know when I have ever interacted with a blockchain.
It's all a grift. The whole economy is grifters grifting grifters, a game of musical chairs that's going on ever since the stock market was first invented. Probably even before.
I tried to get a really tiny crack in my windshield repaired yesterday. Something coincidentally went wrong during repair, so I have an appointment to get the entire windshield replaced next week!
I am not going to shed a tear for those VCs with dumb money who back the truck up and dump their cash into a "startup" that pivoted from blockchain to AI last week.
The "AI" bubble has some similarities but also important differences from the "crypto/blockchain" bubble and the brief "metaverse" mania.
The similarities are sort of obvious. The real economy is in a precarious state worldwide. Geopolitical strife, political polarization, exhausted and confused households, still reeling from the pandemic. All in a background of a deteriorating environment that either burns to ashes or is swamped by plastic. This is our real condition and there is no turnaround in sight.
Yet the "optimism" and valuations must keep up or the system will collapse for good. The reliable pony delivering tricks is the tech sector. Being unregulated/oligopolistic with massive rent extraction, operating in an entirely virtual realm and by now controlling all digital communication channels it has massive resources and opportunity to pump up every nugget into a digital gold rush and it does so shamelessly and with predictable regularity.
So what is different between these serial bubbles? Crypto and the metaverse require massive social and/or behavioral change. If you look at the problems that blockchain was supposed to solve, all of them would be solvable with lower tech if people actually had an interest in solving them. There are far easier ways to make the monetary and financial systems more fair and honest than inventing a poor simulacrum. The metaverse requires a collective migration into a fake reality. People are increasingly escapist and absurdist, but strapping a heavy idiot-signaling device on your head is a virtual bridge too far for most.
"AI" is a better fit to the status quo. Grabbing any and all accessible data and algorithmically manipulating people is already enshrined as acceptable practice ("people so much enjoy the convenience"). So IMHO this bubble has some legs, which means the fall will be more painful when it happens. What will burst the bubble? Regulation on data collection and possible applications is one possible balloon prick. The other is commoditization.
Commoditization is an interesting one. If there is any silver lining in this dismal doom tech era we live through it is the fact that major information processing and communication capabilities are being built. It is conceivable that at some point these will be deployed in very different ways and with much bigger positive impact.
There are a few thousand guys with a couple to a couple hundred billion each. There are millions of people desperate to own a house and afford life. The rich guys want to be richer. They're hiring some of the poor suckers to check how others got richer in the past. The answer is tech!
New tech is created every couple of years. People hype it up as much as they can. The rich guys give a fraction of their billions each to finance whatever seems remotely reasonable while squinting in that space, just for a chance to hit the jackpot and get more billions, maybe even a trillion, and their face onto Forbes and on TV.
The poor suckers gotta scramble. They invent all kinds of bullshit, and they sell it to the other poor suckers who advise the rich guys. Teams of specialists are created. Whole organizations. There's HR, somebody to organize team building events. Every layer spawns another layer. Lawyers, somebody to give sexual harassment trainings, someone to run the cafeteria.
Buildings are rented from the rich guys via management companies run by the poor suckers. Every day a handful of people make it and can even buy a house! Codes of conduct are written, company values and mission statements. People pivot, jump from place to place, try to sign the best contract. Every once in a while an exec jumps ship with several hundred mil in the bank.
This isn't exactly untrue, although the numbers are off, I think, but no one seems to have invented a better alternative. If you have money because you stole it from your citizens, or because you sent your serfs to die in a war, that's infinitely worse than having it because you did something someone else thought was valuable and paid you for.
You need to be really blinded by cynicism not to see how much better modern life is for average people, or how many people benefit from advancements and scale that wouldn't be possible without billions of dollars worth of investment.
How do you define "poor"? Net worth of less than $10M? Income less than $1M?
Unless you set very strict limits for "poor" like that, the people the ultra-rich hire tend to be rather well off, or at least comfortable, themselves (by that I mean a net worth of >$1M or an income of >$100k).
Actual poor people don't build state-of-the-art tech. At best, they work as cleaning staff or in the cafeteria of those companies, or maybe in an assembly plant in a foreign country. (And even those may feel wealthy when compared to their friends and family.)
Those who resent the ultra rich the most tend to be those who are themselves quite comfortable, often affluent even, but really hate it when other people are even more successful than themselves.
They often pretend to care for "the poor", but really all they want is to pull down anyone more successful than themselves.
You started a hell of a flamewar with this. Could you please not do that on HN, regardless of how bad things are or you feel they are? We're trying for something different here, such as not burning to a crisp.
The US and its mindless dynamics definitely could use a kick in the teeth to speed up the change. But it's happening, with interest rates rising, dedollarization, etc. For the rich folk who think nothing is changing and their behavior doesn't need to change, their story won't end well.
They don't have as much control over anything as they think. In fact, I think rich people are going to take the biggest hit mentally, financially, and socially, because it's very precarious having wealth without control over what happens tomorrow morning.
Money is not wealth. Once you have more of it than is necessary for basic needs, you have to figure out a way to get rid of it and exchange it for real wealth. Investing it in "tech" is an easy and dumb solution to the problem.
Actually compared to the economies of past societies, the modern economy in the West is amazing. If you work hard, have a little bit above the average intelligence and potentially a bit of luck you can make a lot of money. If not I guess you can make excuses.
The simplest way to organize a group of people, animals, organisms is to make one of them leader and have the rest follow him. This is the way our ancestors behaved for hundreds of millions of years and it's hardwired into our brains.
AI investment is actually down recently, looks like the hype is wearing off since most of the companies funded were just wrapping OpenAI APIs. I will copy paste a post I submitted before regarding a similar issue.
6 months ago it looked like AI / LLMs were going to bring a much needed revival to the venture startup ecosystem after a few tough years.
With companies like Jasper starting to slow down, it’s looking like this may not be the case.
Right now there are 2 clear winners, a handful of losers, and a small group of moonshots that seem promising.
Let’s start with the losers.
Companies like Jasper and the VCs that back them are the biggest losers right now. Jasper raised >$100M at a 10-figure valuation for what is essentially a generic, thin wrapper around OpenAI. Their UX and brand are good, but not great, and competition from companies building differentiated products specifically for high-value niches are making it very hard to grow with such a generic product. I’m not sure how this pans out but VC’s will likely lose their money.
The other category of losers are the VC-backed teams building at the application layer that raised $250K-25M in Dec - March on the back of the chatbot craze with the expectation that they would be able to sell to later-stage and enterprise companies. These startups typically have products that are more focused than something very generic like Jasper, but still don't have a real technology moat; the products are easy to copy.
Executives at enterprise companies are excited about AI, and have been vocal about this from the beginning. This led a lot of founders and VC's to believe these companies would make good first customers. What the startups building for these companies failed to realize is just how aligned and savvy executives and the engineers they manage would be at quickly getting AI into production using open-source tools. An engineering leader would rather spin up their own @LangChainAI and @trychroma infrastructure for free and build tech themselves than buy something from a new, unproven startup (and maybe pick up a promotion along the way).
In short, large companies are opting to write their own AI success stories rather than being a part of the growth metrics a new AI startup needs to raise their next round.
(This is part of an ongoing shift in the way technology is adopted; I'll discuss this in a post next week.)
This brings us to our first group of winners — established companies and market incumbents. Most of them had little trouble adding AI into their products or hacking together some sort of "chat-your-docs" application internally for employee use. This came as a surprise to me. Most of these companies seemed to be asleep at the wheel for years. They somehow woke up and have been able to successfully navigate the LLM craze with ample dexterity.
There are two causes for this:
1. Getting AI right is a life or death proposition for many of these companies and their executives; failure here would mean a slow death over the next several years. They can't risk putting their future in the hands of a new startup that could fail and would rather lead projects internally to make absolutely sure things go as intended.
2. There is a certain amount of kick-ass wafting through halls of the C-Suite right now. Ambitious projects are being green-lit and supported in ways they weren't a few years ago. I think we owe this in part to @elonmusk reminding us of what is possible when a small group of smart people are highly motivated to get things done. Reduce red-tape, increase personal responsibility, and watch the magic happen.
Our second group of winners lives on the opposite side of this spectrum: indie devs and solopreneurs. These small, often one-man outfits do not raise outside capital or build big teams. Their advantage is their small size and ability to move very quickly with low overhead. They build niche products for niche markets, which they often dominate. The goal is to build a SaaS product (or multiple) that generates ~$10k/mo in relatively passive income. This is sometimes called "micro-SaaS."
These are the @levelsio's and @dannypostmaa's of the world. They are part software devs, part content marketers, and full-time modern internet businessmen. They answer to no one except the markets and their own intuition.
This is the biggest group of winners right now. Unconstrained by the need for a $1B+ exit or the goal of $100MM ARR, they build and launch products in rapid-fire fashion, iterating until PMF and cash flow, then moving on to the next. They ruthlessly shut down products that are not performing.
LLMs and text-to-image models a la Stable Diffusion have been a boon for these entrepreneurs, and I personally know of dozens of successful (keeping in mind their definition of successful) apps that were started less than 6 months ago. The lifestyle and freedom these endeavors afford to those that perform well is also quite enticing.
I think we will continue to see the number of successful micro-saas AI apps grow in the next 12 months. This could possibly become one of the biggest cohorts creating real value with this technology.
The last group I want to talk about are the AI Moonshots — companies that are fundamentally re-imagining an entire industry from the ground up. Generally, these companies are VC-backed and building products that have the potential to redefine how a small group of highly-skilled humans interact with and are assisted by technology. It's too early to tell if they'll be successful or not; early prototypes have been compelling. This is certainly the most exciting segment to watch.
A few companies I would put in this group are:
1. https://cursor.so - an AI-first code editor that could very well change how software is written.
This is an incomplete list, but overall I think the Moonshot category needs to grow massively if we're going to see the AI-powered future we've all been hoping for.
If you're a founder in the $250K-25M raised category and are having a hard time finding PMF for your chatbot or LLMOps company, it may be time to consider pivoting to something more ambitious.
Let's recap:
1. VC-backed companies are having a hard time. The more money a company raised, the more pain they're feeling.
2. Incumbents and market leaders are quickly becoming adept at deploying cutting-edge AI using internal teams and open-source, off-the-shelf technology, cutting out what seemed to be good opportunities for VC-backed startups.
3. Indie devs are building small, cash-flowing businesses by quickly shipping niche AI-powered products in niche markets.
4. A small number of promising Moonshot companies with unproven technology hold the most potential for VC-sized returns.
It's still early. This landscape will continue to change as new foundational models are released and toolchains improve. I'm sure you can find counter examples to everything I've written about here. Put them in the comments for others to see.
And just to be upfront about this, I fall squarely into the "raised $250K-25M without PMF" category.
I'd add that actually using LLMs to add surprisingly powerful or complex features is extremely easy as a dev. It's turned things that would have needed large ML investment and expertise into a few REST API calls. The other vital thing, imo, is the pay-per-token pricing with no minimum, and a simple UI for prototyping: you can build out a demo paying personally and then move to a company account.
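To make the "few REST API calls" point concrete, here is a minimal sketch using only Python's standard library. It is modeled on an OpenAI-style chat completions endpoint; the model name, system prompt, and environment variable are illustrative assumptions, not part of the original comment.

```python
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"  # OpenAI-style endpoint

def build_payload(text: str) -> dict:
    """Build a chat-completions request body; kept pure so it is easy to test."""
    return {
        "model": "gpt-4",  # model name is illustrative
        "messages": [
            {"role": "system",
             "content": "Summarize the user's text in one sentence."},
            {"role": "user", "content": text},
        ],
    }

def summarize(text: str) -> str:
    """One REST call replaces what used to need an in-house ML team."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(text)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

The same pay-per-token account works for a personal prototype and, later, the company deployment; only the API key changes.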
Given the absurd pace of open source, I'm not sure the money is in "AI" itself. Maybe in certain applications, but if I can run something 95% as good as GPT-4 on my home desktop in a year or so, then I'm not going to be paying for any "AI" solutions.
It is a grift regardless of usefulness. "But it is useful" is hardly a justification for destroying the planet [0] [1] [2] when no viable, efficient methods exist today for training, fine-tuning, and running every inference on tons of data centers.
JimDabell|2 years ago
ak_111|2 years ago
The best example is the dot-com bubble of 1996-2001. The web had truly world-changing potential, but grifting was rife, and practically all early investors lost their money unless they were among the lucky ones who picked Amazon, Yahoo, Google, or eBay out of the 10,000 companies being shilled back then (and even then you had to be very patient and hold your stock for decades to make the really big gains; see the history of Amazon and Apple stock during this era).
There are many such examples in the history of technology concerning grifters and the fate of early investors; see also automobiles and the printing press (even with the printing press, Gutenberg lost a lot of money and faced many grifters and copycats).
doctor_eval|2 years ago
AI solves real problems, and has been doing so for many years. Generative AI is just the latest and most accessible instantiation of it.
My first company was a dialup ISP in the 90s. I look at GPT4 and I see the 56K modem of AI. There is so much upside to be discovered, we have barely started.
mihaaly|2 years ago
Animats|2 years ago
Look at self-driving cars. The first tries were in the late 1950s, with GM's Firebird III, guided by wires in the road. By the 1980s, the first self-driving vehicles were moving around CMU, very slowly. By the early 1990s, experimental highway driving had been demoed. In the early 2000s, we had the DARPA Grand Challenge, which had off-road driving on empty roads working. Then there were a few experimental self-driving cars that sort of worked on general roads. Many startups, most went bust.
Today you can take a driverless cab in San Francisco. 64 years since GM's Firebird III. (Which still exists, in driveable condition, in GM's in-house collection.)
It may take a while to get from GPT-3 to Microsoft Middle Manager 2.0. But the path is clear now.
logifail|2 years ago
From the outside, it sure does look like driverless is still firmly at "sort of works":
"After California regulators approved the expansion of driverless taxi services in San Francisco earlier this month, it took only a little more than 24 hours for a series of events to begin that seemed to justify the taxis’ detractors.
The day after the vote, 10 autonomous vehicles operated by Cruise, a subsidiary of General Motors, abruptly stopped functioning in the middle of a busy street in the North Beach neighborhood of San Francisco. Posts to social media showed the cars jammed up, their hazard lights flashing, blocking traffic for 15 minutes.
A few days later, another Cruise vehicle drove into a paving project in the Western Addition and got stuck in freshly poured concrete.
And then last week, a Cruise car collided with a fire truck in the city, injuring a passenger in the car.
So it was that last Friday Cruise agreed to a request from the California Department of Motor Vehicles to cut in half the number of vehicles it operated in San Francisco, even though regulatory approval for more remained in place. The company, which has had 400 driverless vehicles operating in the city, will now have no more than 50 cars running during the day and 150 at night."[0]
[0] https://www.nytimes.com/2023/08/22/us/california-autonomous-...
qsort|2 years ago
More seriously, the distance between "sort of works" and "works" might not be infinite, but it most likely involves fundamentally unpredictable future developments of the current technology. There is no straight line of incremental improvements that gets us there.
It's fairly straightforward to imagine that if you have a 4.77 MHz CPU and 64 KB of RAM, you will soon have a 3 GHz CPU and 64 GB of RAM.
Bigger number going brrr is no guarantee of anything, here, so much so that models using a fraction of GPT4's resources are somewhat competitive.
By all means continue developing the technology, but claims that we are within arms' reach of X for disparate values of X are not exactly supported by anything.
pjc50|2 years ago
I think we've found the common thread between AI and crypto: the dumping of externalities and increase in energy usage.
Do people really want to work for Microsoft Middle Manager? Have we not seen enough horror stories about metric-slavery in Amazon warehouses and the gig economy? It might be cheaper, but it's also worse, for a class of people who don't get any input in the decision. Similarly self-driving unleashes a new class of poorly behaved "learner" drivers on the road, who may be less aggressive but are also capable of causing problems from that very timidity and lack of general competence.
naillo|2 years ago
intended|2 years ago
There is no such thing as "driving": there is no physical force or particle behind it. There is no force preventing you from driving in the opposite direction of traffic, or through glass panes.
"Driving" is entirely a social phenomenon, the confluence of societal self-impositions and engineering.
If you have a car on fire in front of you, you will need to reverse in the wrong direction of traffic.
In many countries - you have to regularly deal with drivers going full tilt, on the wrong side of the road.
Or You have to deal with theft, and people trying to rob you at every red light.
I'm underscoring that this is a social issue. You would need to create models for each country and region to truly improve self-driving.
Self driving assumes a far narrower problem space than reality gives a fig for.
Self driving theory currently works in the same way any theory that assumes spherical cows works.
drawfloat|2 years ago
TMWNN|2 years ago
I was intrigued by the announced Business Chat feature for Microsoft Teams <https://www.reddit.com/r/singularity/comments/11swyeu/introd...>, but learned that it is just summaries of conversations. That's not quite what I'd imagined, which is something like this:
----
A: ... and that is why I think we should go with option 1.
B: No, the points you mentioned support my case for option 2.
C: Nothing you guys have said changes my mind about option 3 being best.
D: Business Chat, what do you think?
BC: Based on this discussion, and my research, option 1 seems more realistic but option 2 would be more profitable if possible. My reasons are ...
C: Business Chat and you guys all don't understand point N, which is the main reason why option 3 is best.
B: Higher profit is exactly why I think option 2 is the way to go.
A: No, our rival is going to hit the market next month. We need to get something out there ASAP. Option 1 can do this.
D: You've all given me things to think about. Thank you for coming. Business Chat, email me a summary of the meeting, and set up a followup meeting for Tuesday 3pm.
----
That is, AI used as a colleague/assistant, not necessarily subordinate but not seen as omniscient, either; another viewpoint to consider. Like you said, Middle Manager 2.0. When do you think the above will be feasible? This year? Five years?
tome|2 years ago
Is there a way you could make the path clear to me too? I just don't get it. Sure, I can imagine in principle it's possible. But I don't see how you can already see it's certain. Is there something you can share that will allow me to see why it's certain?
harry8|2 years ago
Transformative tech: personal computers. If you invested in early movers, you missed Microsoft, and you very likely lost your money.
Transformative tech: Internet/web, dot-com boom. Google, Facebook, and Twitter were not investable, or did not even exist, even up to when it popped.
Whether you believe AI is going to be massively transformative to the modern economy on a scale with cars or personal computing is actually not sufficient to start investing in the 'sector' (for want of a better term).
So this is useful as a reverse indicator. Anyone investing buckets in AI is worth betting against in general. Everyone? Maybe not everyone. Maybe.
Likewise even if the blockchain sector is dead. (Is it? No clue) This does not necessarily make the technology dead. (Even if you would like it to be). The web came back and reinvented as a massive distributed surveillance machine. Who would have predicted that in the crash? Who would have wanted it? Well we got it anyway.
apexalpha|2 years ago
For cars, the inventor was (Mercedes-)Benz; the first to mass-produce them was Ford. Both seem like good investments.
PC: Who is this early mover you talk about? I guess not Apple in your mind, since that would've been a fantastic investment. Hard to think of any better.
Plus, there are hundreds if not thousands of companies along the way you could've invested in that were swallowed up by the bigger fish. That's still a good investment since you either get a good exit or shares in the bigger fish.
martindbp|2 years ago
veltas|2 years ago
tim333|2 years ago
jtode|2 years ago
c7b|2 years ago
Not sure I would love this fact if I was an AI investor, but for the rest of us, it's just a blessing. Let's hope it stays that way by supporting researchers/companies who do share their weights, and being mindful of the CEOs telling lawmakers that only they should be allowed to do matrix multiplications (not saying we don't need any regulation though). Those tools undeniably do create value, maybe not for every investor, but for countless users. And the investors should understand the risks, my guess is that if you'd invested in "cars" around 1900, chances that you'd have lost your money would have been quite high, even though your idea might have been right in principle.
Almondsetat|2 years ago
I find them superfluous and a symptom of the human tendency to never sit down and reflect about stuff.
Grifts prey on the ignorant, duping them through an informational gap.
As such, grifts flourish in new and uncertain environments. Environments where expert opinion is still forming and is badly communicated, and scammers can sound knowledgeable to the average person without actually knowing anything.
Now that experts have pushed against crypto and the whole field has become clearer in what can be done with it and what can't, it's obvious that grifters are moving to the new uncharted hotness: "AI". Writing articles like this, always talking about the specific events and never the global trend, does not help the wheel of grifting stop spinning.
WaxProlix|2 years ago
spaceman_2020|2 years ago
But if you ask me now, I feel that the AI revolution is a little overstated. The tech, while incredibly good, is not really ready for large scale adoption. Individuals and hobbyists might benefit from it, but for large enterprises and serious applications, it's too inconsistent and unreliable.
All I can see it accomplishing is pushing out the lowest end of the content/code creation totem pole. That's nice, but it's not nearly the "intelligence revolution" the promoters have been promising.
Kaibeezy|2 years ago
Sorting through legal discovery documents is a good example. A team of smart, trained, observant JDs will do a pretty good job given a few weeks to plough through it all; a reasonably tuned AI should be able produce similar value (even if not identical results) for a fraction of the cost and do it overnight.
padolsey|2 years ago
floucky|2 years ago
codelord|2 years ago
IshKebab|2 years ago
Barely any companies have done that so far, because of legal and security concerns and because it isn't yet easy to do, but that will change.
It's not going to be long before someone makes an end-to-end speech-to-speech model: a single model that incorporates speech recognition, an LLM, and speech synthesis. In fact, I'm really surprised it hasn't happened already, because it's such an obvious thing to try. That's going to blow people's minds.
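For contrast with the end-to-end model the comment anticipates, today's systems are usually a cascade of three separate models. A toy sketch of that plumbing is below; the three stage functions are deliberately trivial stand-ins (real systems would use an ASR model, an LLM, and a TTS model), so the code only demonstrates the shape of the pipeline, not real speech processing.

```python
from typing import Callable

def make_pipeline(
    asr: Callable[[bytes], str],
    llm: Callable[[str], str],
    tts: Callable[[str], bytes],
) -> Callable[[bytes], bytes]:
    """Chain speech recognition -> LLM -> speech synthesis into one callable.

    An end-to-end speech-to-speech model would collapse these three
    stages into a single network, avoiding the lossy text bottleneck
    between them (intonation, emotion, speaker identity all survive).
    """
    def speak(audio_in: bytes) -> bytes:
        text_in = asr(audio_in)    # audio -> text
        text_out = llm(text_in)    # text -> text
        return tts(text_out)       # text -> audio
    return speak

# Toy stand-ins so the plumbing can be exercised without any models.
pipeline = make_pipeline(
    asr=lambda audio: audio.decode("utf-8"),
    llm=lambda text: text.upper(),
    tts=lambda text: text.encode("utf-8"),
)
```

The cascade's weakness, and the reason an end-to-end model is the obvious next step, is that everything not captured in the intermediate text is discarded at each hand-off.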
bambax|2 years ago
The current situation is more like the early dot-com boom of the late 1990s; Webvan, pets.com, and Altavista were ill-executed, but they weren't stupid ideas. It was then that Amazon and Google were founded.
poulpy123|2 years ago
ggm|2 years ago
Belief in AGI is useful to sell stock and IPO. Few serious researchers in academia see AGI in what's going on, or even road signs pointing toward it. Look at the language Hinton uses more closely.
gremlinsinc|2 years ago
ramesh31|2 years ago
gremlinsinc|2 years ago
LeanderK|2 years ago
I interact with ChatGPT regularly. It's in my smartphone, classifying my photos. I don't know when I have ever interacted with a blockchain.
mihaaly|2 years ago
Probably too much money is with clueless but greedy and lazy (to discover) people?
flarecoder|2 years ago
dvh|2 years ago
thorin|2 years ago
birracerveza|2 years ago
mawadev|2 years ago
renegade-otter|2 years ago
nologic01|2 years ago
The similarities are sort of obvious. The real economy is in a precarious state worldwide. Geopolitical strife, political polarization, exhausted and confused households, still reeling from the pandemic. All in a background of a deteriorating environment that either burns to ashes or is swamped by plastic. This is our real condition and there is no turnaround in sight.
Yet the "optimism" and valuations must keep up or the system will collapse for good. The reliable pony delivering tricks is the tech sector. Being unregulated/oligopolistic with massive rent extraction, operating in an entirely virtual realm and by now controlling all digital communication channels it has massive resources and opportunity to pump up every nugget into a digital gold rush and it does so shamelessly and with predictable regularity.
So what is different between these serial bubbles? Crypto and the metaverse require massive social and/or behavioral change. If you look at the problems that blockchain was supposed to solve, all of them would be solvable with lower tech if people actually had an interest in solving them. There are far easier ways to make the monetary and financial systems more fair and honest than inventing a poor simulacrum. The metaverse requires a collective migration into a fake reality. People are increasingly escapist and absurdist, but strapping a heavy idiot-signaling device on your head is a virtual bridge too far for most.
"AI" is a better fit to the status quo. Grabbing any and all accessible data and algorithmic manipulation of people is already enshrined as acceptable practice ("people so much enjoy the convenience"). So imho this bubble has some legs. Which means the fall will be more painful when it happens. What will burst the bubble? Regulation on data collection and possible applications is one possible balloon prick. The other one is commoditization.
Commoditization is an interesting one. If there is any silver lining in this dismal doom tech era we live through it is the fact that major information processing and communication capabilities are being built. It is conceivable that at some point these will be deployed in very different ways and with much bigger positive impact.
Havoc|2 years ago
shaburn|2 years ago
kubb|2 years ago
There are a bunch of thousands of guys with a couple to a couple hundred billion each. There are millions of people desperate to own a house and afford life. The rich guys want to be more rich. They're hiring some of the poor suckers to check how others got richer in the past. The answer is tech!
New tech is created every couple of years. People hype it up as much as they can. The rich guys give a fraction of their billions each to finance whatever seems remotely reasonable while squinting in that space, just for a chance to hit the jackpot and get more billions, maybe even a trillion, and their face onto Forbes and on TV.
The poor suckers gotta scramble. They invent all kinds of bullshit, and they sell it to the other poor suckers who advise the rich guys. Teams of specialists are created. Whole organizations. There's HR, somebody to organize team building events. Every layer spawns another layer. Lawyers, somebody to give sexual harassment trainings, someone to run the cafeteria.
Buildings are rented from the rich guys via management companies run by the poor suckers. Every day a handful of people make it and can even buy a house! Codes of conduct are written, company values and mission statements. People pivot, jump from place to place, try to sign the best contract. Every once in a while an exec jumps ship with several hundred mil in the bank.
It's a great life. What could be better?
robertlagrant|2 years ago
Tenoke|2 years ago
kortilla|2 years ago
Reality. Most of tech is not funded by existing billionaires.
trashtester|2 years ago
Unless you set very strict limits for "poor" like that, the people the ultra-rich hire tend to be rather well off, or at least comfortable, themselves (by that I mean a net worth of >$1M or an income of >$100k).
Actual poor people don't build state-of-the-art tech. At best, they work as cleaning staff or in the cafeteria of those companies. Or maybe in the assembly plant in a foreign country. (And even those may feel wealthy compared to their friends and family.)
Those who resent the ultra rich the most tend to be those who are themselves quite comfortable, often affluent even, but really hate it when other people are even more successful than themselves.
They often pretend to care for "the poor", but really all they want is to pull down anyone more successful than themselves.
moandcompany|2 years ago
dang|2 years ago
https://news.ycombinator.com/newsguidelines.html
gsatic|2 years ago
The US and its mindless dynamics definitely could use a kick in the teeth to speed up the change. But it's happening, with interest rates rising, dedollarization, etc. Some rich folk think nothing is changing and their behavior doesn't need to change; their story won't end well.
They don't have as much control over anything as they think. In fact, I think rich people are going to take the biggest hit mentally, financially, and socially, because it's very precarious having wealth without control over what happens tomorrow morning.
MichaelZuo|2 years ago
otabdeveloper4|2 years ago
unknown|2 years ago
[deleted]
c03|2 years ago
mihaaly|2 years ago
unknown|2 years ago
[deleted]
throwawaylinux|2 years ago
[deleted]
oneshtein|2 years ago
a) everybody is poor;
b) some are poor, some are in the middle; some are rich.
Variant (b) is bad because some are poor.
borissk|2 years ago
The simplest way to organize a group of people, animals, organisms is to make one of them leader and have the rest follow him. This is the way our ancestors behaved for hundreds of millions of years and it's hardwired into our brains.
satvikpendem|2 years ago
https://twitter.com/0xSamHogan/status/1680725207898816512
Nitter: https://nitter.net/0xSamHogan/status/1680725207898816512#m
---
6 months ago it looked like AI / LLMs were going to bring a much needed revival to the venture startup ecosystem after a few tough years.
With companies like Jasper starting to slow down, it’s looking like this may not be the case.
Right now there are 2 clear winners, a handful of losers, and a small group of moonshots that seem promising.
Let’s start with the losers.
Companies like Jasper and the VCs that back them are the biggest losers right now. Jasper raised >$100M at a 10-figure valuation for what is essentially a generic, thin wrapper around OpenAI. Their UX and brand are good, but not great, and competition from companies building differentiated products specifically for high-value niches is making it very hard to grow with such a generic product. I'm not sure how this pans out, but VCs will likely lose their money.
The other category of losers are the VC-backed teams building at the application layer that raised $250K-25M in Dec - March on the back of the chatbot craze with the expectation that they would be able to sell to later-stage and enterprise companies. These startups typically have products that are more focused than something very generic like Jasper, but still don't have a real technology moat; the products are easy to copy.
Executives at enterprise companies are excited about AI, and have been vocal about this from the beginning. This led a lot of founders and VCs to believe these companies would make good first customers. What the startups building for these companies failed to realize is just how aligned and savvy executives and the engineers they manage would be at quickly getting AI into production using open-source tools. An engineering leader would rather spin up their own @LangChainAI and @trychroma infrastructure for free and build the tech themselves than buy something from a new, unproven startup (and maybe pick up a promotion along the way).
In short, large companies are opting to write their own AI success stories rather than being a part of the growth metrics a new AI startup needs to raise their next round.
(This is part of an ongoing shift in the way technology is adopted; I'll discuss this in a post next week.)
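The "build it yourself" bar really is that low. Stripped of any framework, the retrieval step of a chat-your-docs app is little more than scoring documents against a question and stuffing the winners into the prompt. The sketch below uses a deliberately naive word-overlap scorer as a stand-in for the embedding similarity search a vector store like Chroma would provide; the function names and the k=2 default are illustrative.

```python
def score(question: str, doc: str) -> int:
    """Naive relevance: count words shared between question and document.

    Real systems use embedding similarity (e.g. via a vector store
    such as Chroma); the overall shape of the flow is the same.
    """
    q_words = set(question.lower().split())
    return len(q_words & set(doc.lower().split()))

def build_prompt(question: str, docs: list[str], k: int = 2) -> str:
    """Retrieve the k most relevant docs and prepend them as LLM context."""
    top = sorted(docs, key=lambda d: score(question, d), reverse=True)[:k]
    context = "\n".join(top)
    return f"Context:\n{context}\n\nQuestion: {question}"
```

The resulting prompt goes to whichever hosted or self-hosted LLM the team prefers; swapping the scorer for real embeddings is an afternoon of work, which is exactly why unproven startups selling this as a product struggle.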
This brings us to our first group of winners — established companies and market incumbents. Most of them had little trouble adding AI into their products or hacking together some sort of "chat-your-docs" application internally for employee use. This came as a surprise to me. Most of these companies seemed to be asleep at the wheel for years. They somehow woke up and have been able to successfully navigate the LLM craze with ample dexterity.
There are two causes for this:
1. Getting AI right is a life or death proposition for many of these companies and their executives; failure here would mean a slow death over the next several years. They can't risk putting their future in the hands of a new startup that could fail and would rather lead projects internally to make absolutely sure things go as intended.
2. There is a certain amount of kick-ass wafting through halls of the C-Suite right now. Ambitious projects are being green-lit and supported in ways they weren't a few years ago. I think we owe this in part to @elonmusk reminding us of what is possible when a small group of smart people are highly motivated to get things done. Reduce red-tape, increase personal responsibility, and watch the magic happen.
Our second group of winners lives on the opposite side of this spectrum: indie devs and solopreneurs. These small, often one-man outfits do not raise outside capital or build big teams. Their advantage is their small size and ability to move very quickly with low overhead. They build niche products for niche markets, which they often dominate. The goal is to build a SaaS product (or multiple) that generates ~$10k/mo in relatively passive income. This is sometimes called "micro-SaaS."
These are the @levelsio's and @dannypostmaa's of the world. They are part software devs, part content marketers, and full-time modern internet businessmen. They answer to no one except the markets and their own intuition.
This is the biggest group of winners right now. Unconstrained by the need for a $1B+ exit or the goal of $100MM ARR, they build and launch products in rapid-fire fashion, iterating until PMF and cashflow, and moving on to the next. They ruthlessly shut down products that are not performing.
LLMs and text-to-image models a la Stable Diffusion have been a boon for these entrepreneurs, and I personally know of dozens of successful (keeping in mind their definition of successful) apps that were started less than 6 months ago. The lifestyle and freedom these endeavors afford to those that perform well is also quite enticing.
I think we will continue to see the number of successful micro-saas AI apps grow in the next 12 months. This could possibly become one of the biggest cohorts creating real value with this technology.
The last group I want to talk about are the AI Moonshots — companies that are fundamentally re-imagining an entire industry from the ground up. Generally, these companies are VC-backed and building products that have the potential to redefine how a small group of highly-skilled humans interact with and are assisted by technology. It's too early to tell if they'll be successful or not; early prototypes have been compelling. This is certainly the most exciting segment to watch.
A few companies I would put in this group are:
1. https://cursor.so - an AI-first code editor that could very well change how software is written.
2. https://harvey.ai - AI for legal practices
3. https://runwayml.com - an AI-powered video editor
This is an incomplete list, but overall I think the Moonshot category needs to grow massively if we're going to see the AI-powered future we've all been hoping for.
If you're a founder in the $250K-25M raised category and are having a hard time finding PMF for your chatbot or LLMOps company, it may be time to consider pivoting to something more ambitious.
Let's recap:
1. VC-backed companies are having a hard time. The more money a company raised, the more pain they're feeling.
2. Incumbents and market leaders are quickly becoming adept at deploying cutting-edge AI using internal teams and open-source, off-the-shelf technology, cutting out what seemed to be good opportunities for VC-backed startups.
3. Indie devs are building small, cash-flowing businesses by quickly shipping niche AI-powered products in niche markets.
4. A small number of promising Moonshot companies with unproven technology hold the most potential for VC-sized returns.
It's still early. This landscape will continue to change as new foundational models are released and toolchains improve. I'm sure you can find counterexamples to everything I've written about here. Put them in the comments for others to see.
And just to be upfront about this, I fall squarely into the "raised $250K-25M without PMF" category.
IanCal|2 years ago
rvz|2 years ago
There are going to be many more losers than winners in this AI race to zero than people realize.
flangola7|2 years ago
unknown|2 years ago
[deleted]
adamnemecek|2 years ago
jahnu|2 years ago
ChatGTP|2 years ago
flangola7|2 years ago
rvz|2 years ago
All for so-called companies claiming to be 'AI companies' when they cannot even read or implement a technical paper and are just wrapping someone else's API, yet immediately they are 'AI companies'. When it goes down, they start crying about it 'not working'.
That is a confidence trick, which is the definition of a grift, and most people replying here with excuses of "But it is useful" are likely to end up underwater on their investments in inflated ChatGPT-wrapper companies.
[0] https://gizmodo.com/chatgpt-ai-water-185000-gallons-training...
[1] https://www.independent.co.uk/tech/chatgpt-data-centre-water...
[2] https://www.theguardian.com/technology/2023/jun/08/artificia...