Companies keep going about this the wrong way. Instead of saying "We have AI, let's find products we can make out of AI!" they should be saying, "What products do people want? Let's use whatever tools we have (including maybe AI) to make them."
The idea that a company is an AI company should be as ridiculous as a company being a Python company. "We are Python-first, have Python experts, and all of our products are made with Python. Our customers want their apps to have Python in them. We just have to 'productize Python' and find the right killer app for Python and we'll be successful!" Going at it from the wrong direction. Replace Python in that quote with AI, and you probably have something a real company has said in 2024.
It's the same as all the "we are a blockchain company" startups that popped up looking for a problem to solve with their tech, rather than the right way round.
However, a lot of those got a bunch of investment or made some decent money in the short term. Very few are still around. We will see the same pattern here.
It was likely the same back when the steam engine was invented. Everyone who could start a steam engine company started a steam engine company, because learning how to be a steam engine company was difficult, new, and unique. It took a while to find all the products incorporating that new tech that could be sold to people.
I don't entirely disagree with you, but "what products do people want" is overly conservative. Pre-ChatGPT, very few people wanted a (more or less) general purpose chatbot.
A ton of the industrial revolution was actually motivated by that input-driven thinking. You don't decide you want an Eiffel Tower from first principles, you consider "what is the coolest thing I can make out of wrought iron".
I only partially agree with this. Having spent a lot of time in the “find a problem then the solution” way of working, I’ve found the solutions are often too tame and lack innovation.
When you're truly bringing novel value to things, sometimes you need to say "we can do this cool thing, but don't know what that means." Simply knowing that capability exists opens you up to better sets of solutions.
AI is trending right now. The most important thing for new companies is finding investors, and those people have been throwing cash at any company with AI.
Customers are also more interested in AI products. The tech industry has stagnated for years with incremental improvements on existing products. ChatGPT and generative AI are new capabilities that draw interest, and companies have been doing anything they can to stand out today.
There's still a lot of real work to be done knowing what can be built and operated profitably, because the underlying tech is so new.
So just zooming out, we need people trying to figure out what can be built with this Lego set. We also need people like you're saying to work the other side so everyone can meet in the middle.
You are forgetting marketing is temporal. Fifteen years ago you could sell your software as the Cloud version of a legacy app. Right now, there's a window where being the AI version will get you a call.
That requires that you understand the capabilities and limitations of the tech way better than anyone currently does. So instead, "let's see what we can do with this" is the underlying approach.
if you're some Python contractor company, the angle makes sense. but of course, very few AI companies are out there trying to help others solve problems.
This is how things evolve. Everything was a .com company when the internet started going mainstream; then the real product and service providers were left standing.
Ehhh it’s a spectrum. First you innovate, then you commercialise. Even Google took a few years to successfully monetise and they weren’t the first mover in web search. LLMs have been around for, what, coming up on three years? Probably two to four more years to see results.
I'm seeing a lot of meh products that take like 4 units of effort to integrate. I think multiple LLMs, deeply integrated into a cohesive product with 100+ effort units, can be great. An AI that's familiar with the use of every settings menu on Windows would be awesome.
I'm not so sure. When a technological wave is big enough, it seems reasonable to start by asking: "what business can be built on this exponential wave?" This is contrary to standard YC advice (make something people want right away, don't create a solution in search of a problem) but empirically a lot of big companies started this way:
- Bezos saw the growth rate of the internet, spent a few months mulling over the question: "what business would make sense to start in the context of massive internet adoption" and came up with an online bookstore.
- OpenAI's ChatGPT effort really began when they saw Google's paper on transformers and decided to see how far they could push this technology (it's hard to imagine they forecasted all the chatbot usecases; in reality I'm sure they were just stoked to push the technology forward).
- Intel was founded on the discovery of the integrated circuit, and again I think the dominant motivation was to see how far they could push transistor density with a very hazy vision at best of how the CPUs would eventually be used.
I think the reason this strategy works is that the newness of a truly important technology counteracts much of the adverse selection of starting a new business. If you make a new To-Do iPhone app, it's unlikely that people have overlooked a great idea in that space over the last 10 years. But if lithium ion batteries only just barely started becoming energy dense enough to make a car, there's a much more plausible argument why you could be successful now.
Said another way: "why hasn't this been done before?" (both by resource-rich incumbents as well as new entrants) is a good filter (and often a limiting one) for starting a business. New technological capabilities are one good answer to this question. Therefore if you're trying to come up with an idea for a business, it seems reasonable to look at new technologies that you think are actually important and then reason backward to what new businesses they enable.
Two additional positive factors I can think of:
1. A common dynamic is that a new technology is progressing rapidly but is of course far behind traditional solutions at the outset. Thus it is difficult to find immediate applications, even if large applications are almost guaranteed in 10-20 years. Getting in early - during the borderline phase where most applications are very contrived - is often a big advantage. See Tesla Roadster (who wants a $100k electric sports car with 200mi range and minimal charging network?), early computers (what is the advantage of a slow machine with no GUI over doing work by hand?), and perhaps current LLMs (how valuable is a chatbot that frequently hallucinates and has trouble thinking critically in original ways)? It's the classic Innovator's Dilemma - we overweight the initial warts and don't properly forecast how quickly things are improving.
2. There is probably a helpful motivational force for many people if they get to feel that they are on the cutting edge of technology that interests them and building products that simply weren't possible two years ago.
You're suggesting the boring business way to do things. The tech ecosystem is full of startups doing that ridiculous thing you described: chasing the hot new thing and raising huge amounts of money off the hype. This AI hype cycle is really bad, and before that we had cryptocurrency.
> But when developers put AI in consumer products, people expect it to behave like software, which means that it needs to work deterministically. If your AI travel agent books vacations to the correct destination only 90% of the time, it won’t be successful.
This is the fundamental problem that prevents generative AI from becoming a "foundational building block" for most products. Even with rigorous safety measures in place, there are few guarantees about its output. AI is about as solid as sand when it comes to determinism, which is great if you're trying to sell sand, but not so great if you're trying to build a huge structure on top of it.
I've made this statement a bunch in other mediums: The reason AI software is always "AI software" and not just a useful product is because AI is fallible.
The reason we can build such deep and complex software systems is because each layer can assume the one below it will "just work". If it only worked 99% of the time, we'd all still be interfacing with assembly, because we'd have to be aware of the mistakes that were made and deal with them, otherwise the errors would compound until software was useless.
Until AI achieves the level of determinism we have with other software, it'll have to stay at the surface.
structured outputs help, paired with regular old systems design I think you can get pretty far. it really depends what you're building though.
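To make that concrete, here's a minimal sketch of the structured-outputs-plus-boring-systems-design idea. Everything here is made up for illustration; the stub `fake_llm` stands in for whatever real model API you'd actually call.

```python
import json

# Ask the model for JSON matching a fixed shape, validate it in plain code,
# and retry or reject instead of trusting free text. Deterministic code, not
# the model, decides what counts as acceptable output.

REQUIRED_FIELDS = {"destination": str, "depart_date": str, "budget_usd": int}

def fake_llm(prompt: str) -> str:
    # Stand-in for a real model call; a real API would slot in here.
    return '{"destination": "Lisbon", "depart_date": "2024-09-01", "budget_usd": 1500}'

def parse_booking_request(prompt: str, retries: int = 2):
    """Return a validated dict, or None if the model never produced valid output."""
    for _ in range(retries + 1):
        raw = fake_llm(prompt)
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed JSON: retry
        if all(isinstance(data.get(k), t) for k, t in REQUIRED_FIELDS.items()):
            return data
    return None

result = parse_booking_request("I want a week in Portugal in September, about $1500")
```

The point is that everything downstream of the validation gate behaves like ordinary deterministic software.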
>If your AI travel agent books vacations to the correct destination only 90% of the time
that would be using the wrong tool for the job. an AI travel agent would be very useful for suggesting destinations or giving a list of suggested flights, hotels, etc., then handing off to your standard systems to complete the transaction.
there are also a lot of systems that tolerate "faults" just fine, such as image/video/audio gen
> If your AI travel agent books vacations to the correct destination only 90% of the time, it won’t be successful.
Well, I don't agree. I think there are ways to make this successful, but you have to be honest about the limitations you're working with and play to your strengths.
How about an AI travel agent that gets your itineraries at a discount with the caveat that you be ready for anything. Like old, cheap standby tickets where you just went wherever there was an empty seat that day.
Or how about an AI Spotify for way less money than current Spotify. It's not competing on quality; it can't. Occasionally you'll hear weird artifacts, but hey, it's way cheaper.
That could work, imo
The AI travel agent is trivial to solve though. It's the same as the human travel agent. Put the plan and pricing together, then give it to the user to sign and accept. Do it in an app, do it in an email, do it on a piece of paper, whatever floats your boat, but give them something they can review and accept instead of trying to do everything verbally or in a basic chat interface.
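A rough sketch of that review-and-accept gate, with hypothetical names and a hardcoded stand-in where the model would go:

```python
# The AI only drafts a proposal; the side-effecting step (the booking) is
# gated on explicit human approval, never on model output alone.

def draft_itinerary(request: str) -> dict:
    # In a real system this would come from the model; hardcoded here.
    return {"destination": "Mexico City", "nights": 5, "total_usd": 1240}

def book(proposal: dict, approved: bool) -> str:
    if not approved:
        return "pending review"
    return f"booked: {proposal['destination']} for {proposal['nights']} nights"

proposal = draft_itinerary("plan a trip to Mexico")
status_before = book(proposal, approved=False)
status_after = book(proposal, approved=True)
```

Whether the approval happens in an app, an email, or on paper, the shape is the same: draft, review, accept, execute.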
I'm not disagreeing with the "needs to work deterministically" -- there is a need for that, but this is a poor example. "Hey robot, plan a trip to Mexico" might still save me time overall if done right, and that has value.
I have a question for folks working heavily with AI blackboxes related to this - what are methods that companies use to test the quality of outputs? Testing the integration itself can be treated pretty much the same as testing around any third-party service, but what I've seen are some teams using models to test the output quality of models... which doesn't seem great instinctively
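For illustration, one alternative to model-grading-model that I've seen discussed is asserting cheap, deterministic, human-auditable properties of each output. A toy sketch (the checks and names are purely hypothetical):

```python
# You can't assert exact text from a generative model, but you can assert
# properties: length bounds, non-emptiness, and simple grounding checks.

def eval_summary(source: str, summary: str) -> list[str]:
    """Return a list of failed checks for one (input, output) pair."""
    failures = []
    if len(summary) > len(source):
        failures.append("summary longer than source")
    if not summary.strip():
        failures.append("empty output")
    # Grounding check: flag numbers in the summary absent from the source.
    src_tokens = set(source.split())
    for tok in summary.split():
        if tok.isdigit() and tok not in src_tokens:
            failures.append(f"hallucinated number: {tok}")
    return failures

good = eval_summary("Revenue grew 12 percent in 2023.", "Revenue grew 12 percent.")
bad = eval_summary("Revenue grew 12 percent in 2023.", "Revenue grew 47 percent.")
```

It doesn't measure "quality" in full, but each failed check is something a human can verify without trusting another model.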
But a knowledgeable human can take the itinerary and run with it. I've done that enough with AI-generated code; it's basically boilerplate. You still run it through the same tests, reviews, and verification you would have had to do anyway.
And yet, generative AI also seems to be poor at randomness. When I asked Google Gemini for a list of 50 random words, it gave me a list of 18 unique words, with 16 of them repeated exactly 3 times:
Abyss: 1, Ambiguous: 3, Cacophony: 3, Crescendo: 3, Ephemeral: 3, Ethereal: 3, Euphoria: 3, Labyrinth: 3, Maverick: 3, Melancholy: 3, Mellifluous: 3, Nostalgia: 3, Oblivion: 3, Paradox: 3, Quixotic: 1, Serendipity: 3, Sublime: 3, Zenith: 3
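That's expected: models sample from a learned distribution, so "random" prompts collapse onto high-probability tokens. One workaround is to let ordinary code handle the random part and keep the model out of it; a small sketch (the word list is arbitrary):

```python
import random

# Uniform sampling with replacement: a guarantee an LLM can't make.
WORDS = ["abyss", "zenith", "paradox", "ember", "quartz", "willow", "falcon", "orbit"]

def random_words(n, seed=None):
    """Draw n words uniformly at random; seed for reproducibility."""
    rng = random.Random(seed)
    return [rng.choice(WORDS) for _ in range(n)]

sample = random_words(50, seed=42)
```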
Instead of pivoting, can this behaviour be explained by trying lots of different things and then iterating on the ones that show promise?
It's all well and good to say "Make something people want," but for anything that people want, usually one of three things is true:
1. Someone else is already making it.
2. Nobody knows how to make it.
3. Nobody knows that people want it.
People experimenting with 2 and 3 will have a lot of failures, but the great successes will come from those groups as well.
Sure, every trend in business has a lot of companies going "we should do this because everyone else is." It was a dumb idea for previous trends and it is a dumb idea now. Consider how many companies did that for the internet. There were a lot of poorly thought-out forays into having an internet presence. Of those companies still around, they pretty much all have an internet presence now that serves their purposes. They transitioned from "because everyone else is" as their motivation to "we want specific abilities x, y, & z."
Perhaps the best way to get from "everyone else is doing it" to knowing what to build is to play in the pool.
That's exactly what these companies are doing. They're trying a lot of different ideas, and seeing what sticks. The problem is that they're annoying users and causing distrust.
I’m building an integration platform. There’s a thousand ways to deeply embed AI throughout it, both to build integration workflows faster, and to help us build smarter API wrappers faster.
But AI has always been a secondary augmentation to the product itself. It’s a tool, it shouldn’t be the other way around.
Yeah, ChatGPT itself is amazing. What I don't understand is, why are other companies paying so much for training hardware now? Trying to make more specialized LLMs now that ChatGPT has proven the technology?
Google has been productizing AI for a while now. 2021 Pixels have the Tensor SoC which was explicitly marketed as an AI chip. Chatbots weren't part of the equation back then, but offline image translation, magic eraser, etc certainly were.
When I see “AI” in the product description of something I’m almost immediately turned off. It’s plastered everywhere for most tech companies now and doesn’t mean anything practically, despite trying to sound like a differentiator.
While I don't like the blog title, many things said in there rang true for my company (MoveAI.com). We are building an AI-powered moving concierge that can orchestrate your relocation experience end-to-end.
We initially were developing a system that we had hoped could handle everything and eject any workflow issues to a human so the operations team could kick the machine. We were hoping to avoid an interface altogether on the customer side.
After a few versions and attempts at building this system, we moved towards a traditional app where we focused on building a product people wanted and automate parts of it over time. But even the parts we automated needed an interface for customers to spot check our work. So we found a great designer.
...Before we knew it, we were building a traditional company, with some AI. The company is doing well and people love what we're building, but it's different than we imagined.
We still believe in the long term vision and promise of the technology, but the article is right, this isn't going to be an overnight process unless some new architecture emerges.
In the meantime, we're focused on helping people get from A to B easily using whatever means necessary, because moving f**ing sucks. If you're moving soon or know anybody who is, we'd be happy to help. -P
Their moat has evaporated on the B2C side--no friction, plenty of alternatives, overly generous free tier--and B2B is freaked out about non-local usage.
And at release ChatGPT was meant as a marketing gimmick. A fun way to interact with a slightly finetuned version of GPT3.5 to showcase how good their models had become.
If anything it's remarkable how much they leaned into this success, building an iOS and Android app, speeding up the models, adding a premium plan, lots of new features, and eventually deprecating their text-completion mode and going all in on chat as the interaction mode for their LLMs.
Their numbers aren't public, so I'm not 100% certain that they're making significantly more money through the API than they are through paid subscriptions to their products.
They have a LOT of paid subscribers, and they're signing big "enterprise" deals with companies that have thousands of seats.
> Imagine not understanding that their main way of making money is through their API for other companies, not through a product. They are focused on doing what they are good at: good AI models. They let other companies take the risk of building products on top of them, and reap the benefits from those products.
There is no moat in an API-gated foundation model. One LLM is as good as any other, and it'll be a race to the bottom.
The only way to mint a new FAANG is to build a platform that captivates and ensnares the populace, like iPhone or Instagram.
The value in AI will be accrued at the product layer, not the ML infra tooling, not the foundation model. The product layer.
It might be too late to do this with LLMs and voice assistants, though. OpenAI is super distracted, and there's plenty of time for Google, Meta, and Apple to come in and fill the void.
Everyone was too busy selling the creation of gods, or spreading FOMO to elevate themselves to lofty valuations. At the end of the day, business still looks the same as it always has: create value for customers, ideally in a big market where you can own a large slice. LLMs and foundation models are fungible and easy.
depends on your definition of "good." if good means creating the next generation of recommendation algorithms that result in massive technology addiction and a mental health crisis, then yeah
Charlie Bit Me is of the YouTube generation, so it wasn't passed around as an avi email attachment like some older memes of the previous generation. From that long ago, Exploding Whale comes to mind.
candiddevmike|1 year ago
https://candid.dev/blog/becoming-an-ai-company/
siruncledrew|1 year ago
Every cycle, there are all types of people hopping on board whatever the hype train is... it's the same mindset as prospecting for gold in the wild west.
I just hope we can move along more in the "wheat" direction with AI products. There's so much low-effort crap already out there.
MattGaiser|1 year ago
Call centre workers are often dreadfully inaccurate as well. Same with support engineers.
Heck even for banking, there are enormous teams fixing every screw up made by some other employee.
marymkearney|1 year ago
https://sites.google.com/princeton.edu/agents-workshop
pdpi|1 year ago
Or, more to the point: Their primary product is B2B, not B2C.
23B1|1 year ago
It's okay, I mean even the internet started out as Charlie_Bit_Me.avi and free porn.