The idea that using GPT-3 in a product provides no differentiator is true only if we are rebottling the output with no value added. Stock photos from Unsplash and software libraries from GitHub also provide zero competitive advantage - yet synthesizing a competitive new offering on the back of them is still perfectly possible. Enablers and differentiators are not the same thing.
The obvious out of the way, I have learned never to underestimate market timing. Having laughed at the IRC for Peeps they called Slack, perhaps the asymmetry of knowing how to use GPT-3 is still a great untapped opportunity.
this is exactly what i have been exploring/writing about. productizing a tech is a whole other discipline than making the tech itself, and it's not just "heh be good at marketing and distribution lol"
Read this article a few years ago and was glad it was published! It definitely reduced competition, judging by its popularity on HN when it debuted.
We were building our company on the back of GPT-3 and soon sold it for a life-changing amount of money.
So starting a business around GPT-3 ended up being a very good idea :)
The difference between Stable Diffusion and GPT-3 is that the former is open source, meaning you don't have to pay tribute to a single party.
The barrier to entry for developing apps on top of Stable Diffusion is higher because you have to set up GPU instances. That's quite expensive compared to OpenAI's GPT-3, where you can just use their API.
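To illustrate how low that barrier is, here is a minimal sketch of what a GPT-3 API integration amounted to at the time: building a small JSON payload for OpenAI's `/v1/completions` endpoint. The model name and prompt below are illustrative placeholders, and the actual HTTP call (with an API key) is omitted.

```python
# Minimal sketch of an OpenAI GPT-3 completion request payload.
# The model name and prompt are illustrative; sending the request
# would require an API key and a POST to /v1/completions.
import json

def build_completion_request(prompt, model="text-davinci-002", max_tokens=64):
    """Build the JSON payload for OpenAI's completions endpoint."""
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
    }

payload = build_completion_request("Write a tagline for a bakery:")
print(json.dumps(payload))
```

Compare that to provisioning and paying for a GPU instance just to serve a diffusion model: the entire "ML infrastructure" on the API side is a dictionary and an HTTP request.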
Unpopular opinion, but starting a business around AI is a bad idea - it's a tool, not a business. AI is the "object oriented" of our times. It'll end up being something used in our tooling, but I recall all those '90s companies that died miserably after basing their whole business model around objects... I feel like AI has the same future.
The implicit assumption here is that AI in general is one of several possible tools for a given problem. That is true in some cases, but not in others (e.g. media synthesis, automated transcription/translation/classification, etc.). So I think "starting a business around AI" should imply that AI is a core necessity, not just a chosen tool. Granted, it may be just one tool among many that could be useful, even if it is necessary.
It's not based around AI. It's based around content creation. The AI part is just the means to the end.
These methods allow for easy content creation. It's akin to an industrialization of the mind. We're currently searching for the best human interface to control the outputs, so we can attain the results we want immediately and with high fidelity.
Once the images and sounds in your brain can immediately jump to the screen, you'll see what this has all been about.
I don't see how this is at all comparable to object oriented programming. These techniques solve real business and social needs. They automate entire decades of learning, hours of toil, and free up enormous capital.
While I enjoyed reading this article, I more or less disagree with it.
To me, a large language model like GPT-3 is now a fungible architecture component, multi-sourced from OpenAI, Hugging Face, etc. For many NLP tasks, not using modern deep learning models in your infrastructure dooms you to writing inferior systems.
I am under the impression that the optimal strategy, if you have technical skills, is to be constantly on the lookout for new shovels or other tools to sell to the prospectors. Build quickly, sell quickly, and it doesn't really matter if your idea is truly viable beyond six months or so.
I'm enjoying the article, but I feel the "Economies of scale" section makes an incorrect comparison between Spotify's business model and a theoretical business built on OpenAI's API. The author suggests that since Spotify pays royalties per song played, more users doesn't mean more money for them, and then claims a business using GPT-3 would have a similar limitation.
There are a couple of things I think are wrong with this. First, depending on the sort of users Spotify acquires, more users does translate directly into more earnings. What doesn't scale well for Spotify is how active the subscription-paying users are: a user who listens to 50 songs a day costs more than one listening to 10, since the subscription price is static and common across users regardless of usage.
That last point is where the author gets the next thing wrong: assuming that services employing GPT-3 will charge fixed subscriptions instead of using a pay-as-you-go model (like AWS). I am sure there will be businesses with fixed subscription prices that are independent of usage, but we shouldn't assume there is anything about GPT-3 that makes this more likely or even very different from other cases where fixed subscriptions are used. There will always be some cost per user, be it the raw cost of electricity or cloud infrastructure; GPT-3's API would just be one more per-request cost to consider.
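The unit economics being described can be sketched in a few lines. All numbers below are illustrative assumptions, not actual OpenAI pricing: a flat subscription loses money on heavy users, while pay-as-you-go passes the per-token API cost through with a fixed margin.

```python
# Illustrative unit economics, with made-up numbers (not real pricing):
# a fixed subscription vs. pay-as-you-go on top of a per-token API cost.

API_COST_PER_1K_TOKENS = 0.02   # hypothetical GPT-3 rate
SUBSCRIPTION_PRICE = 20.00      # flat monthly price, same for every user
MARKUP = 1.5                    # pay-as-you-go multiplier over raw API cost

def margin_fixed_sub(tokens_used):
    """Monthly margin on a flat subscription: revenue is fixed, cost is not."""
    return SUBSCRIPTION_PRICE - tokens_used / 1000 * API_COST_PER_1K_TOKENS

def margin_pay_as_you_go(tokens_used):
    """Monthly margin when usage is billed through with a markup."""
    cost = tokens_used / 1000 * API_COST_PER_1K_TOKENS
    return cost * MARKUP - cost

# A light user (50k tokens/month) is profitable on the flat sub,
# a heavy user (2M tokens/month) is a loss; pay-as-you-go stays positive.
print(margin_fixed_sub(50_000))
print(margin_fixed_sub(2_000_000))
print(margin_pay_as_you_go(2_000_000))
```

This is exactly the Spotify dynamic the comment describes: with a static subscription price, margin is a function of per-user activity, and the per-request API fee is just one more variable cost.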
When the tech is open sourced, the product managers get to shine. Requiring less engineering skill to pull something off means a wider range of product people (founders, marketers, corporate product managers) get to show their skills at finding product-market fit in more niches. Starting a business around Excel was not a bad idea at all. OpenAI is becoming the Fairchild of our time - keeping things closed has triggered an exodus of brains and open-source activity that creates the Cambrian explosion. If they keep up this strategy, they will end up as the marketers of tech that is commercialized by open source instead.
I don't think OpenAI being open source would have made much of a difference. The AI cat has been out of the bag so open source models and tools would have come anyway. Once the idea is out there it's only a matter of time.
The only people keeping some AI secrets secret would be quants and perhaps the NSA.
Also the ArXiv literature explosion is proof of the Cambrian explosion.
Training the damn things though, that's the tricky part. I want to build products on your platform as that's where the money is.
The drive to be differentiated is a way more powerful force than people appreciate. When big oil was deciding what to use as an anti-knocking agent (tetraethyllead or ethanol) the two biggest concerns that tipped the decision in favor of lead were:
1. Ethanol might become a competitor to oil.
2. Lead could be patented, licensed, and used to differentiate their gasoline from competitors.
> Meanwhile, the profits will accrue to the true beneficiaries: 1) the algorithm owners, OpenAI [...]
This seems incorrect to me. The crucial parts have been reimplemented. The weights are their only secret sauce and equally good free replacements are only a matter of time.
I agree, they get replicated fast. But large language models also have a democratising effect even when they sit behind a paid API - they lower the barrier to many NLP tasks. They take skills from the internet and repackage them in useful, customised forms. This means the benefit really is being spread to everyone building on them: they can build in a day what used to take a month or a year. I see them as "open sourcing" all these previously hard-to-access AI skills.
I see LLMs as part of a wider trend - we used to transmit information orally, then we invented writing, then printing, then media and internet. Now we can transmit the distillation of our whole culture as a model, it can be applied directly to solve problems. It's the next step in the propagation of culture.
Given that this is a roughly two-year-old article (although there's no date in the post), I'm wondering whether these predictions have proven correct:
> The barrier to entry to developing a viable product gets low for everyone, meaning hundreds of competitors will pop up overnight.
> A lot of founders are going to try to start businesses based on GPT-3, and a lot of money will go into them, and it’s going to be a blood bath.
I'm not really following the AI startup landscape, but I haven't seen a Cambrian explosion of GPT-3 apps, although I've noticed a few. "Blood bath" is also way too dramatic. Has anyone seen a post where the founder of a heavily GPT-3-based startup cried out about how their startup was destroyed because of "x"?
It has happened in some cases, eg with all the "generate marketing copy with AI" businesses.
But yeah, in general the article assumes that GPT-3 will have lots of applications that make it super easy to make a useful product with very little extra effort, and that is just not true. Twitter demos are easy, robust and useful products are not.
Seeing Grammarly doing quite well (judging from all the adverts on YouTube), I can imagine a GPT-3-based editor that improves the user's prose, and I suppose it could be quite popular. Perhaps writing technical documentation could even become fun.
GitHub Copilot (in markdown mode) provides that already. I'm increasingly using it to help write technical documentation and blog posts - it works great.
You can even paste in a chunk of code to give it some hints, start writing about it (with Copilot assistance) and then delete the code later.
submitted this because i have been doing a bunch of research around productized AI businesses (https://lspace.swyx.io/) in preparation for someday pivoting
notable that in the small, this post was "wrong" - Jasper AI went from 0 to $60m ARR in the 2 years since this post. sure, you could regard them as "winning the lottery", but i'm sure if you asked their bank accounts they wouldn't agree starting a biz around GPT-3 is a bad idea :)
Isn't GPT-3 a glorified autocompletion algorithm? Why would anyone want to start a business around it? I understand it's fun to play with, and NLP has come a long way from Eliza, but at the end of the day they aren't that different, in the sense that it's not real AI and, like Eliza, GPT-3 has no understanding of the generated text. Using GPT-3 to provide one feature of your product is one thing; creating a business around it makes no sense IMO.
Alternatively, figure out how to use GPT-3 in a market that involves some schleps. I'm working on one in Education Tech, building something for a need that teachers have been practically begging for. There are particular regulatory challenges, unique sales paths, and a big first to market advantage because educators aren't online constantly researching their options. Once you're embedded in the education consciousness, you're there for years.
But yes, yet another random GUI for generating marketing copy from GPT-3 isn't a good long-term play.
There was a site making $50k USD per month providing very basic analytics, back in the early days of Twitter. Even if it lasted only six months at $50k, that's $300k for a side project.
So, if you have a great idea for a business based on GPT-3, don't do it because you might be able to finish it in an afternoon? That seems better than having a great idea with many barriers to entry.
I've been increasingly concerned that, given how widespread the skills to build consumer apps now are, the value will be eaten up too quickly.
This article makes the point that this isn't, in fact, where the (economic) value really sits. Building a moat requires more than just the ability to build.
Thus, it feels as though building simple consumer interfaces to the latest AI models is a short-term and largely thankless play.
Do you have any examples?
If everyone is using it then using it isn't much of a competitive advantage.
My latest post is here: https://news.ycombinator.com/item?id=33144039
If your technological advantage is not high (enough), you have to compete on marketing and distribution.
Jasper AI is heavy on marketing and distribution.
Lots of these startups can pivot easily to cheaper systems after capturing the market with OpenAI-powered apps, especially as it's just a one-line change: https://text-generator.io/blog/over-10x-openai-cost-savings-...
Or they can train their own models, etc., so there will be lots of winners, not just OpenAI and the tech giants.
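The "one line change" alluded to above usually means pointing an OpenAI-style client at a different base URL, since compatible providers accept the same request shape. This is a hedged sketch: the alternative host below is a made-up placeholder, not a verified endpoint.

```python
# Sketch of swapping providers behind an OpenAI-compatible API.
# Only the base URL changes; the request payload stays the same.
# "example-compatible-host.com" is a hypothetical placeholder.

def completion_url(base_url):
    """Build the completions endpoint path for an OpenAI-style API."""
    return base_url.rstrip("/") + "/v1/completions"

openai_url = completion_url("https://api.openai.com")
# The "one line change": target a cheaper compatible host instead.
alt_url = completion_url("https://example-compatible-host.com/")

print(openai_url)
print(alt_url)
```

Whether a given provider is actually drop-in compatible (same parameters, same response schema) is the part that has to be verified per vendor; the URL swap itself is trivial.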
It all depends on timing, risk vs. reward, etc.