top | item 41294764

AI companies are pivoting from creating gods to building products

133 points | randomwalker | 1 year ago | aisnakeoil.com

195 comments

ryandrake|1 year ago

Companies keep going at it the wrong way. Instead of saying "We have AI, let's find products we can make out of AI!" they should be saying, "What products do people want, let's use whatever tools we have (including maybe AI) to make them."

The idea that a company is an AI company should be as ridiculous as a company being a Python company. "We are Python-first, have Python experts, and all of our products are made with Python. Our customers want their apps to have Python in them. We just have to 'productize Python' and find the right killer app for Python and we'll be successful!" Going at it from the wrong direction. Replace Python in that quote with AI, and you probably have something a real company has said in 2024.

cooperx|1 year ago

It's the same as all the "we are a blockchain company" startups that popped up looking for a problem to solve with their tech, rather than the other way round.

However, a lot of those got a bunch of investment or made some decent money in the short term. Very few are still around. We will see the same pattern here.

tux1968|1 year ago

It was likely the same back when the steam engine was invented. Everyone who could start a steam engine company started a steam engine company, because learning how to be a steam engine company was difficult, new, and unique. It would be a while before anyone worked out all the products incorporating that new tech that could be sold to people.

necroforest|1 year ago

I don't entirely disagree with you, but "what products do people want" is overly conservative. Pre-ChatGPT, very few people wanted a (more or less) general purpose chatbot.

slavboj|1 year ago

A ton of the industrial revolution was actually motivated by that input-driven thinking. You don't decide you want an Eiffel Tower from first principles, you consider "what is the coolest thing I can make out of wrought iron".

mondrian|1 year ago

I 95% agree, but "what people want" is probably not a strong indicator on the thresholds of paradigm shifts, since people don't know what's possible.

SkyPuncher|1 year ago

I only partially agree with this. Having spent a lot of time in the “find a problem then the solution” way of working, I’ve found the solutions are often too tame and lack innovation.

When you’re truly bringing novel value to things, sometimes you need to say “we can do this cool thing, but we don’t know what that means”. Simply knowing that capability exists opens you up to better sets of solutions.

mu53|1 year ago

AI is trending right now. The most important thing for new companies is finding investors, and those people have been throwing cash at any company with AI.

Customers are also more interested in AI products. The tech industry has stagnated for years with incremental improvements on existing products. ChatGPT and generative AI are new capabilities that draw interest, and companies have been doing anything they can to stand out today.

siruncledrew|1 year ago

The market is sorting itself out right now, and eventually the wheat will get separated from the chaff.

Every cycle, there are all types of people hopping on board whatever the hype train is... it's the same mindset as prospecting for gold in the wild west.

I just hope we can move along more in the "wheat" direction with AI products. There's so much low-effort crap already out there.

ants_everywhere|1 year ago

There's still a lot of real work to be done knowing what can be built and operated profitably, because the underlying tech is so new.

So just zooming out, we need people trying to figure out what can be built with this Lego set. We also need people doing what you describe, working the other side, so everyone can meet in the middle.

sroussey|1 year ago

This has been the case for decades. Look at the internet and .coms. Mobile. Etc.

widenrun|1 year ago

You are forgetting marketing is temporal. Fifteen years ago you could sell your software as the Cloud version of a legacy app. Right now, there's a window that being the AI version will get you a call.

lallysingh|1 year ago

That requires that you understand the capabilities and limitations of the tech far better than anyone currently does. So instead, "let's see what we can do with this" is the underlying approach.

tim333|1 year ago

A Python company is too specialized, but software companies are a thing. Maybe AI will be another tool for software companies.

_xiaz|1 year ago

To be fair, Astral is the Python company, and thank god they are. I love ruff and uv.

seydor|1 year ago

People want a faster horse

johnnyanmac|1 year ago

If you're some Python contractor company, the angle makes sense. But of course, very few AI companies are out there trying to help others solve problems.

xbmcuser|1 year ago

This is how things evolve. Everything was a .com company when the internet started going mainstream; then the real product and service providers were left standing.

diatone|1 year ago

Ehhh it’s a spectrum. First you innovate, then you commercialise. Even Google took a few years to successfully monetise and they weren’t the first mover in web search. LLMs have been around for, what, coming up on three years? Probably two to four more years to see results.

DylanDmitri|1 year ago

I’m seeing a lot of meh products that take like 4 units of effort to integrate. I think multiple LLMs, deeply integrated into a cohesive product with 100+ effort units, that can be great. An AI that’s familiar with the use of every settings menu on windows would be awesome

highfrequency|1 year ago

I'm not so sure. When a technological wave is big enough, it seems reasonable to start by asking: "what business can be built on this exponential wave?" This is contrary to standard YC advice (make something people want right away, don't create a solution in search of a problem) but empirically a lot of big companies started this way:

- Bezos saw the growth rate of the internet, spent a few months mulling over the question: "what business would make sense to start in the context of massive internet adoption" and came up with an online bookstore.

- OpenAI's ChatGPT effort really began when they saw Google's paper on transformers and decided to see how far they could push this technology (it's hard to imagine they forecasted all the chatbot usecases; in reality I'm sure they were just stoked to push the technology forward).

- Intel was founded on the discovery of the integrated circuit, and again I think the dominant motivation was to see how far they could push transistor density with a very hazy vision at best of how the CPUs would eventually be used.

I think the reason this strategy works is that the newness of a truly important technology counteracts much of the adverse selection of starting a new business. If you make a new To-Do iPhone app, it's unlikely that people have overlooked a great idea in that space over the last 10 years. But if lithium ion batteries only just barely started becoming energy dense enough to make a car, there's a much more plausible argument why you could be successful now.

Said another way: "why hasn't this been done before?" (both by resource-rich incumbents as well as new entrants) is a good filter (and often a limiting one) for starting a business. New technological capabilities are one good answer to this question. Therefore if you're trying to come up with an idea for a business, it seems reasonable to look at new technologies that you think are actually important and then reason backward to what new businesses they enable.

Two additional positive factors I can think of:

1. A common dynamic is that a new technology is progressing rapidly but is of course far behind traditional solutions at the outset. Thus it is difficult to find immediate applications, even if large applications are almost guaranteed in 10-20 years. Getting in early - during the borderline phase where most applications are very contrived - is often a big advantage. See the Tesla Roadster (who wants a $100k electric sports car with 200mi range and minimal charging network?), early computers (what is the advantage of a slow machine with no GUI over doing work by hand?), and perhaps current LLMs (how valuable is a chatbot that frequently hallucinates and has trouble thinking critically in original ways?). It's the classic Innovator's Dilemma - we overweight the initial warts and don't properly forecast how quickly things are improving.

2. There is probably a helpful motivational force for many people if they get to feel that they are on the cutting edge of technology that interests them and building products that simply weren't possible two years ago.

lispisok|1 year ago

You're suggesting the boring business way of doing things. The tech ecosystem is full of startups doing exactly the ridiculous thing you described: chasing the hot new thing and raising huge amounts of money off the hype. This AI hype cycle is really bad, and before that we had cryptocurrency.

rustypotato|1 year ago

> But when developers put AI in consumer products, people expect it to behave like software, which means that it needs to work deterministically. If your AI travel agent books vacations to the correct destination only 90% of the time, it won’t be successful.

This is the fundamental problem that prevents generative AI from becoming a "foundational building block" for most products. Even with rigorous safety measures in place, there are few guarantees about its output. AI is about as solid as sand when it comes to determinism, which is great if you're trying to sell sand, but not so great if you're trying to build a huge structure on top of it.

BobbyJo|1 year ago

I've made this statement a bunch in other mediums: The reason AI software is always "AI software" and not just a useful product is because AI is fallible.

The reason we can build such deep and complex software systems is that each layer can assume the one below it will "just work". If each layer only worked 99% of the time, we'd all still be interfacing with assembly, because we'd have to be aware of the mistakes made below us and deal with them; otherwise the errors would compound until the software was useless.

Until AI achieves the level of determinism we have with other software, it'll have to stay at the surface.
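The compounding argument can be made concrete with a quick back-of-envelope calculation (my own illustration, not from the comment): reliability multiplies across independent layers, so even "99% correct" degrades fast with depth.

```python
# Back-of-envelope: if each of n stacked layers is independently correct
# with probability p, end-to-end correctness is p**n.
def end_to_end_reliability(p: float, layers: int) -> float:
    return p ** layers

for n in (1, 10, 50, 100):
    print(f"{n:>3} layers: {end_to_end_reliability(0.99, n):.3f}")
# 1 layer ~0.990, 10 layers ~0.904, 50 layers ~0.605, 100 layers ~0.366
```

A hundred 99%-reliable layers leave you with a system that works barely a third of the time, which is why each layer in a conventional stack has to be far closer to deterministic.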

slidehero|1 year ago

Structured outputs help; paired with regular old systems design, I think you can get pretty far. It really depends on what you're building, though.

>If your AI travel agent books vacations to the correct destination only 90% of the time

That would be using the wrong tool for the job. An AI travel agent would be very useful for making suggestions, either for destinations or for a list of suggested flights, hotels, etc., and could then hand off to your standard systems to complete the transaction.

There are also a lot of systems that tolerate "faults" just fine, such as image/video/audio generation.
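One way to read "structured outputs paired with regular systems design" is: the model only proposes, and deterministic code validates and completes the transaction. A minimal sketch, where the schema, field names, and destination codes are entirely made up for illustration:

```python
import json

ALLOWED_DESTINATIONS = {"CUN", "MEX", "SJD"}  # hypothetical whitelist

def validate_proposal(raw: str) -> dict:
    """Parse a model's structured output; refuse anything off-whitelist."""
    proposal = json.loads(raw)  # structured output: parse it, don't trust it
    if proposal.get("destination") not in ALLOWED_DESTINATIONS:
        raise ValueError("model proposed an unknown destination")
    if not isinstance(proposal.get("budget_usd"), (int, float)):
        raise ValueError("budget must be numeric")
    return proposal  # only now hand off to the deterministic booking system

booking = validate_proposal('{"destination": "CUN", "budget_usd": 1200}')
```

The model never books anything directly: a bad suggestion fails validation instead of becoming a wrong booking, which is the 90%-accuracy problem contained by ordinary systems design.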

aprilthird2021|1 year ago

> If your AI travel agent books vacations to the correct destination only 90% of the time, it won’t be successful.

Well, I don't agree. I think there are ways to make this successful, but you have to be honest about the limitations you're working with and play to your strengths.

How about an AI travel agent that gets your itineraries at a discount with the caveat that you be ready for anything. Like old, cheap standby tickets where you just went wherever there was an empty seat that day.

Or how about an AI Spotify for way less money than current Spotify. It's not competing on quality, it can't. Occasionally you'll hear weird artifacts, but hey it's way cheaper.

That could work, imo

8n4vidtmkvmk|1 year ago

The AI travel agent is trivial to solve though. It's the same as the human travel agent. Put the plan and pricing together, then give it to the user to sign and accept. Do it in an app, do it in an email, do it on a piece of paper, whatever floats your boat, but give them something they can review and accept instead of trying to do everything verbally or in a basic chat interface.

I'm not disagreeing with the "needs to work deterministically" -- there is a need for that, but this is a poor example. "Hey robot, plan a trip to Mexico" might still save me time overall if done right, and that has value.

MattGaiser|1 year ago

It just needs to beat all the other non-deterministic processes at accuracy.

Call centre workers are often dreadfully inaccurate as well. Same with support engineers.

Heck even for banking, there are enormous teams fixing every screw up made by some other employee.

davidsgk|1 year ago

I have a question for folks working heavily with AI blackboxes related to this - what are methods that companies use to test the quality of outputs? Testing the integration itself can be treated pretty much the same as testing around any third-party service, but what I've seen are some teams using models to test the output quality of models... which doesn't seem great instinctively
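For what it's worth, one common alternative to grading model output with another model is cheap deterministic property checks; a toy sketch, where the specific checks are illustrative assumptions rather than any real team's eval suite:

```python
def score_summary(source: str, output: str) -> dict:
    """Deterministic property checks on a model-generated summary."""
    checks = {
        "nonempty": bool(output.strip()),
        "shorter_than_source": len(output) < len(source),
        "no_boilerplate_refusal": "as an ai" not in output.lower(),
    }
    checks["pass"] = all(checks.values())
    return checks

print(score_summary("long article text " * 20, "A short summary."))
```

Checks like these don't measure quality directly, but they are reproducible and debuggable, which is exactly the property that model-graded evals lack.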

EasyMark|1 year ago

But a knowledgeable human can take the itinerary and run with it. I know I’ve done that enough with AI-generated code; it’s basically boilerplate. You still run it through the same tests, reviews, and verification you would have had to do anyway.

hx2a|1 year ago

And yet, generative AI also seems to be poor at randomness. When I asked Google Gemini for a list of 50 random words, it gave me a list of 18 unique words, with 16 of them repeated exactly 3 times.

Abyss: 1, Ambiguous: 3, Cacophony: 3, Crescendo: 3, Ephemeral: 3, Ethereal: 3, Euphoria: 3, Labyrinth: 3, Maverick: 3, Melancholy: 3, Mellifluous: 3, Nostalgia: 3, Oblivion: 3, Paradox: 3, Quixotic: 1, Serendipity: 3, Sublime: 3, Zenith: 3
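Taking the counts reported in the parent comment as data, the arithmetic checks out: 50 words returned, only 18 distinct, 16 of them appearing exactly three times.

```python
# Counts as reported in the parent comment.
counts = {
    "Abyss": 1, "Ambiguous": 3, "Cacophony": 3, "Crescendo": 3,
    "Ephemeral": 3, "Ethereal": 3, "Euphoria": 3, "Labyrinth": 3,
    "Maverick": 3, "Melancholy": 3, "Mellifluous": 3, "Nostalgia": 3,
    "Oblivion": 3, "Paradox": 3, "Quixotic": 1, "Serendipity": 3,
    "Sublime": 3, "Zenith": 3,
}

total_words = sum(counts.values())                           # 50 returned
unique_words = len(counts)                                   # 18 distinct
repeated_thrice = sum(1 for c in counts.values() if c == 3)  # 16 triples
print(total_words, unique_words, repeated_thrice)  # 50 18 16
```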

Lerc|1 year ago

Instead of pivoting, can this behaviour be explained by trying lots of different things and then iterating on the ones that show promise?

It's all well and good to say "Make something people want" but for anything that people want usually one of three things is true

1. Someone else is already making it.

2. Nobody knows how to make it.

3. Nobody knows that people want it.

People experimenting with 2 and 3 will have a lot of failures, but the great successes will come from those groups as well.

Sure, every trend in business has a lot of companies going "we should do this because everyone else is". It was a dumb idea in previous trends and it is a dumb idea now. Consider how many companies did that for the internet: there were a lot of poorly thought-out forays into having an internet presence. Of those companies still around, pretty much all now have an internet presence that serves their purposes. They transitioned from "because everyone else is" as their motivation to "we want specific abilities x, y, and z".

Perhaps the best way to get from "everyone else is doing it" to knowing what to build is to play in the pool.

8n4vidtmkvmk|1 year ago

That's exactly what these companies are doing. They're trying a lot of different ideas, and seeing what sticks. The problem is that they're annoying users and causing distrust.

frabjoused|1 year ago

I’m building an integration platform. There’s a thousand ways to deeply embed AI throughout it, both to build integration workflows faster, and to help us build smarter API wrappers faster.

But AI has always been a secondary augmentation to the product itself. It’s a tool; it shouldn’t be the other way around.

FpUser|1 year ago

ChatGPT is very useful to me, to the point that I pay the subscription fee. To me it IS the product.

8organicbits|1 year ago

I haven't found a use for it. What do you use it for?

honestjohn|1 year ago

Yeah, ChatGPT itself is amazing. What I don't understand is, why are other companies paying so much for training hardware now? Trying to make more specialized LLMs now that ChatGPT has proven the technology?

pphysch|1 year ago

Google has been productizing AI for a while now. 2021 Pixels have the Tensor SoC which was explicitly marketed as an AI chip. Chatbots weren't part of the equation back then, but offline image translation, magic eraser, etc certainly were.

captainkrtek|1 year ago

When I see “AI” in the product description of something I’m almost immediately turned off. It’s plastered everywhere for most tech companies now and doesn’t mean anything practically, despite trying to sound like a differentiator.

ei8htyfi5e|1 year ago

While I don't like the blog title, many things said in there rang true for my company (MoveAI.com). We are building an AI-powered moving concierge that can orchestrate your relocation experience end-to-end.

We initially were developing a system that we had hoped could handle everything and eject any workflow issues to a human, so the operations team could kick the machine. We were hoping to avoid an interface altogether on the customer side.

After a few versions and attempts at building this system, we moved towards a traditional app where we focused on building a product people wanted and automate parts of it over time. But even the parts we automated needed an interface for customers to spot check our work. So we found a great designer.

...Before we knew it, we were building a traditional company, with some AI. The company is doing well and people love what we're building, but it's different than we imagined.

We still believe in the long term vision and promise of the technology, but the article is right, this isn't going to be an overnight process unless some new architecture emerges.

In the mean time, we're focused on helping people get from A to B easily using whatever means necessary, because moving f**ing sucks. If you're moving soon or know anybody who is, we'd be happy to help them. -P

m3kw9|1 year ago

Because you need to make money to get there

jerrygoyal|1 year ago

People who claim AI is just snake oil are the farthest from reality.

bamboozled|1 year ago

I guess they've worked out making money is an important part of any business?

candiddevmike|1 year ago

Their moat has evaporated on the B2C side--no friction, plenty of alternatives, overly generous free tier--and B2B is freaked out about non-local usage.

Kuinox|1 year ago

[deleted]

pdpi|1 year ago

> Imagine not understanding that their main way of doing money is through their API for other companies, and not through a product.

Or, more to the point: Their primary product is B2B, not B2C.

wongarsu|1 year ago

And at release ChatGPT was meant as a marketing gimmick. A fun way to interact with a slightly finetuned version of GPT3.5 to showcase how good their models had become.

If anything it's remarkable how much they leaned into this success, building an iOS and Android app, speeding up the models, adding a premium plan, lots of new features, and eventually deprecating their text-completion mode and going all in on chat as the interaction mode for their LLMs.

fire_lake|1 year ago

Meanwhile Amazon will host Llama and other models in AWS (which you are already using) at reasonable rates.

simonw|1 year ago

Their numbers aren't public, but I'm not 100% certain that they're making significantly more money through the API than they are through paid subscribers to their products.

They have a LOT of paid subscribers, and they're signing big "enterprise" deals with companies that have thousands of seats.

echelon|1 year ago

> Imagine not understanding that their main way of doing money is through their API for other companies, and not through a product. They are focused on doing something they are good: good AI models, they let other companies take the risk to build product on top of it, and reap benefits from theses products.

There is no moat in an API-gated foundation model. One LLM is as good as any other, and it'll be a race to the bottom.

The only way to mint a new FAANG is to build a platform that captivates and ensnares the populace, like iPhone or Instagram.

The value in AI will be accrued at the product layer, not the ML infra tooling, not the foundation model. The product layer.

It might be too late to do this with LLMs and voice assistants, though. OpenAI is super distracted, and there's plenty of time for Google, Meta, and Apple to come in and fill the void.

Everyone was too busy selling the creation of gods, or spreading FOMO to elevate themselves to lofty valuations. At the end of the day, business still looks the same as it always has: create value for customers, ideally in a big market where you can own a large slice. LLMs and foundation models are fungible and easy.

byyoung3|1 year ago

Depends on your definition of "good." If "good" means creating the next generation of recommendation algorithms that result in massive technology addiction and a mental health crisis, then yeah.

23B1|1 year ago

Well the whole 'creating gods' thing is just silly fantasy nonsense to cover up for what's really going on, which is novelty and gimmicks.

It's okay, I mean even the internet started out as Charlie_Bit_Me.avi and free porn.

findmore77|1 year ago

Factually incorrect about the genesis of the internet or even the “mass” internet. But, okay.

fragmede|1 year ago

> Charlie_Bit_Me.avi

Charlie Bit Me is of the YouTube generation, so it wasn't passed around as an avi email attachment like some older memes of the previous generation. From that long ago, Exploding Whale comes to mind.

EasyMark|1 year ago

I’m not sure what book you read on the origins of the internet but it is very very far off from the actual truth.