top | item 47163158

nextlevelwizard | 5 days ago

Is she paying for it? That is the only question that matters in the end.

For myself, I use LLMs daily, heavily on some days, and I _did_ pay the €20/mo subscription for ChatGPT, but with the latest model I cannot justify that anymore.

4o was amazingly good. Even if it had some parasocial issues with some people, it actually did what I expect an LLM to do. The quality of 5.whatever has dropped drastically: it no longer searches the web for things it doesn't know, but guesses instead.

Even worse is the tone it uses: "Let's look at this calmly" and other repeated sentences are just off-putting and make the conversation feel like the LLM thinks I am constantly about to kill myself, which is not what I want from my LLM.

sigmoid10|5 days ago

>Is she paying for it? That is the only question that matters in the end.

Don't underestimate advertising. No one pays for Facebook or Google search, yet the ad business, with a couple billion users, seems profitable enough to fund frontier LLM research and inference infrastructure as a side gig at these companies. Google only rushed out AI Overviews because they saw ChatGPT eating their market share in information retrieval, and Zuck is literally panicking about the fact that users share more personal details with OpenAI than on his doomscrolling attention sinks.

SlinkyOnStairs|5 days ago

> Don't underestimate advertising.

OpenAI is talking out of their ass with their advertising plans. Meta and Google are an advertising duopoly: extremely anti-competitive, and basically defrauding their own customers. OpenAI can't just replicate that.

Worse still is that OpenAI has no competitive edge. All the hype around their advertising plans is based on the idea that they can blend the ads right into the response, a turbocharged version of Native Advertising.

This is explicitly illegal. Very explicitly.

The US FTC may have been declawed by the current US government, but the rest of the West will nuke them from orbit over it. Doubtless OpenAI will try some stunt like marking the entire LLM response as "this is an ad", but that won't satisfy the regulators.

And it only gets worse from there. An LLM hallucinating product features will invoke regulator wrath as well, and an LLM deciding to cut the ad copy short will invoke the wrath of the advertiser.

> Yet the ad business with a couple billion users seems profitable enough to fund frontier LLM research and inference infrastructure as a side-gig in these companies

Also important: Not anymore. The tech giants are now issuing quite a lot of debt to pay for the AI plans.

nextlevelwizard|5 days ago

Maybe I am underestimating how suggestible average people are. As someone who has never in my life clicked on an ad, I just can't see ads being anything but a deterrent to using the service.

dahcryn|5 days ago

Not necessarily, if OpenAI manages to monetize free users. Could be through advertising, or through integrations with marketplaces on commission (e.g. order your next Hello Fresh through ChatGPT? Get recommended a hotel?)

They could succeed where Alexa failed. A free user can even bring in more than a paid user: look at platforms like Spotify, where apparently a large chunk of free users generate more income through ads than they would if they paid.

nextlevelwizard|5 days ago

We are still a long way from ordering stuff through an LLM.

carlosjobim|5 days ago

Most potential customers wouldn't ever think in terms of "justifying" a €20 purchase when the product is great.

ChatGPT (and its competitors) is an incredibly high-value tool, and €20 per month is nothing for somebody who wants or needs it. It's just a matter of whether they use it enough to start hitting the daily limits.

nextlevelwizard|4 days ago

This is why people are constantly moaning about paying for too many subscriptions, and why there are companies whose whole business is reminding you not to pay for stupid subscriptions.

As if paying constantly for a thing is normal and not dystopian

fuzzfactor|4 days ago

>no longer searches web for things it doesn't know, but instead guesses.

This could very well have been a cost-reduction effort: trying to simulate what it was doing before, at lower cost.

Somebody must think training has already seen enough of the web, or maybe there is too much slop out there now that nobody had a contingency for.

Then you've got tighter guardrails to make it more palatable for a wider audience.

I guess different people draw the line differently, but when a product goes from being worth money to not being worth it anymore, that could be an enshittification effect.

Especially if things like that accelerate.