jdoliner | 3 months ago
Now there are lots of variables that can be tweaked here, so it's possible to get it to work. But there's a lot less room for error.
btheunissen|3 months ago
As someone outside of the ad-tech space it blows my mind how much Instagram and Google ads cost these days, and OpenAI would certainly want to price their ad offering as more “premium” (see: $$$).
ggregoire|3 months ago
Which is great… that's why I don't use ChatGPT at all. Having an LLM summary plus a list of websites to deepen the search if I need to is just a superior user experience, IMO.
Maxion|3 months ago
Within web-search and product-search requests there is undoubtedly A LOT of overlap between people's queries. It would not be infeasible to keep one long, high-quality answer generated by e.g. ChatGPT 5.1 cached, first throw the incoming user request into some kind of classifier, and use a smaller LLM to judge whether the cached answer is close enough to the new query.
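The cache-plus-classifier idea above can be sketched roughly as follows. This is a toy illustration, not any vendor's actual system: a bag-of-words cosine similarity stands in for a real embedding model, and the "small LLM judge" step is only indicated in a comment; the threshold, cache contents, and function names are all made up for the example.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: bag-of-words term counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Cache of (query, precomputed long-form answer from the big model).
CACHE = {
    "best budget dining table": "…long cached answer from the big model…",
}

def answer(query: str, threshold: float = 0.6) -> str:
    # Find the most similar cached query.
    q = embed(query)
    best, score = None, 0.0
    for cached_q, cached_a in CACHE.items():
        s = cosine(q, embed(cached_q))
        if s > score:
            best, score = cached_a, s
    if best is not None and score >= threshold:
        # In a real system, a smaller LLM would judge here whether the
        # cached answer actually fits before returning it.
        return best
    # Cache miss: fall through to the expensive model.
    return "…call the expensive model…"
```

With this sketch, `answer("budget dining table")` hits the cached entry (high lexical overlap), while an unrelated query falls through to the expensive path.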
malthaus|3 months ago
Combine this with the fact that I have disposable income.
I can't fathom how much advertisers are willing to pay to put themselves in front of my eyes vs. a Google search for "dining table".
chrismustcode|3 months ago
Go on something like OpenRouter with GPT-5.1, use the chat, then check the billing, and you'll see an average-joe query costs something like $0.00102.
You're quoting figures from articles about ChatGPT's initial release in 2022.
pengaru|3 months ago
That will no doubt be worth more than Google's ~$0.02-per-search revenue, since the users will be completely incapable of separating the wheat from the chaff.