top | item 47015070

13pixels | 16 days ago

The 'without SEO bias' point is the most interesting part here. We're already seeing users trust LLM synthesis more than direct search precisely because it (theoretically) filters out the affiliate spam.

But aren't we just moving the problem up a layer? Brands are aggressively optimizing for LLM visibility now ('Generative Engine Optimization'). If your framework relies on LLM training data or RAG, it's still downstream of whatever content dominates the web.

Curious if you've seen any 'hallucinated brands' or persistent bias towards certain vendors in your tests across models? e.g. does Gemini favor Google products in your laptop comparison?

boundedreason | 14 days ago

I have not seen any brands hallucinated. The biggest issue was the math, as you pointed out. I added if-then-else-if instructions to the prompt to force the use of code/Python for calculations.
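For illustration, the conditional rule described might look something like this (a hypothetical paraphrase, not the actual prompt used):

```
If the answer requires any arithmetic (totals, price-per-unit,
percentage differences), then write and execute Python code to
compute it; else if a comparison is purely qualitative, answer
directly without code.
```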

The free version of Gemini functions less well than the others because it executes my instructions less rigorously, but I have not seen it favor certain products over others. I'm also not sure how I could track that, though, without running the same case study several dozen times to see if anything statistically significant comes up or changes.
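One low-effort way to track that would be to tally which products appear across repeated runs of the same case study. A minimal sketch in Python, with made-up recommendation lists standing in for actual model output:

```python
from collections import Counter

# Hypothetical recommendation lists from repeated runs of the same
# case study; in practice these would be parsed from LLM responses.
runs = [
    ["ThinkPad X1", "MacBook Air", "Pixelbook"],
    ["MacBook Air", "ThinkPad X1", "Dell XPS"],
    ["ThinkPad X1", "Dell XPS", "MacBook Air"],
]

# Count how often each product is recommended; one brand appearing
# far more often than its peers would hint at model bias.
counts = Counter(brand for run in runs for brand in run)

for brand, n in counts.most_common():
    print(f"{brand}: {n}/{len(runs)} runs")
```

With enough runs, a simple chi-squared test against a uniform expectation would tell you whether the skew is statistically significant.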

At the end of the day, I would say my approach is best for getting you from 5% knowledge on a topic to an 80% level, which is a much more informed and objective place from which to make a decision and finish out with your own "eyes-on" research.