frank20022 | 9 months ago
Can model providers be trusted not to take money from advertisers? Can brands effectively influence how models talk about them and their competitors?
I definitely imagine brands flooding the internet with llms.txt files linked from their home pages but hidden from human visitors, just to boost themselves... what is the antidote?
Can attempts to influence LLMs be detected and reported?
nikin_mat | 9 months ago
On your question about how influence can be detected: that's a big part of what we're working on at MentionedBy.ai. We track brand mentions across multiple models over time and flag sudden shifts, such as a competitor showing up overnight in all responses, or factual distortions creeping in. Think of it as version control plus monitoring for the "AI perception layer."
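The shift detection described above could look something like this minimal sketch: sample each model's answers periodically, compute the fraction that mention a brand, and flag week-over-week jumps. All names and the threshold are hypothetical illustrations, not MentionedBy.ai's actual implementation.

```python
def mention_rate(responses, brand):
    """Fraction of sampled model responses that mention the brand."""
    if not responses:
        return 0.0
    return sum(brand.lower() in r.lower() for r in responses) / len(responses)

def flag_shifts(history, threshold=0.3):
    """history: {model_name: [rate_period1, rate_period2, ...]}.

    Flag any model whose period-over-period mention rate moves by more
    than the threshold, returning (model, period_index) pairs.
    """
    flags = []
    for model, rates in history.items():
        for i in range(1, len(rates)):
            if abs(rates[i] - rates[i - 1]) > threshold:
                flags.append((model, i))
    return flags

# A competitor that suddenly appears in most of one model's answers:
history = {
    "model-a": [0.05, 0.06, 0.55],  # jump in period 2 gets flagged
    "model-b": [0.10, 0.12, 0.11],  # normal noise, no flag
}
print(flag_shifts(history))
```

Real monitoring would need fuzzy brand matching (misspellings, product names) and a statistical baseline rather than a fixed threshold, but the core signal is the same: an abrupt change in how often a model volunteers a brand.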
As for llms.txt abuse: yes, totally possible. We expect a wave of LLM-targeted SEO — structured data, vector bait, invisible prompts, etc. One idea we're exploring is a kind of “LLM spam index”: patterns of over-optimization, or correlation with hallucinations, that could indicate manipulation attempts.