If agents become the users, what happens to the software market?
2 points | nworley | 5 days ago
Agents today scrape HTML, interpret UIs, simulate clicks, and sometimes guess at workflows. It works, but it feels like a transitional phase, like early mobile apps pretending to be desktop sites.
If agents start becoming the primary operators of software, the market itself shifts.
Today software competes for human attention with landing pages, feature comparisons, SEO, ads, and UI polish. But if the “user” is an agent, none of that matters in the same way. What matters is whether the agent can understand your capabilities, trust your outputs, and decide you’re the best tool for the job.
Discovery stops being visual and becomes structural. Ranking stops being marketing-driven and becomes signal-driven.
We’re already seeing hints of this. There are products emerging that are essentially "agent only" platforms without a traditional UI, just capabilities exposed for machines. The human becomes the supervisor. The agent becomes the operator.
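To make "capabilities exposed for machines" a bit more concrete, here's a rough sketch of what structural discovery might look like: a service publishes a machine-readable capability manifest instead of a landing page, and an agent matches against it. Everything here is hypothetical (the manifest shape, the well-known path, the service name) — there's no accepted standard for this yet:

```python
import json

# Hypothetical manifest a service might serve at a well-known path
# (e.g. /.well-known/capabilities.json) so agents can discover it
# structurally instead of parsing a marketing page.
manifest = {
    "name": "acme-db",
    "description": "Hosted Postgres with auth and realtime subscriptions",
    "capabilities": [
        {"id": "sql.query", "input": "SQL string", "output": "rows as JSON"},
        {"id": "auth.signup", "input": "email + password", "output": "session token"},
    ],
    "trust": {"uptime_90d": 0.999, "audited": True},
}

def covers(manifest: dict, needed: set) -> bool:
    """Naive agent-side check: does this service offer every needed capability?"""
    offered = {c["id"] for c in manifest["capabilities"]}
    return needed <= offered

print(covers(manifest, {"sql.query"}))      # True
print(covers(manifest, {"vector.search"}))  # False
```

In that world the manifest is the storefront: the agent never sees your pricing page, it just set-intersects your capability list against its task.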
If that world actually materializes, then a lot of assumptions break:
How do agents discover tools?
Who controls ranking?
What makes something “trusted”?
Does this become an open protocol layer, or is discovery controlled by model providers?
Mostly just thinking out loud about building apps that agents choose, and how they choose them. Curious how others here see it: are we early to an agent-native layer of the web, or is this just an abstraction over CLIs/APIs with new branding?
marssaxman|4 days ago
I recently picked this project back up again, realizing that the tool might still have value even if I am the only human who ever uses it: if I write enough docs and examples, every LLM will scrape them off the web as a matter of course, and curious humans can then simply instruct their agents to try it out for them.
nworley|4 days ago
DISCLAIMER: I’m building LLM Signal around this broader shift. The idea is to understand how models reference and recommend tools/services, and what visibility means when agents are making the choices.
jazz9k|5 days ago
nworley|5 days ago
Would you agree, or do you think this stays human-driven long term?
jonahbenton|5 days ago
nworley|5 days ago
Take Supabase, for example. It’s disproportionately recommended by LLMs when people ask for backend/database stacks. That can’t just be capability, since plenty of tools expose similar primitives. Something in the models’ training data, ecosystem visibility, or reinforcement layer is shaping that ranking.
If agents start choosing tools autonomously, the real leverage point isn’t just “can you describe your capabilities in MCP?” but “how does the agent decide you’re preferred over five near-identical alternatives?”
Do you think that ranking layer sits inside the model providers, or does it become an external reputation network?
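To illustrate the tie-breaking problem: here are two MCP-style tool descriptions (the name/description/inputSchema shape follows MCP's tool schema; the services themselves are invented). Once you strip the vendor name, they're byte-for-byte identical, so any preference between them has to come from a signal outside the capability description:

```python
import json

# Two MCP-style tool descriptions from hypothetical competing vendors.
tools = [
    {
        "name": "supabase.create_table",
        "description": "Create a Postgres table",
        "inputSchema": {
            "type": "object",
            "properties": {"name": {"type": "string"}},
            "required": ["name"],
        },
    },
    {
        "name": "otherdb.create_table",
        "description": "Create a Postgres table",
        "inputSchema": {
            "type": "object",
            "properties": {"name": {"type": "string"}},
            "required": ["name"],
        },
    },
]

def capability_signature(tool: dict) -> str:
    """Canonicalize everything except the vendor name."""
    return json.dumps(
        {"description": tool["description"], "inputSchema": tool["inputSchema"]},
        sort_keys=True,
    )

distinct = {capability_signature(t) for t in tools}
print(len(distinct))  # 1 -- indistinguishable on capabilities alone
```

Whatever breaks that tie (training-data prevalence, a reputation score, a provider-curated registry) is the ranking layer the thread is asking about.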