ondrsh | 11 months ago
> As far as I understand, they would have to pass in either a script to spin up their own server locally (unlikely for your everyday user) or a URL for some live MCP server. This means the host they are using needs a dedicated input on the frontend where the user can enter the URL of the service they want their LLM to be able to talk to. That URL then gets passed to the client, the client calls the server, the server returns the list of available tools, and the client passes those tools to the LLM to be used.
This is precisely how it would work. Currently, I'm not sure how many host applications (if any) actually feature a URL input field to add remote servers, since most servers are local-only for now. This situation might change once authentication is introduced in the next protocol version. However, as you pointed out, even if such a URL field existed, the discovery problem remains.
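To make the flow above concrete, here is a minimal sketch of the two messages involved, using plain JSON-RPC 2.0 (which MCP is built on): the client sends a `tools/list` request, and the server's response is reshaped into the tool format an LLM API expects. The tool key names on the LLM side (`name`/`description`/`input_schema`) follow Anthropic's tool-use format as an assumption; other APIs differ slightly, and transport details and the initialize handshake are omitted.

```python
import json

def make_tools_list_request(request_id: int) -> str:
    """Build the JSON-RPC request the MCP client sends to list a server's tools."""
    return json.dumps({"jsonrpc": "2.0", "id": request_id, "method": "tools/list"})

def tools_for_llm(server_response: str) -> list[dict]:
    """Extract the tools the server advertises and reshape them for an LLM tool-use API."""
    result = json.loads(server_response)["result"]
    return [
        {
            "name": t["name"],
            "description": t.get("description", ""),
            "input_schema": t["inputSchema"],
        }
        for t in result["tools"]
    ]

# Example: a (fake, in-memory) server response advertising one tool.
fake_response = json.dumps({
    "jsonrpc": "2.0", "id": 1,
    "result": {"tools": [{
        "name": "get_weather",
        "description": "Look up current weather",
        "inputSchema": {"type": "object", "properties": {"city": {"type": "string"}}},
    }]},
})
print(tools_for_llm(fake_response))
```

The host application's only new UI responsibility is the URL field; everything after that is this request/response exchange.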
But discovery should be an easy fix, in my opinion. Crawlers or registries (think Google for the web, or Archie for FTP) will likely emerge, so host applications could integrate these external registries and provide simple one-click installs. Apparently, Anthropic is already working on a registry API to simplify exactly this process. Ideally, host applications would automatically detect when helpful tools are available for a given task and prompt users to enable them.
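A hypothetical sketch of that discovery flow: the host queries a registry (an in-memory stand-in here, since no public registry API existed at the time of writing) and suggests servers relevant to the user's task. The entry names, URLs, and the naive keyword match are all made up for illustration; a real registry would do proper ranking.

```python
# Stand-in for a remote server registry; shape and contents are hypothetical.
REGISTRY = [
    {"name": "weather-mcp", "url": "https://example.com/weather/sse",
     "keywords": ["weather", "forecast"]},
    {"name": "github-mcp", "url": "https://example.com/github/sse",
     "keywords": ["git", "github", "issues"]},
]

def suggest_servers(task: str) -> list[dict]:
    """Naive keyword overlap, standing in for a real registry's relevance ranking."""
    words = set(task.lower().split())
    return [entry for entry in REGISTRY if words & set(entry["keywords"])]

print(suggest_servers("what's the weather in Prague?"))
```

With something like this behind a "one-click install" button, the host could surface matching servers the moment the user's task mentions them.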
The problem with local-only servers is that they're hard to distribute (just as local HTTP servers are) and that sandboxing is an issue. One workaround is using WASM for server development, which is what mcp.run is doing (https://docs.mcp.run/mcp-clients/intro), though of course this breaks seamless compatibility.
PeterBrink | 11 months ago
Thanks for the awesome feedback, and congrats on the blog posts, by the way; they are a great read!
josvdwest | 11 months ago