ondrsh | 11 months ago
In traditional applications, you know at design-time which functionality will end up in the final product. For example, you might bundle AI tools into the application (e.g. by providing JSON schemas manually). Once you finish coding, you ship the application. Design-time is where most developers operate, and it's not where MCP excels. Yes, you can add tools via MCP servers at design-time, but you can also include them manually through JSON schemas and code (giving you more control, because you're not restricted by the abstractions that MCP imposes).
MCP-native applications, on the other hand, can be shipped, and then the users can add tools to the application at runtime. In other words, at design-time you don't know which tools your users will add (similar to how browser developers don't know which websites users will visit at runtime). This concept, combined with the fact that AI generalizes so well, makes designing this kind of application extremely fascinating, because you're constantly thinking about how users might end up enhancing your application as it runs.
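To make the contrast concrete, here's a minimal sketch. The client type is boiled down to one method loosely modeled on MCP's tools/list request; the identifiers are hypothetical, not actual SDK types:

    // Design-time: the tool's JSON schema is written by hand and
    // ships with the application.
    const weatherTool = {
      name: "get_weather",
      description: "Get the current weather for a city",
      inputSchema: {
        type: "object",
        properties: { city: { type: "string" } },
        required: ["city"],
      },
    };

    // Runtime (MCP-native): the app asks whatever servers the user
    // has added which tools they offer. Unknown at design-time by
    // definition.
    interface McpClient {
      listTools(): Promise<{ tools: Array<typeof weatherTool> }>;
    }

    async function discoverTools(client: McpClient) {
      const { tools } = await client.listTools();
      return tools; // supplied by the user's servers, not the developer
    }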
As of today, the vast majority of developers aren't building applications of this kind, which is why there's confusion.
paradite | 11 months ago
Most developers are currently building MCP servers that wrap a 3rd-party service or their own service. And in that case, they are still deciding on the tools at design-time, not runtime.
Also, I want to mention that both Cursor and Claude Desktop don't support dynamically toggling tools on/off within an MCP server, which means users can't really pick which tools to expose to the AI. The current implementation exposes all tools within an MCP server.
ondrsh | 11 months ago
I believe you're implying that server developers can focus less on this concept (or sometimes even ignore it) when building a server. This is true.
However, the fact that end-users can now run MCP servers directly — rather than having to wait for developers to bundle them into applications — is a significant paradigm shift that directly benefits MCP server authors.
vykthur | 11 months ago
In your opinion, what percentage of apps might benefit from this model where end users bring their own MCP tools to extend the capabilities of your app? What are some good examples of this? E.g., development tools like Cursor and Windsurf likely apply, but are there others, preferably with end users?
How is the user incentivized to upskill towards finding the right tool to "bring in", installing it, and then using it to solve their problem?
How do we think about the implications of bring-your-own-tools, knowing that unlike plugin-based systems (e.g., Chrome extensions), MCP servers can be unconstrained in behaviour, all running within your app?
ondrsh | 11 months ago
Long term, close to 100%. Basically all long-running, user-facing applications. I'm looking through my dock right now and I can imagine using AI tools in almost all of them. The email client could access Slack and Google Drive before drafting a reply; Linear could access Git, email and Slack in an intelligent manner; and so on. For Spotify I'm struggling right now, but I'm sure there'll soon be some kind of Shazam MCP server you can hum some tunes into.
> How is the user incentivized to upskill towards finding the right tool to "bring in", installing it, and then using it to solve their problem?
This will be done automatically. There will be registries that LLMs will be able to look through. You just ask the LLM nicely to add a tool; it then looks one up and asks you for confirmation. Running servers locally is an issue right now because local deployment is non-trivial, but this could be solved via something like WASM.
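None of this is standardized yet, so treat the following as a sketch of the flow rather than a real API; the registry endpoint and response shape are invented:

    // Hypothetical: the LLM searches a registry, and the user
    // confirms before anything is installed.
    type RegistryEntry = { name: string; url: string; description: string };

    async function addServerForUser(
      query: string,
      confirm: (msg: string) => Promise<boolean>,
    ): Promise<RegistryEntry | null> {
      const res = await fetch(
        `https://registry.example/search?q=${encodeURIComponent(query)}`,
      );
      const entries: RegistryEntry[] = await res.json();
      const best = entries[0];
      if (!best) return null;
      // The user stays in the loop: nothing is added silently.
      return (await confirm(`Add MCP server "${best.name}" (${best.url})?`))
        ? best
        : null;
    }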
> How do we think about the implications of bring-your-own-tools, knowing that unlike plugin-based systems (e.g., Chrome extensions), MCP servers can be unconstrained in behaviour, all running within your app?
There are actually 3 different security issues here.
#1 is related to the code the MCP server is running, i.e. the tools themselves. When running MCP servers remotely, this obviously won't be an issue; when running locally, I hope WASM can solve it.
#2 is that MCP servers might be able to extract sensitive information via tool call arguments. Client applications should thus ask for confirmation for every tool call (see the sketch after this list). This is the hardest to solve because in practice, people won't bother checking.
#3 is that client applications might be able to extract sensitive information from local servers via tool results (or resources). Since users have to set up local servers themselves right now, this is not a huge issue yet. Once LLMs set them up, they will need to ask for confirmation.
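For #2, the client-side mitigation looks roughly like this. The tool-call shape loosely follows MCP's tools/call parameters; everything else is illustrative:

    // Every tool call passes through a user-facing gate that shows
    // the exact arguments before they leave the client.
    type ToolCall = { name: string; arguments: Record<string, unknown> };

    async function guardedCall(
      call: ToolCall,
      execute: (c: ToolCall) => Promise<unknown>,
      confirm: (msg: string) => Promise<boolean>,
    ): Promise<unknown> {
      const preview = `${call.name}(${JSON.stringify(call.arguments)})`;
      // The weak point: the gate only helps if users actually read this.
      if (!(await confirm(`Allow tool call ${preview}?`))) {
        throw new Error("Tool call rejected by user");
      }
      return execute(call);
    }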
amerine | 11 months ago
Well said.
freeone3000 | 11 months ago
ondrsh | 11 months ago
HATEOAS is great for web-like structures because each response includes not only the content but also all actions the client can take (usually via links). This is critical for architectures without built-in structure, unlike Gopher (which has menus) or FTP and Telnet (which have stateful connections), because otherwise a client arriving at some random place has no indication of what to do next. MCP tackles this by providing a stateful connection (similar to FTP) and is now moving toward static entry points similar to Gopher menus.
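Concretely, a HATEOAS response carries its own affordances, so a client that lands on it cold still knows which actions exist. A made-up example (the resource and link relations are invented):

    // Content plus next actions in one payload: the client discovers
    // what it can do by reading _links, not out-of-band documentation.
    const order = {
      id: "order-42",
      status: "pending",
      _links: {
        self:   { href: "/orders/42", method: "GET" },
        pay:    { href: "/orders/42/payment", method: "POST" },
        cancel: { href: "/orders/42", method: "DELETE" },
      },
    };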
I specifically wrote about why pure HATEOAS should come back instead of MCP: https://www.ondr.sh/blog/ai-web
TeMPOraL | 11 months ago
(Not even webshit is best served by REST, as evidenced by approximately every "REST" API out there, designed as RPC over HTTP pretending it's not.)
kblissett | 11 months ago
ondrsh | 11 months ago
Plugins have pre-defined APIs. You code your application against the plugin API and plugin developers do the same. Functionality is consumed directly through this API; this is level 1.
MCP is a meta-protocol. Think of it as an API that lets arbitrary plugins announce their APIs to the application at runtime. MCP thus lives one level above the plugin's API level. MCP is just used to exchange information about the level 1 API so that the LLM can then call the plugin's level 1 API at runtime.
This only works because LLMs can understand and interpret arbitrary APIs. Traditionally, developers needed to understand an API at design-time, but now LLMs can understand an API at runtime. And because this can now happen at runtime, users (instead of developers) can add arbitrary functionality to applications.
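A rough way to see the two levels in code; the interfaces are illustrative, not actual MCP SDK types:

    // Level 1: a classic plugin API, fixed at design-time. Host and
    // plugins both compile against this exact interface.
    interface SpellCheckPlugin {
      check(text: string): string[];
    }

    // Meta-level (what MCP operates on): the protocol never defines
    // check() itself; it defines how a server describes whatever it
    // offers, as data the LLM interprets at runtime.
    interface ToolDescription {
      name: string;
      description: string;
      inputSchema: object; // JSON Schema, read by the LLM, not the compiler
    }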
I hate plugging my own blog again, but I wrote about exactly that before; maybe it helps you: https://www.ondr.sh/blog/thoughts-on-mcp
aeonik | 11 months ago
sebazzz | 11 months ago