top | item 46552553

kburman|1 month ago

This analysis dismisses MCP by focusing too narrowly on local file system interactions. The real value isn't just running scripts; it's interoperability.

MCP allows any client (Claude, Cursor, IDEs) to dynamically discover and interact with any resource (Postgres, Slack) without custom glue code. Comparing it to local scripts is like calling USB a fad because parallel ports worked for printers. The power is standardization: write once, support every AI client.

Edit:

To address the security concerns below: MCP is just the wire protocol like TCP or HTTP. We don't expect TCP to natively handle RBAC or prevent data exfil. That is the job of the application/server implementation.

Aldipower|1 month ago

> To address the security concerns below: MCP is just the wire protocol like TCP or HTTP. We don't expect TCP to natively handle RBAC or prevent data exfil. That is the job of the application/server implementation.

That is simply incorrect. It is not a wire protocol. Please do not mix terminology. MCP servers and clients communicate via JSON-RPC, which is the wire protocol. And TCP, which you describe as a wire protocol, isn't a wire protocol at all: TCP is a transport protocol. It isn't only philosophy; you need some technical knowledge too.
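
For concreteness, the layering being argued about looks roughly like this: a sketch of the MCP `tools/list` discovery exchange, with the JSON-RPC 2.0 envelope as the wire format. The field names follow the MCP spec; the tool itself (`query_database`) is a hypothetical example, built here with stdlib Python only:

```python
import json

# JSON-RPC 2.0 is the wire format MCP messages travel in.
# A client asking a server which tools it exposes (sketch, per the MCP spec):
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# A server's reply: each tool advertises a name, description, and a JSON
# Schema for its inputs -- which is what lets any client discover it
# dynamically instead of shipping custom glue code.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "query_database",  # hypothetical tool name
                "description": "Run a read-only SQL query.",
                "inputSchema": {
                    "type": "object",
                    "properties": {"sql": {"type": "string"}},
                    "required": ["sql"],
                },
            }
        ]
    },
}

wire = json.dumps(request)  # what actually crosses stdio or HTTP
```

RBAC, exfil prevention, etc. live in whatever code sits behind that `result`, not in the envelope itself, which is the distinction both comments are circling.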

kburman|1 month ago

Fair point on the strict terminology, I was using 'wire protocol' broadly to mean the communication standard vs. the implementation.

A more precise analogy is likely LSP (Language Server Protocol). MCP is to AI agents what LSP is to IDEs. LSP defines how an editor talks to a language server (go to definition, hover, etc.), but it doesn't handle file permissions or user auth; that's the job of the OS or the editor.

smurda|1 month ago

Would you say MCP is a protocol (or standard) similar to how REST is a protocol in that they both define how two parties communicate with each other? Or, in other words, REST is a protocol for web APIs and MCP is a protocol for AI capabilities?

embedding-shape|1 month ago

> MCP allows any client (Claude, Cursor, IDEs) to dynamically discover and interact with any resource (Postgres, Slack) without custom glue code.

I don't think MCP is what actually enables that, it's LLMs that enable that. We already had the "HTTP API" movement, and it still didn't allow "without custom glue code", because someone still had to write the glue.

And even with MCP, something still has to glue things together, and currently it is the LLMs that do so. MCP probably makes this a bit easier, but OpenAPI or something else could just as easily have done that. The hard and shitty part is still being done by an LLM, and we don't need MCP for that.

vidarh|1 month ago

The thing is, current models are good enough that you can mostly achieve the same by just putting a markdown file[1] on your server that describes their API, and tell people to point their agent at that.

For complex interactions it might be marginally more efficient to use an MCP server, but current SOTA models are good at cobbling together tools, and unless you're prepared to spend a lot of time testing how the models actually end up interacting with your MCP tools you might find it better to "just" describe your API to avoid a mismatch between what you expose and what the model thinks it needs.

[1] Slightly different, but fun: for code.claude.com, you can add ".md" to most paths and get back the docs as a Markdown file; Claude Code is aware of this and uses it to get docs about itself. E.g. https://code.claude.com/docs/en/overview.md

falloutx|1 month ago

Adding MCP servers isn't free: they take up space in your context, and if you're working at anything bigger than a startup, no company allows its workers to connect to other companies' MCPs, since those can easily be made into a data-exfil machine.

jauntywundrkind|1 month ago

I'm not sure what the use case is. The LLM is the user's agent and can coordinate inter-MCP work itself; it can feed data across MCPs.

the_mitsuhiko|1 month ago

> MCP allows any client (Claude, Cursor, IDEs) to dynamically discover and interact with any resource (Postgres, Slack) without custom glue code.

My agent writes its own glue code, so the benefit does not seem to really exist in practice. Definitely not for coding agents, and increasingly less for non-coding agents too. Give it a file system and bash in a sandbox and you have a capable system. Give it some skills and it will write itself whatever is needed to connect to an API.

Every time I think I have a use case for MCP I discover that when I ask the agent to just write its own skill it works better, particularly because the agent can fix it up itself.

aschuth|1 month ago

The skill/CLI argument misses what MCP enables for interactive workflows. Sure, Claude can shell out to psql. But MCP lets you build approval gates, audit logs, and multi-step transactions that pause for human input.

Claude Code's --permission-prompt-tool flag is a good example. You point it at an MCP server, and every permission request goes through that server instead of a local prompt. The server can do whatever: post to Slack, require 2FA, log to an audit trail. Instead of "allow all DB writes" or "deny all," the agent requests approval for each mutation with context about what it's trying to do.

MCP is overkill for "read a file" but valuable when you need the agent to ask permission, report progress, or hand off to another system mid-task.
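
The approval-gate idea sketched above can be reduced to a small policy function. This is a hand-rolled illustration, not Claude Code's actual `--permission-prompt-tool` contract; the function name, the tool names, and the read-only allowlist are all hypothetical:

```python
import json

# Hypothetical policy: auto-approve read-only tools, pause for a human on
# anything that mutates state. The tool names here are made up.
READ_ONLY_TOOLS = {"read_file", "list_tables", "select_query"}

def handle_permission_request(tool_name, tool_input, ask_human):
    """Decide whether the agent may run `tool_name`.

    `ask_human` is any callable that surfaces the request (Slack post,
    2FA challenge, ...) and returns True/False -- that pluggable part is
    what routing permissions through an MCP server buys you.
    """
    audit_entry = {"tool": tool_name, "input": tool_input}
    if tool_name in READ_ONLY_TOOLS:
        decision = True                    # reads pass without a prompt
    else:
        decision = ask_human(audit_entry)  # mutations pause for approval
    audit_entry["approved"] = decision
    return decision, json.dumps(audit_entry)  # decision + audit-log line

# A write request goes to the human; a read does not:
ok_read, _ = handle_permission_request(
    "select_query", {"sql": "SELECT 1"}, ask_human=lambda req: False)
ok_write, log_line = handle_permission_request(
    "drop_table", {"name": "users"}, ask_human=lambda req: False)
```

The point is the granularity: instead of a blanket allow/deny, each mutation arrives with context (`audit_entry`) that the approver can inspect.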

p337|1 month ago

You end up wasting tokens on implementation, debugging, execution, and parsing when you could just use the tool (the tool description gets used instead).

Also, once you give it this general access, it opens up essentially infinite directions for the model to go in. Repeatability and testing become very difficult in that situation. One time it may write a bash script to solve the problem; the next, it may want to use Python and pip install a few libraries to solve that same problem. Yes, both are valid, but if you desire a particular flow, you need to create a prompt for it that you hope it will comply with. It's about shifting certain decisions away from the model so that it has more room for the stuff you need it to do, while ensuring that performance is somewhat consistent.

For now, managing the context window still matters, even if you don't care about efficient token usage. So burning 5-10% on re-writing the same API calls makes the model dumber.

CuriouslyC|1 month ago

Interoperability? MCP has zero "interoperability"; the model has to mash together everything manually.

That's why Anthropic keeps walking back MCP towards just code. They'd run it back, but that would be embarrassing.

thomasfromcdnjs|1 month ago

Yeah, it might be useful for some people to stop thinking about MCP in relation to agentic harnesses. Think more about environments you don't control, such as Claude Web or ChatGPT. MCP is just a standard (fallible like most standards), but it has gained traction and is likely to stick around. It's extremely useful for non-technical people if all their apps/agents are communicating with each other via MCP.

Useful for service providers who want to expose themselves to technical consumers without having to write custom SDKs that consume their RESTful/GraphQL endpoints.

The best implementation of MCP is when you won't even hear about it.

I definitely agree that it is currently pretty shit and unnecessary for agentic coding; CLIs or some other solutions will come along. The premise stays the same, though: searchable, discoverable, and executable tools in your agentic harness are likely going to be a very good thing, instead of having to document in claude.md which OS- and CLI-specific commands it should run (even though that approach seems far more powerful and sensible at this point in time).

h33t-l4x0r|1 month ago

Doesn't that require a complete lack of concern on the part of the postgres side? I feel like I'm missing something in terms of why anyone would even ever allow that.

ACCount37|1 month ago

In the same way giving an LLM shell access requires a complete lack of concern.

You can give an LLM a shell into a container sandbox with basically nothing in it, or root shell on a live production server, or anything in between. Same goes for how much database access you want to give an LLM with your MCP shims.

apothegm|1 month ago

With a read only account, with access only to certain safe tables and views, for querying.
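
The read-only-account idea can be shown concretely. The Postgres SQL below is illustrative (role and table names are made up), and the runnable part demonstrates the same principle with stdlib SQLite's read-only URI mode:

```python
import os
import sqlite3
import tempfile

# The Postgres version of the idea (illustrative SQL, hypothetical names):
#   CREATE ROLE llm_reader LOGIN PASSWORD '...';
#   GRANT USAGE ON SCHEMA public TO llm_reader;
#   GRANT SELECT ON safe_table, safe_view TO llm_reader;
# The MCP server then connects as llm_reader, so the LLM physically
# cannot mutate anything, whatever it decides to try.

# Stdlib demo of the same principle via SQLite's mode=ro:
path = os.path.join(tempfile.mkdtemp(), "demo.db")
rw = sqlite3.connect(path)
rw.execute("CREATE TABLE safe_table (id INTEGER, note TEXT)")
rw.execute("INSERT INTO safe_table VALUES (1, 'ok')")
rw.commit()
rw.close()

ro = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
rows = ro.execute("SELECT note FROM safe_table").fetchall()  # reads succeed
try:
    ro.execute("INSERT INTO safe_table VALUES (2, 'nope')")  # writes fail
    blocked = False
except sqlite3.OperationalError:
    blocked = True
ro.close()
```

Enforcing the restriction at the database layer, rather than in the prompt, is what makes "just give it read access" a real safety boundary.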

kobalsky|1 month ago

You can ask the LLM for an ad-hoc report: it can look at the schema, run the queries, and give you the results. Of course you can just give it read access.

lateral_cloud|1 month ago

[deleted]

TeodorDyakov|1 month ago

It is really funny to me that in 2026 a coherent, grammatically correct response is assumed to be written by an AI. Oh how the tables have turned.