top | item 47212607


notepad0x90 | 5 hours ago

Strongly disagree, even though that means I'm swimming upstream here.

Unlike cli flags, with MCP I can tune the tool's descriptions more easily (for my own MCPs at least) than a cli flag's. You can only put so much in a cli --help output. The error handling and debuggability are also nicer.

Heck, I would even favor writing an MCP tool to wrap cli commands. It's easier for me to ensure dangerous flags or parameters aren't used, and that concrete restrictions and checks are in place. If you control the cli tools it isn't as bad, but if you don't, and it isn't a well-known cli tool, the agent might need help, like having vague errors explained to it a bit.
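Something like this toy wrapper is what I mean (the flag names and the stand-in `echo` command are made up; the actual MCP tool registration is omitted):

```python
import subprocess

# Hypothetical deny-list for the tool being wrapped; adjust per tool.
DENIED_FLAGS = {"--force", "--no-verify", "-rf"}

def run_wrapped(args: list[str]) -> str:
    """Run a wrapped cli tool, rejecting dangerous flags up front and
    returning a clear, agent-friendly error instead of a vague one."""
    bad = [a for a in args if a in DENIED_FLAGS]
    if bad:
        raise ValueError(f"refusing dangerous flag(s): {', '.join(bad)}")
    # Passing an argument list (no shell=True) also avoids shell injection.
    result = subprocess.run(["echo", *args], capture_output=True, text=True, check=True)
    return result.stdout.strip()
```

The check runs before the command ever executes, so the restriction holds no matter what arguments the agent invents.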

MCP is more like "REST" or "gRPC"; at the simplest level, just think of it as a wrapper.

You mentioned redirecting to files, but what if the output is too large? You'll still burn tokens that way. With MCP, if the output is too much you can count the tokens and limit it, or, better yet, paginate: the agent gets some results, sees how many there are in total, and either re-runs the tool with parameters that yield fewer results or consumes them page by page.
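A toy version of that pagination idea (the names and shape of the response are mine, not anything MCP mandates):

```python
def paginate(results: list[str], page: int, page_size: int = 20) -> dict:
    """Return one page of results plus enough metadata for the agent to
    decide whether to narrow its query or just fetch the next page."""
    total = len(results)
    start = page * page_size
    return {
        "total": total,
        "page": page,
        "pages": -(-total // page_size),  # ceiling division
        "items": results[start:start + page_size],
    }
```

The agent sees `total` and `pages` alongside the first page, which is exactly the signal it needs to decide between narrowing the query and paging through.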


bonoboTP | 5 hours ago

If you want a validation layer, why not write a cli that wraps the other cli?

notepad0x90 | 39 minutes ago

That's what the MCP server is, except I don't always want a cli.

If I need to call an API on top of a cli tool, I don't need a second wrapper or to extend my existing one. You're suggesting I recreate everything MCP does, just so... it's my own?

MCP is just a way to use wrappers other people have built, and to easily manage wrapping "tools": those could be cli tools, API calls, database queries, etc.

cli tools aren't aware of the context window either; they're not keeping track of it. I might want my cli tool to output lots of useful text, but maybe I don't want all of that going to the LLM, to save on tokens. Sure, I could create another cli tool to wrap my cli tool, but now I have two cli tools to maintain. I'd prefer to have all the wrapping and pre-LLM cleanup done in one consistent place. The instructions letting the LLM know what tools, parameters, etc. are available are also defined in a consistent way, instead of me inventing my own scheme. I'd rather just focus on getting a usable agent.
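That pre-LLM cleanup can be as simple as this sketch (the `Progress:` prefix is a made-up example of noise, and a real server would count tokens with the model's tokenizer rather than characters):

```python
def trim_for_llm(text: str, max_chars: int = 4000) -> str:
    """Crude pre-LLM cleanup: drop blank lines and noisy progress output,
    then cap the size so a chatty tool can't blow the context window."""
    lines = [
        ln for ln in text.splitlines()
        if ln.strip() and not ln.startswith("Progress:")
    ]
    cleaned = "\n".join(lines)
    if len(cleaned) > max_chars:
        cleaned = cleaned[:max_chars] + "\n[output truncated]"
    return cleaned
```

The cli tool keeps its human-friendly verbose output; only the LLM-facing copy gets filtered, and only in one place.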

I don't get the issue people in this thread have with MCP. Is there some burden about it I haven't run into? It's pretty easy to set one up.

woctordho | 3 hours ago

You can just write a README.md and put it along with the CLI

notepad0x90 | 30 minutes ago

Because that's a consistent and reliable way of doing it? What happens when I have to use something that can't be done via a cli, or when I have lots of small use cases (like I sometimes do with MCP servers: lots of tiny functions)? Do I create a separate README for each of them and manage the mess? What exactly is the issue with MCP? Is it too well organized?

I mean, technically I could be using cli tools to browse HN as well, I guess. curl would do fine, I suppose, but that'd be too annoying. Why not use the best tool for the task? As far as I'm concerned, an stdio MCP server is a cli tool; it just happens to be an integration layer that can run other cli tools, or do other things as it makes sense.

And FFS! I know jq can do wonderful things, but I'd seriously question anyone's competency if they're building a production code base that relies on a tangled mess of piped jq commands when they could just write a python function to parse, validate, and process the content. And don't get me started on the risks of letting an agent run commands unchecked. What happens when your agent runs your cli tool with user-supplied arguments and you forgot to make sure command injection isn't a thing? That can happen with MCP as well, but in many cases you shouldn't just run cli commands; you should call libraries or APIs, or process data files directly instead. You wouldn't call the sqlite3 command when you can just use the library/module.
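The sqlite3 point, concretely: calling the stdlib module with a parameterized query keeps user input out of the SQL (and out of any shell) entirely. A minimal sketch, with a hypothetical `users` table:

```python
import sqlite3

def find_user(db_path: str, name: str) -> list[tuple]:
    """Look up users via the sqlite3 module with a parameterized query,
    instead of interpolating input into a `sqlite3` shell command."""
    conn = sqlite3.connect(db_path)
    try:
        # The ? placeholder is bound by the driver; injection attempts
        # are just treated as a literal (non-matching) name.
        return conn.execute(
            "SELECT id, name FROM users WHERE name = ?", (name,)
        ).fetchall()
    finally:
        conn.close()
```

Compare that with building a `sqlite3 mydb.db "SELECT ..."` string for a subprocess: one hostile argument and you have both SQL injection and shell quoting to worry about.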