top | item 44110809


oliviergg | 9 months ago

Thank you for this release. I believe your library is a key component to unlocking the potential of LLMs without the limitations/restrictions of existing clients.

Since you released version 0.26 alpha, I’ve been trying to create a plugin to interact with some MCP servers, but it’s a bit too challenging for me. So far, I’ve managed to connect and dynamically retrieve and use tools, but I’m not yet able to pass parameters.


simonw | 9 months ago

Yeah I had a bit of an experiment with MCP this morning, to see if I could get a quick plugin demo out for it. It's a bit tricky! The official mcp Python library really wants you to run asyncio and connect to the server and introspect the available tools.

mihau | 9 months ago

Hi Simon!

I'm a heavy user of the llm tool, so as soon as I saw your post, I started tinkering with MCP.

I’ve just published an alpha version that works with stdio-based MCP servers (tested with @modelcontextprotocol/server-filesystem) - https://github.com/Virtuslab/llm-tools-mcp. It's at a very early stage, so please make sure to use it with the --ta option (manually approve every tool execution).

The code is still messy and there are a couple of TODOs in the README.md, but I plan to work on it full-time until the end of the week.

Some questions:

- Where do you think mcp.json should be stored?
- It might be a bit inconvenient to specify tools one by one with -T. Do you think adding an --all-tools flag or supporting glob patterns like -T name-prefix* in llm would be a good idea?
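The glob idea above could be sketched with Python's stdlib fnmatch, matching a pattern against the tool names retrieved from a server instead of listing each one with -T. The tool names here are made up for illustration.

```python
from fnmatch import fnmatch

# Hypothetical tool names retrieved from an MCP server.
tool_names = ["fs_read_file", "fs_write_file", "web_search"]

# Select every tool matching the pattern, rather than naming each with -T.
pattern = "fs_*"
selected = [name for name in tool_names if fnmatch(name, pattern)]
print(selected)  # ['fs_read_file', 'fs_write_file']
```

An --all-tools flag would be the degenerate case of this, equivalent to the pattern "*".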