In the case of MCPs, this post is indeed a quick primer. But from a coding standpoint, and despite the marketing that Agent/MCP development simplifies generative LLM workflows, it’s a long coding mess, and it’s hard to tell whether it’s even worth it. It’s still the ReAct paradigm at a low level, and if you couldn’t find a use case for tools then, nothing has changed, other than the Agent/MCP hype making things more confusing and handing more ammunition to AI detractors.
Yes, I read this post and was actually emotionally affected by a post about coding. I was surprised how sad I felt. I’ve been around for a long time but this truly feels like the best era if you like gluing trash to other trash and shipping it.
MCP is great for when you’re integrating tools locally into IDEs and such. It’s a terrible standard for building more robust applications with multi-user support. Security and authentication are completely lacking.
99% of people wouldn’t be able to find the API keys you need to feed into most MCP servers.
I recently finished a LangGraph class on Deeplearning.ai about a week after it came out. Even then, the provided notebook example didn't work and I had to debug it to pass. I had great hopes for LangChain in 2024, but their product decisions around LCEL, and the complete lack of a discernible roadmap that doesn't constantly break things, made me move away from them.
If you need to define and write the functions to calculate interest… what exactly is the LLM bringing to the table here? I feel like I’m missing something.
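The division of labor behind this complaint can be made concrete: in a typical tool-calling setup, the developer writes the actual logic, and the model only picks a function and fills in its arguments. A minimal sketch (the function and the dict shape are illustrative, not from any particular framework):

```python
def calculate_interest(principal: float, rate: float, years: int) -> float:
    """Compound interest -- the 'tool' the developer still has to write."""
    return principal * (1 + rate) ** years - principal

# What the LLM actually contributes: a structured call like this,
# extracted from a natural-language request.
llm_tool_call = {"name": "calculate_interest",
                 "arguments": {"principal": 1000.0, "rate": 0.05, "years": 10}}

# The harness dispatches the call; the arithmetic itself is all human-written.
tools = {"calculate_interest": calculate_interest}
result = tools[llm_tool_call["name"]](**llm_tool_call["arguments"])
print(round(result, 2))  # -> 628.89
```

The model's contribution is the argument extraction and routing, which is exactly the point of the comment: the domain logic doesn't come for free.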
The units for the free memory are interestingly wrong:
'Executing shell command: free -m'
The total system memory is 64222 bytes, with used (available) 8912 bytes.
which, given that there seems to be no way to specify any data structure or typing in this MCP interface, is hardly surprising!
sunpazed|10 months ago
Authentication, session management, etc., should be handled outside of the standard, and outside of the LLM flow entirely.
I recently mused on these here; https://github.com/sunpazed/agent-mcp/blob/master/mcp-what-i...
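One way to read "outside of the LLM flow": credentials and authorization live in a server-side wrapper the model never sees, checked before any tool runs. A minimal sketch with hypothetical names (`ALLOWED_TOOLS`, `handle_tool_call`, and the `SERVICE_API_KEY` variable are all illustrative):

```python
import os

# Hypothetical registry: which roles may call which tools.
ALLOWED_TOOLS = {"admin": {"get_memory"}, "guest": set()}

def get_memory(api_key: str) -> str:
    return "ok" if api_key else "unauthorized"

TOOLS = {"get_memory": get_memory}

def handle_tool_call(tool_name, arguments, session):
    """Authorize and execute a tool call; the LLM never sees credentials."""
    if not session.get("authenticated"):
        raise PermissionError("session not authenticated")
    if tool_name not in ALLOWED_TOOLS.get(session.get("role"), set()):
        raise PermissionError(f"role may not call {tool_name}")
    # Secret injected server-side, outside the model's context entirely.
    api_key = os.environ.get("SERVICE_API_KEY", "dummy-key-for-demo")
    return TOOLS[tool_name](**arguments, api_key=api_key)

print(handle_tool_call("get_memory", {}, {"authenticated": True, "role": "admin"}))
```

Nothing about auth ever appears in the prompt or the model's output, which is the separation the comment is arguing for.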
minimaxir|10 months ago
The original 2022 ReAct paper is still the best explainer: https://arxiv.org/abs/2210.03629
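The loop that paper describes is small: the model alternates Thought/Action steps, the harness executes each action and feeds back an Observation, until the model emits a final answer. A toy sketch of that control flow, with a scripted stand-in for a real LLM (all names here are illustrative):

```python
def react_loop(model, tools, max_steps=5):
    """Interleave model actions with tool observations (the ReAct pattern)."""
    transcript = []
    for _ in range(max_steps):
        step = model(transcript)                  # Thought + Action from the model
        transcript.append(step)
        if step["type"] == "finish":
            return step["answer"]
        obs = tools[step["tool"]](step["input"])  # execute Action, get Observation
        transcript.append({"type": "observation", "value": obs})
    raise RuntimeError("no answer within step budget")

# Scripted stand-in for an LLM, just to exercise the loop; a real agent
# would generate these steps from the transcript so far.
def scripted_model(transcript):
    if not transcript:
        return {"type": "action", "tool": "lookup", "input": "population of Lyon"}
    return {"type": "finish", "answer": transcript[-1]["value"]}

tools = {"lookup": lambda q: "about 522,000"}  # canned value for the demo
print(react_loop(scripted_model, tools))  # about 522,000
```

Stripped of framework layers, this loop is the "low level" the top comment says Agent/MCP stacks still reduce to.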
entrop|10 months ago
It should be "Voilà", which is French for “there it is”.