
Quick Primer on MCP Using Ollama and LangChain

131 points | bswamina | 10 months ago | polarsparc.com

20 comments


minimaxir | 10 months ago

In the case of MCPs, this post is indeed a quick primer. But from a coding standpoint, and despite the marketing claim that Agent/MCP development simplifies generative LLM workflows, it's a long mess of code, and it's hard to tell whether it's even worth it. At a low level it's still the ReAct paradigm, and if you couldn't find a use case for tools before, nothing has changed, except that the Agent/MCP hype makes things more confusing and gives more ammunition to AI detractors.

copperroof | 10 months ago

Yes, I read this post and was actually emotionally affected by a post about coding. I was surprised how sad I felt. I’ve been around for a long time but this truly feels like the best era if you like gluing trash to other trash and shipping it.

gsibble | 10 months ago

MCP is great for when you’re integrating tools locally into IDEs and such. It’s a terrible standard for building more robust applications with multi-user support. Security and authentication are completely lacking.

99% of people wouldn’t be able to find the API keys you need to feed into most MCP servers.

bswamina | 10 months ago

You are correct ... it is still early days IMHO ... will have to see how this evolves

pkoird | 10 months ago

What, according to you, are some alternatives that exist or are in development that fill these gaps?

bongodongobob | 10 months ago

Is anyone really still using LangChain? Has it gotten better? It seemed like a token-burning platform the last time I used it.

jsemrau | 10 months ago

I recently finished a LangGraph class on DeepLearning.AI about a week after it came out. Even then, the provided notebook example didn't work and I had to debug it to pass. I had great hopes for LangChain in 2024, but their product decisions around LCEL, and the complete lack of a discernible roadmap that doesn't constantly break things, made me move away from them.

pydry | 10 months ago

It's still absolutely fucking terrible - the mongo of the LLM world.

WD-42 | 10 months ago

If you need to define and write the functions to calculate interest… what exactly is the llm bringing to the table here? I feel like I’m missing something.

minimaxir | 10 months ago

The LLM is what decides which endpoint/tool to call (or none at all) in response to the user input.

The original 2022 ReAct paper is still the best explainer: https://arxiv.org/abs/2210.03629
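As a rough sketch of that decision loop (hypothetical names, not the article's code), with the model stubbed out: the LLM's only job is to emit a tool call, or a direct answer, and the host application actually executes the tool.

```python
# Minimal ReAct-style tool loop. The model only *chooses* a tool and its
# arguments (or answers directly); the host executes it.

TOOLS = {
    "calculate_interest": lambda principal, rate, years:
        round(principal * (1 + rate) ** years, 2),
}

def model_decide(user_input: str) -> dict:
    """Stand-in for the LLM. A real model would emit a structured tool
    call (or a plain answer) after seeing the tool schemas in its prompt."""
    if "interest" in user_input.lower():
        return {"tool": "calculate_interest",
                "args": {"principal": 1000.0, "rate": 0.05, "years": 2}}
    return {"answer": "No tool needed for that."}

def run_turn(user_input: str) -> str:
    decision = model_decide(user_input)
    if "tool" in decision:
        result = TOOLS[decision["tool"]](**decision["args"])
        # In a real loop, this result would be fed back to the model
        # so it can compose the final answer for the user.
        return f"{decision['tool']} returned {result}"
    return decision["answer"]

print(run_turn("What will $1000 grow to at 5% interest over 2 years?"))
print(run_turn("Hi there!"))
```

The point WD-42 is asking about: the function itself is ordinary code you write; the model's contribution is deciding *when* to call it and with *what* arguments, based on free-form user input.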

mettamage | 10 months ago

I think you don't need to, but you can if you find it necessary. Basically you're augmenting the LLM with "normal computer power", just like a human has.

gatienboquet | 10 months ago

You know it's going to be a great article when the design is from 1995

gclawes | 10 months ago

This website design is blessed. A great return to the past.

entrop | 10 months ago

I'm just distracted by the "WALLA" in the penultimate paragraph.

It should be "Voilà", which is French for “there it is”.

the_arun | 10 months ago

Even the name takes us back in time: SPARC.

trebligdivad | 10 months ago

The units for the free memory are interestingly wrong. After "Executing shell command: free -m", the article reports "The total system memory is 64222 bytes, with used (available) 8912 bytes" — but with the -m flag those figures are mebibytes, not bytes.

Which, given that there seems to be no way to specify any data structure or typing in this MCP interface, is hardly surprising!
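To illustrate the typing point (my own sketch, not the article's code): `free -m` reports mebibytes, so a typed wrapper makes the unit unambiguous in a way a bare string returned over a tool interface cannot. Only the 64222/8912 figures below come from the article; the rest of the sample row is made up.

```python
from dataclasses import dataclass

# Illustrative `free -m` output. With -m, every value is in MiB, not bytes.
SAMPLE = """\
               total        used        free      shared  buff/cache   available
Mem:           64222        8912       40210         512       15100       55310
Swap:           2047           0        2047
"""

@dataclass
class MemInfo:
    total_mib: int
    used_mib: int
    free_mib: int

def parse_free_m(output: str) -> MemInfo:
    """Parse the Mem: row of `free -m` into a typed record with explicit units."""
    for line in output.splitlines():
        if line.startswith("Mem:"):
            fields = line.split()
            return MemInfo(int(fields[1]), int(fields[2]), int(fields[3]))
    raise ValueError("no 'Mem:' row found")

info = parse_free_m(SAMPLE)
print(f"total: {info.total_mib} MiB (~{info.total_mib / 1024:.0f} GiB)")
```

With field names carrying the unit, "64222" can only be read as ~63 GiB of RAM, never as 64 KB.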