top | item 44666703

keminghe | 7 months ago

This is great feedback. I just improved the README and DockerHub overview to be more clear:

"Stop getting out-of-date Python package manager commands from your AI. Cross-reference latest official pip, poetry, uv, and conda docs with auto-updates."

imcritic | 7 months ago

It didn't get much clearer: what am I supposed to do to achieve that? If I query an AI and it gives me outdated instructions, how does your server help? I suppose I'm meant to direct the AI's pipeline to use the MCP protocol as a next step and query a locally running instance of your server to improve the final answer..? The solution sounds quite ad hoc: instead of fixing the problem where it is (in the AI's knowledge base), you suggest applying corrections to its results by having it query a server that each user is supposed to run locally? That sounds like the wrong approach to me; I honestly doubt many people would bother working with AI that way, especially given that AI is a paid service.

What I think would be great is either you hosting a central server permanently available to the public and somehow convincing major AI service providers to query it for that narrow scope of tasks, or doing something similar for open-source models available on Hugging Face or the like.

keminghe | 7 months ago

You're absolutely right about the root cause being outdated AI knowledge bases/training data. I agree, my solution doesn't address that directly.

Where this actually shines is with local LLMs (Ollama, etc.): smaller models, no API costs, fully offline, and the AI gets fresh docs without waiting months for model retraining cycles. Your point about convincing major providers to integrate something like Dash (https://kapeli.com/dash) would definitely be the ideal solution though.
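For anyone wondering what "directing the AI's pipeline to use MCP" actually looks like: most MCP-capable clients (Claude Desktop, and several local-LLM frontends) read a JSON config that tells them which server processes to launch over stdio. A rough sketch, where the server name and Docker image are placeholders rather than this project's actual names:

```json
{
  "mcpServers": {
    "python-docs": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "example/python-docs-mcp:latest"]
    }
  }
}
```

Once registered, the model can call the server's doc-lookup tools as a step before answering, so stale pip/poetry/uv/conda commands get cross-checked against current docs instead of relying on training data alone.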

I definitely hear you on the broader ecosystem approach. Anything you've been working on in the same space?