mindwok | 3 months ago
I love being able to type "make an iptables rule that opens 443" instead of having to dig out the man page and remember how to do that. IMO the next natural extension of this is giving the LLM more capability to generate user interfaces so I can interact with stuff exactly bespoke to my task.
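For reference, the rule in question really is a one-liner — this is the standard iptables form for allowing inbound HTTPS (run as root; many modern distros front this with nftables or ufw instead):

```shell
# Append a rule to the INPUT chain accepting new TCP connections on port 443
iptables -A INPUT -p tcp --dport 443 -j ACCEPT
```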
This, on the other hand, seems like the other way round: bolting a static interface onto the LLM, which could defeat the purpose of the LLM interface layer in the first place, right?
mercury24aug|3 months ago
I don't think we can generate anywhere close to this kind of UI just yet.
We built https://usefractal.dev/ to make it easier for people to build ChatGPT Apps (technically they're MCP Apps), so I've seen the use cases. For most of them, an LLM can't generate the UI on the fly.
johndevor|3 months ago
UIs should be fully remixable and not dictated by the data source/SaaS. So we built out a system that lets users use the standard UI or remix apps as they want. Like Val.town, but with a flexible UX/workspace layer. Come check us out!
nsonha|3 months ago
This is not dissimilar to the argument that "MCP need not exist, just tell the LLM to run commands and curl". Well, LLMs can do those things, and generate user interfaces too. It's just that they don't work reliably (maybe ever, depending on how you define "reliable").
I guess as engineers we can do some work and create stopgap solutions, or we can all sit and wait for someone else (who? when?) to build AGI, where everything just magically works, reliably.
vidarh|3 months ago
I'd imagine the same thing will happen here: it will prove more flexible not to push the model (and user) toward a UI that may not match what the user is trying to accomplish.
To me this seems like something I categorically don't want unless it is purely advisory.
prescriptivist|3 months ago
Something as simple as correlating a git SHA to a CI build takes tens of seconds and a fair number of tokens if Claude is using skills (making API calls to the CI server and GitHub itself). If instead you have an MCP server that Claude feeds a SHA into and gets back a bespoke, organized payload that adds relevant context to its decision-making process (such as a unified view of CI, diffs, et al.), then MCP is a win.
MCP shines as a bespoke context engine and fails as a thin API translation layer, basically. And the beauty/elegance is you can use AI to build these context engines.
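To illustrate the "context engine" idea: a minimal sketch of the kind of tool such a server might expose. Everything here is hypothetical — the function names, the payload shape, and the fetchers (which are stubbed with fake data; a real server would call the CI and GitHub APIs):

```python
# Hypothetical fetchers; a real server would hit the CI server and GitHub APIs.
def fetch_ci_status(sha: str) -> dict:
    return {"pipeline": "build-and-test", "status": "passed", "duration_s": 412}

def fetch_diff_summary(sha: str) -> dict:
    return {"files_changed": 3, "insertions": 87, "deletions": 12}

def build_context(sha: str) -> dict:
    """Assemble one organized payload the model consumes in a single tool call,
    instead of spending time and tokens orchestrating several raw API calls."""
    return {
        "sha": sha,
        "ci": fetch_ci_status(sha),
        "diff": fetch_diff_summary(sha),
    }

print(build_context("a1b2c3d"))
```

The point is that the aggregation and shaping happen server-side, once, rather than being re-derived by the model on every task.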
kkarpkkarp|3 months ago
For free? No.
But I'm certain the race has just begun: big service providers and online retailers are already implementing widgets that let people buy their services and goods directly inside the ChatGPT or Claude chat windows.