mindwok | 3 months ago

It'll be interesting to see how this goes, but my first impression is that it's actually not where we want to go. One of the cool things about MCP (or even just tool calling) is that the LLM on top of a tool provides a highly flexible and dynamic interface to traditionally static tools.

I love being able to type "make an iptables rule that opens 443" instead of having to dig out the man page and remember how to do that. IMO the next natural extension of this is giving the LLM more capability to generate user interfaces so I can interact with stuff exactly bespoke to my task.
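For reference, the rule the commenter has in mind would be roughly this (the exact chain and rule position depend on your existing ruleset):

```shell
# Accept inbound TCP traffic on port 443 (HTTPS); append to the INPUT chain
iptables -A INPUT -p tcp --dport 443 -j ACCEPT
```
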

This, on the other hand, goes in the opposite direction: it bolts a static interface onto the LLM, which could defeat the purpose of the LLM interface layer in the first place, right?

mercury24aug|3 months ago

Giving LLMs the ability to generate UI is a cool concept, but our models aren't there yet. MCP Apps can be extremely powerful; for example, you can play Doom inside ChatGPT: https://x.com/rauchg/status/1978235161398673553?s=20

I don't think we can generate anywhere close to this kind of UI just yet.

We built https://usefractal.dev/ to make it easier for people to build ChatGPT Apps (technically MCP Apps), so I've seen the use cases. For most of them, an LLM can't generate the UI on the fly.

johndevor|3 months ago

That's what we're doing at Hallway (https://hallway.com).

UIs should be fully remix-able and not set by the datasource/SaaS. So we built out a system to allow users to use the standard UI or remix apps as they want. Like Val.town, but with a flexible UX/workspace layer. Come check us out!

nsonha|3 months ago

> giving the LLM more capability to generate user interfaces

This is not dissimilar to the argument that "MCP need not exist, just tell the LLM to run commands and curl". Well, LLMs can do those things, and generate user interfaces too. It's just that they don't do it reliably (maybe ever, depending on how you define "reliable").

I guess as engineers we can do some work and build stopgap solutions, or we can all sit and wait for someone else (who? when?) to make AGI, in which everything just magically works, reliably.

vidarh|3 months ago

MCP has already drastically lost utility thanks to skills: for most things it is easier to just hand the model a CLI it can run.

I'd imagine the same thing will happen here: it will prove more flexible not to push the model (and user) towards a UI that may not match what the user is trying to accomplish.

To me this seems like something I categorically don't want unless it is purely advisory.

prescriptivist|3 months ago

MCP as a thin layer over existing APIs has lost utility. Where MCP shines is custom servers for teams: ones that reduce redundant thinking and token consumption, provide more useful context to the agent, and decrease mean time to decision.

Something as simple as correlating a git SHA with a CI build takes tens of seconds and some number of tokens if Claude is using skills (making API calls to the CI server and to GitHub itself). If instead you have an MCP server that Claude feeds a SHA into and gets back a bespoke, organized payload that adds relevant context to its decision-making (such as a unified view of CI, diffs, et al.), then MCP is a win.

MCP shines as a bespoke context engine and fails as a thin API translation layer, basically. And the beauty/elegance is that you can use AI to build these context engines.
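A minimal sketch of the SHA-to-context tool described above. The two fetch functions are hypothetical stand-ins for real CI-server and GitHub API calls; the point is that the agent makes one call and gets back one organized payload instead of chaining raw API requests itself:

```python
# Illustrative "bespoke context engine": one tool call unifies CI state and
# diff info for a commit. fetch_ci_status and fetch_diff_summary are
# hypothetical stand-ins for real CI-server and GitHub API calls.

def fetch_ci_status(sha: str) -> dict:
    # Stand-in for a CI-server API call.
    return {"sha": sha, "pipeline": "build-and-test", "status": "passed"}

def fetch_diff_summary(sha: str) -> dict:
    # Stand-in for a GitHub API call.
    return {"sha": sha, "files_changed": 3, "additions": 42, "deletions": 7}

def commit_context(sha: str) -> dict:
    """Merge CI state and diff info into a single payload for the agent."""
    ci = fetch_ci_status(sha)
    diff = fetch_diff_summary(sha)
    return {
        "sha": sha,
        "ci": {"pipeline": ci["pipeline"], "status": ci["status"]},
        "diff": {k: diff[k] for k in ("files_changed", "additions", "deletions")},
    }
```

Exposed as a single MCP tool, `commit_context` replaces several round trips of the agent reasoning over raw API responses with one structured answer.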

electric_muse|3 months ago

There are important contexts, outside of machines you control, where installing or running CLI commands isn't possible. In those cases skills won't help, but MCP will.

ivape|3 months ago

I personally don't see why developers should add tons of functionality to any model for free like this. Some of these MCPs are pretty good, and I was a little shocked at how much functionality developers released for free to drop into something like Claude. Either developers are stupid, or there really is no market yet.

kkarpkkarp|3 months ago

> for free like this

Not for free, no.

but I'm certain the race has just begun: big service providers and online retailers are currently implementing widgets enabling the purchase of their services and goods directly within the ChatGPT or Claude chat windows.