(no title)
dzogchen | 19 days ago
Edit: Announcement was more clear https://tambo.co/blog/posts/introducing-tambo-generative-ui
Can it also generate new components?
grouchy | 19 days ago
Developers are using it to build agents that actually solve user needs with their own UI elements, instead of text instructions or taking actions with minimal visibility for the user.
We're building out a generative UI library, but as of right now it doesn't generate any code (that could change).
We do have a skill you can give your agent to create new UI components:
```
npx skills add tambo-ai/tambo
```
/components
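As a rough sketch of the pattern being discussed: components get registered with a name, a description, and a props schema, so the agent can pick one and instantiate it through a validated tool call. The names below (`registerComponent`, `validateCall`) are illustrative assumptions, not Tambo's actual API, and a minimal hand-rolled validator stands in for what would normally be a zod schema:

```typescript
// Illustrative sketch (NOT Tambo's actual API): register components with a
// props schema so an agent can instantiate them via tool calls. In practice
// the schema would be a zod object; a tiny hand-rolled validator keeps this
// example dependency-free.

type PropsSchema = Record<string, "string" | "number">;

interface ComponentEntry {
  description: string;
  propsSchema: PropsSchema;
}

const registry = new Map<string, ComponentEntry>();

function registerComponent(name: string, entry: ComponentEntry): void {
  registry.set(name, entry);
}

// Validate an agent's tool-call arguments against the registered schema.
function validateCall(name: string, props: Record<string, unknown>): boolean {
  const entry = registry.get(name);
  if (!entry) return false;
  return Object.entries(entry.propsSchema).every(
    ([key, type]) => typeof props[key] === type
  );
}

// Register a component the agent may choose to render.
registerComponent("WeatherCard", {
  description: "Shows current conditions for a city",
  propsSchema: { city: "string", tempC: "number" },
});
```

The point of the schema is that malformed agent output is rejected before anything touches the UI.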
oulipo2|19 days ago
Basically it's just... agreeing upon a description format for UI components ("put the component C with params p1, p2, ... at location x, y") using JSON / zod schema etc... and... that's it?
Then the agent just uses a tool "putComponent(C, params, location)" which just renders the component?
I'm failing to understand how it would be more than this?
On one hand, I agree that if we "all" settle on a standard way to describe those components, we can integrate them into multiple tools without redoing the work each time. On the other hand, it seems like this is just a "nice render-based wrapper" over MCP / tool calls, no? Am I missing something?
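The "render-based wrapper over tool calls" reading above can be sketched concretely: the agent emits a JSON instruction naming a component, its params, and a location, and the host maps that to a render function. The names here (`putComponent`, `RenderInstruction`, the renderer table) are hypothetical, made up for illustration:

```typescript
// Sketch of the wrapper pattern described above, with assumed names.
// The agent produces a JSON instruction; the host looks up the component
// and renders it. Rendering is stubbed as string output for clarity.

interface RenderInstruction {
  component: string;
  params: Record<string, unknown>;
  location: { x: number; y: number };
}

// Host-side renderers keyed by component name (hypothetical examples).
const renderers: Record<string, (params: Record<string, unknown>) => string> = {
  WeatherCard: (p) => `WeatherCard(${String(p.city)}: ${String(p.tempC)}°C)`,
};

// The tool the agent calls: resolve the component and render it in place.
function putComponent(instr: RenderInstruction): string {
  const render = renderers[instr.component];
  if (!render) throw new Error(`Unknown component: ${instr.component}`);
  return `@(${instr.location.x},${instr.location.y}) ${render(instr.params)}`;
}
```

Under this reading, the value of a shared format is only that every tool agrees on the shape of `RenderInstruction`, so a component described once works everywhere.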