Tambo 1.0: Open-source toolkit for agents that render React components
101 points | grouchy | 19 days ago | github.com
We've been building Tambo for about a year, and just released our 1.0.
We make it easier to register React components with Zod schemas, and we build an agent that picks the right component and renders the right props.
We handle many of the complications of building generative user interfaces: managing state between the user, the agent, and the React component; rendering partial props; and handling auth between your user and MCP. We also support adding MCP servers and most of the spec.
We are 100% open source, and we currently have 8k+ GitHub stars, thousands of developers, and over half a million messages processed by our hosted service.
If you're building AI agents with generative UI, we'd like to hear from you.
avaer|19 days ago
My agents need a UI and I'm in the market for a good framework to land on, but as is always the case in these kinds of interfaces I strongly suspect there will be a standard inter-compatible protocol underlying it that can connect many kinds of agents to many kinds of frontends. What is your take on that?
lachieh|19 days ago
The way we elevator-pitch Tambo is "an agent that understands your UI" (which, admittedly, is not very descriptive of the implementation details). We've spent the time allowing components (be they pre-existing or purpose-built) to be registered as tools that can be controlled and rendered either in-chat or out in your larger application. The chat box shouldn't be the boundary.
Personally, my take on standards like A2UI is that they could prove useful but the models have to easily understand them or else you have to take up additional context explaining the protocol. Models already understand tool-calling so we're making use of that for now.
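The "models already understand tool-calling" point can be sketched as follows: a registered component is just exposed to the model as an ordinary tool definition carrying a JSON Schema for its props, so no new protocol has to be explained in context. The `render_` prefix and field names here are assumptions for illustration, not a documented Tambo convention.

```typescript
// Sketch: exposing a registered UI component as a plain tool definition,
// reusing the tool-calling format models already know. Names are hypothetical.
type ToolDefinition = {
  name: string;
  description: string;
  parameters: Record<string, unknown>; // JSON Schema for the component's props
};

function componentToTool(
  componentName: string,
  description: string,
  propsJsonSchema: Record<string, unknown>,
): ToolDefinition {
  return {
    name: `render_${componentName}`,
    description,
    parameters: propsJsonSchema,
  };
}

const tool = componentToTool("WeatherCard", "Shows current weather for a city", {
  type: "object",
  properties: {
    city: { type: "string" },
    tempC: { type: "number" },
  },
  required: ["city", "tempC"],
});
console.log(tool.name); // "render_WeatherCard"
```

When the model calls `render_WeatherCard`, the host validates the arguments and renders the matching component instead of running a side-effecting function.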
dzogchen|19 days ago
Edit: the announcement makes it clearer: https://tambo.co/blog/posts/introducing-tambo-generative-ui
Can it also generate new components?
grouchy|19 days ago
Developers are using it to build agents that actually solve user needs with their own UI elements, instead of giving text instructions or taking actions with minimal visibility for the user.
We're building out a generative UI library, but as of right now it doesn't generate any code (that could change).
We do have a skill you can give your agent to create new UI components:
```
npx skills add tambo-ai/tambo
```
/components
fitzgera1d|19 days ago
Maybe I’m misunderstanding, but isn’t generating UI just-in-time kind of risky because AI can get it wrong? Whereas you can generate/build an MCP App once that is deterministic, always returns a working result, and just as AI native.
milst|19 days ago
jauntywundrkind|19 days ago
Release: http://blog.modelcontextprotocol.io/posts/2026-01-26-mcp-app...
Announcement: http://blog.modelcontextprotocol.io/posts/2025-11-21-mcp-app...
Submission: https://news.ycombinator.com/item?id=46020502
grouchy|19 days ago
But our use case is a little different. MCP Apps embed interfaces into other agents. Tambo is an embedded agent that can render your UI. There's overlap for sure, but many of the developers using us don't see themselves putting their UI inside ChatGPT or Claude. That's just not how users use their apps.
That said, we're thinking about how we could make it easy to build an embedded agent and then selectively expose those UI elements over MCP Apps where it makes sense.
_the_inflator|19 days ago
It sounds promising, because from the outside it looks like reproducible, deterministic component generation in a modern fashion, as far as I understood it.
I built a large platform using a methodically comparable approach, I suppose, albeit in pre-AI times, and that's why I want to have a closer look at the inner workings and results of your project. Curiosity, so to say.
You appear to be the only solid and promising endeavor in the GenUI domain with an approach that goes beyond simply relying on an LLM, using math in combination with AI.
Good luck!
grouchy|19 days ago
We are constantly improving Tambo. It's crazy to see how much it's improved since we first started.
deep_origins|19 days ago
grouchy|19 days ago
We love Zod; we also support Standard Schema, and thus most other popular typing libraries.
I'm curious how you found us?
svrma|18 days ago
cjonas|19 days ago
grouchy|19 days ago
Any specific experience you had? Or more specifics on where batteries-included went too far?
krashidov|19 days ago
Our use case is to allow other users to build lightweight internal apps within your chat workspace (say, an applicant tracking system per hire, etc.)
grouchy|19 days ago
danialtz|19 days ago
Is this in the same category as CopilotKit? CPK is an AG-UI proxy for similar topics, but here there seems to be more emphasis on linked components?
grouchy|19 days ago
The major difference is that we provide an agent. You don't need to bring your own agent or framework. A lot of our developers are using our agent and are really happy with it, and we have a bunch of upcoming features to make it even better out of the box.
eagleinparadise|19 days ago
grouchy|19 days ago
```
import { z } from "zod";

inputSchema: z.object({ query: z.string() });
```
or
```
import * as v from "valibot";

inputSchema: v.object({ query: v.string() });
```
or
```
import { type } from "arktype";

inputSchema: type({ query: "string" });
```
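The reason one framework can accept all three libraries is that Zod, Valibot, and ArkType each expose the Standard Schema interface: a `~standard` property with a `validate` function and a common result shape. A minimal sketch of a library-agnostic consumer, using a hand-rolled conforming schema in place of a real library (the interface shape follows the Standard Schema spec; the helper names are mine):

```typescript
// Minimal Standard Schema shapes, per the spec's `~standard` interface.
interface StandardResult<T> {
  value?: T;
  issues?: ReadonlyArray<{ message: string }>;
}
interface StandardSchema<T> {
  readonly "~standard": {
    readonly version: 1;
    readonly vendor: string;
    validate(value: unknown): StandardResult<T> | Promise<StandardResult<T>>;
  };
}

// A hand-rolled schema that conforms to the interface. A real app would
// pass a Zod/Valibot/ArkType schema here instead.
const querySchema: StandardSchema<{ query: string }> = {
  "~standard": {
    version: 1,
    vendor: "example",
    validate(value) {
      const v = value as { query?: unknown } | null;
      return typeof v?.query === "string"
        ? { value: { query: v.query } }
        : { issues: [{ message: "query must be a string" }] };
    },
  },
};

// Library-agnostic validation: this one call is all a framework needs.
async function parseInput<T>(
  schema: StandardSchema<T>,
  input: unknown,
): Promise<T> {
  const result = await schema["~standard"].validate(input);
  if (result.issues) throw new Error(result.issues[0].message);
  return result.value as T;
}

parseInput(querySchema, { query: "weather in Oslo" }).then((out) =>
  console.log(out.query),
);
```

Supporting `~standard` once means any current or future library that implements the spec works without per-library glue code.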
unknown|19 days ago
[deleted]