jlowin's comments

jlowin | 6 months ago | on: Accelerating AI with FastMCP Cloud

Sure! There won't be any contention between their roadmaps; in fact, FastMCP will continue to evolve somewhat independently of FastMCP Cloud. We'll be announcing our first external maintainer later this week.

Our goal is for FastMCP to be the most full-featured framework for building production-ready MCP servers. FastMCP Cloud is an opinionated way to spin up infrastructure, auth, observability, etc. automatically for those servers. Nonetheless, FastMCP 2.12 (probably releasing this week) will include a completely new auth layer that supports Google, GitHub, WorkOS, and Azure as integrated OAuth providers. It'll also include a new portable deployment configuration that will make it easy to spin up FastMCP servers -- and all their dependencies -- from a declarative JSON file. The objective is to make MCP accessible, whether you want to use our platform or roll your own!

jlowin | 8 months ago | on: MCP is eating the world

FastMCP author here -- (maybe surprisingly) I agree with many of the observations here.

FastMCP exists because I found the original spec and SDK confusing and complicated. It continues to exist because it turns out there's great utility in curating an agent-native API, especially when so many great dev tools have adopted this client interface.
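To make the "curated, agent-native API" idea concrete, here's a deliberately toy sketch of the pattern in plain Python (this is not FastMCP's actual implementation; the names are illustrative): tools are registered with human-readable descriptions and exposed behind a uniform discover/call interface, which is essentially the contract an MCP client relies on.

```python
# Toy sketch (NOT FastMCP's real code) of an agent-native API:
# a small, curated set of tools an agent can discover and call
# through one uniform interface.
from typing import Callable

TOOLS: dict[str, Callable] = {}

def tool(fn: Callable) -> Callable:
    """Register a function; its docstring becomes the agent-facing description."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

def list_tools() -> dict[str, str]:
    # What a client would see when it asks the server for its tools.
    return {name: (fn.__doc__ or "") for name, fn in TOOLS.items()}

def call_tool(name: str, **kwargs):
    # Uniform dispatch, the way a client invokes a named tool.
    return TOOLS[name](**kwargs)
```

The value is in the curation: a handful of well-described tools, rather than a raw mirror of a complex REST surface.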

But the spec is still so young and at such risk of being co-opted by hype (positive and negative). I would invite everyone to participate constructively in improving it.

Lastly: this article is plainly AI-generated, as `from mcp import tool` is completely hallucinated. Just some food for thought for the "AI should be able to figure out my complex REST API" crowd that seems well represented here.

jlowin | 9 months ago | on: Cursor 1.0

One of the first features we added to FastMCP 2.0 [1] was server composition and proxying (plus some niceties around reusing servers whenever possible), for exactly this reason: ideally you run a single instance of each of your preferred servers, rather than replicating the setup in every application. In some ways, MCP standardized how agents talk to APIs but introduced a new frontier of lawless distribution! This is something that has to improve.

[1]: https://gofastmcp.com
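The composition idea can be sketched in miniature. This toy Python (not FastMCP's real mounting/proxy API; all names here are made up) shows the shape of it: a parent server mounts sub-servers under prefixes, so a client connects once and sees every tool through a single endpoint.

```python
# Toy illustration of server composition (NOT FastMCP's actual API).
from typing import Callable

class ToyServer:
    def __init__(self, name: str):
        self.name = name
        self.tools: dict[str, Callable] = {}

    def tool(self, fn: Callable) -> Callable:
        self.tools[fn.__name__] = fn
        return fn

    def mount(self, prefix: str, other: "ToyServer") -> None:
        # Expose the sub-server's tools under a namespaced prefix,
        # so one endpoint serves every mounted server.
        for name, fn in other.tools.items():
            self.tools[f"{prefix}_{name}"] = fn

github = ToyServer("github")

@github.tool
def list_issues(repo: str) -> list[str]:
    return [f"{repo}#1"]

main = ToyServer("main")
main.mount("github", github)
# A client talking to `main` now sees "github_list_issues".
```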

jlowin | 2 years ago | on: Show HN: A ChatGPT TUI with custom bots

No quota here! If you like Marvin, you’ll probably like Prefect. Both are designed to be clean, Pythonic interfaces to a complex and hard-to-observe system (a data stack; an AI stack).

I think one of the key differences between Prefect and Dagster is that Prefect views orchestration as coordination, while Dagster views orchestration as reconciliation. The data stack is a complex system whose state is frequently mutated by forces outside our users’ control, so our product is focused on letting users understand and react to those events, no matter where they come from. That could include everything from scheduling fully-orchestrated Prefect pipelines to setting an SLA for database maintenance that Prefect otherwise has nothing to do with. Reconciliation, in contrast, requires users to define a digital twin of their stack to serve as ground truth and become the reconciliation target. Philosophically, we view Prefect as one piece of an ever-changing stack, and we focus on being flexible enough to fit into that stack rather than the other way around.

jlowin | 2 years ago | on: Show HN: A ChatGPT TUI with custom bots

Sure! We wrote a little bit about the origins in an announce post a couple weeks ago (https://news.ycombinator.com/item?id=35366838).

Marvin (https://www.github.com/prefecthq/marvin) powers our AI efforts at Prefect (https://www.github.com/prefecthq/prefect).

The first version of Marvin was an internal framework that powered our Slackbot. There are close to 30,000 members of our open-source community and we rely heavily on automation to deliver support. Then, as more of our customers started building AI stacks, we began to view Marvin as a platform to experiment with high-level UX for deploying AI. We have a few internal use cases, but it was the diversity of customer objectives that gave us confidence.

Historically we've focused on data engineering, but the more we worked with LLMs, the more we saw the same issues recur, driven by the need to integrate brittle, non-deterministic APIs -- heavily influenced by external state -- into well-structured, traditional engineering pipelines. We started using Marvin to codify the high-level patterns we were repeatedly deploying, including getting structured outputs from the LLM and building effective conversational agents for B2B use.
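As a hedged illustration of the "structured outputs" pattern (the model call is stubbed here, and this is not Marvin's actual API): ask the model for JSON matching a known schema, then validate the reply into a typed object so the rest of the pipeline stays deterministic.

```python
# Sketch of structured LLM output with a stubbed model call.
import json
from dataclasses import dataclass, fields

@dataclass
class Ticket:
    title: str
    priority: int

def fake_llm(prompt: str) -> str:
    # Stand-in for a real model call; returns the JSON we'd hope for.
    return '{"title": "Pipeline failed", "priority": 1}'

def extract(text: str) -> Ticket:
    # Ask for JSON keyed to the dataclass schema, then validate it.
    prompt = f"Return JSON with keys {[f.name for f in fields(Ticket)]}: {text}"
    raw = json.loads(fake_llm(prompt))
    return Ticket(**raw)  # raises if keys are missing or unexpected
```

Pinning the model's output to a schema is what lets a non-deterministic API slot into an otherwise well-structured pipeline.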

The lightbulb moment was when we designed AI functions, which have no source code and essentially use the LLM as a runtime. It's one of those ideas that feels too simple to work... but it works incredibly well. It was the first time we felt like we weren't building tools to use AI, but using AI to build our tools. We open-sourced with AI functions as the headline and the response has been amazing! Now we're focused on releasing the "core" of Marvin -- the bots, plugins, and knowledge handling -- with a similar focus on usability.
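The AI-function idea can be sketched in a few lines (toy code with a stubbed model; Marvin's real implementation differs): the decorated function's body never executes -- its signature and docstring are rendered into a prompt, and the model's reply becomes the return value.

```python
# Toy "AI function": the LLM is the runtime (stubbed model call).
import functools
import inspect

def fake_llm(prompt: str) -> str:
    # Stand-in for a real model call.
    return "negative"

def ai_fn(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        sig = inspect.signature(fn)
        bound = sig.bind(*args, **kwargs)
        prompt = (
            f"You are the function {fn.__name__}{sig}.\n"
            f"Docstring: {fn.__doc__}\n"
            f"Arguments: {dict(bound.arguments)}\n"
            "Return only the output."
        )
        return fake_llm(prompt)  # the model's reply is the return value
    return wrapper

@ai_fn
def sentiment(text: str) -> str:
    """Classify the sentiment of `text` as 'positive' or 'negative'."""
    # No implementation needed -- the decorator never calls this body.

sentiment("I hate bugs")  # the stub always answers "negative"
```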

Hope that's what you were looking for!

jlowin | 2 years ago | on: Show HN: A ChatGPT TUI with custom bots

Exactly! Threads in Marvin are designed to support multiple bots and users. Two key user stories:

- multiple users in a Slack thread talking to the same bot. This is something we want to deliver soon, as Marvin powers our existing Slack bots

- one user addressing multiple bots, each designed for a specific purpose (bots do far better with reduced scope than when one bot tries to do everything)
