top | item 43612788


skilbjo | 11 months ago

great to see this. two questions:

- would I be able to publish a link to the canonical OpenAPI spec of my service, or should I plan on doing a PR in this repo for the OpenAPI artefact?

my spec changes infrequently, but it does change, so how would updates ideally happen? my current workflow is publishing an OpenAPI spec to a public link

- how does this (and Arazzo) interact with MCP?

is this meant to work alongside MCP, as a replacement for it, etc.?


seanblanchfield | 10 months ago

Jentic co-founder here. Right now, you've got to do a PR, but we plan to monitor the web for new OpenAPI documents and automatically load them in within 24 hours.

Once ingested, we will monitor the original URL for updates. We plan to enrich ingested OpenAPI docs with any additional information we can find on the web (and live agent telemetry). These enrichments will include some spec extensions for additional info agents need (e.g., how to enrol/authenticate, rate limits, pricing, licensing, trust & safety, side-effects, rollback, etc.).

We will be careful not to clobber any first-party docs with AI content, and to intelligently merge AI enrichments into future versions of a spec.
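The "don't clobber first-party docs" rule can be pictured as a one-way merge: AI enrichments only fill in fields the original spec doesn't define. A minimal sketch, assuming a simple recursive merge; the `x-` extension names here are invented for illustration, not Jentic's actual schema:

```python
def enrich(spec: dict, enrichments: dict) -> dict:
    """Merge AI-generated enrichments into an OpenAPI document
    without overwriting any first-party value."""
    merged = dict(spec)
    for key, value in enrichments.items():
        if key not in merged:
            merged[key] = value  # fill a field the spec lacks
        elif isinstance(merged[key], dict) and isinstance(value, dict):
            merged[key] = enrich(merged[key], value)  # recurse into sub-objects
        # otherwise the first-party value wins and the enrichment is dropped
    return merged

# First-party spec fragment (abbreviated)
spec = {"info": {"title": "Petstore", "version": "1.0"}}

# Hypothetical enrichments: an AI-guessed title plus extra agent-facing info
enrichments = {
    "info": {"title": "AI-guessed title", "x-rate-limit": "100/min"},
    "x-onboarding-url": "https://example.com/signup",
}

result = enrich(spec, enrichments)
print(result["info"]["title"])         # → Petstore (first-party value kept)
print(result["info"]["x-rate-limit"])  # → 100/min (enrichment added)
```

The same shape works for re-running enrichment after the upstream spec changes: the first-party fields always take precedence, so a new upstream version can never be overwritten by stale AI content.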

Note that a lot of APIs do not have good OpenAPI documentation, and so we'll be generating those from scratch.

In addition, we have an agent that reads the OpenAPI specs in the repo and generates potentially useful workflows composed from OpenAPI operations and other workflows (all represented as Arazzo specs). That's where all the Arazzo specs currently in the repo came from.
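For readers unfamiliar with Arazzo: it describes multi-step workflows over existing OpenAPI operations, with each step's outputs feedable into later steps via runtime expressions. A minimal sketch of what a generated workflow might look like, written as a Python dict for readability; the operation IDs and URL are made up:

```python
# A minimal Arazzo-style workflow: two OpenAPI operations chained so
# the output of the first step feeds a parameter of the second.
workflow_doc = {
    "arazzo": "1.0.0",
    "info": {"title": "Find and fetch a pet", "version": "0.1.0"},
    "sourceDescriptions": [
        {"name": "petstore", "url": "https://example.com/openapi.yaml",
         "type": "openapi"}
    ],
    "workflows": [{
        "workflowId": "findPetDetails",
        "steps": [
            {
                "stepId": "search",
                "operationId": "findPetsByStatus",
                "parameters": [
                    {"name": "status", "in": "query", "value": "available"}
                ],
                "successCriteria": [{"condition": "$statusCode == 200"}],
                # expose the first result's id to later steps
                "outputs": {"petId": "$response.body#/0/id"},
            },
            {
                "stepId": "fetch",
                "operationId": "getPetById",
                "parameters": [
                    # runtime expression referencing the previous step's output
                    {"name": "petId", "in": "path",
                     "value": "$steps.search.outputs.petId"}
                ],
                "successCriteria": [{"condition": "$statusCode == 200"}],
            },
        ],
    }],
}

step_ids = [s["stepId"] for s in workflow_doc["workflows"][0]["steps"]]
print(step_ids)  # → ['search', 'fetch']
```

In the actual repo these would be YAML documents; the point is just that a workflow names its source OpenAPI descriptions, orders the operation calls, and wires step outputs into step inputs.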

MCP is ideal for connecting agents to services, but it is not designed to represent the depth of API knowledge we are aiming for (and it would be worse at its primary job if it tried to). We will shortly release our own MCP server that gives agents convenient access to this API repository over MCP: for example, to search for operations and workflows that fit a current sub-goal, to load details so they can more reliably execute a chosen operation or workflow (assisted by an OSS library we'll release soon), and to interpret the responses intelligently.
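The "search for operations that fit a sub-goal" use case can be illustrated with a toy keyword ranker over a catalogue of operation summaries. This is a deliberately simplified stand-in; the catalogue entries are invented, and a real server would presumably use semantic search over the full repo:

```python
def search_operations(catalogue: list[dict], query: str) -> list[str]:
    """Rank catalogued operations by word overlap between the query
    and the operation's id + summary (toy stand-in for semantic search)."""
    query_words = set(query.lower().split())
    scored = []
    for op in catalogue:
        text = (op["operationId"] + " " + op["summary"]).lower()
        score = len(query_words & set(text.split()))
        if score:
            scored.append((score, op["operationId"]))
    scored.sort(reverse=True)  # best overlap first
    return [op_id for _, op_id in scored]

# Hypothetical catalogue, as the MCP server might index it
catalogue = [
    {"operationId": "createInvoice", "summary": "Create a new invoice"},
    {"operationId": "sendMessage", "summary": "Send a chat message to a channel"},
    {"operationId": "listInvoices", "summary": "List all invoices for an account"},
]

print(search_operations(catalogue, "send a message"))
# → ['sendMessage', 'createInvoice'] (best match first)
```

The agent-facing flow would then be: search for candidates, load the full operation or Arazzo workflow details for the top hit, and execute it with proper parameter binding.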