I find the state of MCPs with OpenAI quite confusing:
- As an end-user, you can only connect MCP servers that expose `search` and `fetch`, and those only work in deep research mode.
- As a developer, you can use MCP with the API, which supports the full set of MCP tools - all tools become available; this shows up in the dev playground.
- For Custom GPTs, they support any action, but not MCPs. So if you had a layer to translate MCP to their OpenAPI action spec, it would work. But Custom GPTs with actions only support models 4o and 4.1, so you don't get the benefit of the o-series models.
Figuring out what works when is harder than it needs to be.
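The developer path in the second bullet can be sketched roughly like this; the server URL, label, and model name are placeholders, and the exact parameter names should be checked against OpenAI's remote-MCP guide:

```python
# Sketch of attaching a remote MCP server via the Responses API.
# The URL and label are hypothetical; parameter names follow OpenAI's
# remote-MCP docs loosely and may drift.
mcp_tool = {
    "type": "mcp",
    "server_label": "my-docs",                # any label you choose
    "server_url": "https://example.com/mcp",  # hypothetical MCP server
    "require_approval": "never",              # or gate individual tools
}

# With the official client, usage would look roughly like:
#   from openai import OpenAI
#   client = OpenAI()
#   resp = client.responses.create(
#       model="o3",
#       tools=[mcp_tool],
#       input="Summarize the latest docs changes",
#   )
```

Note that this path exposes all of the server's tools, unlike the end-user connector path, which is restricted to `search` and `fetch`.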
OpenAI has fallen behind, way behind. Stop trying to benchmark off OpenAI; it's pointless, imo. They're not even playing the same game as everyone else.
The lack of MCP support in their desktop client is especially disappointing considering everything Anthropic has shipped on Claude Code the last few weeks.
ChatGPT desktop client with only search/fetch MCPs is far, far inferior to CC from a utility/value perspective.
Has something changed? That seems to be the same page they've had for a couple of months. It's only for deep research mode, and is restricted to Pro and Enterprise.
If I understand correctly, it requires your MCP server to have exactly two tools - search and fetch.
So this is not really support for MCP in general, as in all the available MCP servers. It’s support for their own custom higher-level protocol built on top of MCP.
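That two-tool contract can be sketched as plain functions; the result shapes (lightweight id/title/url results from `search`, the full document from `fetch`) follow OpenAI's docs loosely, and the field names here are assumptions:

```python
# Plain-Python sketch of the two tools ChatGPT connectors require.
# A real server would register these via an MCP SDK; the corpus and
# field names here are illustrative.
DOCS = {
    "doc-1": {"title": "Getting started", "url": "https://example.com/1",
              "text": "Full text of the getting-started guide..."},
    "doc-2": {"title": "API reference", "url": "https://example.com/2",
              "text": "Full text of the API reference..."},
}

def search(query: str) -> list:
    """Return lightweight results; the model picks ids to fetch."""
    q = query.lower()
    return [
        {"id": doc_id, "title": d["title"], "url": d["url"]}
        for doc_id, d in DOCS.items()
        if q in d["title"].lower() or q in d["text"].lower()
    ]

def fetch(doc_id: str) -> dict:
    """Return the full document for an id produced by search."""
    d = DOCS[doc_id]
    return {"id": doc_id, "title": d["title"], "url": d["url"],
            "text": d["text"]}
```

An arbitrary MCP server with, say, a `create_ticket` tool simply has nothing that fits this shape, which is why it can't be used as a connector as-is.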
For ChatGPT and deep research, yes; not when using the API. I guess you could just return empty results if you want to offer other tools as well (can't test it now, since custom connectors only support Workspace or Pro accounts at the moment).
Quote we're talking about:
> To work with ChatGPT Connectors or deep research (in ChatGPT or via API), your MCP server must implement two tools - search and fetch.
It is very nice to get MCP support in ChatGPT. OpenAI really fumbled the bag with the OpenAPI-based Custom Actions (or was it Custom GPTs?). The web editor experience was always incredibly buggy, even months after the initial release. MCP servers let us move nearly all of the tool definitions into the server codebase itself, so we can change things on the fly, version-control them, feature-flag tools, etc. much more easily.
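The feature-flagging point above can be sketched as a tool registry gated by flags, so a rollout is a config change rather than a web-editor edit; the decorator and flag store here are illustrative, not a real SDK:

```python
# Hypothetical sketch: tools defined in server code, registered only
# when their feature flag is on.
FLAGS = {"beta_export": False}  # e.g. loaded from your flag service

TOOLS = {}

def tool(name, flag=None):
    """Register a function as a tool unless its flag is off."""
    def register(fn):
        if flag is None or FLAGS.get(flag, False):
            TOOLS[name] = fn
        return fn
    return register

@tool("lookup_order")
def lookup_order(order_id: str) -> str:
    return f"order {order_id}: shipped"

@tool("export_report", flag="beta_export")
def export_report(fmt: str) -> str:
    return f"report exported as {fmt}"

# While beta_export is off, only lookup_order is advertised to the model.
```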
They're not directly solving the same problem. MCP is for exposing tools, such as reading files. A2A is for agents to talk to other agents and collaborate.
MCP servers can expose tools that are agents, but don't have to, and usually don't.
That being said, I can't say I've come across an actual implementation of a2a outside of press releases...
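The "tools that are agents" distinction can be illustrated with two stubs: from the caller's side both are just functions with a schema, but one could hide a multi-step agent loop behind the call. Everything here is hypothetical:

```python
# Illustrative only: a plain tool vs. a tool whose implementation is
# an agent loop (stubbed; a real one would call an LLM per step).
def read_file(path: str) -> str:
    """A plain tool: one deterministic capability."""
    return f"<contents of {path}>"

def research_agent(question: str) -> str:
    """A 'tool' that internally runs a multi-step loop."""
    steps = ["plan", "search", "summarize"]  # stand-in for an LLM loop
    return f"answer to {question!r} after {len(steps)} steps"
```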
I love how they don't actually explain why I (or anyone else) would ever implement their API.
Given how disastrous the AI 'industry' has been (misappropriating customer data, performing actions on behalf of customers that led to data and/or financial loss, and then seeking protection from the law in one or more of these cases), doesn't providing an MCP service essentially commit you to notifying customers of a GDPR-or-similar data compromise event at some point in the future, when it suddenly but inevitably betrays you?
Like, isn't OpenAI just leading people to a footgun and then kindly asking them to use it, for the betterment of OpenAI's bottom line, which was significantly in the red for FY24?
Ah yes, "disastrous" left unquoted, but "industry" in scare quotes, as if it's not the biggest shift that has happened in your entire lifetime. That is intellectually dishonest and incongruent with the reality you live in.
wunderwuzzi23|7 months ago
Eg how I described here a while ago: https://x.com/wunderwuzzi23/status/1930899939737166075?s=46&...
Ironically, I have a blog post drafted that also explains this in detail, and should probably still publish it.
jimmydoe|7 months ago
It’s disappointing they are gating this, and the browser agent, behind the $100 tier, and even at $100, it’s only two tool methods.
asabla|7 months ago
Reference links:
- Using remote MCP servers with the API: https://platform.openai.com/docs/guides/tools-remote-mcp
- Which account types can setup custom connectors in ChatGPT: https://help.openai.com/en/articles/11487775-connectors-in-c...