https://docs.claude.com/en/docs/claude-code/claude_code_docs...
This is driven by instructions in the Claude Code system prompt:
> When the user directly asks about Claude Code (eg. "can Claude Code do...", "does Claude Code have..."), or asks in second person (eg. "are you able...", "can you do..."), or asks how to use a specific Claude Code feature (eg. implement a hook, or write a slash command), use the WebFetch tool to gather information to answer the question from Claude Code docs. The list of available docs is available at https://docs.claude.com/en/docs/claude-code/claude_code_docs....
Screenshot and notes here: https://simonwillison.net/2025/Oct/24/claude-code-docs-map/
The article seems a few months too late. Claude (and others) are already doing this: I've been instructing Claude Code to generate code following certain best practices provided through URLs, or asking it to compare approaches from different URLs. Claude Skills use file "URLs" for progressive disclosure: detailed text is only pulled into the context when needed. This helps reduce context size and improves cacheability.
Heh, the problem with having a half-drafted post on your machine for a few weeks is that the industry moves fast!
I had the post pretty much done, went on vacation for a week, and Claude Skills came out in the interim.
That being said, Skills are indeed an implementation of the patterns possible with linking, but they are narrower in scope than what's possible even with MCP Resources, if those were properly made available to agents (e.g. dynamic construction of context based on environment and/or fetching from remote sources).
Codex running locally doesn't have access to any "web search" tool, but that doesn't stop it from trying to browse the web via cURL from time to time. Place hyperlinks in the documents in your repository and it'll try to reach them as best it can. It doesn't seem overly eager about doing so, though, and only does it when absolutely needed and it can't find the information elsewhere. This has been my experience with gpt-5-high at least.
Spot on, this is a solid abstraction to build upon. I always felt MCP was a misstep compared to OpenAI’s focus on OpenAPI specs. HATEOAS is the principle that has become more useful now that agents drive applications.
I just discovered that I can paste a link into a Claude prompt and ask it to read the page so we can talk about it. I no longer have to copy the text of the page and paste it in. Claude uses the web_fetch command. So we're heading in the direction this article discusses.
This used to work great two years ago when ChatGPT first got the web browsing feature. Nowadays, no eyeballs on ads: no content.
What you're describing is basically a very stripped-down version of pre-SPA web pages.
We don't need MCPs for this; just make a tool that uses Trafilatura to read web pages into Markdown, create old-school server-side web UIs, and let the agents curl them.
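Something like this for the fetching half, at least. A minimal sketch, assuming a recent Trafilatura version (Markdown output was only added in later releases):

    import trafilatura

    def page_to_markdown(url):
        """Download a page and extract its main content as Markdown."""
        downloaded = trafilatura.fetch_url(url)
        if downloaded is None:
            return None
        # include_links keeps hyperlinks so the agent can keep crawling
        return trafilatura.extract(downloaded, output_format="markdown",
                                   include_links=True)

    if __name__ == "__main__":
        print(page_to_markdown("https://example.com"))

Point an agent at an entrypoint URL, let it run this (or plain curl) on whatever links it finds, and the old server-rendered web becomes the interface.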
I wonder if append-only will continue to be important. As agents get more powers, their actions will likely be the bottleneck, not the LLM itself. And at n*2, recomputing a whole new context might not take much longer than just computing the delta, or even save time if the new context is shorter.
I wonder if I can instruct LLMs to use my MCP whenever they need to access anything online, so they can bypass AI blocks when I tell them to read some docs.
This can already be done with Claude Code or most agentic tools. There will be restrictions on some online platforms, though, as LLMs are very vulnerable to prompt-injection attacks.
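For what it's worth, a rough sketch of the "use my MCP for anything online" idea, assuming the official MCP Python SDK and its FastMCP helper (the server name and User-Agent string are placeholders):

    import urllib.request
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("my-web-fetcher")  # placeholder server name

    @mcp.tool()
    def fetch_url(url: str) -> str:
        """Fetch a URL and return the response body as text."""
        req = urllib.request.Request(
            url, headers={"User-Agent": "Mozilla/5.0"})  # placeholder UA
        with urllib.request.urlopen(req, timeout=30) as resp:
            return resp.read().decode("utf-8", errors="replace")

    if __name__ == "__main__":
        mcp.run()  # stdio transport by default

You'd still have to tell the agent (e.g. in CLAUDE.md or its instructions) to prefer this tool over its built-in fetcher, and the prompt-injection caveat above applies to anything it pulls in.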
Isn't this basically RAG with a different entry point? Following links works when the corpus is well-authored and hierarchical, but most real data isn't. How do you handle relevance ranking, stale links, and huge fan-out? "Just follow hyperlinks" can blow up the context window just as easily.
HATEOAS always seemed a bit like a solution in search of a problem to me. It was a nice idea for more convenient "manual exploration" of APIs if you're a human developer and all you have is curl, but I never understood what kind of "production" scenario they were designing their constraints for. The kind of automated client that could make actual use of the metadata always seemed more of a fantasy.
...until now. It seems they finally found their problem.
> Purists have long claimed that a “truly” RESTful API should be fully self-describing, such that a client can explore and interact with it knowing nothing but an entrypoint in advance, with hyperlinks providing all necessary context to discover and consume additional endpoints.
> This never worked in practice. Building hypertext APIs was too cumbersome and to actually consume APIs a human needed to understand the API structure in a useful manner anyway.
Every time I read one of these comments I feel like DiCaprio's character in Inception going "but we did grow old together." HATEOAS worked phenomenally. Every time you go to a webpage with buttons and links in HTML that describe what the webpage is capable of (its API, if you will), you are doing HATEOAS [0]. That this interface can be consumed by both a user (via the browser) and a web scraper (via some other program) is the foundation of modern web infrastructure.
It's a little ironic that the explosion of information made possible by HATEOAS happened while the term itself largely got misunderstood, but such is life. Much like reclaiming the proper usage of its close cousin, "REST," using HATEOAS correctly is helpful for properly identifying what made the world's largest hypermedia system successful, which is useful if you endeavor to design a new one [1].
[0] https://htmx.org/essays/hateoas/
[1] https://unplannedobsolescence.com/blog/why-insist-on-a-word/
I think you're misunderstanding the purpose of HATEOAS.
If we get down to the nuts and bolts, say on a JSON API, it's about including extra attributes/fields in your JSON response that contain links and information about how to continue. These attributes are blended in with your other, real attributes.
For example, if you just created a resource with a POST endpoint, you can include a link to GET the freshly created resource ("_fetch"), a link to delete it ("_delete"), a link to list all resources of the same collection ("_list"), etc.
Then the client application is supposed to automatically discover the API's functionality. In the case of a UI, that means building a presentation layer on the fly, which the user can see and use. From our example above, the UI codebase would never have a hardcoded "delete resource" button; it would have a generic button that gets created and placed on the UI based on the _delete field coming back from the API.
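To make that concrete, here's roughly what the response body for that POST could look like, sketched in Python. The "/widgets" paths and the _fetch/_delete/_list field names are just illustrative; conventions like HAL use "_links" instead:

    def with_links(widget):
        """Blend hypermedia controls in with the resource's real attributes."""
        wid = widget["id"]
        return {
            **widget,
            "_fetch":  {"href": f"/widgets/{wid}", "method": "GET"},
            "_delete": {"href": f"/widgets/{wid}", "method": "DELETE"},
            "_list":   {"href": "/widgets", "method": "GET"},
        }

    # e.g. the JSON body returned by POST /widgets
    print(with_links({"id": 42, "name": "example"}))

A generic client (or an agent) only has to understand the link fields, not the specifics of each resource.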
I worked for a company that was all HATEOAS, in the formal sense of being explicitly structured around the concept, not just the sense that HTML has both data and actions via links. It worked, it was a real product, but it was slow and terrible to develop and debug.
The front-end UI was entirely driven by the data/action payload, which exposed both the UI and the functionality.
I'm still not sure if that was because of the implementation or because there is something fundamental.
I came away from that thinking that the DB structure, the DAG, and the data flow are what really matter for thinking about any problem space, and that UI considerations should not be first class.
But I'm not a theorist; I just found a specific, real, formal working implementation in prod to be not great, and it's a little hard even now to understand why.
Maybe it just works for purely textual interfaces, and adding any design or dynamic interaction causes issues.
I think maybe it's that the data itself should be first class: well-typed data should exist, and a system that allows any UI and behavior to be attached to that data is more important than an API saying which explicit mutations are allowed.
If I were to explore this, I think folders and files, spreadsheets, DBs, data structures are the real things, and the tools we use to mutate them are second order and should be treated as such. Any action that can be done on data should be defined elsewhere and not treated as having the same importance. But I don't know, that's just me thinking out loud.
I agree. The “purist” REST using HATEOAS is the single most successful API architectural style in history by miles. It’s the foundation of the World-Wide Web, which would not have been anywhere near as successful with a different approach.
Totally agree, the web itself is absolutely HATEOAS. But there was a type of person in the 2000s era who insisted that APIs were not truly RESTful unless they were also hypermedia APIs, and the only real benefit of those APIs was to enable overly generic API clients that were usually strictly worse than even clumsily tailored custom clients.
The missing piece was having machines that could handle enough ambiguity to "understand" the structure of the API without it needing to be generic to the point of uselessness.
I appreciate the conceptual analogy, but that's not really HATEOAS. HATEOAS would mean your browser/client would be entirely responsible for the presentation layer, in whatever form you desired, whether it's buttons or forms or pages or not even a GUI at all, such as a chat interface.
HATEOAS is hypertext as the engine of application state. When a person reads a webpage and follows links, it’s not HATEOAS, because the person is not an application.
HATEOAS and by-the-book REST don’t provide much practical value for writing applications. As the article says, a human has to read the spec, make sense of each endpoint’s semantics, and write code specific to those semantics. At that point you might as well hardcode the relevant URLs (with string templating where appropriate) rather than jumping through hoops and pretending every URL has to be “discovered” on the off chance that some lunatic will change the entire URL structure of your backend but somehow leave all the semantics unchanged.
The exception, as the article says, is if we don’t have to understand the spec and write custom code for each endpoint. Now we truly can have self-describing endpoints, and HATEOAS moves from a purist fantasy to something that actually makes sense.