item 45701221

alexpetros | 4 months ago

> Purists have long claimed that a “truly” RESTful API should be fully self-describing, such that a client can explore and interact with it knowing nothing but an entrypoint in advance, with hyperlinks providing all necessary context to discover and consume additional endpoints.

> This never worked in practice. Building hypertext APIs was too cumbersome and to actually consume APIs a human needed to understand the API structure in a useful manner anyway.

Every time I read one of these comments I feel like DiCaprio's character in Inception going "but we did grow old together." HATEOAS worked phenomenally. Every time you go to a webpage with buttons and links in HTML that describe what the webpage is capable of (its API, if you will), you are doing HATEOAS [0]. That this interface can be consumed by both a user (via the browser) and a web scraper (via some other program) is the foundation of modern web infrastructure.

It's a little ironic that the explosion of information made possible by HATEOAS happened while the term itself was largely misunderstood, but such is life. Much like reclaiming the proper usage of its close cousin, "REST," using HATEOAS correctly is helpful for properly identifying what made the world's largest hypermedia system successful—useful if you endeavor to design a new one [1].

[0] https://htmx.org/essays/hateoas/

[1] https://unplannedobsolescence.com/blog/why-insist-on-a-word/

_kidlike|4 months ago

I think you're misunderstanding the purpose of hateoas.

If we get down to the nuts and bolts, say on a JSON API, it's about including extra attributes/fields in your JSON response that contain links and information about how to continue. These attributes are blended in with your other, real attributes.

For example, if you just created a resource with a POST endpoint, you can include a link to GET the freshly created resource ("_fetch"), a link to delete it ("_delete"), a link to list all resources of the same collection ("_list"), etc.

Then the client application is supposed to automatically discover the API's functionality. In the case of a UI, it's supposed to discover the API's functionality and build a presentation layer on the fly, which the user can see and use. From our example above, the UI codebase would never have a hardcoded "delete resource" button; it would have a generic button created and placed on the UI based on the _delete field coming back from the API.
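A minimal sketch of what such a response might look like. The link field names (`_fetch`, `_delete`, `_list`) follow the comment above and are illustrative, not any standard; the `/widgets` resource is hypothetical.

```python
# Sketch of a HATEOAS-style JSON response for a freshly created resource,
# blending "real" data attributes with hypermedia links that tell a generic
# client how to continue. Field names and paths are illustrative only.

def make_created_response(resource_id: int) -> dict:
    """Build the body a POST endpoint might return after creating a resource."""
    base = f"/widgets/{resource_id}"
    return {
        # real data attributes
        "id": resource_id,
        "name": "example widget",
        # hypermedia controls, blended in alongside the data
        "_fetch": {"href": base, "method": "GET"},
        "_delete": {"href": base, "method": "DELETE"},
        "_list": {"href": "/widgets", "method": "GET"},
    }

response = make_created_response(42)
# A generic client would render a button from this link, not hardcode it:
print(response["_delete"])  # → {'href': '/widgets/42', 'method': 'DELETE'}
```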

fellowniusmonk|4 months ago

I worked for a company that was all HATEOAS, in the formal sense: explicitly structured around the concept, not just the sense that HTML has both data and actions via links. It worked and it was a real product, but it was slow and terrible to develop and debug.

The front end ui was entirely driven, ui and functionality exposed by the data/action payload.

I'm still not sure if it's because of the implementation or because of something fundamental.

I came away from that thinking that the DB structure, the DAG, and the data flow are what's really important for thinking about any problem space, and that UI considerations should not be first class.

But I'm not a theorist; I just found a specific, genuinely formal working implementation in prod to be not great, and it's a little hard even now to understand why.

Maybe it just works for purely text interfaces, and adding any design or dynamic interaction causes issues.

I think maybe it's that the data itself should be first class: well-typed data should exist, and a system that allows any UI and behavior to be attached to that data is more important than an API saying what explicit mutations are allowed.

If I were to explore this, I think folders and files, spreadsheets, DBs, and data structures are the real things, and the tools we use to mutate them are second order and should be treated as such. Any action that can be done on data should be defined elsewhere and not treated as having the same importance. But idk, that's just me thinking out loud.

alexpetros|4 months ago

> I worked for a company that was all HATEOAS, in the formal sense: explicitly structured around the concept, not just the sense that HTML has both data and actions via links. It worked and it was a real product, but it was slow and terrible to develop and debug.

The web is also a real product, one that's (when not bloated with adtech) capable of being fast and easy to develop on. That other people have tried to do HATEOAS and failed to make it nice is part of why it's so useful to acknowledge as valid the one implementation that has wildly succeeded.

dpe82|4 months ago

Was this recently, using something like HTMX? Or years ago using some other system (or pure/standard HTML)?

JimDabell|4 months ago

I agree. The “purist” REST using HATEOAS is the single most successful API architectural style in history by miles. It’s the foundation of the World-Wide Web, which would not have been anywhere near as successful with a different approach.

mbleigh|4 months ago

Totally agree, the web itself is absolutely HATEOAS. But there was a type of person in the 2000s era who insisted that APIs were not truly RESTful if they weren't also hypermedia APIs, when the only real benefit of those APIs was to enable overly generic API clients that were usually strictly worse than even clumsily tailored custom clients.

The missing piece was having machines that could handle enough ambiguity to "understand" the structure of the API without it needing to be generic to the point of uselessness.

JimDabell|4 months ago

> there was a type of person in the 2000s era who insisted that APIs were not truly RESTful if they weren't also hypermedia APIs

The creator of REST, Roy Fielding, literally said this loud and clear:

> REST APIs must be hypertext-driven

> What needs to be done to make the REST architectural style clear on the notion that hypertext is a constraint? In other words, if the engine of application state (and hence the API) is not being driven by hypertext, then it cannot be RESTful and cannot be a REST API. Period.

https://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypert...

I think that, of all the people in the world, the creator of REST gets to say what is and isn't REST.

not_kurt_godel|4 months ago

I appreciate the conceptual analogy, but that's not really HATEOAS. HATEOAS would mean your browser/client would be entirely responsible for the presentation layer, in whatever form you desired, whether it's buttons or forms or pages or not even a GUI at all, such as a chat interface.

tsimionescu|4 months ago

The Web is not only true HATEOAS, it is in fact the motivating example for HATEOAS. Roy Fielding's dissertation, which introduced the concept, is exactly about the web: REST and HATEOAS are the architectural patterns he introduces primarily to guide the design of HTTP for the WWW.

The concept of a HATEOAS API is also very simple: the API is defined by a communication protocol, 1 endpoint, and a series of well-defined media types. For a website, the protocol is HTTP, that 1 endpoint is /index.html, and the media types are text/html, application/javascript, image/jpeg, application/json and all of the others.
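Under those assumptions, a generic client's whole job can be sketched: start from the single entrypoint, dispatch on a known media type, and follow only the links the server advertises. The in-memory "server" dict below stands in for HTTP, and all paths are illustrative.

```python
# Sketch of a HATEOAS-style generic client: it knows one entrypoint and one
# media type in advance, and discovers every other resource by following
# links. The SERVER dict simulates responses; paths are hypothetical.

SERVER = {
    # url: (media type, links advertised by the resource)
    "/index.html": ("text/html", ["/about", "/contact"]),
    "/about": ("text/html", ["/index.html"]),
    "/contact": ("text/html", []),
}

def crawl(entrypoint: str) -> set[str]:
    """Visit every resource reachable from the single entrypoint."""
    seen: set[str] = set()
    todo = [entrypoint]
    while todo:
        url = todo.pop()
        if url in seen:
            continue
        media_type, links = SERVER[url]   # "fetch" the resource
        assert media_type == "text/html"  # dispatch on a well-defined media type
        seen.add(url)
        todo.extend(links)                # follow advertised links only
    return seen

print(crawl("/index.html"))  # → {'/index.html', '/about', '/contact'}
```

Nothing outside the entrypoint and the media types is hardcoded, which is why clients and servers built this way can evolve independently.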

The purpose of this system is to allow the creation of clients and servers completely independently of each other, and to allow the protocols to evolve independently in subsets of clients and servers without losing interoperability. This is perfectly achieved on the web, to an almost incredible degree. There has never been, at least not in recent decades, a bug where, say, Firefox can't correctly display pages served by Microsoft IIS: every browser really works with every web server, and no browser or server dev even feels a great need to explicitly test against the others.

alexpetros|4 months ago

> HATEOAS would mean your browser/client would be entirely responsible for the presentation layer, in whatever form you desired, whether it's buttons or forms or pages or not even a GUI at all, such as a chat interface.

Browsers can alter a webpage with your chosen CSS, interactively read webpages out loud to you, or, as is the case with all the new AI browsers, provide LLM powered "answers" about a page's contents. These are all recontextualizations made possible by the universal HATEOAS interface of HTML.

Descon|4 months ago

That's a little picky, maybe it's HATEOAS + a little extra presentation sauce (the hottest HATEOAS extension!)

ffsm8|4 months ago

From what I read on Wikipedia, I'm not sure what to think anymore - it does at least sound in line with the opinion that current websites actually are HATEOAS.

I guess someone interested would have to read the original work by Roy Fielding (who seems to have come up with the term) to find out which opinion is true.

jdlshore|4 months ago

HATEOAS is hypertext as the engine of application state. When a person reads a webpage and follows links, it’s not HATEOAS, because the person is not an application.

HATEOAS and by-the-book REST don’t provide much practical value for writing applications. As the article says, a human has to read the spec, make sense of each endpoint’s semantics, and write code specific to those semantics. At that point you might as well hardcode the relevant URLs (with string templating where appropriate) rather than jumping through hoops and pretending every URL has to be “discovered” on the off chance that some lunatic will change the entire URL structure of your backend but somehow leave all the semantics unchanged.
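The contrast can be made concrete. In the sketch below, both the URL template and the `"items"` link-relation name are hypothetical; the point is only that "discovery" here amounts to reading a template out of an entry document instead of writing it inline.

```python
# Sketch of the two styles the comment contrasts: hardcoding a URL (with
# string templating) vs "discovering" it from an entry document. The
# _links structure and the "items" relation name are illustrative only.

ENTRY_DOC = {"_links": {"items": {"href": "/api/items/{id}"}}}

def item_url_hardcoded(item_id: int) -> str:
    # Pragmatic style: just hardcode the relevant URL.
    return f"/api/items/{item_id}"

def item_url_discovered(entry_doc: dict, item_id: int) -> str:
    # By-the-book style: read the template from the entry document, in case
    # the backend reshapes its URL structure while keeping the semantics.
    template = entry_doc["_links"]["items"]["href"]
    return template.format(id=item_id)

# Both produce the same URL; the second merely survives a URL reshuffle
# that, as the comment notes, rarely happens without semantic changes too.
print(item_url_hardcoded(7), item_url_discovered(ENTRY_DOC, 7))
# → /api/items/7 /api/items/7
```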

The exception, as the article says, is if we don’t have to understand the spec and write custom code for each endpoint. Now we truly can have self-describing endpoints, and HATEOAS moves from a purist fantasy to something that actually makes sense.