alexpetros|4 months ago
> This never worked in practice. Building hypertext APIs was too cumbersome and to actually consume APIs a human needed to understand the API structure in a useful manner anyway.
Every time I read one of these comments I feel like DiCaprio's character in Inception going "but we did grow old together." HATEOAS worked phenomenally. Every time you go to a webpage with buttons and links in HTML that describe what the webpage is capable of (its API, if you will), you are doing HATEOAS [0]. That this interface can be consumed by both a user (via the browser) and a web scraper (via some other program) is the foundation of modern web infrastructure.
It's a little ironic that the explosion of information made possible by HATEOAS happened while the term itself was largely misunderstood, but such is life. Much like reclaiming the proper usage of its close cousin, "REST," using HATEOAS correctly is helpful for properly identifying what made the world's largest hypermedia system successful—useful if you endeavor to design a new one [1].
[0] https://htmx.org/essays/hateoas/
[1] https://unplannedobsolescence.com/blog/why-insist-on-a-word/
_kidlike|4 months ago
If we get down to the nuts and bolts, say on a JSON API, it's about including extra attributes/fields in your JSON response that contain links and information about how to continue. These attributes are blended in with your other, real attributes.
For example, if you just created a resource with a POST endpoint, you can include a link to GET the freshly created resource ("_fetch"), a link to delete it ("_delete"), a link to list all resources in the same collection ("_list"), etc.
The client application is then supposed to discover the API's functionality automatically. In the case of a UI, it discovers the API's functionality and builds a presentation layer on the fly, which the user can see and use. From the example above, the UI codebase would never have a "delete resource" button; it would have a generic button that is created and placed on the UI based on the _delete field coming back from the API.
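A minimal sketch of that pattern (the /widgets endpoint, the response shape, and the exact field names here are hypothetical, just mirroring the example above):

    // A hypermedia-style response: "real" attributes blended with links.
    interface WidgetResponse {
      id: string;
      name: string;      // a real attribute
      _fetch?: string;   // GET the freshly created resource
      _delete?: string;  // DELETE it
      _list?: string;    // GET the whole collection
    }

    async function createWidget(name: string): Promise<WidgetResponse> {
      const res = await fetch("/widgets", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ name }),
      });
      return res.json();
    }

    // A generic client never hardcodes a "delete" button: it renders one
    // action per link the server chose to include in the response.
    function availableActions(widget: WidgetResponse): string[] {
      const actions: string[] = [];
      if (widget._fetch) actions.push(`View: GET ${widget._fetch}`);
      if (widget._delete) actions.push(`Delete: DELETE ${widget._delete}`);
      if (widget._list) actions.push(`List all: GET ${widget._list}`);
      return actions;
    }

The server decides which actions exist by choosing which links to send; the client only knows how to turn links into controls.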
fellowniusmonk|4 months ago
The front-end UI was entirely driven by the data/action payload: both the UI and the functionality were exposed by it.
I'm still not sure if that's because of the implementation or because of something fundamental.
I came away from that thinking that the DB structure, the DAG, and the data flow are what's really important for thinking about any problem space, and that UI considerations should not be first class.
But I'm not a theorist; I just found a specific, real, formal working implementation in prod to be not great, and it's a little hard even now to understand why.
Maybe it just works for purely textual interfaces, and adding any design or dynamic interaction causes issues.
I think maybe it's that the data itself should be first class: well-typed data should exist, and a system that allows any UI and behavior to be attached to that data is more important than an API saying which explicit mutations are allowed.
If I were to explore this, I think folders and files, spreadsheets, DBs, data structures, those are the real things, and the tools we use to mutate them are second order and should be treated as such. Any action that can be done on data should be defined elsewhere and not treated as being of the same importance, but idk, that's just me thinking out loud.
alexpetros|4 months ago
The web is also a real product, one that's (when not bloated with adtech) capable of being fast and easy to develop on. That other people have tried to do HATEOAS and failed to make it nice is part of why it's so useful to acknowledge as valid the one implementation that has wildly succeeded.
mbleigh|4 months ago
The missing piece was having machines that could handle enough ambiguity to "understand" the structure of the API without it needing to be generic to the point of uselessness.
JimDabell|4 months ago
The creator of REST, Roy Fielding, literally said this loud and clear:
> REST APIs must be hypertext-driven
> What needs to be done to make the REST architectural style clear on the notion that hypertext is a constraint? In other words, if the engine of application state (and hence the API) is not being driven by hypertext, then it cannot be RESTful and cannot be a REST API. Period.
— https://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypert...
I think of all the people in the world, the creator of REST gets to say what is and isn’t REST.
tsimionescu|4 months ago
The concept of a HATEOAS API is also very simple: the API is defined by a communication protocol, 1 endpoint, and a series of well-defined media types. For a website, the protocol is HTTP, that 1 endpoint is /index.html, and the media types are text/html, application/javascript, image/jpeg, application/json and all of the others.
The purpose of this system is to allow the creation of clients and servers completely independently of each other, and to allow the protocols to evolve independently in subsets of clients and servers without losing interoperability. This is perfectly achieved on the web, to an almost incredible degree. There has never been, at least not in the last decades, a bug where, say, Firefox can't correctly display pages served by Microsoft IIS: every browser really works with every web server, and no browser or server dev even feels a great need to explicitly test against the others.
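A sketch of what such a client looks like, assuming only an entry point and a handful of media-type handlers (the dispatch table here is illustrative, not exhaustive):

    // The client knows the protocol (HTTP), one entry point, and a few
    // media types -- nothing about any particular server's URL structure.
    const handlers: Record<string, (body: string) => void> = {
      "text/html": (body) => console.log(`render HTML (${body.length} bytes)`),
      "application/json": (body) => console.log("parsed:", JSON.parse(body)),
    };

    async function browse(entryPoint: string): Promise<void> {
      const res = await fetch(entryPoint);
      const mediaType = (res.headers.get("Content-Type") ?? "").split(";")[0].trim();
      const handler = handlers[mediaType];
      if (handler) {
        handler(await res.text());
      } else {
        console.log(`no handler for ${mediaType}; skipping`);
      }
    }

    // Works against any server that speaks HTTP and these media types,
    // regardless of who built it.
    browse("https://example.com/index.html");

Everything past the entry point is negotiated through media types, which is why the two sides never need to test against each other.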
alexpetros|4 months ago
Browsers can alter a webpage with your chosen CSS, interactively read webpages out loud to you, or, as is the case with all the new AI browsers, provide LLM-powered "answers" about a page's contents. These are all recontextualizations made possible by the universal HATEOAS interface of HTML.
ffsm8|4 months ago
I guess someone interested would have to read the original work by Roy (who seems to have come up with the term) to find out which opinion is true.
jdlshore|4 months ago
HATEOAS and by-the-book REST don’t provide much practical value for writing applications. As the article says, a human has to read the spec, make sense of each endpoint’s semantics, and write code specific to those semantics. At that point you might as well hardcode the relevant URLs (with string templating where appropriate) rather than jumping through hoops and pretending every URL has to be “discovered” on the off chance that some lunatic will change the entire URL structure of your backend but somehow leave all the semantics unchanged.
The exception, as the article says, is if we don’t have to understand the spec and write custom code for each endpoint. Now we truly can have self-describing endpoints, and HATEOAS moves from a purist fantasy to something that actually makes sense.
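To make the trade-off concrete, here is a sketch of both approaches; the order/invoice resources and the _links field shape are hypothetical:

    // Hardcoded: the URL template lives in the client. Simple, and it only
    // breaks if the server reorganizes its URL structure.
    async function getInvoiceHardcoded(orderId: string): Promise<Response> {
      return fetch(`/orders/${orderId}/invoice`);
    }

    // Discovered: the client starts from the order and follows a link
    // relation. More indirection, but a human still had to learn that an
    // "invoice" relation exists and what its semantics are.
    async function getInvoiceDiscovered(orderId: string): Promise<Response> {
      const order = await (await fetch(`/orders/${orderId}`)).json();
      return fetch(order._links.invoice.href);
    }

In both cases the endpoint-specific knowledge is baked into the client; discovery only pays off once the client no longer needs that knowledge.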