
How did REST come to mean the opposite of REST?

398 points | edofic | 3 years ago | htmx.org

383 comments

[+] Joeri|3 years ago|reply
The client knows nothing about the API end points associated with this data, except via URLs and hypermedia controls (links and forms) discoverable within the HTML itself. If the state of the resource changes such that the allowable actions available on that resource change (for example, if the account goes into overdraft) then the HTML response would change to show the new set of actions available.

If the client knows nothing about the meaning of the responses, it cannot do anything with them but show them to a human for interpretation. This would suggest a RESTful API is not made for system-to-system communication, but requires human mediation at every step of the way, which is precisely not how APIs are used. In short, this definition of RESTful APIs can't be right, because it suggests RESTful interfaces aren't APIs at all. Once a programmer adds code to parse the responses and extract meaningful data, treating the RESTful interface as an API, the tight coupling between client and server reappears.

[+] jaaron|3 years ago|reply
> "This would suggest a restful api is not made for system-to-system communication, but requires human mediation at every step of the way"

Which is exactly what REST was originally designed to do: provide an architecture for the Internet (not your app or service) that allows humans using software clients to interact with services developed by programmers other than those who developed the clients. It was about an interoperable, diverse Internet.

If the distributed application is not developed across distributed organizations, particularly independent unassociated organizations, then the architectural style of REST is overkill for what you intend and you could have just kept using RPC the whole time.

The point of the later RESTful API movement was to create distributed applications that leveraged the underlying architecture principles of the internet within their smaller distributed application. The theory being that this made the application more friendly and native to the broader internet, which I do agree is true, but was never the original point of REST.

That said, xcamvber [1] is right: this is me being an old person fighting an old person battle.

[1] https://news.ycombinator.com/item?id=32143382

[+] mattarm|3 years ago|reply
By my read of this "REST API" is a near oxymoron. It was never supposed to be an "API" in the sense that a program consumes it. It was originally described as "Representational State Transfer (REST) architectural style for distributed hypermedia systems" with a focus on describing resources in generic ways for consumption by hypermedia systems (not arbitrary programs!).

I think this is most clearly described by two things Fielding wrote (and the original article links to):

https://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arc...

https://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypert...

[+] wwweston|3 years ago|reply
> If the client knows nothing about the meaning of the responses, it cannot do anything with them but show them to a human for interpretation.

This is the best counterpoint in this discussion and it deserves a lot of reflection.

But that reflection should include the realization that this is what the browser does all the time. Browsers don't have any particular semantic model in their head of what any particular form action or hyperlink "means" in terms of domain logic... but they do have some semantics for broad classes of verbs and other interactions associated with markup-represented potential actions, and so they serve as an agent presenting these to the user who does whatever wetware does in processing domain semantics and logic and makes a choice.

This has worked so well that over the last 30 years the web has become the biggest platform ever.

We're now in territory where we're creating software that's almost alarmingly good at doing some semantic processing out of no particular structure. The idea that we're facing a dead end in terms of potential for clients to automatically get anything out of a hypertext API seems pretty tenuous to me.

[+] jrochkind1|3 years ago|reply
I would say "semantic web" is the key technology in an attempt to make that kind of API that doesn't need human intervention.

My understanding of the vision is that when all your responses are described, via (Fielding-original) REST APIs using RDF, with URI identifiers everywhere, then a client that has never seen a particular server can still automatically figure out useful things to do with it (per the end user's commands, expressed to the software in configuration or execution by some UI), solely by understanding enough of the identifiers in use.

You wouldn't need to write new software for each new API or server, even novel servers doing novel things would re-use a lot of the same identifiers, just mixing and matching them in different ways, and the client could still "automatically" make use of them.

I... don't think it's worked out very well. As far as actually getting to that envisioned scenario. I don't think "semantic web" technology is likely to. I am not a fan.

But I think "semantic web" and RDF are where you end up when you try to take HATEOAS/REST and deal with what you're describing: what do we need to standardize/formalize so the client can know something about the meaning of the response and be able to do something with it other than show it to a human, even for a novel interface? RDF, ostensibly.

The Fielding/HATEOAS REST and the original vision of RDF/semantic web are deeply aligned technologies, part of the same ideology or vision.

[+] Sohcahtoa82|3 years ago|reply
I wish I could upvote this 100 times.

REST, in its most strict form, feels like it was designed for humans to directly interact with. But this is exceptionally rare. Access will nearly always be done programmatically, at which point a lot of the cruft of REST is unnecessary.

[+] kragen|3 years ago|reply
Fielding isn't saying the client should know nothing about the meaning of the responses. He's saying that what the client knows about the meaning of the responses is derived by interpreting them according to the media type, which doesn't have to be HTML, rather than, for example, by looking at the URL. Quoting from what Fielding wrote in his post and comments:

> A REST API should be entered with no prior knowledge beyond the initial URI (bookmark) and set of standardized media types that are appropriate for the intended audience (i.e., expected to be understood by any client that might use the API). ...

> When I say hypertext, I mean the simultaneous presentation of information and controls such that the information becomes the affordance through which the user (or automaton) obtains choices and selects actions. Hypermedia is just an expansion on what text means to include temporal anchors within a media stream; most researchers have dropped the distinction.

> Hypertext does not need to be HTML on a browser. Machines can follow links when they understand the data format and relationship types.

That is, htmx.org or intercoolerjs.org might argue that "HATEOAS is [exclusively] for humans", but Roy Fielding doesn't agree, or didn't in 02008.

— ⁂ —

While I'm arguing, I'd also like to take exception to the claim that this discussion is irrelevant to anything people are doing today. It's an "old person battle," as some say, in the sense that old people are the people who have enough perspective to know what matters and what doesn't. REST matters because it is an architectural style that enables the construction of applications that can endure for decades, evolving without a central administrator, remaining backwards-compatible with decades-old software, and scaling to billions of simultaneous users.

This is an important problem to solve, the WWW isn't the last such application anyone ever needs to build, and JSON RPC interfaces can't build one.

The trouble with redefining "REST" to mean "not REST" is that the first step in learning known techniques to solve a problem is learning the terminology that people use to explain the techniques. If you think you know the terminology, but you have the wrong definition in your mind, you will not be able to understand the explanations, and you will not be able to figure out why you can't understand them, until you finally figure out that the definition you learned was wrong.

[+] k__|3 years ago|reply
This.

I read quite a lot about REST and HATEOAS, and it didn't make any sense to me.

Somehow the "magic sauce" was missing. How should a client that doesn't know anything about an API interpret its meaning?

I felt like an idiot. Like there was some high end algorithm or architecture that completely eluded me.

But in the end, it probably just meant: HATEOAS is for humans.

[+] recursivedoubts|3 years ago|reply
I am the author, and I agree with most of what you are saying here; REST and HATEOAS are for humans:

https://intercoolerjs.org/2016/05/08/hatoeas-is-for-humans.h...

I disagree that it isn't an API, but that's a definition quibble. It is probably more profitable to talk about RESTful systems rather than RESTful APIs, since people think API == machines talking.

[+] sanderjd|3 years ago|reply
This is my criticism of the architecture as well. But to try to take it seriously for a moment: I think the idea is supposed to be that a client can be programmed to understand the data types, but not the layout. So a programmer can teach a client that if it gets a content type of `application/json+balances`, and it sees something like `'links': [ { 'type': 'withdraw', 'method': 'post', 'url': ..., 'params': { 'amount': 'int' } } ]`, then it can know to send that method to that URL with that param, and that it semantically means a withdrawal. That's all encoded into the documentation of the data type, rather than the API. I personally consider this all overly clever and not very useful, but I think that's the idea.
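
One way to picture that idea is a client that looks up actions by their declared type rather than by hard-coded URL. This is a minimal sketch; the media type, field names, and URL here are all invented for illustration, not any real standard:

```python
# A sketch: the client is programmed against a (hypothetical) media type,
# not against any server's URL layout. Field names are illustrative.

def find_action(document, action_type):
    """Locate a typed hypermedia control in a response body."""
    for link in document.get("links", []):
        if link.get("type") == action_type:
            return link
    return None

# A response a server might send with Content-Type: application/json+balances
balance_doc = {
    "balance": 100,
    "links": [
        {"type": "withdraw", "method": "post",
         "url": "/account/12345/withdraw", "params": {"amount": "int"}},
    ],
}

withdraw = find_action(balance_doc, "withdraw")
# The client now knows where and how to send a withdrawal without the
# URL being hard-coded anywhere; the semantics come from the media type.
```

If the account goes into overdraft and the server stops including the "withdraw" control, `find_action` simply returns None and the client offers no such action.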
[+] zozbot234|3 years ago|reply
> If the client knows nothing about the meaning of the responses, it cannot do anything with them but show them to a human for interpretation.

The client "knows nothing about the meaning of the responses" only inasmuch as it intentionally abstracts away from that meaning to the extent practicable for the intended application. Of course, the requirements of a human looking at some data are not the same as those of a machine using that same data in some automated workflow.

Linked Data standards (usable even within JSON via JSON-LD) have been developed to enable a "hyperlink"-focused approach to these concerns, so there's plenty of overlap with REST principles.

[+] afiori|3 years ago|reply
It is not "If the client knows nothing about the meaning of the responses" but "The client knows nothing about the API end points associated with this data"

This means that if "/api/item/search?filter=xxxxx" returns an array of ids, then you don't have to guess that item prices can be fetched by "/api/item/price?id=nnn"; instead, this URL (maybe in template form) needs to be provided by either the "/api/item/search?filter=xxxxx" response or another call you have previously executed.

So it is very similar to how you click on links on a website. You often have a priori knowledge of the semantics of the website, but you visit the settings page by clicking on the settings link, not by manually going to "website.example/settings".

PS: these links could be provided by a separate endpoint, but this structure is often useful for things like pagination: instead of manually incrementing offsets, each paginated reply can include links for the next/previous pages and other relevant links. These need not be full URLs either; relative URLs, URL query fragments, or a JSON description of the query (together with a template URL from somewhere else) would also work.
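
The pagination case can be sketched in a few lines: the client never computes an offset, it only follows whatever "next" link the server handed it. The field names (`items`, `links`, `next`) are illustrative, not a standard:

```python
def paginate(first_page, fetch):
    """Walk a paginated API by following 'next' links instead of
    computing offsets client-side."""
    page = first_page
    while page is not None:
        yield from page["items"]
        next_url = page.get("links", {}).get("next")
        page = fetch(next_url) if next_url else None

# A fake "server": two pages linked together by a next URL.
pages = {
    "/items?page=2": {"items": [3, 4], "links": {}},
}
first = {"items": [1, 2], "links": {"next": "/items?page=2"}}

collected = list(paginate(first, pages.__getitem__))
```

The server is free to change its pagination scheme (offsets, cursors, whatever) without breaking this client, because the client never constructs a page URL itself.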

[+] peeters|3 years ago|reply
This is right on the money in identifying the continual battle with using a hypermedia API for system-to-system communication. There comes a time where you realize that what you're building is great for discoverability, but not for instructing a computer to do something on your behalf.

That said, the situation isn't entirely dire. With some standard linking conventions (e.g RFC 8288 [1]), you can largely make an API that is pleasant to interact with in code as well. That the links/actions are enumerated is good for humans to learn how to manipulate the resource. That they have persistent rels is good for telling a computer how to do the same.

Think of <link rel="stylesheet" href="foo"> as an example. A human reading the HTML source will see that there's a related stylesheet available at "foo". But a program wanting to render the HTML will check for the existence of links with rel="stylesheet".

1: https://datatracker.ietf.org/doc/html/rfc8288#section-2.1.2
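
A toy illustration of rel-based lookup: RFC 8288 link relations also appear in the HTTP `Link` header, and a client selects by rel, never by URL shape. This tiny parser handles only the simplest `<url>; rel="name"` form; a real parser must handle quoting, multiple parameters, and more:

```python
def parse_link_header(value):
    """Minimal RFC 8288-style Link header parser (common case only:
    comma-separated <url>; rel="name" entries)."""
    links = {}
    for part in value.split(","):
        url_part, _, params = part.strip().partition(";")
        url = url_part.strip().lstrip("<").rstrip(">")
        for param in params.split(";"):
            name, _, val = param.strip().partition("=")
            if name == "rel":
                links[val.strip('"')] = url
    return links

header = '</page/2>; rel="next", </page/9>; rel="last"'
links = parse_link_header(header)
```

The persistent part of the contract is the rel names ("next", "last"), which is exactly the split the parent comment describes: URLs for humans to explore, rels for programs to depend on.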

[+] Aeolun|3 years ago|reply
I don’t understand how anyone would ever expect a REST API to work for sending information into a system. So far everything I’ve seen in the article is about reading.

You’d still need some generalized format for the client to get some form of input schema, and if you send the input schema for every action every time you retrieve a resource, things quickly become very data intensive.

[+] irrational|3 years ago|reply
Why is the front end coder parsing anything? Just display the HTML.
[+] NonNefarious|3 years ago|reply
EXACTLY. Regurgitators of the HATEOAS mantra never address this. Instead you get statements like one in this article: "the HTML response "carries along" all the API information necessary to continue interacting with the system directly within itself."

No, it doesn't. It's a list of URLs, which doesn't even indicate what operations they accept. The only thing REST supports, according to this philosophy, is a user manually traipsing through the "API" by clicking on stuff.

Thanks for summing it up so succinctly; I thought maybe I was missing something.

[+] vgel|3 years ago|reply
This is incorrect. There are plenty of machines that read the web: web scrapers. They've fostered a diverse ecosystem of tools: search engines, archival tools, ML dataset collection, browser extensions, and more are all able to work with hypertext because it's self-describing. A new site can pop up and Google can index it without knowing whether it's a blog, forum, storefront, or some new genre of website that may be invented in 30 years.
[+] simiones|3 years ago|reply
This is a wrong understanding of that point.

The HATEOAS model is designed for one thing: clients and servers that are developed independently of each other. This matches how the Web is designed (browsers and servers are not developed together), but does not match how most Web Apps are developed (there is almost universally a single entity controlling both the server and the client(s) for that app).

The point of HATEOAS-style REST APIs is that the client should decide what it can do based entirely on the responses it receives from the server and its understanding of the data model - not in any way on its own knowledge of what the server may be doing. This allows both to evolve separately, while still being able to communicate.

To contrast the two approaches, let's say we are building a clone of HN. In the more common web app approach, our client may work like this:

  1. Send a POST to https://auth.hnclone.com/login with {"username": "user", "password": "pass"}; wait for a 200 OK and a cookie

  2. Send a GET request to https://hnclone.com/threads?id=user (including the cookie) and display the results to the user
In the REST approach, our client and server could work like this:

  1. Send a GET to https://hnclone.com; expect a body that contains {"links": [{"rel": "threads", "href": "https://hnclone.com/threads", "query_params": [{"name":"id", "role": "username"}]}]}

  2. Send a GET request to the URL for rel="threads", populating the query param with role="username" with the stored username -> get a 401 Unauthorized response with a body like {"links": [{"rel": "auth_page", "href": "https://auth.hnclone.com"}]}

  3. Send a GET request to the URL for auth_page and expect a response that contains {"links": [{"rel": "login", "href": "https://auth.hnclone.com/login"}]}

  4. Send a POST request to the link with rel == "login" with a body of {"username": "user", "password": "pass"}, expecting a 200 OK response and a cookie

  5. Re-send the request to the URL for rel="threads" with the extra cookies, and now get the threads you want to show
More complicated, but the client and server can now evolve more independently - they only have to agree on the meanings of the documents they exchange. The server could move its authentication to https://hnclone.com/api/auth and the client from 2 years ago would still work. The server could add (optional) support for OAUTH without breaking old clients.
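
The steps above can be sketched as a client that hard-codes only rel names and the document format, never URLs. This is a toy, offline version with invented field names and a dict standing in for the server:

```python
def follow(doc, rel):
    """Return the href of the link with the given rel, or None."""
    for link in doc.get("links", []):
        if link["rel"] == rel:
            return link["href"]
    return None

# Fake server documents, keyed by URL.
server = {
    "https://hnclone.com": {
        "links": [{"rel": "threads", "href": "https://hnclone.com/threads"}],
    },
    "https://hnclone.com/threads": {
        "threads": ["first post", "second post"],
        "links": [],
    },
}

def get(url):
    return server[url]

# The client starts from the single bookmarked URL and navigates by rel.
root = get("https://hnclone.com")
threads_url = follow(root, "threads")
threads = get(threads_url)["threads"]
```

If the server later moves threads to "/api/v2/threads", only the root document's href changes; this client keeps working unmodified, which is the evolvability claim in concrete form.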

You could even go further and define custom media formats and implement media format negotiation between client and server - the JSON format I described could be an explicit media format, and your API could evolve by adding new versions, relying on client Accept headers to decide which version to send.

Now, is this extra complexity (and slowness!) worthwhile to your use case? This depends greatly. For a great many apps, it's probably not. For some, it probably is. It has definitely proven to be extremely useful for building web browsers that can automatically talk to any HTTP server out there without custom Google/Facebook/Reddit/etc. plugins to work properly.

[+] ocimbote|3 years ago|reply
I feel old for I have witnessed many of these battles. But I feel that I have seen history.

There's nothing wrong in this article, in the sense that everything's correct and right. But it is an old person's battle (figuratively, no offense to the author intended, I'm that old person sometimes).

It would be like your grandparents correcting today's authors on their grammar. You may be right historically and normatively, but everyone does and says differently and, in real life, usage prevails over norms.

Same goes for REST.

[+] rrauenza|3 years ago|reply
I feel like the author has conflated hypertext with HTML.

The REST interface should be self describing, but that can be done in JSON.

If you go to Roy Fielding's post... there is a comment where someone asks for clarification, and he responds:

> When I say hypertext, I mean the simultaneous presentation of information and controls such that the information becomes the affordance through which the user (or automaton) obtains choices and selects actions. Hypermedia is just an expansion on what text means to include temporal anchors within a media stream; most researchers have dropped the distinction.

> Hypertext does not need to be HTML on a browser. Machines can follow links when they understand the data format and relationship types.

So, to me, a proper format is something like...

  id: 1234
  url: http://.../1234
  name: foo
  department: http://.../department/5555
  projects: http://.../projects/?user_id=1234

This is hypertext in the sense that I can jump around (even in a browser that renders the urls clickable) to other resources related to this resource.
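
One sketch of what a generic client can do with such a response: discover the outbound links without knowing what "department" or "projects" mean. The record below mirrors the shape above, with an invented host filled in for illustration:

```python
def outbound_links(resource):
    """Collect every field whose value looks like a URL; a generic
    client can render these as navigable without domain knowledge."""
    return {k: v for k, v in resource.items()
            if isinstance(v, str) and v.startswith("http")}

user = {
    "id": 1234,
    "url": "http://api.example/1234",
    "name": "foo",
    "department": "http://api.example/department/5555",
    "projects": "http://api.example/projects/?user_id=1234",
}

links = outbound_links(user)
# Every URL-valued field is navigable, just like anchors in HTML.
```
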

[+] a3w|3 years ago|reply
I found that JSON-LD, JSON-HAL, and other "describe your JSON" standards were needed to make JSON human-readable-ish. I hate that there are many competing standards, and the link syntax feels clumsy. JSON5 ("add comments, allow ES6 features") was perfect for private and small-team use for a while.

No one seems to listen to the JSON inventor, who said he regrets creating a misnomer of a name and that no successor should use the same naming parts, since it is neither dependent on nor compatible with JavaScript, nor is it only useful for storing and/or describing objects. (I am paraphrasing his reasoning on both points from memory.)

Open API 3 solved that problem for me, transforming JSON-RPC into a documented API.

[+] recursivedoubts|3 years ago|reply
No, I haven't. JSON can be used as a hypermedia, and people have tried this, but it hasn't really worked out, and the industry trend is against it, towards a more RPC-like approach.

I'm using HTML as an example to demonstrate the uniform interface constraint of REST, and showing how this particular JSON (and, I claim with confidence, most extant JSON APIs) does not have a uniform interface. Which is fine: the uniform interface hasn't turned out to be as useful when it is consumed by code rather than humans.

There are good examples of successful non-HTML hypermedias. One of my favorites is hyperview, a mobile hypermedia:

https://hyperview.org/

[+] shadowgovt|3 years ago|reply
In practice, this turns out to be a painful way for a client to access the data; instead of a single fetch to get the relevant information, we're now doing multiple fetches and collating the results client-side. For this kind of data access, I'd recommend either

a) just denormalizing the field contents if you know what the client needs them for

b) supporting GraphQL if you want to support a general-purpose query endpoint

[+] janci|3 years ago|reply
Because HATEOAS is stupid for client-server communication.

It mandates discoverability of resources, but no sane client will go around and request random server-provided urls to discover what is available.

On the other hand, it does not provide a means to describe the semantics of the resource's properties, nor its data type or structure. So the client must have knowledge of the resource's structure beforehand.

Under HATEOAS the client would need to associate the knowledge of resource structure with a particular resource received. A promising identifier for this association would be the resource collection path, i.e. the URL.

If the client needs to know the URLs, why have them in the response?

Other problems include creating a new resource: how is the client supposed to know the structure of the to-be-created resource if there is none yet? The client has nothing to request in order to discover the resource's structure and associations.

Also, hypertext does not map well to JSON. In JSON you cannot differentiate between data and metadata (i.e. links to other resources). To accommodate it, you need to wrap or nest the real data to make side-room for metadata. Then you get ugly and hard-to-work-with JSON responses. It maps pretty well to XML (i.e. metadata attributes or a metadata namespace), but understandably nobody wants to work with XML.
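
The usual workaround is the wrapping described above, e.g. HAL-style documents where a reserved `_links` key holds the metadata (HAL reserves `_links` and `_embedded`; the resource fields below are illustrative):

```python
# HAL-style response: data and link metadata live side by side,
# with the reserved "_links" key holding hypermedia controls.
order = {
    "total": 30.00,
    "currency": "USD",
    "_links": {
        "self": {"href": "/orders/123"},
        "payment": {"href": "/orders/123/payment"},
    },
}

def data_fields(resource):
    """Split plain data out from HAL metadata keys."""
    return {k: v for k, v in resource.items() if not k.startswith("_")}
```

Every consumer now has to know to skip the underscore-prefixed keys, which is exactly the "side-room for metadata" awkwardness the comment is pointing at.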

And the list goes on and on.

[+] throwaway787544|3 years ago|reply
Excellent explanation. But I think plenty of technology has been misunderstood as of late.

I've been in this industry two decades, but it's only in the past 5 years that I've noticed entire teams of absolute morons entering the field and being given six figure jobs without understanding what their job even is, much less how to do it properly. (And by "properly" I mean "know what a REST API is")

The industry is awash in quants who took a four week course in Python and are now called "data scientists"; backend software engineers who don't know what the fuck a 500 error is; senior developers who graduated a year ago; engineering managers who "hire DevOps"; product managers and DMs who don't know how to use Jira or run a stand-up. It's like all that's left is people who think they have "impostor syndrome" when they actually are impostors of professionals.

I tried to find a new job recently, and I couldn't find a single org with at least 50% of the staff properly understanding how to do their jobs. Of course half of them were bullshit VC-funded money-bleeding terrible businesses, and the other half were fat cash cows that through their industry dominance became lazy and stupid. Maybe we just hit peak tech, and all the good teams were formed by boring companies long ago and don't have new positions open. Or maybe all the good people cashed out and retired.

[+] pprotas|3 years ago|reply
This has always been the case, you just weren't experienced enough to realize it
[+] munk-a|3 years ago|reply
The short answer is... the web moved in a different way than expected, and the useful portions of REST were preserved while other portions were jettisoned (the biggest one IMO isn't the hypertext portion (JSON's fine, it's fine) but the self-discoverable portion: I haven't seen a self-discoverable REST API ever in the wild).

Unfortunately the name REST was too awesome sounding and short - so we've never had a fork with a different name that has proclaimed itself as something more accurate.

I don't think this is awful, FYI - it's sort of the evolution of tech... the OG REST wouldn't ever have gotten popular due to how stringent it was, and I can use "that it isn't RESTful enough" to reject bad code practices without anyone being able to call me out on it, because nobody actually remembers what REST was supposed to be.

I'd also add - what precisely is self-descriptive HTML? All descriptions need reference points and nothing will be universally comprehensible - let alone to a machine... expecting a machine to understand what "Insert tendies for stonks." on an endpoint is unreasonable.

[+] eterevsky|3 years ago|reply
If you are designing a REST API, please don't follow this author's advice and return HTML instead of JSON. HTML takes much longer to parse and makes the API more fragile.

History did its job: it preserved the most useful features of the original idea (expressing RPCs as URLs in GET and POST requests) and dropped the unnecessarily complicated bits.

What this article is about is a pedantic terminology battle of whether to call the current practice REST or not.

[+] krainboltgreene|3 years ago|reply
> If you are designing a REST API, please don't follow this author's advice and return HTML instead of JSON. HTML takes much longer to parse and makes the API more fragile.

you're describing what a browser does.

> has dropped the unnecessarily complicated bits.

you're viewing this content from a browser.

[+] irrational|3 years ago|reply
Don’t parse the HTML, just display it. That’s what the browser does best.
[+] z9znz|3 years ago|reply
This article really is about getting people to use their overloaded HTML libraries.
[+] stuckinhell|3 years ago|reply
Software Development is like water filling a container. It's always water but its form takes the shape of its container.

After doing this for 15+ years, I tell my junior developers to take it easy on the "proper way" of doing things. It will change, people will argue, and money talks.

[+] potamic|3 years ago|reply
Great article. Calling APIs RESTful because they return JSON has always been a peeve of mine. But here's the question though: why do APIs need to be RESTful? What is the need for a client to have no knowledge of the server, if the server can also provide logic that can run on the client? In some sense, one could argue that a service that provides both raw data and client logic to transform raw data to hypermedia is still very much in the spirit of REST. Webapps by definition must satisfy this requirement, so it is moot to be asking if a webapp follows RESTful principles. Of course it does, it runs on the web! Native apps, on the other hand, have sure ruined everything.
[+] femto113|3 years ago|reply
There are two fundamental reasons HATEOAS just doesn't work in practice. The first is that most services can't easily or reliably know their own absolute URLs, and HATEOAS (and the behavior of many HTTP libraries) is inconsistent around relative URLs, so hacky and unmaintainable response rewriting gets applied at various levels. The second is that if you are diligently following a convention for how paths are constructed, it's utterly redundant information; you can simply synthesize any path you need more easily than you can grok it out of HATEOAS. The reasonable bits of REST that are left are not just HTTP verbs and JSON, but significantly the use of entity type names and ids in the path portion of the URL rather than in query parameters.
[+] quickthrower2|3 years ago|reply
I never understood this and still don’t.

The P in API is programming. Specifically, it is a programmatic call.

REST says you get hyperlinks, which are effectively documentation in the response.

Which is nice.

But a program isn’t a person it doesn’t need docs in the response.

And URL links are not sufficient documentation to use the interface.

So I don’t get the REST use case outside of some university AI project where your program might try to “make sense” of the API.

And therefore I have never tried to use REST and I have never seen anyone else either at anywhere I have worked.

It is a nonsense concept to me.

REST API is a contradiction in terms.

[+] hsn915|3 years ago|reply
I don't know if you read the article.

REST is a post-hoc description of how the web worked (at the time it was made).

You had web pages with hypertext content, and that included forms. The forms had fields with names and an "action" destination.

The client (the browser) knows nothing about the server or even its APIs. It just knows how to send an HTTP request with parameters. In the case of forms, those parameters were encoded in the body of a POST request. That's it.

There was no "client side code" that talked to the server.

The "client side" is literally just the browser. Talking to the server is done by the user clicking links and filling forms.

I don't think the article is particularly encouraging you to program this way in 2022. It's just telling you that if you are not programming in this way, do not call what you are doing "REST", because it is not.

[+] cmonnow|3 years ago|reply
this whole discussion seems pedantic.

whether it is a local function or a remote function, both caller and callee need to agree on the parameters (input), and returns (output).

I send you X. You send me back Y. That's it - this is the contract we both agree to.

OP is saying - the caller should NEVER do anything with Y other than display it on screen, for it to be called REST. Well - why even display it, why not just discard it ? Calling print(Y) is as good as calling businesslogic(Y). Whatever further logic a human plans to do after print(Y), a machine can do the same.

In other words, REST is just step 1 of returning data from a remote function. The moment you code any additional logic on the returned data (which is 99% of use cases), it's not REST anymore ? Sounds like an extremely limited definition /use case of REST.

[+] twoodfin|3 years ago|reply
This article does a disservice to the benefits of “Richardson Maturity Level 2” i.e. “Use REST verbs”.

A standard set of methods—with agreed upon semantics—is a huge architectural advantage over arbitrary “kingdom of nouns” RPC.

I’d argue that by the time your API is consistently using URLs for all resources and HTTP verbs correctly for all methods to manipulate those resources, you’ve achieved tremendous gains over an RPC model even without HATEOAS.
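
The gain from that standard set of methods is concrete: one generic dispatcher can serve any resource, because the verb semantics are agreed upon in advance. A minimal sketch with an in-memory store (the paths and payloads are illustrative):

```python
# With uniform verb semantics, one handler covers every resource type:
# GET reads, PUT replaces, DELETE removes. No per-noun method names.
store = {}

def handle(method, path, body=None):
    """Generic resource handler: the same verbs work for any path."""
    if method == "GET":
        return (200, store[path]) if path in store else (404, None)
    if method == "PUT":
        store[path] = body
        return (200, body)
    if method == "DELETE":
        return (204, None) if store.pop(path, None) is not None else (404, None)
    return (405, None)  # method not allowed

handle("PUT", "/users/1", {"name": "ada"})
status, user = handle("GET", "/users/1")
```

Contrast this with a "kingdom of nouns" RPC surface (createUser, fetchUser, removeUser, createOrder, ...), where every new noun multiplies the method vocabulary instead of reusing it.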

[+] kelseyfrog|3 years ago|reply
An earnest answer? Because few people have taken the couple of hours(?) to read Roy Fielding's dissertation from start to finish. The biggest likely reason for not doing so is that frankly a bunch of people simply don't care, and why should they? There's very little incentive to do so. In fact, the fewer people that do, the less incentive there is: no one can call them out on it, and they can reap the rewards of calling an API RESTful regardless of the accuracy of the statement.

Having worked in an organization where people were very familiar with the academic definition of REST, the biggest benefit of being a backend developer was that when client-side folks depended on nonRESTful behavior, we had some authority to back that claim. It gave us leeway in making some optimizations we couldn't have made otherwise, and we got to stick to RFCs to resolve many disputes rather than use organizational power to force someone to break compliance to standards. I suppose it meant that we were often free to bikeshed other aspects of design instead.

[+] trixie_|3 years ago|reply
The dissertation is neither a specification nor a standard, which has led to decades-long bickering over what 'REST' really is.

Edit: See how this post has zoomed to hundreds of comments in just minutes, with people arguing over the 'one true REST'. The situation is insufferable.

[+] fennecfoxen|3 years ago|reply
I tend to find it helpful to ask, "is this proper REST as in HATEOAS, or is it just 'REST-ish'?" It's usually just REST-ish: predefined URL patterns, roughly 1:1 with specific resource APIs; they care about your HTTP verb and return JSON (but usually not with many URLs in that JSON).
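A sketch of the distinction (data and field names invented for illustration): a "REST-ish" payload assumes the client already knows the URL scheme, while a HATEOAS-style payload embeds its own hypermedia controls.

```python
# REST-ish: client must construct URLs itself from out-of-band knowledge.
rest_ish = {"id": 42, "balance": 100}

# HATEOAS-style: the next available actions are discovered in the response.
hateoas = {
    "id": 42,
    "balance": 100,
    "_links": {
        "self":     "/accounts/42",
        "deposit":  "/accounts/42/deposits",
        "withdraw": "/accounts/42/withdrawals",
    },
}

# A hypermedia client follows links rather than hard-coding URL patterns:
deposit_url = hateoas["_links"]["deposit"]
```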
[+] sanderjd|3 years ago|reply
Counterpoint: This all got a lot worse and more confusing for me when I read the dissertation. What people seemed to call RESTful struck me as a bit convoluted but fine and workable. But after I read the dissertation, it was clear that all that was "wrong", but I was left without any real concept of what was both right and workable. It took me years to realize I could just ignore the whole thing and focus on designing useful interfaces. (And that, indeed, this is what most people had been doing already.)
[+] wink|3 years ago|reply
I think you're missing a little historical context here, but maybe you were already more experienced back then than I was (which would mean your bubble was in the know and the unwashed masses weren't). I started making websites in 1998 and earning money with it in 2001ish (the year Wikipedia launched), and imho back then it was a lot harder to find the 'correct' way when teaching yourself. I saw REST/RESTful and best practices for the web mostly develop after 2005-2008 (yeah, around the time of the great Rails exodus as well), as people started to standardize more. What I'm trying to say is that web development was all more the Wild West, and by the time people had more insight, the naming/switch to JSON had already begun. I also don't remember when I first read the REST dissertation, probably much later than 2001.
[+] lloydatkinson|3 years ago|reply
At this point I’ve stopped caring about REST. It’s like agile and scrum where everyone says they are doing it but everyone has their own opinion of what’s correct.

As long as there’s an OpenAPI spec, sane API routes, and it uses a format that’s easily consumable in a given ecosystem (so pretty much always JSON anyway), and it doesn’t do anything dumb like return 200 OK { error: true }, then I’m happy with it. Too much bikeshedding.

Bonus points if the API has a client library also.
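The 200 OK { error: true } anti-pattern mentioned above, sketched minimally (status codes and payloads invented for illustration): when the application error hides behind a transport-level success, clients can no longer trust the status code alone.

```python
# Anti-pattern: HTTP says "success", the body says "failure".
bad  = (200, {"error": True, "message": "account not found"})
# Conventional: let the status code carry the error semantics.
good = (404, {"message": "account not found"})

def is_error(status, body):
    # With the anti-pattern in play, checking status >= 400 is not enough;
    # every client must also inspect the body's ad-hoc error flag.
    return status >= 400 or body.get("error", False)
```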

[+] weeksie|3 years ago|reply
The usefulness of metadata in REST responses ended up not mattering as much as people thought in most cases. Pagination is the best counterexample, and many REST APIs do return next/prev links in the payload or in the headers. It’s still REST, but only the parts that mattered (HTTP verbs for semantic info, etc.) stayed around.
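A minimal sketch of that pagination pattern (field names like `links.next` are invented for illustration; real APIs vary, e.g. HAL-style `_links` or HTTP `Link` headers): the client walks next-links instead of hard-coding URL structure.

```python
# A page payload carrying its own navigation metadata:
page = {
    "items": [{"id": 1}, {"id": 2}],
    "links": {"next": "/items?page=2", "prev": None},
}

def fetch_all(first_page, get_page):
    """Follow next-links until exhausted; get_page maps a URL to a payload."""
    items, current = [], first_page
    while current:
        items.extend(current["items"])
        nxt = current["links"].get("next")
        current = get_page(nxt) if nxt else None
    return items
```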
[+] danielovichdk|3 years ago|reply
Seems strange to me that people fight over REST, since it is very clearly communicated in the aforementioned PhD dissertation.

I fully understand that complexity which historically belonged on the server has shifted just as much onto the client, but that's not a question of being RESTful. It's not RESTful to have the client determine application state, so to speak; the server does that for you.

Catering to multiple clients via an API is of tremendous value, but you still moved the complexity onto the client; you cannot argue against that.

I find it fascinating to have at least blobs of state/representation served without having to fiddle with the inner workings of an API, simply relying on the API to give me what I must show the user.

I am in the HATEOAS camp; it sits well with me. But that's just me.

[+] ineptech|3 years ago|reply
I regularly interview devs and ask, "What makes a RESTful API RESTful?" and have never heard anyone mention hypertext or hypermedia. A typical answer is: stateless, uses HTTP and HTTP verbs, and (if the person is over 40) easier to read than SOAP.

Related, it seems like "API" is quickly becoming synonymous with "web service API". In my experience, the thing that goes between two classes is almost always referred to as just an "interface".

[+] ahepp|3 years ago|reply
I'm walking away from this article unable to find a really good reason to implement HATEOAS in an API meant to be consumed by a program (as Application Programming Interfaces typically are).

The best I can come up with (and this is me trying to like it) is that I guess the API is somewhat self documenting?

I see benefits to resource orientation and statelessness, but why do people get so upset about these APIs not following HATEOAS? Is it just a form of pedantry, that it's not really a REST API, it's a sparkling JSON-RPC?

[+] civilized|3 years ago|reply
Because no one knows what the hell the acronym means, except that it sounds good.

Everyone wants to be RESTful. RESTful is chill. It's resting - good programmers are lazy! But RESTful is resting while using an acronym, which is technical and sophisticated. To be RESTful is to be one of the smart lazy ones.

Now if you're one of the few who cares what your acronyms mean, you look it up and ... "representational state transfer". How do you transfer state without representing it? I guess everything that transfers state is RESTful. And everything is state, so everything that transfers is RESTful. So every API is RESTful! Great, I guess if we make an API we're one of those cool smart and lazy people. And let's make sure to call it RESTful so that everyone knows how cool, smart, and lazy-yet-technical we are.

Roy Fielding made a meaningless but cool-sounding acronym popular and has reaped the predictable consequences.

[+] etamponi|3 years ago|reply
REST, noun, acronym.

1. A term to indicate APIs that use HTTP as the transport protocol, and typically JSON as representation.

2. (archaic) A term coined in a paper from 2000, indicating a model that describes how the internet works.