top | item 276820


gunderson | 17 years ago

Right, it may work for you, which is fine.

The point about REST that really stood out to me in the comments was the one about caching.

If you know that any GET request can be cached, subject to whatever is in the Expires header or ETag, that is hugely useful information when scaling.

REST is not a straitjacket; it's just a simple way of making the /site/url/you/use + the method mean something consistent and logical.


paul | 17 years ago

If your application is a simple key/value store, then scaling won't be a problem. If it's something more complex, then such simplistic caching models won't work.

For example, the FriendFeed API includes a method that fetches multiple feeds at once: http://friendfeed.com/api/feed/user?nickname=paul,bret,jim... Where should one user PUT their updates such that a simple HTTP cache will know to invalidate that GET? It's not possible. The cache must understand the internals of the system in order to do proper invalidation here.
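The staleness problem can be sketched concretely. This is a toy model (hypothetical URLs and in-memory storage, not FriendFeed's actual code) of a cache keyed purely by request URL, which is all a generic HTTP cache has to go on:

```python
store = {"paul": "v1", "bret": "v1"}   # each user's latest update
cache: dict[str, str] = {}             # keyed by request URL only


def get(url: str) -> str:
    if url not in cache:
        names = url.split("nickname=")[1].split(",")
        cache[url] = "|".join(store[n] for n in names)
    return cache[url]


def put(user: str, value: str) -> None:
    store[user] = value
    # A generic cache can only evict the URL that was written to;
    # it has no idea the aggregate URL depends on this user.
    cache.pop("/api/feed/user?nickname=" + user, None)
```

After `put("paul", "v2")`, a GET of the aggregate URL `/api/feed/user?nickname=paul,bret` still returns the stale cached value, because nothing told the cache that the two URLs share underlying data.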

gunderson | 17 years ago

Not really; the system would only need to keep track of the last-updated time of each record and, when aggregating them, set the proper ETag.
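A minimal sketch of that idea, assuming each record carries a last-updated timestamp: derive the aggregate's ETag from all of the constituent timestamps, so an update to any one record changes the combined tag and revalidation naturally misses.

```python
import hashlib


def aggregate_etag(last_updated: dict[str, float]) -> str:
    # Hash the sorted (nickname, mtime) pairs so a change to any one
    # record changes the combined tag, regardless of request order.
    parts = sorted(last_updated.items())
    return '"%s"' % hashlib.sha256(repr(parts).encode()).hexdigest()[:16]
```

This keeps the cache generic: it only compares opaque ETags, while the origin server encodes the freshness dependency into the tag itself.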

gleb | 17 years ago

Caching is a poor argument for REST, as it's provably false.

Experience has shown that you cannot make a useful HTTP cache without replicating some custom per-app business logic into it. Paul's example below is a simple case where your HTTP cache would need to support triggers to invalidate one "resource" when another one is updated.

Worse yet, you only get one set of cache headers in your response, and you really need two: one to control the reverse proxy and one for the browser. So any reasonable reverse proxy will (configurably) ignore all caching instructions in the headers.
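For what it's worth, the one coarse split HTTP caching does offer here is `s-maxage` (RFC 2616 §14.9.3), which applies only to shared caches. A minimal example:

```
Cache-Control: max-age=0, s-maxage=300
```

With this, browsers revalidate on every request while a shared reverse proxy may serve the response for five minutes; anything finer-grained than that single distinction still has to live in proxy configuration.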

In general, the whole multi-tiered cache framework envisioned in RFC 2616 was a failure. It's just not powerful enough to do anything useful, and it gets in the way.