top | item 7658237

Your REST API should come with a client

107 points | gane5h | 12 years ago | silota.com | reply

80 comments

[+] patio11|12 years ago|reply
Relatedly, you'll want to ship first-party or "blessed" libraries for the big programming stacks as quickly as feasible. Start with the one your team uses and then roll out to the ones which are big with in your customers' industries. They're vastly easier to consume for many of your users than a standard REST API is. (Compare: Twilio::Sms.send(from, to, "This is a message.") with Httparty.post("https://api.twilio.com/2010-04-01/Accounts/{AccountSid}/Mess..., {:From => from, :To => to, :Message => "This is a message"}). Of course, in an actual application, after the third time writing that I'd start working on a buggy poorly tested wrapper for half of the Twilio API, or use someone's OSS buggy poorly tested wrapper for half of the Twilio API. I actually did this prior to Twilio releasing a first-party Ruby API and will be crying tears about that timing for years to come.)
[+] thejosh|12 years ago|reply
Funnily enough, Mailgun[0] has started to feature code snippets for cURL/Ruby/Python/PHP/Java/C# on their homepage for sending mail.

[0] http://www.mailgun.com/

[+] mattmanser|12 years ago|reply
Gah, I have mixed feelings about this.

When you're using their internal language it's often great. Any other and it's often a nightmare. For example, it's very common to see people write bloody awful .Net wrappers that are completely non-idiomatic and a complete pita to use.

Often they're written in a way that makes it clear the author doesn't understand OO, or hasn't kept up with C# and still thinks it's just like Java, so writes extremely old-fashioned code. And namespaces. They want you to use a million namespaces. It's a minor thing, but a completely unnecessary complication. They could stick everything in one namespace and get no conflicts.

And then, because they've put out client libraries, they don't document their API properly.

Google, as usual, is the worst offender: their .Net library is really bad and incredibly overcomplicated. It does make you wonder about all the hype around their 'best' engineers.

The other problem is that they think you'll be using their API one way, when it needs to be another, and their code just gets in the way. But because they don't have a snippet without using their library, you end up having to ILSpy their library, only to be greeted with shockingly bad code with millions of pointless interfaces that only get used once, because, again, they don't understand .Net.

[+] pbreit|12 years ago|reply
You're typically going to copy/paste a snippet either way. Doesn't seem like a huge win.
[+] pbreit|12 years ago|reply
I still feel like clients defeat the whole purpose of using simple technologies like HTTP verbs and JSON. But I can see that a) if your constituents want them you probably have to provide them and b) there might be some marketing benefit. Other than that, I think it's a shame.

And the reasons in the post are not particularly compelling. 1) Batch requests are usually unnecessary or benefit from a call optimized for batching. 2) Caching rarely needed, potentially dangerous and can be done elsewhere. 3) Throttling can/should be performed elsewhere and no way to prevent DOS anyway. 4) Timeouts are usually easy. 5) GZIP rarely necessary. 6) Dangerous to let someone else's code do it.

[+] martin-adams|12 years ago|reply
Whenever I use a third party REST library that doesn't have an SDK, usually the first thing I do is wrap it all up in what would be the SDK client.

This way I can bring the following into my application:

- Parameterised methods with code doc (so when I reference it I can see what's what in my IDE).
- Exception handling.
- My own batch methods in the absence of them in the API. E.g. a book-delivery-date API = get delivery slots for address, select the appropriate delivery slot matching the date, book it. All this can be one client method which raises an exception when things go wrong.

[+] philsnow|12 years ago|reply
"5) GZIP rarely necessary."

If the server already has the bytes gzipped, it is often a pure win to ship the gzipped bytes: the client may be able to finish uncompressing the gzipped response earlier than it could otherwise have received the last byte of the uncompressed response.
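The finish-early point can be sketched with Python's zlib, which decompresses a gzip stream incrementally as chunks arrive instead of waiting for the full body (the function name gunzip_stream is mine, not from any client library):

```python
import zlib

def gunzip_stream(chunks):
    """Incrementally decompress gzip-encoded chunks as they arrive,
    yielding plaintext before the last compressed byte is received."""
    # wbits = MAX_WBITS | 16 tells zlib to expect a gzip header/trailer
    d = zlib.decompressobj(wbits=zlib.MAX_WBITS | 16)
    for chunk in chunks:
        out = d.decompress(chunk)
        if out:
            yield out
    tail = d.flush()  # drain any remaining buffered output
    if tail:
        yield tail
```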

[+] jkrems|12 years ago|reply
I don't think the idea in the post was to perform throttling in the client. The idea is to provide best-practice handling of throttling in the client (e.g. what to do when receiving a 429). Since the 429 is mentioned in the article, it's a reasonable assumption that the throttling happens outside of the client.
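What that best-practice 429 handling might look like, sketched in Python (the function name and default values here are illustrative, not from any real client):

```python
import random

def retry_delay(attempt, retry_after=None, base=0.5, cap=60.0):
    """Seconds to sleep before retrying a throttled (429) request.
    Honour the server's Retry-After header when present; otherwise
    fall back to exponential backoff with full jitter."""
    if retry_after is not None:
        return float(retry_after)
    # full jitter: pick uniformly from [0, capped exponential delay]
    return random.uniform(0, min(cap, base * 2 ** attempt))
```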
[+] CraigJPerry|12 years ago|reply
If REST client libs didn't suck, this wouldn't be needed.

It's a load of effort to correctly consume a RESTful web service currently.

I should be able to get going in my REPL with something like:

    >>> from rest import client
    >>> proxy = client(url)
    >>> print proxy.resources
    ['foo', 'bar']
    >>> help(proxy.foo)
    Help text from the rest service...
    >>> proxy.foo(123)
    321
    # repeat calls transparently handle caching
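For what it's worth, much of a wishlist like this can be faked with dynamic attribute access; a toy sketch (the RestProxy class and the transport-callable shape are my own, not an existing library):

```python
class RestProxy:
    """Toy dynamic REST client: attribute access builds a URL path,
    calling it issues the request. `transport` is any callable
    (url -> response), so the sketch needs no real network."""

    def __init__(self, base, transport, path=()):
        self._base, self._transport, self._path = base, transport, path

    def __getattr__(self, name):
        # Unknown attributes extend the resource path: api.repos -> /repos
        return RestProxy(self._base, self._transport, self._path + (name,))

    def __call__(self, *segments):
        # Calling appends positional segments and fires the request
        url = "/".join((self._base,) + self._path
                       + tuple(str(s) for s in segments))
        return self._transport(url)
```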
[+] icebraining|12 years ago|reply
Have you tried Hammock? It fakes it very well:

  >>> from hammock import Hammock as Github

  >>> # Let's create the first chain of hammock using base api url
  >>> github = Github('https://api.github.com')

  >>> # OK, let the magic happen: ask GitHub for hammock watchers
  >>> resp = github.repos('kadirpekel', 'hammock').watchers.GET()

  >>> # now you're ready to take a rest for the rest of the code :)
  >>> for watcher in resp.json: print watcher.get('login')
  kadirpekel
  ...
  ..
Your "resources" example only works if there's a /resources endpoint, but client.foo(id).GET() works just fine.

https://pypi.python.org/pypi/hammock

[+] woogle|12 years ago|reply
Thanks! I thought I was the only one. The RESTful API is designed to be really simple for clients. If you designed yours well, your users won't need your client API.

For instance, on iOS we have the wonderful RestKit client[0]. If you create your own client it means I would have to write specific cases for your API and miss all the RestKit features. Don't get me wrong, I could still use RestKit with your API, but when I see an API client available, I always think "this API may be badly designed; it needs specific code".

[0](https://github.com/RestKit/RestKit)

[+] evv|12 years ago|reply
A few things off the top of my head that are different across many RESTful APIs and prevent a unified client from being possible:

- Referring to other data paths in the API in a standard format

- Authentication. Aside from OAuth, it is totally different on each API. Most APIs avoid OAuth because it's a PITA

- Partial PUTs vs PATCH vs sub-resources

All sorts of minor things are different across REST APIs, and that results in client libraries needing to be widely different.

We are in need of a standardized API format which we can build compatible server & client libraries against. Something like SOAP for the JSON era. There are a few out there, but none have really gone anywhere. Are any of these extended-REST wrappers in production use at any big companies? I'd love to be corrected.

If not, maybe a high-profile company with a really nice API design could publish a standard on their API structure and refactor out their transport code to provide us with these libraries. If successful, they could be known for introducing a widely-used transport layer for the web industry.

[+] lmm|12 years ago|reply
This kind of thing is why I'm predicting a return to WS-*, or else a reimplementation of it on top of REST. The XML backlash was mostly correct but having a standard, well-specified way of creating HTTP APIs and generating clients for them from a single endpoint is a baby we threw out with the bathwater.
[+] voltagex_|12 years ago|reply
I wonder how to implement that. This kind of stuff is generated from WSDLs in C#/VS, but it's an interesting idea to do it for a REST service.
[+] chacham15|12 years ago|reply
It's very difficult, if not impossible, to make a client which behaves correctly for all services. A simple example to demonstrate my point: the Facebook FQL API limits the maximum size of a result set at any point in time to 5000. Therefore, you have to work around that limitation in a very specific way that no generic client can handle properly. That is partly why the advice given in the OP is so good: because different services can and do handle each of those points in a different way.
[+] mjs|12 years ago|reply
Agreed; REST clients/frameworks should have ways of handling all six of these problems--none of them are specific to a particular API.

If the client lib has an API-specific way of handling "caching", say, then the number of ways of handling "caching" is O(n) in the number of APIs you consume. If you use the REST APIs directly, then HTTP's standard, debugged, documented caching mechanism is the only one you ever need to know.
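That one standard mechanism can be wrapped once and reused against every API; a minimal conditional-GET sketch in Python (the cached_get name and the fetch-callable signature are assumptions for illustration):

```python
def cached_get(url, cache, fetch):
    """One generic HTTP caching path for every API: send the stored
    ETag as If-None-Match, and on 304 reuse the cached body.
    `fetch(url, headers) -> (status, headers, body)` is any HTTP call;
    `cache` maps url -> (etag, body)."""
    headers = {}
    if url in cache:
        headers["If-None-Match"] = cache[url][0]
    status, resp_headers, body = fetch(url, headers)
    if status == 304:
        return cache[url][1]  # server says: resource unchanged
    if "ETag" in resp_headers:
        cache[url] = (resp_headers["ETag"], body)
    return body
```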

[+] arethuza|12 years ago|reply
Maybe I'm being a bit dim, but could you explain what "client.foo(123)" does in this client? It appears to be a resource and 'callable' - so is this doing a GET on that resource? What about other methods?
[+] johns|12 years ago|reply
Providers should definitely provide clients. This is one of the things I worked on at Twilio and it's extremely important for onboarding new customers. Support as many languages and frameworks as you can sustain and make them first-class (just as well documented as REST, native to the language, etc).

However, I've also seen the other side of this working at IFTTT, where (at the time) we had a Gemfile a mile long. That got really hairy. I now try to avoid using clients. I go into it in great detail here: https://www.youtube.com/watch?v=dBO62A3XaSs and we've talked about this many times on trafficandweather.io if you want to learn more.

[+] MichaelGG|12 years ago|reply
Is Twilio really REST? Versioning and formatting info in URLs isn't REST-like. Plus the docs suggest you construct URLs, which isn't RESTful.

I love Twilio's API, I just don't understand the REST part. I fail to see how "PATCH /accounts/123/ <postbody>" is better than "POST /accounts/updateaccount <postbody accountid=123 ...>", especially when hidden behind a lovely client.

[+] voltagex_|12 years ago|reply
I just watched the talk. It's really interesting and raises a few things I haven't thought about before.

Do you have any more information about automatic service/endpoint discovery and also about the smart HTTP client you use?

I would have thought that failing over to another endpoint address would have been done at a load balancer level.

[+] cdelsolar|12 years ago|reply
At Leftronic we've also started to avoid clients. When you connect to many APIs you start running into issues with poorly supported clients and mile-long pip freezes. We like providing our own clients to our API because it's a nice quick way to get started but I think nothing beats good old python-requests and reading API docs.
[+] sunkarapk|12 years ago|reply
I think https://github.com/pksunkara/alpaca is a good starting point. Given a web API, it generates client libraries in ruby, python, php and node.js
[+] nodesocket|12 years ago|reply
Are there any other alternatives to alpaca? How is the quality of the generated client code?
[+] svedlin|12 years ago|reply
The Google APIs Client Generator is an awesome tool that automates this process. It takes a service description and generates a complete client library:

https://code.google.com/p/google-apis-client-generator/

The service is defined in a platform-neutral discovery document, which can be used by any provider:

https://developers.google.com/discovery/v1/reference/apis

There are generators for Python, Java, .NET, Objective-C, PHP, Go, GWT, Node.js, Ruby, and others.

[+] quotemstr|12 years ago|reply
> The Google APIs Client Generator is an awesome tool that automates this process. It takes a service description and generates a complete client library

So we've gone full circle and arrived back at SOAP. Why didn't we just keep using SOAP in the first place?

[+] thomasahle|12 years ago|reply

    Caching, throttling, timeouts, gzip, error handling.
This all seems like something any serious user of a REST API should know how to handle very well. If not otherwise, then by use of a standard library.

Why does every API owner have to write basically identical clients in every language out there?

[+] patio11|12 years ago|reply
They wouldn't be substantially identical clients.

For example, I make extensive use of Twilio and Pin Payments (among other APIs) in Appointment Reminder. My use of the Twilio API can potentially spike into the hundreds of requests a second, but if it gets into thousands of requests a second, that needs to throw a PresidentOfMadagascarException ("Shut. Down. Everything."). Code which does not interact with the Twilio API but instead implements meta-features on top of the Twilio API comprises 80%+ of the lines of code implicating the Twilio API in my application.

By comparison, querying the Pin Payments API doesn't need a rate limit at all, but does need a sane caching strategy, because it requires thousands of API calls to answer a simple, common question like "How much did we sell in 2013?" Again, meta-features for the API comprise over 80% of lines of code implicating that API, but they're totally different meta-features.

AR is doing the 90% case with both of these APIs -- sane defaults in first-party clients would have greatly eased my implementation of them, allowing me to focus on features which actually sell AR, to the benefit of both my business and those of the APIs at issue, since their monthly revenue from me scales linearly with my success.
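A sane-default guard like that PresidentOfMadagascarException is only a few lines of client-side policy; a sketch under assumed names and thresholds (HardLimiter and its parameters are made up for illustration):

```python
import time

class RateLimitExceeded(Exception):
    """Shut. Down. Everything."""

class HardLimiter:
    """Client-side guard rail: tolerate bursts up to `rate` calls per
    second, but treat anything past `panic` calls/sec as a bug in the
    caller and raise instead of silently hammering the API."""

    def __init__(self, rate, panic, clock=time.monotonic):
        self.rate, self.panic, self.clock = rate, panic, clock
        self.window_start = clock()
        self.count = 0

    def check(self):
        now = self.clock()
        if now - self.window_start >= 1.0:
            # new one-second window: reset the counter
            self.window_start, self.count = now, 0
        self.count += 1
        if self.count > self.panic:
            raise RateLimitExceeded("calls/sec exceeded panic threshold")
        return self.count <= self.rate  # False = caller should back off
```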

[+] AYBABTME|12 years ago|reply
Another reason is that your API likely sucks/is broken if you haven't made a client for it (and thus figured out the gaps/problems).
[+] gane5h|12 years ago|reply
I learnt this the hard way. Initially, I designed the API with bigints for IDs and found some older versions of PHP didn't support bigints. I had to switch to using strings.
[+] ejain|12 years ago|reply
Your integration tests should count as one client?
[+] ejain|12 years ago|reply
Fine, but you can't provide clients for all languages and frameworks, so make sure your REST API is simple enough for those who need to make do without an official client library.
[+] programminggeek|12 years ago|reply
I think shipping your own client lib sort of defeats the whole point of REST as the "one true way" to build APIs. I agree that building a client library is useful and makes it easier to integrate with, but it also proves that REST on its own is not completely superior to something as conceptually simple as JSON-RPC.

I fully understand the benefits of REST, but on a lot of projects, a single endpoint and a simple RPC protocol would be easier to integrate with, without the need for a separate client library.

Also, client libraries don't always do the best job of making it clear when you are making api calls over the wire and that can be quite problematic.

For example, I've worked with code where it made an http request to get a price, which was then used in a Model calculation. This code looked completely harmless at the highest level, but each page request was hitting the server 100+ times to do all the calculations needed.

After finding the problem, adding some caching was easy, and now things run faster. However, that level of abstraction and indirection makes it far less obvious if/when/where HTTP requests happen, and that isn't always a good thing.
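A fix like the one described can be as small as a TTL memoiser around the price lookup, keeping the wire call in one obvious place (ttl_cached is a hypothetical helper, not a stdlib function):

```python
import time
from functools import lru_cache

def ttl_cached(ttl_seconds, maxsize=1024):
    """Memoise a function, expiring entries every `ttl_seconds` by
    folding a coarse time bucket into the cache key. The underlying
    call then fires at most once per key per bucket, not on every
    model calculation."""
    def decorate(fn):
        @lru_cache(maxsize=maxsize)
        def keyed(bucket, *args):
            return fn(*args)  # bucket is only part of the cache key
        def wrapper(*args):
            return keyed(int(time.monotonic() // ttl_seconds), *args)
        wrapper.cache_clear = keyed.cache_clear
        return wrapper
    return decorate
```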

[+] iamwil|12 years ago|reply
I wish clients would write themselves based on REST APIs.
[+] modarts|12 years ago|reply
Sounds like the good ol' days of SOAP and WSDL
[+] jbert|12 years ago|reply
> You’ll have to mask GET requests with a HEAD request and appropriately handle HTTP status code 304 for resource not modified.

I'm not sure I understand this. Surely the point of If-Modified-Since and ETag headers is that you can send them in the GET request and get back a 304; there is no need to do a HEAD-then-GET?

[+] resma|12 years ago|reply
In my opinion, many of the described features of an API client actually do not belong in a client implementation. An API client cannot be shipped for all platforms. That either blocks platforms from using your API or leaves room for API consumers to get around your policies, like throttling and caching. I think enforcing policies should be done much closer to your API, on the server side. An API proxy gateway could be used for most of the described points and would secure your API much better, without the extra effort of writing client libraries.
[+] nailer|12 years ago|reply
I think there's another benefit here: edge cases will show up. I wrote a Python client for a company's REST API which revealed an RFC bug during testing. From the source:

> For POSTs we do NOT encode our data, as CompanyX's REST API expects square brackets which are normally encoded according to RFC 1738. urllib.urlencode encodes square brackets which the API doesn't like.

Retrospectively, I should have asked the company to fix their bug, rather than work around it myself, but at the time I was a less confident programmer.

[+] dreamfactory2|12 years ago|reply
Your API should handle collections in any case. Otherwise this is just reinventing ESB while losing most of the stability and service management benefits by fragmenting and decentralising.
[+] krob|12 years ago|reply
It's nice that this article claims everyone who makes an API should provide a client. But what happens when it uses hypermedia RESTful constraints? I think any company which is able to provide multiple implementations of their API has already made it financially and can afford those luxuries.

I know of some large companies which do this already, one of them being Braintree, which offers an amazing API in many different languages, but they are already profitable and were bought by PayPal.

[+] fosk|12 years ago|reply
I'm biased, since I'm the founder of Mashape [1], but maybe this could be of some interest. I'd like to point out that one of the features Mashape offers is auto-generating client libraries in 8 different languages, leveraging Unirest [2], which we open-sourced last year.

[1] http://mashape.com

[2] http://unirest.io

[+] skion|12 years ago|reply
Agree with this, even if it's just done in one reference language. It is much easier for the community to port an existing binding to other languages than to implement a new client from scratch.

Two notes:

2) No need to mask requests with a HEAD; a GET can also return a 304 directly.

6) De-duplication of calls: any method except POST should be idempotent already, so retry-on-error is trivial in those cases.

[+] cwmma|12 years ago|reply
While I agree with everything you say, it leads me to the opposite conclusion: that I should not roll my own half-assed REST client.
[+] EGreg|12 years ago|reply
Wow! We did exactly these things in Q. I was surprised to see exactly the features we tout in our SDK mentioned one by one. Compare to this:

http://platform.qbix.com/guide/patterns

The Q platform was supposed to take care of the things that you have to do anyway when writing social apps.

[+] tlrobinson|12 years ago|reply
Most of these things should be (optional) features of a good HTTP client library, which is kind of the point of following conventions like REST.

Maybe that just means writing a thin wrapper, specific to your API, around one of these libraries. But don't reinvent the wheel every time.