For people who have been dealing with the churn of JSON API, represented throughout this thread, I'm genuinely sorry.
Let me try to give some perspective on the history. For a long time, JSON API was more of a set of guidelines than a strict specification. A lot of our early adopters hated all of the MAYs in the spec (you can see examples of that throughout this thread), so we decided to tighten things up considerably towards the end of last year.
That meant trying to eliminate the vast majority of options in the primary spec, and moving all optional features to named extensions that clients could programmatically detect.
Of course, significantly tightening up the semantics forced us to grapple with harder problems and pushed some ambiguities into the light of day. RC2 in particular was an attempt to seriously pare down the scope of the main spec, while making the semantics of what was left stricter. Dan (the primary editor) and I spent countless hours discussing various details, and people contributed hundreds and hundreds of (very useful!) comments during this period about various details.
RC3 was a smaller delta, but I could easily imagine that one of the changes had a large impact on existing APIs.
My overall goal for the project from the beginning was to nail down a full protocol (both the format and the wire protocol) that could be implemented by multiple languages on both sides. Originally, it was created because I was frustrated by the ambiguity of what "REST-style API" meant during the development of Ember Data.
The earliest versions of JSON API didn't really nail things down well enough to deliver on that promise, but I hope that the latest versions will be able to. Time will tell.
I appreciate the massive amount of work and the recent updates to tighten the spec.
One challenge with the churn is a lack of any high-level changelog (git commit history doesn't count). I have a team working off an earlier version of the spec, and I check back semi-frequently. But I haven't been able to find a document outlining "here are the major changes since the previous versions." I understand that would represent more work on top of work, but for such a large spec, the changes have been disorienting.
In case you're thinking, "Wow, I bet in the real world this would be a misery to impose on a team - I'm sure no one would really want to do this!", well, I worked briefly with a team where they imposed this. It was part of the reason why I say "briefly".
At Gragg, I've been experimenting with a different approach: I wrote a self-describing schema language for all of the JSON payloads that'd be sent, wrote a service and tested it with a "blob" type that doesn't validate, then wrote the spec once the design was clear enough. It turns out that writing a client from the spec is usually super simple, and you get to see all of the things the spec can do.
Here are the basic things that have been most useful:
- Maybe x, for optional parameters. Process by defining what to do when null. Doesn't have to be tagged to be useful.
- Repetition in the form of [x] and Map String x. Process only via a for-each loop.
- Tagged sums of products: ["tag", arg1, arg2, ...]. Process only with a switch statement dispatching over arr[0].
- Record types like {"id": 3, "name": "Gandalf", "color": "grey"}. Feel free to base logic on record.name etc.
- Primitives: dates, ints, strings, blobs.
The tagged-sums bit is probably a dealbreaker for me at this time: I picked this up from Haskell and I would not easily let go. The syntax in JSON is a little clumsy, but it's still important. Cf. Abelson and Sussman's lectures, "All interesting programs start with a case dispatch."
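To make those shapes concrete, here's a minimal sketch in Python of the processing rules above (the function and tag names are mine, purely for illustration):

```python
import json

# Illustrative sketch (my own names, not from any spec) of processing
# the schema shapes described above.
def describe(node):
    # Maybe x: process by defining what to do when null.
    if node is None:
        return "nothing"
    # Tagged sum of products: ["tag", arg1, arg2, ...]
    # Process only with a dispatch over arr[0].
    tag, *args = node
    if tag == "circle":
        (radius,) = args
        return f"circle of radius {radius}"
    if tag == "rect":
        width, height = args
        return f"{width}x{height} rect"
    raise ValueError(f"unknown tag: {tag}")

# Repetition ([x]): process only via a for-each loop.
shapes = json.loads('[["circle", 2], ["rect", 3, 4], null]')
for shape in shapes:
    print(describe(shape))
```

The payoff of the dispatch-only rule is that a client can process any payload with one switch per sum type and one loop per list, with no ad hoc inspection.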
We're starting to build API endpoints with this where I work. We didn't have compound documents in an earlier version of our API, and we ended up having to make a lot of roundtrips for some things. So I had been thinking about something like the jsonapi "include" mechanism, and when I saw how jsonapi was doing it, it was pretty close to what I would have done. That made me feel better about going along with their decisions in areas I haven't thought about.
They keep saying that they're closing in on 1.0, but there's a few issues left:
https://github.com/json-api/json-api/issues?q=is%3Aopen+is%3...
They've been making a lot of changes lately:
https://github.com/json-api/json-api/graphs/contributors
It's good that they want to get all of the breaking changes done so that they can declare a stable 1.0. But it also seems like they're going to try and call it stable right after making a bunch of changes, which seems risky.
I also pitched JSON-LD/Hydra at work, because they're w3c-backed and JSON-LD has some uptake. But the other people who looked at those specs found them hard to digest. And I agree; as an implementer, I can read the jsonapi docs quickly and have a pretty clear idea of what to do. But with JSON-LD/Hydra, not so much.
I get the sense that JSON-LD/Hydra is more flexible than jsonapi, but I think jsonapi does what we need. And if it does what we need, then additional flexibility might actually be a drawback. I guess we'll see how it goes.
Thanks for the feedback. Personally, I feel like we nailed down the important changes earlier this month, and I agree that further churn at this point is likely to cause more harm than good.
For what it's worth, the issues you linked to are mostly about adding more rigor to possibly underspecified areas, not changing things that are already specified, but those things could easily be done after 1.0.
Oh boy, cue the haters. I'd like to address a few things that JSON API gets right, and where it fails horrendously (just in time for 1.0!)
The good:
- It opens a path for standardized tooling among API clients. Rather than having a whole mess of JSON-over-HTTP clients with a hard-coded understanding of the particular API, one could theoretically use a hypertext client to interact with any API using this standard.
- It establishes a baseline of what an API server must implement.
The bad:
- It tries to control not just the media type, but the protocol (HTTP) and server implementation. This is problematic because it dictates how your server must implement its routes, and is tightly coupled with HTTP.
- It tries to be very prescriptive, but it cannot cover all edge cases without being exhaustive. This comes at a heavy burden for implementers to get every part of the specification correct. Despite this, some extremely basic features are missing, such as providing an index document so that a client can enter the API knowing only a single entry point (as it stands now, clients must have a priori knowledge of what resources exist on the server).
The ugly:
- Since so many parts of the base specification are optional MAYs, and there is no prescribed mechanism for feature detection, there is no way to figure out if a server supports a particular feature of this spec without trying to do something and failing.
- The spec has made many breaking changes on the road to 1.0 (as other commenters have mentioned, and there is still room for breaking changes).
At this point, I think that this project gained traction due to the clout of the original authors (Yehuda Katz and Steve Klabnik) and the promise of no-more-bikeshedding, though I would argue that the bikeshedding has just shifted from particular APIs to the spec. Disclosure: I authored and maintained some libraries that implement this spec, and authored another media type for hypertext APIs.
> Since so many parts of the base specification are optional MAYs, and there is no prescribed mechanism for feature detection, there is no way to figure out if a server supports a particular feature of this spec without trying to do something and failing
> The spec has made many breaking changes on the road to 1.0
Interestingly, most of the breaking changes on the road to 1.0 were about drastically reducing MAY in favor of MUSTs.
Optional features were moved to named extensions, and there is now an explicit way to negotiate about those extensions.
For anyone who was put off earlier on by the number of MAYs, know that we heard you loud and clear. It may (no pun intended) be worth another look.
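As a sketch of what that negotiation can look like: the media type carries extension lists as parameters that a client can inspect before relying on a feature. The parameter names here ("ext", "supported-ext") are from my reading of the RC drafts and may not match the final wording exactly:

```python
# Illustrative sketch: pull extension lists out of a JSON API-style
# Content-Type header. The parameter names ("ext", "supported-ext")
# are assumptions from the RC drafts, not a guaranteed final form.
def parse_media_params(content_type):
    media, _, param_str = content_type.partition(";")
    params = {}
    for part in param_str.split(";"):
        if "=" in part:
            key, _, value = part.strip().partition("=")
            params[key] = value.strip('"').split(",")
    return media.strip(), params

media, params = parse_media_params(
    'application/vnd.api+json; supported-ext="bulk,jsonpatch"; ext="bulk"'
)
# A client can now detect a feature without trying it and failing:
assert "bulk" in params.get("supported-ext", [])
```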
I used to rave about how RESTful APIs with JSON were so much better than SOAP and other RPC style interfaces, because the responses were so simple and easy to parse.
This holds true for small projects, but as soon as you are working within a large, complex system that involves multiple teams, the benefits of using standards pay off.
I struggled in the past working on projects with companies that had built dozens of loosely-defined APIs with good design intentions in mind, but that suffered later from incomplete or inconsistent implementations. The app codebase became much fatter in an attempt to abstract away those differences into a consistent mental model.
When complexity and communication reaches a certain threshold, it makes sense to invest in standardizing the APIs and responses. I've seen the payoff and am convinced: client libraries and frontend implementations get much simpler, documentation becomes easier, and discussions about how to make changes or design new APIs all but go away.
On the other hand, for small teams and simple services, using a standard like this is probably overkill, unless everyone involved is used to doing APIs this way.
> but as soon as you are working within a large, complex system that involves multiple teams, the benefits of using standards pays off.
That depends a lot on the nature of the project and the standards involved. In my experience, introducing complexity early on in the hope that it will pay off when the project itself becomes complex leads to exactly what you would intuitively expect: a huge combinatorial code nightmare where productive work asymptotically approaches zero as time progresses.
There is a failure mode in the development of enterprise projects where actual work is being done only at the fringes where the tight external standards have loopholes allowing for the introduction of actual functionality through the backdoor. The resulting systems are of course extremely brittle.
> for small teams and simple services, using a standard like this is probably overkill
There are also simple standards deserving of the name. Small teams and simple services use them quite adamantly.
The "problem" I see with JSON API is not one of complexity (it really isn't very complex), it's specificity. It's designed to cover a good range of common web-centric data interchange problems, but like any higher-order standard it carries the weight of certain abstractions and concepts. The pain comes, in my opinion, not from a project/team size mismatch but from cases where these abstractions are ill-suited for the problem at hand.
One of the key factors why plain JSON has become so popular: it's completely malleable. As a result, the actual JSON data on the wire can closely reflect the internal data structures (or at least a logical representation of them) of producers and consumers. The price for this is a relatively tight coupling, but the pain of it is lessened somewhat by the simplicity of the format.
In the end, the old adage of the structure of the project mirroring the structure of the organization probably holds true. When selecting a standard to work with, people choose one that innately reflects how their company works, and they do it for good reason: to reduce friction.
So much fear and loathing in this thread, which puzzles me. Anyone who sets out to write a hypermedia API in JSON is going to end up with a structure not dissimilar to this one. If this fits your needs, then use it, by all means and let your clients use an off-the-shelf lib for using your service.
I'm not a fan of the spec, but overall it seems to strike a good balance between what is specified and what is left out. To compare this to SOAP is missing the point.
What killed SOAP was an insistence on treating HTTP as a dumb transport - thereby breaking the constraints of the interwebs, the inherent brittleness of RPC, and the lunacy of the WS-* stack.
None of that applies here, it's closer in spirit to AtomPub, which is still a pretty decent standard, but just happens to be in XML which everybody hates nowadays.
I think a lot of the commentators in this thread seriously believe that having a different data format and interaction style for every API on the internet is somehow "simpler" than adopting a loosely defined set of conventions and writing them down somewhere as a standard.
It seems to me this standard is only really useful for APIs to be consumed by a specific GUI or GUI framework, as opposed to "data-centered" APIs that exist to provide access to a particular dataset for myriad purposes.
Having a "links" section makes no sense in the latter case -- that's what docs/Swagger/RAML are for.
My stab at imposing some consistency on data APIs boils down to a header and a data section. The header's utility is in describing the data; the most obvious use is including a "count": 235 field or paging data.
Less-obvious is having self-describing data, namely including the path and query parameters in the header, so you could ingest the data sans request and still know what it represents.
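For illustration, a response in this style might look like the following sketch (the field names are my own convention, not any spec's):

```python
import json

# Hypothetical header-plus-data envelope: the header carries the total
# count, paging info, and the request's own path and query parameters,
# so the payload is self-describing even when stored without the request.
def make_envelope(path, query, rows, page, per_page):
    start = (page - 1) * per_page
    return {
        "header": {
            "path": path,
            "query": query,
            "count": len(rows),       # total count, not page size
            "page": page,
            "per_page": per_page,
        },
        "data": rows[start:start + per_page],
    }

envelope = make_envelope("/widgets", {"color": "red"},
                         [{"id": i} for i in range(5)],
                         page=2, per_page=2)
print(json.dumps(envelope, indent=2))
```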
But it's a little bikesheddy, and it may be that data-only APIs are so freaking simple that no standard is really necessary. If so, I must question writing a standard around how we happen to build GUIs today, as it seems doomed to SOAPy failure. But hey, I'm not "full-stack" ...
And that is why any sane (experienced?) API and/or standard author should have versioning in mind at all times. In my experience there is no such thing as "the" API or "the" standard.
It's a valiant effort, but as someone who has dealt with unraveling a strict API built with SOAP, I am going to pass. If I want something standardized I will use a schema-based solution such as protobuf. It's already a quasi-standard, has clients in many languages, and is a binary protocol.
We also should not confuse an API with a transport protocol. I will tip my hat to this not being as verbose as past attempts, but why reinvent the wheel yet again? It's not like prior attempts didn't function as expected - they did - but we in the industry chastised them for being too strict.
Let's work on improving the semantics and documentation around what constitutes an API. Swagger is an excellent example of this.
I'm a fan of Swagger, especially because it's not tied to one language and helps developers design their APIs easily. The tools provided for describing your API using the Swagger spec are really fascinating.
This tries to standardize API transports, which doesn't make sense to me, because there is no need for that. If you are going to develop an API with (open) SDK clients that support consuming your services, then I don't need to care whether your API transport is written in JSON API. In particular, making transports human-readable appears to be a waste of resources in my eyes. These APIs are meant to be consumed by machines, and debugging can and should be done with tools, not by enforcing a standard.
I was recently exposed to this standard on a green field sinatra project. It quickly became burdensome to maintain all of the linkages. It really deteriorates your ability to keep things DRY on the JSON generation (we were using RABL). Of course, these problems were exacerbated by the fact that there were other issues with how the data was structured and how the client wanted to access resources.
Yeah...no. Standardizing something like this is useless and only full of edge cases. Keep APIs free-form and flexible without imposing some sort of restriction; otherwise you end up with bigger messes than what you started with. APIs are not a mess currently anyway. The point of building a unique application isn't to create uniform systems across the map.
One of my primary motivations when I started this (a few years ago now) was a straightforward representation of graphs of data (including bits of data without dedicated URLs), rather than trees of data.
Consider the case of a blog. Each blog post has many comments. Each post has an author, and each comment has an author. Some of those entities may have dedicated URLs, and others may not. Additionally, the authors are highly repetitive; you want to refer to them by identifier, not by embedding them.
Because a tree of data can be represented easily as a graph, but not vice versa, JSON API provides a straightforward way to provide a list of records that are linked to each other. The goal is simple, straightforward processing regardless of the shape of the graph.
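As a sketch of the general idea (this shows the technique, not the exact JSON API document layout): records are listed once in a flat structure, refer to each other by identifier, and a client resolves the links with simple loops:

```python
# Sketch of flat, linked records (illustrative layout, not the exact
# JSON API 1.0 document structure): every record appears once, and the
# repetitive authors are referenced by id rather than embedded twice.
document = {
    "posts": [{"id": "1", "title": "Hello", "author": "9",
               "comments": ["c1", "c2"]}],
    "comments": [{"id": "c1", "body": "Nice!", "author": "7"},
                 {"id": "c2", "body": "+1", "author": "9"}],
    "people": [{"id": "7", "name": "Ann"}, {"id": "9", "name": "Bob"}],
}

def index_by_id(records):
    return {r["id"]: r for r in records}

people = index_by_id(document["people"])
comments = index_by_id(document["comments"])

# Resolving the graph is the same loop regardless of its shape:
for post in document["posts"]:
    author = people[post["author"]]["name"]
    commenters = [people[comments[c]["author"]]["name"]
                  for c in post["comments"]]
    print(f"{post['title']} by {author}; comments by {', '.join(commenters)}")
```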
One of the primary ways in which JSON API is different from HAL (or JSON-LD) is that it specifies both a format for data as well as a full protocol for fetching and manipulating that data.
Most of the trouble in building a JSON API comes not from deciding on the format, but in nailing down the precise requests and responses that handle common kinds of interactions. This became very clear to me as I worked on Ember Data.
Besides minor things like _links -> links, it specifies a lot of behavior more explicitly than HAL. HAL is a very minimal specification and intentionally doesn't nail down every corner case, preferring that implementors choose what's right for them, perhaps at the cost of a more powerful generic client.
Edit: and hey, why not, I'm the author of a generic HAL client myself: https://github.com/deontologician/restnavigator
See http://de.slideshare.net/lanthaler/building-next-generation-...
But my "fear" is that it will all get too complicated, too. Currently I really like to work with JSON Schema (http://json-schema.org/), because it's simple and extensible.
Wow. No one needs this. API means application programming interface. It starts and ends its standardization at the application level.
Make your API consistent and write decent documentation for it. That's all anybody needs, and it will be simpler than trying to conform to some insane metastandard.
This reminds me of hypermedia APIs. Generally speaking, you're provided with an index of resources via XML/JSON; you then fetch those resources, which provide one or many levels of additional results that are themselves fetchable. In essence, your API data is almost browsable. The objective is idempotent responses, fetching as little as possible, providing new functionality via new resources, and resources with discoverability for auto RPC. https://github.com/swagger-api/swagger-spec/
I had to write an API with lots of one-to-many and many-to-many relationships, with iOS, Android and JS apps accessing the APIs, so it needed a means to sideload data.
I found JSONApi to be a good guide to building the API. Of course, I was a bit sad when the spec changed quite radically in its last iteration, but, hey, my API still works, that's not so bad.
Does anyone know when we could expect Ember Data to be fully compatible with JSON API, in a stable manner?
Our architecture has Sinatra for the backend and Ember.js for the frontend. We're always in a struggle to support JSON API for our third party clients but maintain compatibility with Ember Data.
The first step in the standardization process is to register with IANA. We've done that. Then, you get a bunch of implementations. Then, you go to the IETF.
Any idea if this overlaps even more with RAML? I've seen some projects based around RAML but none so far with JSONApi, and I'm trying to understand what advantage there is either way, since I don't see JSON being more expressive toward this goal.
You can have a look at a simple way to do it in PHP with Symfony: https://github.com/dunglas/DunglasJsonLdApiBundle