If going with this means adopting a specialized library with its own format, sticking with a fixed schema, and giving up human readable formatting, why not go whole hog and use protocol buffers instead? It'd be cheaper to convert and store.
Yeah, exactly. On the other hand, are there any good JavaScript implementations of serialization formats like Thrift/Protobufs?
Any new project like this that wants to be taken seriously should really provide metrics comparing both wire size and decode speed (ideally in 2 or 3 browsers) versus (at least):
* Raw JSON
* Gzipped JSON
* Protobufs or some similar format
In a similar vein of "alternative ways to maximally leverage JSON", there is the Universal Binary JSON specification: http://ubjson.org
Unlike BSON or BJSON, UBJSON is 1:1 compatible with the original JSON spec; it doesn't introduce any incompatible data types that have no analog in JSON.
Smile is similar, but it utilizes more complex data structures for the purpose of further compression, which is great, but that introduces complexity in generation and parsing, while UBJSON is intended to be a binary JSON representation as simple as JSON itself.
"As simple as" here means you can open the files in a hex editor and read through them easily, in addition to being able to grok the spec in under 10 minutes (and generate or parse it just as easily as JSON itself).
Because it has 1:1 compatibility with JSON, the general parsing and generation logic stays the same; it is just the format of the bytes written out that changes.
There has been a lot of great community collaboration on this spec, from the JSON specification group and more recently the CouchDB team, which has improved its performance and usability quite a bit.
There are Java and .NET implementations, with a handful of people working on implementations in Node.js, Erlang, and C, but I don't have much info on the status of those yet.
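The "same logic, different bytes" point can be illustrated with a toy binary writer. The markers below ('D' for float64, 'S' for a length-prefixed string, '{'/'}' for objects) are loosely modeled on the UBJSON idea but are illustrative only, not the spec's actual wire format:

```javascript
// Toy binary JSON writer: a type marker byte, then the payload.
// Walks the value exactly like a JSON serializer would; only the
// byte layout differs from text JSON.
function encode(value, out = []) {
  if (typeof value === 'number') {
    const buf = Buffer.alloc(9);
    buf.write('D', 0, 'ascii');           // 'D': float64 follows
    buf.writeDoubleBE(value, 1);
    out.push(buf);
  } else if (typeof value === 'string') {
    const str = Buffer.from(value, 'utf8');
    const head = Buffer.alloc(5);
    head.write('S', 0, 'ascii');          // 'S': 4-byte length, then UTF-8 bytes
    head.writeUInt32BE(str.length, 1);
    out.push(head, str);
  } else if (value && typeof value === 'object' && !Array.isArray(value)) {
    out.push(Buffer.from('{'));           // object start marker
    for (const [k, v] of Object.entries(value)) {
      encode(k, out);
      encode(v, out);
    }
    out.push(Buffer.from('}'));           // object end marker
  }
  return Buffer.concat(out);
}
```

Opened in a hex editor, the string and marker bytes remain readable ASCII, which is the property the spec is going for.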
You have two types of situations for JSON: static and dynamic.
For static JSON, you have a flat file and that's that. It's always the same. In this case, all these fancy optimization formats are pointless. Gzip the hell out of it once, cache it, and simply serve the compressed version; generic binary compression is hard to beat. The browser will automatically decompress and use it, and CPU is plentiful on the client side. Debugging stays easy, too: Firebug will display the clean JSON.
For dynamic JSON, you have a problem: you can't optimize or compress ahead of time. Remember, it's all about total time. A smaller payload doesn't help if making it smaller takes longer than it would have taken to download the difference between the original data and the optimized data.
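The total-time argument reduces to simple arithmetic. A sketch with made-up numbers (125 bytes/ms is roughly a 1 Mbit/s link; the compression time and ratio are illustrative assumptions):

```javascript
// Compression wins only if compress time + compressed transfer time
// beats the raw transfer time.
function totalTimeMs({ bytes, bandwidthBytesPerMs, compressMs = 0 }) {
  return compressMs + bytes / bandwidthBytesPerMs;
}

const rawMs = totalTimeMs({ bytes: 100000, bandwidthBytesPerMs: 125 });
const gzipMs = totalTimeMs({ bytes: 30000, bandwidthBytesPerMs: 125, compressMs: 15 });

console.log({ rawMs, gzipMs }); // { rawMs: 800, gzipMs: 255 }
console.log(gzipMs < rawMs ? 'compress' : 'send raw');
```

On a fast LAN (say 10,000 bytes/ms) the same 15 ms of compression would dominate, which is the commenter's point.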
Why does jsPerf use a Java applet on test pages? Do I have to enable Java to use jsPerf?
The applet you're talking about is just a clever trick we're using to expose Java's nanosecond timer to JavaScript, so we can use it to get more accurate test results. jsPerf will still work fine if you disable Java; it might just take a little longer to run tests.
[1] https://github.com/thebuzzmedia/universal-binary-json-java/b...
[2] https://github.com/eishay/jvm-serializers/wiki
shaunxcode | 14 years ago:
http://jsfiddle.net/uxAFb/
I imagine passing a flag to indicate the properties which are arrays of objects should also be packed could work too.
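A minimal sketch of that packing idea (the function names here are hypothetical): collapse an array of uniform objects into `[keys, row, row, ...]` so each property name is transmitted once instead of per object:

```javascript
// Pack an array of same-shaped objects into a header row plus value rows.
function packArray(arr) {
  const keys = Object.keys(arr[0]);
  return [keys, ...arr.map(obj => keys.map(k => obj[k]))];
}

// Inverse: rebuild the objects from the header row and value rows.
function unpackArray([keys, ...rows]) {
  return rows.map(row => Object.fromEntries(row.map((v, i) => [keys[i], v])));
}

const data = [{ id: 1, name: 'a' }, { id: 2, name: 'b' }];
const packed = packArray(data);
console.log(JSON.stringify(packed)); // [["id","name"],[1,"a"],[2,"b"]]
console.log(JSON.stringify(unpackArray(packed)) === JSON.stringify(data)); // true
```

A flag per property, as suggested, would just tell the serializer which nested arrays are uniform enough to pack this way.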