top | item 6292723

jchulce | 12 years ago

Even though base64 increases the raw size by about a third, this is largely mitigated by gzip or deflate encoding on the web server. The actual transmitted size is only about 5% bigger.

ajross | 12 years ago

I measure less than that:

    dd if=/dev/urandom bs=1024 count=64 | base64 | gzip | wc -c
    64+0 records in
    64+0 records out
    65536 bytes (66 kB) copied, 0.0127386 s, 5.1 MB/s
    67302
This particular run comes out to ~2.7% overhead (67302 / 65536 ≈ 1.027), and it's very repeatable: the half dozen runs I did were all within 20 bytes of each other.
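The same measurement can be reproduced without the shell pipeline. A minimal Python sketch (standard library only; the 64 KiB payload size just mirrors the dd invocation above):

```python
import base64
import gzip
import os

def base64_gzip_overhead(n_bytes: int = 65536) -> float:
    """Fractional size increase of gzipped base64 text over the raw binary payload."""
    raw = os.urandom(n_bytes)            # incompressible payload, like /dev/urandom
    encoded = base64.b64encode(raw)      # ~33% larger than raw
    compressed = gzip.compress(encoded)  # gzip squeezes the 64-symbol alphabet back down
    return len(compressed) / len(raw) - 1.0

print(f"overhead: {base64_gzip_overhead():.1%}")  # typically a few percent
```

Base64 text uses only 64 symbols (6 bits of entropy per 8-bit character), so deflate's Huffman coding recovers most of the expansion; what remains is a small constant-factor overhead.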

Scaevolus | 12 years ago

The overhead is slightly worse when it's compressed alongside normal HTML content (the Huffman trees aren't so favorable).

    $ wget http://en.wikipedia.org/wiki/ASCII
    $ (cat ASCII;dd if=/dev/urandom bs=8000 count=1 | base64) | gzip | wc -c
    50578
    $ (cat ASCII;dd if=/dev/urandom bs=9000 count=1 | base64) | gzip | wc -c
    51621
Going from an 8000-byte to a 9000-byte payload adds 1043 compressed bytes per 1000 raw bytes, so around 4% overhead. Either way, it's negligible.
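That mixed-content effect can be demonstrated without fetching the Wikipedia page. A sketch using any compressible ASCII text as a stand-in for the HTML (the repeated markup below is a placeholder assumption, not the actual page):

```python
import base64
import gzip
import os

# Placeholder for HTML content; any compressible ASCII text shows the effect.
page = ("<p>The quick brown fox jumps over the lazy dog.</p>\n" * 400).encode()

def marginal_overhead(payload_size: int = 8000) -> float:
    """Extra compressed bytes per raw payload byte, minus 1, when base64 data
    is gzipped together with ordinary text (cf. the two wc -c runs above)."""
    payload = base64.b64encode(os.urandom(payload_size))
    alone = len(gzip.compress(page))
    combined = len(gzip.compress(page + payload))
    return (combined - alone) / payload_size - 1.0

print(f"marginal overhead: {marginal_overhead():.1%}")
```

The subtraction isolates the cost of the base64 payload itself, the same way the 8000- vs 9000-byte comparison above isolates it by differencing two runs.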

b1tr0t | 12 years ago

Dead on! The overhead with gzip is very tiny. And once the resource is cached, the size of the payload shouldn't matter at all.