gnotstic | 7 days ago

Yes, I was under the same impression, but I think this LUT/dictionary solution is counterintuitive to both of our current understandings of the web.

The "aha" moment for me was that, without this dict, the user is going to always request a full download of the data. For instance, let's say the NYT published an article and you read it. Then an editors note is added to the article. When you go back to read the article, the data transfer is miniscule. Now that is an edge case, but imagine a website that allows comments.. twitter.. reddit.. small text based pages that at first seem incosequential until you think about how we use the web, millions of users, returning to pages over and over again.

For me, the mental model of this structure is a LUT (key/value pairs) wrapped in version control (a hash).
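
To make that mental model concrete, here is a minimal sketch. It uses zlib's preset-dictionary support as a stand-in for the real shared-dictionary transport, and the page text, names, and hashes are all made up; it only illustrates the "hash keys the LUT, cached copy acts as the dictionary" idea.

    import hashlib, zlib

    def content_hash(data):
        # The "version control" part: a content hash keys the dictionary LUT.
        return hashlib.sha256(data).hexdigest()

    original = b"<p>the article as first published</p>" * 40
    revised  = original + b"<p>Editor's note: a correction was added.</p>"

    # The client already has `original` cached; the LUT maps its hash to the bytes.
    dictionary_lut = {content_hash(original): original}

    # Server: compress the revision against the version the client advertises.
    comp = zlib.compressobj(zdict=original)
    delta = comp.compress(revised) + comp.flush()

    # Client: look up the dictionary by its hash and reconstruct the full page.
    decomp = zlib.decompressobj(zdict=dictionary_lut[content_hash(original)])
    assert decomp.decompress(delta) == revised
    print(f"{len(revised)} bytes of page, {len(delta)} bytes on the wire")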

Now, I think your comment is correct if we factor in how many requests the webpage is receiving and how frequently changes are happening to it. My blog would see no benefit from implementing this tech; using napkin math, my blog would need 1000 days to break even. Microsoft's blog, however... less than a day, in theory.
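
The shape of that break-even napkin math is roughly the sketch below. None of the figures are real numbers from either blog; they are placeholder assumptions, and the only point is that repeat-visit volume dominates the result.

    # Hypothetical napkin math: every figure here is a made-up assumption.
    dictionary_cost_bytes   = 100_000  # one-time cost of shipping the dictionary
    bytes_saved_per_revisit = 5_000    # delta vs. a full gzip'd page on a repeat visit

    def days_to_break_even(revisits_per_day):
        # Break even once cumulative savings cover the dictionary's up-front cost.
        return dictionary_cost_bytes / (bytes_saved_per_revisit * revisits_per_day)

    print(days_to_break_even(1))        # a tiny personal blog: years-to-days territory
    print(days_to_break_even(100_000))  # a Microsoft-scale blog: a fraction of a day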

everforward | 7 days ago

If the version-control hash changes, you have to re-download the dictionary, which is similar to re-downloading the whole page.

Reddit/NYT would have to publish their changes without changing the dictionary, meaning some portions would be largely absent from the dictionary and have worse compression than gzip. That's probably fine for the NYT, but something like Reddit might actually have worse ratios than gzip in that case.

superb_dev | 7 days ago

Or you could use the previous version to generate the dictionary for the current version?

I would assume chunks that didn’t benefit from the dictionary would receive the standard compression, so you can’t get worse than gzip.
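
A rough sketch of how that fallback could look on the server side (hypothetical function and variable names; zlib's preset dictionary again stands in for the real delta encoding): compress the new version both with and without the previous version as the dictionary and send whichever is smaller, so the worst case is ordinary gzip.

    import gzip, zlib

    def encode(new_page, previous_version=None):
        # Baseline: ordinary gzip of the whole page.
        plain = gzip.compress(new_page)
        if previous_version is None:
            return "gzip", plain
        # Delta: the previous version of the page acts as the compression dictionary.
        comp = zlib.compressobj(zdict=previous_version)
        delta = comp.compress(new_page) + comp.flush()
        # Fall back to plain gzip whenever the dictionary doesn't actually help.
        return ("delta", delta) if len(delta) < len(plain) else ("gzip", plain)

    old = b"<p>original article body</p>" * 100
    new = old + b"<p>editor's note appended later</p>"
    method, payload = encode(new, old)
    print(method, len(payload), "bytes")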