dmethvin | 9 years ago

It's quite a burden to maintain two different sets of APIs.

alphapapa | 9 years ago

I'm sorry, but I hear this all the time--it's the primary argument used for deprecating or removing any functionality in any software. And my response is: so what? Maintaining software is a burden, period.

Software exists to be useful. The APIs in question make it useful. The developers have been maintaining it for nearly 20 years. Their employer receives millions of dollars a year to do so.

They don't want to maintain this "burden" anymore because it's not fun to maintain old code. It's not glamorous. No one becomes a rock star by unloading the bus. But if you remove the baggage compartments, the bus ceases to be useful, and the show doesn't go on.

Chromium is such a cooler project. It was started from scratch (except for the WebKit part), and it's made by Google (which at least used to be cool), and it's got all these modern APIs. They're so great that it only took them 7 years to support resumable HTTP requests[1].

Firefox used to be about the users. Now it's about the developers. (This is not to say that individual developers are selfish, but that the organization as a whole is behaving in a way that disregards the needs of users and prioritizes the desires of the developers.)

1: https://bugs.chromium.org/p/chromium/issues/detail?id=7648. Just look at this cleverness: "In the absence of a crypto::SecureHash object, DownloadFile reads the partial file and calculates the partial hash state in a new crypto::SecureHash object. If a prefix hash value is available, then the hash of the partial file is matched against this prefix hash. A mismatch causes a FILE_HASH_MISMATCH error which in turn causes the download to abandon its partial state and restart." Any other software in the world would just resume the download. I mean, they already give it a special ".crdownload" extension, so it's not like any other program is going to mess with it. But no, they can't just resume the download, they have to make 15 hash checks and pass around 7 different objects and find every possible reason to start the download all over again. What could have been a 5-line patch turned into 7 years of waiting, a dozen revisions across 50 files...
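For what it's worth, the check the bug describes is conceptually small. A rough Python sketch (function and argument names are mine, not Chromium's, and this glosses over SecureHash's incremental state): hash the partial file, compare against the recorded prefix hash, and restart from zero on a mismatch.

```python
import hashlib
import os

def resume_offset(partial_path, expected_prefix_hash=None):
    """Return the byte offset to resume a download from.

    If the partial file's hash matches the recorded prefix hash (or no
    prefix hash was recorded), resume from its current length; on a
    mismatch, abandon the partial state and restart from zero. This is a
    simplified sketch of the FILE_HASH_MISMATCH check quoted above, not
    Chromium's actual code.
    """
    if not os.path.exists(partial_path):
        return 0
    h = hashlib.sha256()
    with open(partial_path, "rb") as f:
        # Hash the partial file in chunks to avoid loading it whole.
        for chunk in iter(lambda: f.read(64 * 1024), b""):
            h.update(chunk)
    if expected_prefix_hash is not None and h.hexdigest() != expected_prefix_hash:
        # Mismatch: the partial data can't be trusted, so discard it
        # and signal "restart from the beginning".
        os.remove(partial_path)
        return 0
    return os.path.getsize(partial_path)
```

The complexity in the real patch comes from carrying the incremental hash state across sessions so the partial file doesn't have to be re-read every time, plus plumbing the result through the download state machine.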

duskwuff | 9 years ago

With regard to resumable downloads -- that "cleverness" isn't for the sake of being clever; it's in there to avoid two very common cases where a naïve download resume will corrupt a file:

1. The remote file has changed since the previous download, so "resuming" the download will end up combining two different files.

2. The user's computer crashed during the first download, and some of the data in the partial download was not written to disk properly. (A particularly common case: the last few blocks of the file are zeroed out.)
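Case 1 is exactly what HTTP's If-Range header (RFC 7233) exists for, though it can't help with case 2, where the local bytes themselves are bad. A small sketch (helper names are mine): request only the missing bytes, conditional on the resource being unchanged, and interpret the server's answer.

```python
def resume_headers(partial_size, saved_etag=None):
    """Build headers for a resumable GET: ask only for the bytes we don't
    have, but, via If-Range, only if the resource still matches the ETag
    recorded when the download started (guards against case 1)."""
    headers = {"Range": "bytes=%d-" % partial_size}
    if saved_etag:
        headers["If-Range"] = saved_etag
    return headers

def write_mode(status):
    """Decide how to open the partial file based on the response status:
    206 Partial Content -> the server honored the range, append;
    200 OK -> the resource changed (or Range was ignored), start over."""
    if status == 206:
        return "ab"
    if status == 200:
        return "wb"
    raise ValueError("unexpected status %d" % status)
```

Note that a server answering 200 instead of 206 silently falls back to a full restart, which is the safe default; detecting case 2 still requires something like the hash check discussed above.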