(Disclaimer: I used to work on Chrome. I'm more than a year out at this point but I still talk to Chrome developers.)
Chrome wins some tests but loses worst on the start-up benchmarks, which is a real bummer. There was a time when we'd spend literally days figuring out why startup had regressed by a handful of milliseconds, and this suite shows an end-to-end start-up taking nearly six seconds.
One idea I saw mentioned is that there may be a bug related to how Chrome reads the system proxy settings (something about an API change between Windows 7 and Windows 8). I think most of the Chrome developers are still on Windows 7; the bots that are tracking startup performance are definitely still Windows 7 as well. So maybe that number is more of a single bug than a systematic thing.
All of the above is not intended to diminish Firefox's impressive results -- just thought I'd provide some background! What these sorts of tests show most is that competition pushes browsers ever faster. If the proxy theory above is right, then maybe this test will inspire Google to invest more into Windows 8 testing, which ultimately benefits users.
(Edit: Firefox won other tests too, I also didn't mean to say this was the only factor.)
Chrome is my main browser and I use it all day at work. The browser window opens instantly for me, but it only becomes usable a minute or so later, after it has finished grinding on the hard drive. From the looks of it, I'd say it's loading all the history, settings, and extensions that takes so long.
For one small example, the in-canvas kerning on both Firefox and Chrome is awful compared to IE10.
From a while ago: http://i.imgur.com/62WBzVZ.png
You can see for yourself here: http://jsfiddle.net/vVC4s/
Notice also the difference between doubling the font and doubling the scale, especially where each line gets cut off: http://jsfiddle.net/jGcrL/
This makes text animations rickety and un-smooth: http://jsfiddle.net/simonsarris/HZFcR/
Surprisingly, IE10 handles all of these perfectly. Of course, IE10 has totally broken canvas clipping (non-rectangular clipping regions are impossible; they worked fine in IE9).
Firefox does render large text much better than Chrome, which is why I used it when taking screenshots for my book. But scaling the text (as opposed to setting a larger font) is a disaster.
This is just an example. I could go on for days about canvas bugs. I wish there was a bigger push to fix those instead of eking out a performance advantage.
To Firefox's huge credit, I've submitted a lot of bugs to Chromium and the FF team, and the FF team consistently gets back to me within a week and usually fixes the bug within a month. The bug reporting experience with Chrome on the other hand is rather disenchanting.
For a cross-platform bug example, the context's miterLimit is just plain broken by default in Chrome. I reported this (with examples) back in April and have yet to receive any kind of reply. Thank god it's an easy workaround.
https://code.google.com/p/chromium/issues/detail?id=225512
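For context on why miterLimit matters: per the HTML canvas spec, a sharp miter join is supposed to fall back to a bevel once the ratio of miter length to line width, 1/sin(θ/2), exceeds the context's miterLimit (default 10). Here is a minimal sketch of that check; the function names are illustrative, not a real canvas API, and the easy workaround alluded to above is presumably just setting ctx.miterLimit explicitly rather than relying on the broken default:

```javascript
// A miter join at angle theta (radians, the angle between the two line
// segments) has miterLength / lineWidth = 1 / sin(theta / 2). Per the spec
// it is replaced by a bevel when that ratio exceeds miterLimit (default 10).
// `miterRatio` and `exceedsMiterLimit` are illustrative names, not an API.
function miterRatio(theta) {
  return 1 / Math.sin(theta / 2);
}

function exceedsMiterLimit(theta, miterLimit = 10) {
  return miterRatio(theta) > miterLimit;
}
```

At a right-angle join the ratio is 1/sin(45°) ≈ 1.414, far below the default limit of 10, so only very sharp spikes should ever be beveled.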
If you want people to use your browser and develop with it in mind, working features are more important than being slightly faster than the others. (Good developer tools come in a close second, I think, at least for this crowd.)
This is not a comment to you in particular, but when authors find bugs in browsers there are several things that they can do to ensure that the bug is fixed as quickly as possible:
1. Verify that the bug occurs in the latest available (possibly unstable/nightly) build.
2. Submit the bug to the relevant browser/engine bug tracking system (see below). If you aren't confident about how to write a good bug report, read [1] first.
3. Write a testcase. If at all possible this should be minimal (i.e. not using the whole of jQuery/Angular/whatever) and should certainly be clear (e.g. no minimised code unless it is actually needed to reproduce the issue).
4. If the bug is a conformance bug rather than a QoI issue, submit your testcase to the W3C [2] so that it can be incorporated into the standard testsuite and automatically run by all browser vendors. To do this the test needs to be written in a standard format [3] and submitted using the process at [4]. At the moment the documentation is a bit sucky, but there is a big revamp in the works [5], so that should improve in the near future.
That last one may sound like a lot of effort, but testing is the only way that we will end up with a web platform that is both technologically competitive and open both in spirit and in practice, i.e. with multiple interoperable implementations. People put a huge amount of time into working around, and complaining about, cross-browser issues. Devoting a small fraction of that time to submitting regression tests instead would dramatically decrease the number of problems in the future. For example, submitting a test for canvas clipping both makes it likely that IE will be fixed in the near future and ensures that it won't just regress again one version later.
It seems that it would be in the self-interest of big projects like jQuery to commit to creating a test each time they have to work around some browser bug, so they can expedite fixing the issue, track which browser versions have the bug, and remove the workaround (and thus reduce their code complexity) as soon as the test passes in all the browsers they are targeting. However, this is not something that matters only to big projects. If you have run across a bug that is making it difficult to implement something, you are likely the best person in the world to write a testcase for that issue.
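The workaround-tracking idea above can be sketched as a small registry that applies a workaround only while its bug-detection test still fails; once every targeted browser passes, the workaround (and its complexity) drops away on its own. This is an illustrative pattern, not jQuery's actual mechanism, and the detection callbacks are stand-ins:

```javascript
// Illustrative sketch: each workaround is gated on a per-bug detection
// function. Workarounds whose bug no longer reproduces are never applied,
// and `active()` shows which workarounds are still needed.
function makeWorkaroundRegistry() {
  const workarounds = [];
  return {
    register(name, hasBug, applyWorkaround) {
      if (hasBug()) {          // run the detection testcase
        workarounds.push(name);
        applyWorkaround();     // only patch behavior while the bug exists
      }
    },
    active() {
      return workarounds.slice();
    },
  };
}
```

Tracking `active()` across browser releases also tells you exactly when a workaround can be deleted.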
If you are trying to submit bug reports and tests and need help, please ask on #whatwg on Freenode irc or #testing on irc.w3.org [6].
[1] http://fantasai.inkedblade.net/style/talks/filing-good-bugs/
[2] https://github.com/w3c/web-platform-tests
[3] https://sites.google.com/site/forthenewbies/home/writing
[4] http://testthewebforward.org/resources/github_test_submissio...
[5] https://github.com/w3c/testtwf-website
[6] http://irc.w3.org/
My experience with reporting bugs in FF/Chrome is a bit different. I reported a bug in the canvas text implementation, which occurs in both of those browsers, on Feb 20. I provided all the necessary info and even a minimal test case. The bug was confirmed in FF quite quickly (within days) and then... nothing. It's quite serious: basically you can't do smooth animation (movement) of text. Similar to the text scaling bug.
https://bugzilla.mozilla.org/show_bug.cgi?id=843310
Reporting to Chrome has been even worse. No word, no nothing. Not even a confirmation. (Although the submitting process itself has been simpler AFAIR).
That's really uncool, after a few of those you lose interest in submitting any more since the feedback feels like "we don't care". So you end up using the time to find workarounds instead.
Now obviously someone will point out that anyone can go ahead and submit a patch for the bug itself, since it's open source. That's all cool and dandy, I love open source as much as the next guy, but please, let's get serious. Who has the time to dive into a massive codebase like that and fix the bug? Not everyone plans to become a browser developer.
The kerning differences you're seeing only show up on Windows when not using hardware acceleration (Direct2D). When not using hardware acceleration, Firefox draws text with GDI. GDI doesn't deal well with arbitrary scales applied during text drawing and this explains why the same problems show up in Chrome and Firefox. When Firefox is using DirectWrite/Direct2D (i.e., is using hardware acceleration)—like IE 10—these problems don't show up. Short of heroics, there's not much that can be done to improve text drawing at arbitrary scales with GDI.
In-canvas kerning seems to be affected by the system rendering. On Arch Linux, using Firefox 22 and the Infinality patches, I get http://imgur.com/eRVru67
IE kerning is better when scaled up, but it looks just as good or better at regular or scaled-down sizes.
On the differences between doubling the font and scaling it, I get http://imgur.com/sgu8KYG
Just checking in to say that your kerning example looks fine on Safari on OS X (latest stable versions). Which is not to say it's perfect, but there's nothing egregious (no overlaps, no large gaps).
Despite the name, this program (http://luic.github.io/WebGL-Performance-Benchmark/) doesn't in fact use WebGL; it runs on the three.js 2D canvas renderer (just check the source).
Additionally, even the results they got there are suspicious; I got the completely opposite results on my system (Windows 7), with Chrome being faster than Firefox by a large margin:
http://www.tomshardware.com/reviews/chrome-27-firefox-21-ope...
They got 777 on Firefox vs. 437 on Chrome; I got 290 on Firefox vs. 441 on Chrome (this is fully in line with my everyday experience doing browser graphics).
I just switched from Chrome to FF. The performance is close enough, and good enough for both, that it's a secondary consideration at this point.
I switched because I got tired of hearing Chrome constantly accessing my hard drive. I wound up going through the list of Chrome switches here (http://src.chromium.org/svn/trunk/src/chrome/common/chrome_s...) to try to alleviate it. Some things helped but not to an acceptable level. I use W7. Using procmon I could see Chrome constantly re-reading keys from the registry and writing to temp files even though caching and pre-fetching were disabled.
I was also concerned about Chrome accessing my laptop's SSD. Even though I couldn't hear it, I could see the drive's lifetime allotment of reads and writes being used up.
On a similar note, my PC's hard drive starts making crazy noises as soon as Windows 7 goes into screensaver mode. I've turned off every disk-related option I can find and it's still doing... something. Maddeningly it stops as soon as the screensaver is interrupted.
I'm developing some JS compute-heavy apps and Chrome still pulls ahead quite significantly (by as much as 5x) in many cases. I've filed JS perf bugs with Mozilla that have been accepted but seem to be quite low on the priority list. Artificial benchmarks don't always tell the full story :(
If any Mozillian JS dev is lacking stuff to work on ;)
https://bugzilla.mozilla.org/show_bug.cgi?id=879393
https://bugzilla.mozilla.org/show_bug.cgi?id=858986
I gave up on Firefox a year or two back because of performance concerns: not raw speed, but RAM consumption (leading to thrashing when tabbing between RAM-intensive processes) and its poor single-threaded freezing problems. Nice to hear that FF is getting better and I'll be able to go back to it; its extension community is far better than Chrome's, and Firebug is without peer.
Frankly I think Firefox performs a lot better than Chrome for memory usage now. After a few days of having Chrome open it's usually using between 3 and 6GB of RAM (Usually Facebook is using a gigabyte so killing that tab frees up a lot) and this is with <50 tabs open. Friends who use Firefox end up with ~1GB of RAM usage in a similar timeframe.
It's funny, I was using Chrome for about a month (when they finally released a Linux version) but was forced back to Firefox mainly because of massive memory consumption (because I keep a lot of open tabs). The other problem I had with it was lack of AdBlock Plus back then - without it the web is unusable.
I'm really surprised that no one in these comments or in the thread on the actual story[1] (instead of this... whatever the equivalent of blogspam is, but for Slashdot) has mentioned the horrible averaging methodology of this benchmark suite.
The individual tests aren't even a problem (though I would maybe pick some more and/or different ones, especially their somewhat odd benchmark choices in performance and graphics), but the averaging makes no sense at all.
Averaging time-based benchmarks is problematic enough (it feels more right, but it still isn't a good idea), but how on earth do you convert a "number of times we had to refresh a page" result into a number that you can then average with a "standards compliance" count and a measure of memory efficiency? Even if you normalize (which it doesn't seem like they did, judging from the output numbers, but maybe they did), the numbers still aren't comparable, because you've taken no account of the relative magnitude of their effects.
e.g. if you have a test of "does the browser have a konami code easter egg?", it doesn't matter if the geometric mean is less sensitive to outliers, because it still doesn't make any sense to take an average of that with "the playback framerate of an HD Trololo video" and then pretend that the average provides any insight. And it's even worse if you then compare that average to other crazy averages!
At best you can look at relative ranking, which they actually mention, but then they proceed to give exact numbers for their relative ranking. There's no information about "betterness" in there, though, except if you divide the numbers up again to say "these points came from the win in test A, and these points from test B"... at which point you just have the original tests again. Better to count wins/no-wins and use that as your final result. Then at least it's obvious that if some tests are much less trivial than others, you'll have to assign arbitrary weights for the final result, as opposed to having the arbitrary weights be implicit in the tests themselves.
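The win/no-win counting suggested above could look something like this; the scores are made-up illustrative numbers, not the article's data:

```javascript
// Count per-test wins instead of averaging incommensurable scores.
// `higherIsBetter` marks tests where a bigger number wins (e.g. frame rate)
// versus ones where smaller wins (e.g. startup time or memory used).
function countWins(results) {
  const wins = {};
  for (const { scores, higherIsBetter } of results) {
    const entries = Object.entries(scores);
    // Sort so the winning browser ends up first.
    entries.sort((a, b) => (higherIsBetter ? b[1] - a[1] : a[1] - b[1]));
    const winner = entries[0][0];
    wins[winner] = (wins[winner] || 0) + 1;
  }
  return wins;
}
```

The output is a simple tally, which makes any weighting between tests an explicit, visible decision rather than an artifact of the units each test happens to use.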
Sorry for the rant :) This is a good recognition of Mozilla's hard work (though notably they've been winning many of these tests for a while now, especially the memory ones), but it would be nice if Tom's Hardware could drop the basically meaningless overall scores (or we could just collectively ignore them).
[1] https://news.ycombinator.com/item?id=5970429
So much of any comparison like this is about performance, but I really don't think that's as important as it used to be, and that's coming from someone who generally uses slow computers and optimizes software until it runs fast.
I'm typing this on a bit of an exceptional example, a 2.6 GHz Northwood Pentium 4 with 1 GB of single-channel DDR-400 RAM. The one saving grace is that it has an SSD, but I put the swap file on the spinning drive (which is modern). It's running Linux Mint 14, Xfce edition, with a handful of minor OS-level optimizations. Firefox 21, with a fairly standard configuration, flawlessly handles a dozen or more tabs on a daily basis. It's even pretty snappy, more limited by my internet connection (1.5 Mb DSL) than by the hardware it's running on.
If this sorry excuse for a computer does that well, are the relatively minor differences in performance between browsers going to be a big deal on modern hardware? There will be edge cases, such as the people who have hundreds of tabs open at a time, but for the average user I'm having trouble envisioning that.
The things that make a difference anymore are very tough to quantify in tests like Tom's Hardware's. I will always have Firefox around because I think Mozilla actually cares about privacy. I use Presto on my phone because it's the only one I've found that renders things how I want. Many people are tied to a browser because of its extensions. Standards support doesn't matter until you find a page where a browser doesn't work, and those pages will be different for different people. Browsers can be rock solid on one computer and worthlessly crashy on another.
I don't think a round of benchmarks has meant anything to me in browser selection for a long time, and when it did I did them myself so as to account for the computer they were running on. I choose by trying to use a variety of them for a while, and a winner always emerges quite quickly.
I wish they (FF) would also spend some time on the small annoyances:
* Lack of a restart button. Now when I upgrade FF, I have to "kill -9" it from my terminal to get it to restore windows upon restart.
* Memory leaks. Leave yourself logged in to Facebook for a few days, and watch it take up 2GB of memory.
* No way to easily filter sites with cookies like you can in the next tab over, where you manage sites with saved passwords. Why is this? Does FF secretly want you to not muck with cookies?
Now, I'm sure there are addons and plugins for the above. But I should _not_ need addons for basic UX. Save the addons for fancier stuff.
* For the restart after upgrade, either it is a bug, or you have not seen the little "restore tabs from last time" button in the lower right part of the browser.
* I have not experienced that, I don't use Facebook.
* For the cookie stuff, you'll be pleased to learn about the about:permissions feature (type it in your address bar).
>Leave yourself logged in to Facebook for a few days, and watch it take up 2GB of memory.
It's entirely possible for a website to have a memory leak. (Or, at any rate, consume an ever increasing amount of memory.) If closing the tab and opening it again frees the memory, then there might not be anything FF can do about it. (It's also possible you've got an addon causing the problem.)
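For a concrete picture of how a page (rather than the browser) can leak, here is a minimal sketch: each simulated widget refresh registers a new listener that closes over a large buffer, and nothing ever removes the old ones, so retained memory grows until the tab is closed. The names are illustrative, not any real site's code:

```javascript
// A page-level "leak": listeners accumulate and each closure pins a buffer.
// No browser can safely reclaim this, because the page still references it.
function makeFeed() {
  const listeners = [];
  return {
    on(fn) { listeners.push(fn); },
    listenerCount() { return listeners.length; },
  };
}

function refreshWidget(feed) {
  const bigBuffer = new Array(1000).fill(0); // stands in for cached post data
  feed.on(() => bigBuffer.length);           // closure keeps bigBuffer alive
}
```

Closing the tab drops the whole reference graph at once, which is why reopening the page "fixes" it.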
You can also just go into the settings, and set "When Firefox starts:" to "Show my windows and tabs from last time".
And I don't see why a "normal" user should want a default button in the UI to restart the browser.
>Memory leaks
The situation is much better now, there still are some leaks here and there, but nothing huge, at least with my usage (I usually restart my browser every 1-2 days to update to the latest Nightly, and have about 50 tabs open).
I'm not entirely sure what the third point is referring to (I just let the browser handle cookies), but in the Privacy tab of Options, you can view and remove individual cookies and search by site. So maybe that helps with what you want?
I think Facebook itself has the memory leak: I never leave it open for more than about an hour at a time, but I've heard from people that leave it open for days that it can take up between 1 and 2 GB; some of these people use Firefox, some Chrome and some Safari.
In "General" under "Preferences", you can select whether the browser restores tabs from the last time it was exited. Maybe that somehow got set to something besides "Show my windows and tabs from last time"?
I just learned something my math teachers never told me from a tech blog:
Geometric mean is useful for comparing when the expected range or units of values is different. For example, startup time is measured in seconds, but BrowsingBench numbers are things like the unitless 6646. The arithmetic mean would fail to "normalize" these values and give disproportionate weight to some over others; the geometric mean is one way of trying to account for this.
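A quick sketch of the point being quoted: multiplying one test's scores by a constant (say, a unit change) scales every browser's geometric mean by the same factor, so relative standings survive; the arithmetic mean instead lets the largest-magnitude test dominate. The numbers below are made up for illustration:

```javascript
// Arithmetic vs. geometric mean of benchmark scores on very different scales.
function arithmeticMean(xs) {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

function geometricMean(xs) {
  // Average the logs to avoid overflow on large products.
  return Math.exp(xs.reduce((a, b) => a + Math.log(b), 0) / xs.length);
}
```

For example, geometricMean([2, 8]) is 4, and rescaling one of the two scores by 100 rescales the result by exactly 10, leaving any between-browser ratios unchanged; the arithmetic mean offers no such guarantee.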
I suspect this was "by a nose". But that's good: I hope to see both browsers trading blows in this little war, leap frogging back and forth in the lead.
I did not find this objective: the test scope was limited, and the browsers tested were not the latest versions. Not that I use IE, but why use IE10 when IE11 is available for testing? If you are doing a browser-vs-browser performance benchmark, you will not be taken seriously if your test is not objective. In this case I find the sample set skewed.
These types of headlines are so meaningless. On what benchmark on what platform?
On OS X Safari outperforms both for many tasks (again, YMMV depending on what you're doing.) But that's meaningless for the billions of Windows and Linux users out there.
I wish our industry would move past these kinds of silly headlines. When Chrome first came out there was a massive difference between it and the other browsers; now all of them are very capable and competitive for pretty much anything you want to do.
Tom's Hardware's browser benchmarks are pretty comprehensive. What annoys me about this test and this headline, however, is that Chrome 28, with its new faster Blink core, is literally a week away from release, which means Firefox will only get its 15 seconds of fame (or rather, a week).
Why did they run the test immediately after the new Firefox came out? Or do they repeat it each time a browser version is released? In that case I'm looking forward to the test with Chrome 28 included.
I'm not saying Chrome 28 will necessarily win the next one. I just find it a little strange that they didn't wait a bit for the next Chrome version before writing a headline like that. It reminds me of those polls that turn out a certain way depending on how you ask the question.
In my daily use, I find Chrome to be overall faster and easier on the computer in general. But I recently ran into a case in my development where Chrome actually has trouble compared to Firefox AND IE: a large unordered list containing several different divs, buttons, and links. Chrome shows some seriously jumpy scrolling compared to both of the others, which surprisingly render smoothly.
This is the only instance I've ever run into where it didn't quite measure up.
The reason I use Firefox is because it has a set of extensions I find useful, and because I like Firefox Sync (end-to-end encryption of bookmarks/history etc.).
Whether the browsers shave a few microseconds off JavaScript performance is neither here nor there for me: but perhaps I'm a weirdo.
Performance is nice, but not enough of a reason for me to switch browser.
I just want to know one thing: when I have only two add-ons in Firefox, why does it take up nearly 200MB of memory with no pages displayed? IE is taking 38MB with this page displayed; FF has moved to 208MB. Firefox is so damn quick to eat memory that it causes my laptop to start swapping, which decreases performance and eats battery.
Well, congratulations to FF for being the perfect browser for the 0.01 percent of people who don't keep their browser open all day.
Firefox 22 on GNU/Linux looks a lot better, but still not as good as IE: http://imgur.com/GOCmiN9
Linux is a lot more predictable in that regard...
[+] [-] leeoniya|12 years ago|reply
if any mozillian js dev is lacking stuff to work on ;)
https://bugzilla.mozilla.org/show_bug.cgi?id=879393
https://bugzilla.mozilla.org/show_bug.cgi?id=858986
[+] [-] driverdan|12 years ago|reply
[+] [-] speedyrev|12 years ago|reply
[+] [-] zimbatm|12 years ago|reply
[+] [-] Pxtl|12 years ago|reply
[+] [-] UberMouse|12 years ago|reply
[+] [-] krzyk|12 years ago|reply
[+] [-] magicalist|12 years ago|reply
The individual tests aren't even a problem (though I would maybe pick some more and/or different ones, especially their somewhat odd benchmark choices in performance and graphics), but the averaging makes no sense at all.
Averaging time-based benchmarks is problematic enough (it feels more right, but it still isn't a good idea), but how on earth do you convert a "number of times we had to refresh a page" result into a number that you can then average with a "standards compliance" count and a measure of memory efficiency?? Even if you normalize (which it doesn't seem like they did from the output numbers, but maybe they did), the numbers still aren't comparable because you've given no account for the relative magnitude of their effect.
e.g. if you have a test of "does the browser have a konami code easter egg?", it doesn't matter if the geometric mean is less sensitive to outliers, because it still doesn't make any sense to take an average of that with "the playback framerate of an HD Trololo video" and then pretend that the average provides any insight. And it's even worse if you then compare that average to other crazy averages!
At best you can look at relative ranking, which they actually mention but then proceed to give exact numbers for their relative ranking. There's no information about "betterness" in there, though, except if you then divide up the numbers again to say "these points came from the win in test A, and these points from test B"...at which point you just have the original tests again. Better to count win/no-wins and use that as your final result. Then at least it's obvious that if some tests are much more non-trivial than others that you'll have to give arbitrary weights for the final result, as opposed to having the arbitrary weights being implicit in the tests themselves.
Sorry for the rant :) This is a good recognition of Mozilla's hard work, though notably they've been winning many of these tests for a while now (especially the memory ones), but it would be nice if tomshardware could drop the basically meaningless overall scores (or we could just collectively ignore them).
[1] https://news.ycombinator.com/item?id=5970429
[+] [-] deepblueq|12 years ago|reply
I'm typing this on a bit of an exceptional example, a 2.6 GHz Northwood Pentium 4 with 1 GB of single-channel DDR-400 RAM. The one saving grace is that it has an SSD, but I put the swap file on the spinning drive (which is modern). It's running Linux Mint 14, Xfce edition, with a handful of minor OS-level optimizations. Firefox 21, with a fairly standard configuration, flawlessly handles a dozen or more tabs on a daily basis. It's even pretty snappy, more limited by my internet connection (1.5 Mb DSL) than by the hardware it's running on.
If this sorry excuse for a computer does that well, are the relatively minor differences in performance between browsers going to be a big deal on modern hardware? There will be edge cases, such as the people who have hundreds of tabs open at a time, but for the average user I'm having trouble envisioning that.
The things that still make a difference are very tough to quantify in tests like Tom's Hardware's. I will always have Firefox around because I think Mozilla actually cares about privacy. I use Presto on my phone because it's the only engine I've found that renders things how I want. Many people are tied to a browser because of extensions. Standards support doesn't matter until you find a page where a browser doesn't work, and those pages will be different for different people. Browsers can be rock solid on one computer and worthlessly crashy on another.
I don't think a round of benchmarks has meant anything to me in browser selection for a long time, and when it did I did them myself so as to account for the computer they were running on. I choose by trying to use a variety of them for a while, and a winner always emerges quite quickly.
[+] [-] ajays|12 years ago|reply
Lack of a restart button. Now when I upgrade FF, I have to "kill -9" it from my terminal to get it to restore my windows upon restart.
Memory leaks. Leave yourself logged in to Facebook for a few days, and watch it take up 2GB of memory.
No way to easily filter sites with cookies like you can in the next tab over, where you manage sites with saved passwords. Why is this? Does FF secretly want you to not muck with cookies?
Now, I'm sure there are addons and plugins for the above. But I should _not_ need addons for basic UX. Save the addons for fancier stuff.
[+] [-] padenot|12 years ago|reply
* I have not experienced that, I don't use Facebook.
* For the cookie stuff, you'll be pleased to learn about the about:permissions feature (type it in your address bar).
[+] [-] shardling|12 years ago|reply
It's entirely possible for a website to have a memory leak. (Or, at any rate, consume an ever increasing amount of memory.) If closing the tab and opening it again frees the memory, then there might not be anything FF can do about it. (It's also possible you've got an addon causing the problem.)
[+] [-] Spittie|12 years ago|reply
I don't think Chrome has one. Anyway, either press shift+f2 to bring up the Developer Toolbar and type "restart", or install an addon like Restartless Restart (https://addons.mozilla.org/en-us/firefox/addon/restartless-r...).
You can also just go into the settings, and set "When Firefox starts:" to "Show my windows and tabs from last time".
And I don't see why a "normal" user should want a default button in the UI to restart the browser.
>Memory leaks
The situation is much better now. There are still some leaks here and there, but nothing huge, at least with my usage (I usually restart my browser every 1-2 days to update to the latest Nightly, and have about 50 tabs open).
The team responsible for hunting down memory leaks is the MemShrink team; you can see their progress here: https://blog.mozilla.org/nnethercote/
I'd also suggest installing the addon "about:addons-memory" to see if any of your addons are leaking memory (https://addons.mozilla.org/en-us/firefox/addon/about-addons-...)
If you still see memory leaks, please report them on Bugzilla.
>Cookies
While something like this would be nice, I don't think "normal users" need to mess with cookies.
If you're a power user and want to enable/disable cookies on a per-site basis, install an addon that lets you do it.
I'm personally very happy with Cookie Controller: https://addons.mozilla.org/en-US/firefox/addon/cookie-contro...
[+] [-] epmatsw|12 years ago|reply
As for memory leaks, shrinking Firefox's memory use has been an ongoing project for a while now: https://areweslimyet.com/
I'm not entirely sure what the third point is referring to (I just let the browser handle cookies), but in the Privacy tab of Options, you can view and remove individual cookies and search by site. So maybe that helps with what you want?
[+] [-] ekm2|12 years ago|reply
The geometric mean is useful for comparing values whose expected ranges or units differ. For example, startup time is measured in seconds, while BrowsingBench produces unitless numbers like 6646. The arithmetic mean fails to "normalize" these values and gives disproportionate weight to some tests over others; the geometric mean is one way of accounting for this.
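A small sketch of that normalization effect, with invented scores for two hypothetical browsers (the browser names and numbers are made up for illustration):

```python
import math

# Two tests on very different scales: a startup-time-style number in
# the single digits, and a unitless BrowsingBench-style score in the
# thousands. One browser differs by 2x on the small-scale test.
browser_a = [2.0, 6646.0]
browser_b = [4.0, 6700.0]

def arith_mean(xs):
    return sum(xs) / len(xs)

def geo_mean(xs):
    # nth root of the product of n values
    return math.prod(xs) ** (1.0 / len(xs))

# The arithmetic mean is dominated by the large-scale test: the two
# browsers look nearly identical even though one test differs by 2x.
print(arith_mean(browser_a), arith_mean(browser_b))  # 3324.0 3352.0

# The geometric mean is scale-free: a 2x difference on any single one
# of the n tests moves the result by the same factor (2**(1/n)),
# regardless of that test's units.
print(geo_mean(browser_a), geo_mean(browser_b))
```

Note that this only fixes the scaling problem; it doesn't make averaging unrelated tests meaningful, which is the grandparent's point.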
[+] [-] ebbv|12 years ago|reply
On OS X Safari outperforms both for many tasks (again, YMMV depending on what you're doing.) But that's meaningless for the billions of Windows and Linux users out there.
I wish our industry would move past these kinds of silly headlines. When Chrome first came out there was a massive difference between it and the other browsers; now all of them are very capable and competitive for pretty much anything you want to do.
[+] [-] mtgx|12 years ago|reply
Why did they run the test immediately after Firefox came out? Or do they repeat the test immediately after each browser version is released? In that case I'm looking forward to the test with Chrome 28 included.
I'm not saying Chrome 28 will necessarily win the next one. I just find it a little strange that they ran it without waiting a bit longer for the next Chrome version before writing a headline like that. It reminds me of those polls that turn out a certain way depending on how you ask the question.
[+] [-] joosters|12 years ago|reply
That's what the article is for, you've got no chance fitting those details into the headline!
[+] [-] jumpbug|12 years ago|reply
This is the only instance I've ever run into where it didn't quite measure up.
[+] [-] tommorris|12 years ago|reply
Whether the browsers shave a few microseconds off JavaScript performance is neither here nor there for me, but perhaps I'm a weirdo.
Performance is nice, but not enough of a reason for me to switch browser.
[+] [-] Shivetya|12 years ago|reply