pdkl95 | 7 years ago:
Twenty five what? Megabytes? Mebibytes? Percent? Libraries of Congress? Furlongs per fortnight? Inverse femtobarns?
I can guess you mean "MiB" (mebibytes) from the charts, but units are always important. Bare numbers lead to confusion! It's good practice to always include units, even if it's a simple "All numbers are in MiB" at the top.
icc97 | 7 years ago:
My FastMail accounts (personal and work) tend to sit stably around 10–12MB (when open for days) if I have only opened the mail part, and nudge up to 15–17MB if I have also opened the calendar, settings and address book (which are modules loaded separately on request). As you would imagine, large emails will inevitably increase the memory footprint while they’re loaded, until they perhaps get evicted from the LRU cache after viewing many more emails.
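The LRU eviction described above can be sketched with a tiny cache. This is a hypothetical illustration, not FastMail's actual code; it relies on the fact that JavaScript's `Map` iterates keys in insertion order:

```javascript
// Minimal LRU cache: keeps at most `capacity` message bodies in memory,
// evicting the least recently viewed one when a new body is loaded.
class LruCache {
  constructor(capacity) {
    this.capacity = capacity;
    this.map = new Map(); // Map preserves insertion order
  }
  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    this.map.delete(key); // move the key to the "most recent" end
    this.map.set(key, value);
    return value;
  }
  set(key, value) {
    this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.capacity) {
      // The oldest entry is the first key in iteration order
      const oldest = this.map.keys().next().value;
      this.map.delete(oldest);
    }
  }
}
```

Viewing more emails than the capacity pushes earlier bodies out, which is why the footprint stops growing instead of climbing with every message opened.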
chrismorgan | 7 years ago:
This is what comes of caring about performance and memory footprint. It doesn’t hurt also that almost all of it has been done by one guy rather than having fifty or more people all adding, adding, adding in uncontrolled fashion.
And Topicbox, our group email (mailing lists) product, is using 7–8MB for browsing archives. (It’s definitely simpler than FastMail.)
Somehow, large teams like Gmail’s, which have vastly more resources than us, are never good at memory usage, and seldom good at performance. I have some vague ideas about why this is, but it’s initially quite counterintuitive. It does seem to be a fairly consistent observation, though: small teams actually have a big advantage in such matters.
I’m almost sad that this was all under control before I started working at FastMail early last year, because it’s hard to justify improving it further, and I do find optimising things like memory usage and running performance to be such fun. (I know of a couple of ways memory usage and startup performance could be reduced; but the main thing for startup performance will be service workers and a persistent data cache.)
Meanwhile, I often have to interact with a Jenkins instance with a plugin that has a habit of redrawing a large table from scratch every second or two when a build is running, and keeping a reference to the orphaned DOM node. It can consume almost a gigabyte an hour.
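The leak pattern described can be sketched as follows. This is a hypothetical reconstruction of what such a plugin does, with plain objects standing in for DOM nodes so the mechanism is visible:

```javascript
// Anti-pattern: rebuild a large table on every poll, but keep a reference
// to the detached old table. The garbage collector can never reclaim it,
// so memory grows with every redraw.
const history = []; // e.g. kept "for undo", or captured by accident in a closure

function redrawLeaky(view) {
  const oldTable = view.table;
  view.table = { rows: new Array(1000).fill('<tr>…</tr>') }; // fresh table
  if (oldTable) history.push(oldTable); // BUG: orphaned subtree stays reachable
}

function redrawFixed(view) {
  // No reference kept: the previous table becomes unreachable and collectable.
  view.table = { rows: new Array(1000).fill('<tr>…</tr>') };
}
```

With a redraw every second, the leaky version retains roughly 3,600 dead tables per hour, which is consistent with the "gigabyte an hour" observation for a sufficiently large table.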
icc97 | 7 years ago:
I feel a desire to switch back to the original HTML version - credit to Google for keeping it going. Here's the handy support page [0] with a link to convert back.
Edit: it appears you just need the `/h` appended to the URL [1]
ebikelaw | 7 years ago:
The very lightweight HTML Gmail lacks all of "normal" Gmail's latency-hiding features, which is one reason it uses so little memory. Gmail preloads all of the messages in the thread list so when you click them they are displayed instantly. HTML Gmail doesn't, and when you click a message it fetches the body from the origin. The tradeoff is yours to make. I find the HTML version infuriating when I'm tethered on mobile because every mouse click takes 10 seconds. On the same tether I can leave normal Gmail open all the time and it's fast. Ironically the lightweight Gmail is more usable on a fast, reliable wired connection.
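The latency-hiding trade described above amounts to warming a cache for every message in the visible thread list, spending memory to make a later "open" instant. A minimal sketch, where `fetchBody` is a hypothetical stand-in for the real network call:

```javascript
// Preload every message body in a thread list into an in-memory cache.
// Already-cached messages are skipped, so repeated calls are cheap.
async function preloadThread(messageIds, fetchBody, cache = new Map()) {
  await Promise.all(messageIds.map(async (id) => {
    if (!cache.has(id)) cache.set(id, await fetchBody(id));
  }));
  return cache;
}
```

The cost is exactly what the thread is complaining about: every preloaded body sits in RAM whether or not it is ever read.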
at-fates-hands | 7 years ago:
Wasn't this the initial reason Gmail was designed? It had a small footprint and wasn't a performance hog, which made it fast, responsive and easy to use.
I've read lately that Inbox is probably going away after the recent Gmail redesign, which incorporates some of Inbox's features.
bartread | 7 years ago:
This is interesting, but it's not the whole story. Apps that are media heavy can often use large amounts of memory outside the JavaScript heap.
E.g., if you load and decode a 3MB MP3 with the Web Audio API you can easily find yourself swallowing 30MB of RAM, depending upon the uncompressed sample rate. Another example: image decompression can lead to large amounts of GPU memory being swallowed.
You can see the effect of situations like this by using Chrome Task Manager, which will give you a more realistic view of total memory usage by a page.
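The blow-up is just uncompressed-PCM arithmetic. A rough estimate, assuming Float32 samples (which is what Web Audio's `AudioBuffer` holds internally):

```javascript
// Bytes of RAM for a decoded AudioBuffer:
// duration × sample rate × channels × 4 bytes per Float32 sample.
function decodedBytes(seconds, sampleRate, channels) {
  return seconds * sampleRate * channels * 4;
}

const MiB = 1024 * 1024;
// A 3MB MP3 at 128 kbps is roughly 3 minutes of audio; decoded at
// 44.1kHz stereo that is ~60 MiB, far larger than the compressed file.
const estimate = decodedBytes(180, 44100, 2) / MiB;
```

Mono, a lower sample rate, or a shorter file shrinks this proportionally, which is why 30MB of RAM for a 3MB MP3 is entirely plausible.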
amanzi | 7 years ago:
Nice to see The Guardian so low, especially compared to the NYTimes. I often come across issues with the NYT where the page starts to load and then just goes completely blank.
I think overall this list is a good indication of sites that have respect for their users.
shaki-dora | 7 years ago:
> where the page starts to load and then just goes completely blank.
That's almost certainly a bug on your end. You do have 50MB of free RAM available, right? Other than actually running out of memory (and swap), normal usage should never result in a no-render.
My best guess would be a content blocker interfering.
fouc | 7 years ago:
By making it visible, people would be more cognizant of which sites have poor experiences or have a big impact on their computer.
onion2k | 7 years ago:
You can't simplify this problem down to "This uses more RAM than an arbitrary threshold, therefore it's a problem." If I spend 99% of my time using an app then I want it to cache hundreds of megabytes of data into memory so I can work fast. Saying it's a bad experience if it does that is wrong.
KyeRussell | 7 years ago:
I'm not convinced that such a number could be calculated in such a way that wouldn't be utterly meaningless to all but the biggest nerds.
The resources a 'web app' 'should' use is highly context dependent. As a web developer, I can determine some of that context, as I know what functionality is resource intensive. I don't think that you can distill that down in any useful way.
natecavanaugh | 7 years ago:
I'm wondering if this could be made a relative score instead of an absolute one that measures the current site's memory consumption relative to your available free memory. It might be more useful to see the impact it has for you.
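A sketch of that idea. Assumptions: the page's heap figure would come from Chrome's non-standard `performance.memory.usedJSHeapSize`, and the free-memory figure has to be supplied from outside, since the web platform doesn't expose it:

```javascript
// Relative memory impact: the page's memory as a fraction of free memory.
function relativeImpact(pageBytes, freeBytes) {
  if (freeBytes <= 0) throw new RangeError('freeBytes must be positive');
  return pageBytes / freeBytes;
}

// In Chrome one could feed it the (non-standard) heap figure:
//   relativeImpact(performance.memory.usedJSHeapSize, freeBytes)
// The same 150MB page is negligible on a desktop but severe on a budget phone:
const onDesktop = relativeImpact(150 * 2 ** 20, 16 * 2 ** 30);     // ~0.009
const onBudgetPhone = relativeImpact(150 * 2 ** 20, 512 * 2 ** 20); // ~0.29
```

The appeal of a relative score is exactly this: it turns one absolute number into a per-user answer to "does this matter for me?".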
shaki-dora | 7 years ago:
That seems like a convoluted scheme to get "normal" people to care about something that you care a lot about, and they don't.
They don't care because it's not a problem for them. Yeah, maybe they could have bought a cheaper phone if everyone had spent twice as much time writing the code. But hardware advances have actually caught up with requirements (and then some), and all these sites work fine on even lower-end current phones.
You care about it like a watchmaker cares about the mechanical drive of his watch.
They don't need a watch. They have a smartphone. It's eight orders of magnitude more precise than your mechanical watch.
prplhaz4 | 7 years ago:
I use the uBlock origin badge in my browser as a proxy for making page judgements - I avoid revisiting sites with a high number (>5?) of blocked elements...
It's definitely noticeable on the Google/Facebook/Twitter properties, which is unfortunate because they're the ones behind a lot of the tech (and ads) commonly used across the web.
kenhwang | 7 years ago:
I wonder how much of the bloat comes from everyone using React/Angular/Polymer/Bootstrap and the layers and layers of libraries and another DOM and rendering engine.
cozzyd | 7 years ago:
That's nice, but now please do the same after having that tab open in the browser for 24 hours... it's unbelievable how much memory some crap (I'm looking at you, Google Docs) can leak.
nine_k | 7 years ago:
One of the sources of bloat may be not just apps, but also extensions.
E.g. I see the Okta authentication extension (which is a required part of my work setup) consuming nearly a hundred megs after prolonged usage in Firefox. Among other things, it appears to allocate a lot of identical strings (like 500M of them), likely by a thoughtless `substring` call somewhere.
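The retention mechanism behind this: in some engines (V8, SpiderMonkey) a substring can be represented as a "dependent string" that points into its parent, keeping the whole parent buffer alive. A commonly cited workaround is to force a flat copy. This is engine-dependent folklore, not a documented API guarantee, so measure before relying on it:

```javascript
// A short slice of a huge string can pin the entire parent buffer in
// memory in some engines. Forcing a fresh allocation breaks the link.
// NOTE: whether this helps depends on the engine version.
function flatCopy(sub) {
  return (' ' + sub).slice(1); // concatenation forces a new backing buffer
}

const huge = 'x'.repeat(1_000_000) + ':needle';
const dependent = huge.slice(-6);        // may retain all ~1MB of `huge`
const independent = flatCopy(dependent); // retains only the 6 characters
```

If many identical short strings are each pinning a large parent this way, the heap fills with what looks like "500M of strings" even though the visible text is tiny.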
Awesome analysis, quite instructive. I am even considering adding it to my server performance tool, as a frontend performance metric, since I guess it could be easily automated. Imagine little README badges saying "quite bloated" and "pretty good" :p
dspillett | 7 years ago:
If feeding this back as a general performance metric, you would have to be very careful to make sure you were measuring the same thing each time, which for a complex application could be difficult unless you only measure initial page load (which might not be as useful as you are hoping). Without this control you would need a lot of results to make any average or other analysis of the metric meaningful.
For controlled tests run by yourself in dev (rather than a performance metric for your app in production) it could be useful, though.
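For the "lot of results" part, the aggregation is plain statistics; where the per-run numbers come from (e.g. a headless browser's heap figure) is up to the harness and is not shown here:

```javascript
// Summarize repeated memory measurements of the same page state.
// A single run is noisy (GC timing, caches); mean plus standard
// deviation shows whether a change is real or within the noise.
function summarize(samples) {
  const n = samples.length;
  const mean = samples.reduce((a, b) => a + b, 0) / n;
  const variance = samples.reduce((a, b) => a + (b - mean) ** 2, 0) / n;
  return { n, mean, stddev: Math.sqrt(variance) };
}

const runs = [92.1, 95.4, 93.8, 91.9, 94.6]; // MB, hypothetical measurements
const stats = summarize(runs);
```

A before/after difference smaller than a couple of standard deviations is probably just measurement noise, not a regression.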
chrismorgan | 7 years ago:
Opened https://www.reddit.com/ in Private Browsing and measured it after some ten or fifteen seconds, 92.69MB (38MB of objects—of which 16MB is ArrayBuffer—33MB of scripts, 7MB of DOM nodes, 5MB of other, 2MB of strings).
https://old.reddit.com/, 13.70MB (1MB of scripts, 1MB of other, 2MB of objects, 6MB of DOM nodes, 782KB of strings).
Their desktop counterparts will more likely than not be the exact same shoddily thrown-together heap of disjointed third-party bloat, for the occasion bundled with the affront to professional developer pride and integrity known as 'Electron'.
icc97 | 7 years ago:
Gmail vintage (0.8 MiB) -> Gmail (158 MiB) -> Inbox (215 MiB)
[0]: https://support.google.com/mail/answer/15049?hl=en
[1]: https://mail.google.com/mail/u/0/h
fouc | 7 years ago:
It could be a score based on asset sizes, memory impact, cpu impact, etc.
acchow | 7 years ago:
I don't think React is the problem.
pjmlp | 7 years ago:
90% of the time I use it via IMAP, quite snappy indeed.
swebs | 7 years ago:
https://i.imgur.com/smpvriF.png
grawlinson | 7 years ago:
1. Was the cache emptied before each test? (A lot of these sites would share scripts on CDNs.)
2. What Firefox addons were enabled? (The README says that uBlock was active, so that definitely has an effect.)
LandR | 7 years ago:
It's the attitude of "if it works, who cares how much memory it uses".
Or when I bring up allocations in something like .NET, I get "don't worry about it, the .NET GC will sort it out eventually".
Modern developers....
dspillett | 7 years ago:
Certainly at the add-on level, as I presume this is how https://addons.mozilla.org/en-GB/firefox/addon/tab-memory-us... is implemented.