I was recently talking at a conference and got into an argument with another speaker (not publicly) because he had a technique to improve API response time. Thing is, said technique would delay the product's time to market and make the code needlessly complicated to maintain. I pointed out that it is probably better for the bottom line of most companies to go the way that is slightly slower but easier to maintain and faster to get to market.
At which point he accused me of being another product shill (not a real software engineer) and shut down the debate.
Thing is, I have a computer science degree and take software engineering very seriously. I just also understand that companies have a bottom line and sometimes good is good enough.
So I ask, this "world's fastest web site"... how much did it cost to develop in time and money? How long is the return on investment? And is it going to be more successful because it is faster? Is it maintainable?
I'm guessing the answers are: too much, never, no and no.
With that said, I fully appreciate making things fast as a challenge. If his goal was to challenge himself, like some sort of game where beating the benchmarks is the only way to win, then kudos. I love doing stuff just to show I can, as much as the next person.
Of course the impact of latency is going to depend on your particular circumstances but there are certainly circumstances where it can make a big impact.
An acceptable 2s load time on WiFi might turn into 2 minutes on EDGE.
You might say, 95% of our customers have 3G, to which I'll reply, 100% of your customers sometimes don't have 3G.
And when your page takes a minute to load, it doesn't matter what your time to market was, because no one will look at it.
When your news website is sluggish every time I'm on the train, I'll stop reading it, and do something else, like browse hacker news, which is always fast.
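The 2s-to-minutes claim above is mostly bandwidth arithmetic. A quick sketch; the link speeds here are illustrative assumptions, not measurements:

```javascript
// Rough transfer time for a page of `kilobytes` over a link of `kbps`
// (kilobits per second). This ignores latency, TCP slow start, and packet
// loss, all of which make slow links even worse in practice.
function transferSeconds(kilobytes, kbps) {
  return (kilobytes * 8) / kbps;
}

// Illustrative link speeds (assumptions):
// WiFi ~20 Mbit/s, 3G ~1 Mbit/s, EDGE ~100 kbit/s.
const page = 2500; // a fairly typical 2.5 MB page

console.log(transferSeconds(page, 20000).toFixed(1)); // ~1 s on WiFi
console.log(transferSeconds(page, 1000).toFixed(0));  // ~20 s on 3G
console.log(transferSeconds(page, 100).toFixed(0));   // ~200 s on EDGE
```

Real-world numbers vary widely, but the ratio between the fast and slow cases is the point: the same page is two orders of magnitude slower on the worst links.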
Let's imagine for a moment that an engineer is designing a bridge, or an architect a new building. The companies that pay them are in a hurry and want to cut costs as much as anyone else.
Do you think it would be ethical to build a less safe bridge or building just for the sake of getting it out quicker and/or cheaper?
So this is how I see it with software engineering. Of all the engineering branches, we take our job the least seriously and are not good at defending our decisions or taking the required time to build our software the way it should be built. We just assume that our customers know better, that they have better reasons to get to market, and that there is nothing we, as software engineers, can do about it.
So in a way, the guy you talked to was kind of right, because it is your responsibility to defend the need for fast, efficient and maintainable software. It is the customer's responsibility to take care of the product and plan accordingly.
> And is it going to be more successful because it is faster?
There's a lot of research out there about the link between page performance and user retention rate. And this makes sense: If newegg is taking forever to browse, I'll switch to amazon and newegg loses out on a decent chunk of change.
So, up to a point, yes, they are going to be more successful because it is faster. 200ms on my broadband-connected desktop isn't that much, but Google is able to measure its impact. And that might be a second or two on my cellular-connected phone.
> Is it maintainable?
A lot of optimizations I've seen involve simplifying stuff. Fast and maintainable don't have to be at odds. I wouldn't care to guess for the whole, but, for example, do you really think using system fonts instead of embedding your own is more complicated, harder to maintain, and more work? I doubt it, and that's one of the optimizations suggested.
Now, yes, with optimizing, there is a break even point where it's no longer worth it to push further, but it's also not necessarily obvious where that is if you're just taking it a task at a time. Keep in mind: some of this is research for effective and ineffective techniques for optimizing other websites, and evaluating which ones are maintainable (or not) for future projects. To know what to bother with and what not to bother with when implementing the rest of the codebase. If you're just worried about the next JIRA milestone, you'll be sacrificing long term gains for short term metrics.
Is it worth micro-optimizing everything before launch? Probably not.
Is it worth testing out what techniques and technologies perform well enough before launch? I've been part of the mad scramble to do some major overhauls to optimize things that were unshippably slow before launch. Building it out right the first time would've saved us a lot of effort. I'd say "probably yes."
You would be right if it took a lot of resources to make things fast. But most of the time it doesn't.
I made a lot of sites fast(er) in maybe 4 hours' time. Yes, slow frameworks are slow, so you cannot change that in 4 hours. But most frameworks aren't that slow.
My work involved: rewriting a slow query used on every page, changing PNGs to JPEG and reducing the resolution, moving an event that is fired on every DOM redraw, and so on.
And every single time I was just fixing someone's lazy programming.
Of course I agree that there should be a limit to optimizations, but most of the time simple fixes will shave off seconds.
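Fixes like the slow per-page query mentioned above are often just a cache away. A minimal sketch of memoizing such a query; `runQuery` is a hypothetical stand-in for whatever slow database call the page was making:

```javascript
// Cache the result of an expensive per-page query so it runs once per key
// instead of on every request.
function memoize(fn) {
  const cache = new Map();
  return (key) => {
    if (!cache.has(key)) cache.set(key, fn(key));
    return cache.get(key);
  };
}

let calls = 0;
// Hypothetical slow query; in reality this would hit the database.
const runQuery = (id) => { calls += 1; return `row-${id}`; };
const cachedQuery = memoize(runQuery);

cachedQuery(42);
cachedQuery(42); // second hit comes from the cache
console.log(calls); // 1
```

This kind of fix is cheap precisely because it changes no page logic; the usual caveat is deciding when cached entries must be invalidated.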
An engineer's job is to solve a problem within real world restrictions. Cost, implementation time, maintainability are all parts of the equation an engineer has to solve.
Your approach was correct. Ideally you would take into account how response time affects a site's metrics and try to balance between all constraints.
Because it's all about making compromises to manage an app and achieve its goals. You are right that time to market matters and that launching the product sooner should be the number one priority. But of all the factors that make your product worthwhile, performance is a pretty darn good one.
There are several websites on the internet today that have the potential to become great, if only they paid some heed to performance. Take the Upwork freelancing site, for example: its performance was really solid when it was oDesk, its predecessor. It's basically because of the earlier oDesk goodwill that it still has a sizable userbase today. Sometime in 2013, along the lines of your thinking, some management guru must have cut corners in the development of the repolished Upwork site, and the result was an absolute performance nightmare! As a freelancer, Upwork is a third or fourth priority for me now, whereas the former oDesk was actually number one.
Another example of nightmarish performance is Quora. It has a fantastic readership that supplies solid content to the site, and it's solid proof of how much really good content is valued in the online world: despite its lagging performance, people are willing to endure a site with good content. But that doesn't mean it's ideal. Quora still has a lot of potential; it could match or maybe even surpass Reddit and HN, or even Facebook and LinkedIn, if they paid heed to performance, but I don't see that happening soon!
I think performance is one of the reasons, if not the main reason why WhatsApp is the leading mobile chat application.
They can handle millions more messages per second than any of their competitors.
The 'build slower applications much faster' mantra has some value, except when everyone can build that application in a month and the market is full of clones.
(If you search for "lighthouse benchmark", "lighthouse speed test", or "lighthouse app" you get nothing. "lighthouse tool" and "lighthouse web" works.)
I think eventually we should start distinguishing between a website whose main purpose is to present some minimally interactive hypertext to the user, and an application that uses HTML as a GUI description language.
The former is trivial to make fast: just write it like it's 1999. You don't really need a database or Javascript, just make sure that your images are not huge and don't load too much stuff from other sites.
This is exactly what I do and why the average render time for my e-commerce site is less than 200ms. I use JS for minor DOM manipulations, everything else is static.
The secret ingredient is the Varnish cache server, which beats the pants off everything else and can fill a 10G pipe while still having plenty of capacity left on a single CPU core.
He lost me somewhere during #3. Simply too much storytelling around the facts for my liking. I don't want to know that somebody recently got a tattoo and stuff like that when I click on a link about making fast websites.
Counterpoint - I really enjoyed the style. It was entertainingly over-the-top and chatty. I'd probably have given up on a very dry "just the facts" article quicker, and I'm fairly sure I'll remember this one better because of the stylistic touches.
As an old binary file hacker I can't help but think that performance would also be improved by using a different format than JSON, one that doesn't require 75,000 lines to be parsed on load.
One drawback of formats like JSON and XML is that they require reaching the ending tag ("}" or "]" usually for JSON, and </root> for XML) before the file can be considered parsed. A properly designed binary format can be used like a database of sorts, with very efficient reads.
The point is that the JSON parser runs in the browser as native code, so it's rather difficult to match its performance with some other parser in JS, even if that one has an easier job to do.
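For what the binary-format argument looks like in practice, here is a toy sketch (an entirely hypothetical format, not anything the commenters shipped): fixed-width records make reading the i-th entry a single offset calculation, whereas JSON must be scanned to its closing bracket before any of it is usable.

```javascript
// A toy fixed-width binary format: each record is one 32-bit little-endian
// integer, so record i lives at byte offset i * RECORD_SIZE.
const RECORD_SIZE = 4;

function encode(values) {
  const buf = Buffer.alloc(values.length * RECORD_SIZE);
  values.forEach((v, i) => buf.writeUInt32LE(v, i * RECORD_SIZE));
  return buf;
}

function readRecord(buf, i) {
  // O(1) random access: no parsing of anything before record i.
  return buf.readUInt32LE(i * RECORD_SIZE);
}

const buf = encode([10, 20, 30, 40]);
console.log(readRecord(buf, 2)); // 30
```

Whether this beats the browser's native JSON parser end to end is exactly the trade-off debated above; the structural advantage only pays off when you don't need the whole payload at once.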
I have already fathered a child with Firebase and quite enjoyed myself
Our <scripts>, much like our feet, should be at the bottom of our <body>. Oh I just realised that’s why they call it a footer! OMG and header too! Oh holy crap, body as well! I’ve just blown my own mind.
Sometimes I think I’m too easily distracted. I was once told that my mind doesn’t just wander, it runs around screaming in its underpants.
Average page generation time from the database is 2-3ms & cached pages generate in 300 microseconds. Also, this includes GZip.
One day I'll write a blog post on it, but it's a Django app with a custom view controller. I use prepared statements and chunked encoding & materialized database views & all sorts of techniques I forgot that I need to write down. I also gzip data before storing in Redis, so now my Redis cache is automatically 10x bigger.
Impressive! You can do some more front-end stuff to improve the speed too, like adding a good CDN and optimising your images. For example, [1] is ~790KB, which is great for viewing on a 5k iMac but probably unnecessary when someone is viewing your website from a low-powered phone on a slow 3G connection. There are various tools like ImageOptim or Dexecure (which I work on) to solve this issue. See [1] and [2] for a comparison of how this single image can be compressed further.
I don't mean to detract from the great back-end time that you've managed to achieve, but I think it's important to point out that web performance is about _so_ much more than the back-end.
Using WebPagetest, you can test your page with real devices to get an idea of how it will load for users who are not on overpowered desktop devices hooked up to the Internet over a fibre connection. If you look at the "Filmstrip view" for futureclaw[0], you don't even see any text until ~4.7 seconds in, and the page doesn't look finished until well after 13 seconds because that image takes so long to come in. Hacker News, on the other hand, has content visible at 2.3 seconds[1].
So yes, back-end times are important. But it's a small slice of the overall load time. Just keep in mind that the proportion of people on mobile connections is actually going _up_ in the developed world, and the average speed of mobile connections is not improving. It's really important that we (web developers) recognise this and build sites that can be used by everyone, no matter the speed of their device or connection.
The backend number is nice, really, but the load time quoted by the tool isn't. My Jenkins loads almost as fast (and, according to the tool, faster than HN), and isn't even hosted in a DC. Some pages hosted on the same machine load in almost a quarter of that. Most of that is internet latency; the same pages requested from the same network take just a couple of ms. Even Jenkins only takes 65 ms for /.
It's only good if you have good network conditions. For me, sitting in a hotel in Europe, it takes 19 seconds to load. 49 requests per page load is just awfully many.
Also, the fastest website by far is blog.fefe.de, at 2 requests per page load. Also written in C.
This is a well written article. I only read it because he initially presented a link to his website, which ended up loading in 250ms. This begged the question "why did it load so fast?" which in turn begged the entire article. That presentation was genius, at some level. You can't not read the article.
Reading this article is like witnessing a race between a bunch of lions, only to see the winner unzip what appears to be a lion costume covering a cheetah. The website appears as if it has the overhead of a DB, other networks, etc., when it actually doesn't.
Instead the article reads as "a few tricks to make really fast websites that don't do anything". Of course, that's not entirely true, since the optimizations mentioned do apply to more involved websites, but with much lower efficacy. Anyways, funny article.
> I have a big fat i7 CPU that I do most of my work on, and a brand new Pixel XL: the fastest phone in the world. What do you think the phone performance will be? It’s only 10%. If I slow my i7 down by ten times, it is the same speed as the $1,400 phone in front of me.
That's the most surprising thing I learned from the article. But I'm still a little skeptical about this. One well-known CPU comparison site[0] gives the following scores:
- The fastest desktop system they rated[1], which happens to have an i7 CPU just as the author is using, is rated 9206.
- Google Pixel XL smartphone[2] gets a score of 8153.
In other words, the Pixel comes out as having 89% of the performance of the fastest desktop system.
I looked at some other metrics for comparing systems, and I'm not seeing how smartphones are only 10% as fast as desktops -- neither in average cases nor in extreme cases (fastest desktop vs fastest smartphone).
[0] PassMark Software Pty Ltd. Their PassMark rating is a composite of tests for CPU, disk, 2D & 3D graphics, and memory.
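The 89% figure above is just the ratio of the two composite scores quoted:

```javascript
// Ratio of the two composite benchmark scores quoted above.
const desktopScore = 9206; // fastest rated desktop system (i7)
const pixelScore = 8153;   // Google Pixel XL

const ratio = Math.round((pixelScore / desktopScore) * 100);
console.log(ratio); // 89
```

Note that this compares composite scores from two different benchmark suites, which is exactly the objection raised later in the thread.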
Take a look at this talk on why mobile devices are so much slower than desktop devices (they are indeed 10x slower in the traces I have looked at too): https://youtu.be/4bZvq3nodf4
That metric is likely multi-core performance. In contrast, since JS is (usually) single-threaded, the very weak single cores in a mobile processor don't do as well.
Slightly OT: I was once like the OP. I shaved every bit off my pages: almost no libs, CDN delivery, benchmarked and stress-tested with ab (Apache Bench) after every commit (!). This thing was fast like a beast but tricky to maintain. I did it for SEO reasons, alongside tons of other SEO techniques.
And you know what? My competitors still all ranked higher despite their clumsy, megabyte-heavy pages loading for more than ten seconds. I know that size and speed are not the only SEO signals, but they seem to be not as important as believed. Not sure, though, how AMP's importance will play out in future.
I still like his post. Some of his tips are good (eg, start mobile-first) and some depend heavily on the use case (eg, don't render HTML on the server). As others here in the thread say, does the ROI justify the extra mile? If yes, then go for it.
Looks like website performance optimization is becoming a high-valued and high-paid skill for the next 10 years at least.
10 years ago (and now, to a degree) it was database or C++ performance optimization. You could specialize in these to earn more than 85-90 percent of the crowd.
That's actually a good thought. In most companies you could probably immediately make the site 10% faster by eliminating competing analytics. I've personally seen sites with 4 different Google 'UA-XXXXX-Y' sections! The hard part is talking to the various business owners and getting them to agree to one central account.
Wow, it did load super fast on a slow phone over 3G; however, once it loaded, and a couple of seconds of content skimming later, interacting with the site was clumsy. In particular, touching a category element was unresponsive. In a second experiment I waited 10 seconds before interacting and it worked fine. I guess there is some UI-blocking JS scripting going on right after load.
This was very funny to read and has several good tips that I'll use, but the polyfilling tips need more testing: the site doesn't work at all on IE11 (on Windows 10 and 8.1).
There are two reasons why I stopped reading this article: it is not fast (it took 20s to load, compared to 3s for Hacker News; note that I'm on a crappy mobile network) and the site is non-responsive on mobile (I clicked things which looked like clickable stuff and nothing happened).
"Whenever I feel reluctant to throw out some work, I recall that life is pointless, and nothing we do even exists if the power goes out. There’s a handy hint for ya."
[+] [-] throwaway2016a|9 years ago|reply
[+] [-] Veratyr|9 years ago|reply
For example, Marissa Mayer claimed (though this was in 2006) that a 500ms delay directly caused a 20% drop in traffic and revenue: https://glinden.blogspot.com/2006/11/marissa-mayer-at-web-20...
Optimizely found that for The Telegraph, adding a 4s delay led to a 10% drop in pageviews: https://blog.optimizely.com/2016/07/13/how-does-page-load-ti...
100ms cost Amazon 1% of sales: http://blog.gigaspaces.com/amazon-found-every-100ms-of-laten...
Akamai has a number of claims: https://www.akamai.com/us/en/about/news/press/2009-press/aka...
[+] [-] jakobegger|9 years ago|reply
Because mobile.
[+] [-] iagooar|9 years ago|reply
[+] [-] MaulingMonkey|9 years ago|reply
https://research.googleblog.com/2009/06/speed-matters.html
[+] [-] pasta|9 years ago|reply
[+] [-] andmarios|9 years ago|reply
[+] [-] rms_returns|9 years ago|reply
[+] [-] _Codemonkeyism|9 years ago|reply
[+] [-] Shorel|9 years ago|reply
[+] [-] guitarbill|9 years ago|reply
[+] [-] adrianN|9 years ago|reply
The latter requires a lot more work.
[+] [-] RandomInteger4|9 years ago|reply
[+] [-] gareim|9 years ago|reply
[+] [-] olavgg|9 years ago|reply
[+] [-] Dunedan|9 years ago|reply
[+] [-] thenomad|9 years ago|reply
[+] [-] throwaway2016a|9 years ago|reply
[+] [-] kelvin0|9 years ago|reply
Haven't used it myself, but worked with people on a project which did, and boy does it fly compared to JSON!
[+] [-] dom0|9 years ago|reply
[+] [-] the_duke|9 years ago|reply
Pure gold. :D
[+] [-] mozumder|9 years ago|reply
Speed test: https://tools.pingdom.com/#!/FMRYc/http://www.futureclaw.com
I just checked and it's faster than Hacker News: http://tools.pingdom.com/fpt/bHrP9i/http://news.ycombinator....
[+] [-] ilarum|9 years ago|reply
[1] https://farm9.staticflickr.com/8380/29778359335_f59e0b4fcb_k... [2] https://bigdemosyncopt.dexecure.net/https://farm9.staticflic...
Note that [2] varies in size and format depending on your user agent and it becomes ~70KB when viewed from a chrome mobile device.
[+] [-] wldlyinaccurate|9 years ago|reply
[0] https://www.webpagetest.org/result/161224_TS_264/ [1] https://www.webpagetest.org/result/161224_H9_26Z/
[+] [-] dom0|9 years ago|reply
[+] [-] aibottle|9 years ago|reply
[+] [-] sazers|9 years ago|reply
[+] [-] girzel|9 years ago|reply
[+] [-] jayajay|9 years ago|reply
[+] [-] quizotic|9 years ago|reply
[+] [-] hueving|9 years ago|reply
[+] [-] aibottle|9 years ago|reply
[+] [-] computator|9 years ago|reply
[1] http://www.passmark.com/baselines/top.html
[2] http://www.androidbenchmark.net/phone.php?phone=Google+Pixel...
[+] [-] inian|9 years ago|reply
Also look at the speedometer benchmark (http://browserbench.org/Speedometer/) for a more realistic comparison of how slow your mobile device is compared to your desktop when running real world apps. Here are some numbers from the benchmark from some devices - http://benediktmeurer.de/2016/12/16/the-truth-about-traditio...
[+] [-] dom0|9 years ago|reply
Do you really think we're expending 20-30 times the power for a 12% compute speed-up?
Just to clarify, the two numbers you quote are from different benchmarks. You can't compare them, let alone say "oh, they're only different by 10 percent".
[+] [-] pitaj|9 years ago|reply
[+] [-] mrswag|9 years ago|reply
Power optimizations come at a performance cost.
[+] [-] alkonaut|9 years ago|reply
If a beefy desktop is 100, I'd guess the real-world (TodoMVC) benchmarks are 100/90/20 for desktop/iOS/Android, for an iPhone 7 and a recent Android.
So CPU power (sadly) doesn't translate as directly to browser performance on Android as it does on iOS.
[+] [-] greenspot|9 years ago|reply
[+] [-] ComodoHacker|9 years ago|reply
[+] [-] Corrado|9 years ago|reply
[+] [-] bikamonki|9 years ago|reply
[+] [-] Maarten88|9 years ago|reply
[+] [-] k__|9 years ago|reply
Go below 200kb at your entry points, so people see something fast.
Now they have something to do, and you can ship the rest of your app asynchronously in the background.
I think this is the most important thing, even more than build-time rendering or the like. The 80% yield with 20% effort.
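The "ship the rest asynchronously" idea can be sketched as lazy initialization: keep the entry bundle small and only build heavy features on first use. The module names and factories below are made up for illustration:

```javascript
// Register heavy features as factories; nothing runs until first use,
// so the entry-point code path stays small and fast.
const lazy = new Map();
let initialized = 0;

function register(name, factory) {
  let instance;
  lazy.set(name, () => {
    if (instance === undefined) { initialized += 1; instance = factory(); }
    return instance;
  });
}

register('comments', () => ({ render: () => 'comments widget' }));
register('charts', () => ({ render: () => 'charts widget' }));

console.log(initialized); // 0 -- nothing heavy has run yet
console.log(lazy.get('comments')().render()); // comments widget
console.log(initialized); // 1 -- only the feature actually used was built
```

In a real app the factories would be dynamic imports of separate bundles; the pattern is the same either way.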
[+] [-] edem|9 years ago|reply
[+] [-] RandomInteger4|9 years ago|reply
[+] [-] tomohawk|9 years ago|reply
Worth reading just for this gem.
[+] [-] ing33k|9 years ago|reply
Just uninstalled that and it loads within 10 seconds.