This is really an overly simplistic way to code. I urge people to think more deeply up front about performance.
Know your performance goals going in and code accordingly. If you require 100 microsecond average latency and you coded it in Node.js, step 3 will be a rewrite.
For every single line of code I write, I can tell you my performance goals. If it's a simple CRUD screen used by a person, the goal may be "meh, document.ready fires within 1 second when viewed over 100ms of browser lag". Backend trading code would have different goals.
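One lightweight way to make a goal like that explicit is to encode it as a test assertion next to the code it covers. A minimal sketch, assuming a Python codebase; the handler and the 100 ms budget are placeholders standing in for the "simple CRUD screen" case:

```python
import time

# Placeholder for the real request handler under test.
def handle_crud_request():
    time.sleep(0.005)  # pretend to do some work
    return {"status": "ok"}

def test_crud_latency_budget():
    """Fail loudly if the handler blows its stated latency budget."""
    budget = 0.100  # 100 ms: the goal chosen for this endpoint
    start = time.perf_counter()
    response = handle_crud_request()
    elapsed = time.perf_counter() - start
    assert response["status"] == "ok"
    assert elapsed <= budget, f"{elapsed * 1000:.1f} ms exceeds {budget * 1000:.0f} ms budget"

test_crud_latency_budget()
```

The point is not the numbers but that the goal is written down where the code lives, so a regression past the budget fails a test instead of going unnoticed.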
Exactly. Completely ignoring performance considerations until after you've got everything working potentially means a substantial, or even complete, rewrite of much of the code.
My point was that 80% of the software an engineer writes will not require optimization, and based on the responses I've been getting, I should have said "rule of thumb" rather than "mantra". My intention was to warn readers who might otherwise think "oh, I've got to do all of this for every piece of software I write", which would undoubtedly lead to overly complex code where a simple solution would have worked just as well.
"Know your performance goals going in and code accordingly."
I think this is the key sentence here, and it's worth repeating: know your performance goals before your fingers touch the keyboard.
How do you know the speed expectations? Do you always talk concrete numbers with the stakeholders before each change you make, or do you just apply reasonable rules of thumb of your own?
What you'll often find however, is that in the last step ("make it fast") you have limited room to maneuver, because of the stuff you did in the first 2 steps. If you really need high performance, you need to design for high performance, not just leave it as an afterthought.
Usually it's almost impossible to predict where your design flaws will be until you actually use the thing in production, because you make a lot of assumptions and some of them will inevitably be false. So "make it fast" is mostly about changing the design.
I've seen more performance problems caused by people not having clean code than I ever have by people not thinking about performance from the get-go.
I've also seen plenty of performance issues ironically caused by performance hacks wedged in early on.
Sure; but a working version is still really really useful to compare against (e.g. build up a suite of test cases) even if you have to redesign large parts for performance later.
My personal approach is to do some order-of-magnitude performance testing as early as possible, to validate that the approach chosen to solve a particular problem is at least tenable.
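That early order-of-magnitude check can be as crude as a stopwatch around a representative workload. A minimal sketch, assuming Python; the workload and budget are placeholders, and this is deliberately not a proper benchmark — it only catches approaches that are off by an order of magnitude:

```python
import time

def order_of_magnitude_check(workload, n_items, budget_seconds):
    """Run a representative workload once and compare wall-clock time
    against a rough budget. Catches only gross mismatches, by design."""
    start = time.perf_counter()
    workload(n_items)
    elapsed = time.perf_counter() - start
    return elapsed, elapsed <= budget_seconds

# Placeholder workload standing in for the candidate approach.
def candidate(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

elapsed, ok = order_of_magnitude_check(candidate, 1_000_000, budget_seconds=1.0)
print(f"{elapsed:.3f}s, within budget: {ok}")
```

If the check fails by 50x, you've learned cheaply that the approach is untenable; if it passes with headroom, you can defer real profiling until later.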
If you ignore performance completely until late in the project, you can paint yourself into a corner. This includes cases like knowing that performance is 50x slower than will be acceptable throughout development, but saying "we can add X later for an easy performance win". If you don't actually test that X gets you within reach of your performance target at an early stage, you can end up with a fully built system that is unusable.
Performance considerations belong under "Make it Work", though that is probably obvious. The definition of "it works" (or eventually "it's correct") should include certain latency or throughput requirements, which sometimes can't be left to the "Make it Fast" phase.
On the one hand, I understand that most programs don't need to be especially fast. On the other hand, that leads to such a waste of the users' time. Where it really matters, we usually see some kind of rewrite, or a new program designed to be fast, and it can take a significant slice of the pie. At the same time there is also a case for ease of use: ease of use can make even slow programs not only feel fast, but also shorten the time from deciding to install/run a program to achieving the user's goal.
There is no silver bullet, but there are a few (or many?) rules of thumb ;)
As with any engineering problem, there is usually a trade-off involved. Premature optimization is bad but software does need to be fast enough to start with and flexible enough to be improved later.
You don't want to code yourself into a corner by accident. It's fine to knowingly take on technical debt, as long as you have a plan to fix it in the future, even if that plan is never needed.
I always try to take a pragmatic approach, as things are usually not as binary as these simple rules of thumb assume. The real world is typically very nuanced, which is why engineering can sometimes seem like more of an art than a science, and why experience is so valuable.
Shameless plug - I hope this attitude comes across in my book that focuses on web application performance issues: "ASP.NET Core 1.0 High Performance" (https://unop.uk/book/).
Yes and no; we all know the Knuth quote etc, but there are a lot of design decisions you have to make up front (language, database, server, framework, etc.) that you can't change later, but which will set the floor and the ceiling for what your performance looks like.
For instance, if you're working in a resource-constrained environment, garbage collected vs. not garbage collected is a big decision. Or if you're working on a web app, how you lay out your database tables or their NoSQL equivalents is going to have a huge effect, and is much harder to change later.
There's a huge difference between premature optimization and making decisions that will have performance consequences down the road. If you wait till the end of a project/release cycle to think about performance, the amount you can do about it will usually be disappointing.
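As a toy illustration of the schema point, here is a sketch using Python's built-in sqlite3 module: whether a column is indexed changes the query plan from a full table scan to an index search, and that kind of decision is far cheaper to make up front than after the data model has ossified. The table and column names are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)

query = "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = ?"

# Without an index on customer_id, the lookup is a full table scan.
plan_before = conn.execute(query, (42,)).fetchall()
print(plan_before[0][3])  # the detail column mentions a SCAN

# After adding the index, the access path becomes an index search.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
plan_after = conn.execute(query, (42,)).fetchall()
print(plan_after[0][3])  # the detail column mentions SEARCH ... USING INDEX
```

A scan vs. a search is the difference between O(n) and O(log n) per lookup, which is exactly the kind of floor-and-ceiling consequence the comment above is describing.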
Edge cases. When something "works", it handles your customers' requirements in the normal case, but may misbehave in edge cases. When it is "right", it won't.
If this is your mantra, you aren't really working on code that requires performance. If you were, "make it fast" and "make it right" would be the same thing.
brianwawok|9 years ago
BatFastard|9 years ago
If the engineer knows little about the problem space, then any type of optimization is not warranted until a working version is achieved.
The biggest secret to writing software is "not painting yourself in the corner".
andrepd|9 years ago
critium|9 years ago
karmelapple|9 years ago
vinceguidry|9 years ago
That's part of "making it work." If you're writing backend trading code, then performance is among the first considerations.
svdree|9 years ago
zzzcpan|9 years ago
critium|9 years ago
crdoconnor|9 years ago
barrkel|9 years ago
gabemart|9 years ago
hawski|9 years ago
jsingleton|9 years ago
overgard|9 years ago
losvedir|9 years ago
sgift|9 years ago
arethuza|9 years ago
jnordwick|9 years ago
oldmanjay|9 years ago