> We have computers which are ridiculously fast, but we tend to write code which is freaking slow. It's sad that most programs could be 100x faster.
I feel this sort of comment misses the whole point of choosing code that is patently slower than the alternatives.
The main reason is that performance is a constraint, not a goal: once it is good enough, there is nothing to be gained by spending time chasing further performance gains.
Meanwhile, the main resource in software development is person-hours. The faster you write software (add features, fix bugs, etc.), the cheaper it is. Thus, software projects benefit most from adopting technologies that favor turnaround time: higher-level languages, generic full-featured frameworks, and code reuse. These are slow, bloated, and unoptimized, and they are used without optimization in mind. But they work, and they work acceptably.
Your client and project manager do not care if you go with an O(n^2) implementation that you can whip out right now by reusing a package and that has acceptable performance, even if there is an O(n) alternative that would take you a few weeks to implement and force you to test, debug, and maintain a lower-level implementation.
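To make that tradeoff concrete, here is a toy illustration of my own (the functions and numbers are not from the comment above): detecting duplicates with a nested loop is O(n^2) but can be written in a minute and is obviously correct, while the O(n) version needs hashable items and slightly more thought.

```python
def has_duplicates_quadratic(items):
    """The 'whip it out right now' version: O(n^2), obviously correct."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False


def has_duplicates_linear(items):
    """The 'optimal' version: O(n), but requires hashable items."""
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False
```

For small inputs the quadratic version is indistinguishable in practice, which is exactly the point: if n stays small, the simpler code wins on every axis that the client actually pays for.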
Performance is meaningless once it's good enough. It matters nothing if your browser wastes 1 GB of RAM when it could use only 100 MB, because practically everyone already has 8 GB to begin with, and it would be foolish to waste resources prioritizing that if no one is willing to pay for the improvement. It matters nothing if your server has relatively low throughput because it wastes tens of milliseconds doing nothing in each response, if all your servers barely break 50% utilization. It matters nothing if your frontend wastes tens of milliseconds rendering a page, if end users don't even notice the delay. If performance is good enough, work is focused on where it matters.
We know it's possible to get Formula 1-level performance with Formula 1-level engineering and maintenance, but the world runs on Volkswagen hatchbacks.
I wonder how much of the performance comes from having the code written by just one person. At work we have a reasonably large (16 MLOC) codebase. At this point I don't think it's possible for one person to do everything, especially while adding new features. So what we do is create relatively clean interfaces and have people code mostly within those interfaces. But every time we go through one of those interfaces, there's a performance hit.
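As a toy sketch of what such an interface boundary can cost (the class and function names here are hypothetical, not from the codebase described above): every element that crosses the boundary pays for a dynamic dispatch that a direct implementation avoids.

```python
from abc import ABC, abstractmethod


class Accumulator(ABC):
    """A clean interface that other teams code against."""

    @abstractmethod
    def add(self, x: int) -> None: ...


class ListAccumulator(Accumulator):
    def __init__(self):
        self.total = 0

    def add(self, x: int) -> None:
        self.total += x


def sum_via_interface(acc: Accumulator, values):
    # Every element crosses the interface: one dynamic dispatch per call.
    for v in values:
        acc.add(v)
    return acc.total


def sum_direct(values):
    # Same work with the abstraction boundary removed.
    total = 0
    for v in values:
        total += v
    return total
```

Both produce identical results; the difference only shows up as per-call overhead in hot loops, which is why it compounds in a large codebase with many such boundaries.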
This can be a good interview question for Google to screw people over.
I was asked to write code to generate a maze (100x100) with the constraint that it should be neither too easy nor too hard. This was for an L6 position.
That's very interesting. I am curious to hear what your answer was :D
Here are my thoughts on what a solution would need:
1. At least one path from source to target.
2. Some quantification of hardness. Number of steps in the optimal path? Number of turns in the optimal path? Number of paths? Some weighted combination of all three?
Some preliminaries:
Finding the optimal path and finding the number of paths can both be done in O(N^2) using DP. Finding the number of turns is then trivial.
Now the algorithms for 1:
First algorithm for 1: Naive backtracking, i.e., randomly generate paths until no more can be added. Evaluate each maze using the heuristic and output the best one. Run for some fixed time t. This is exponential time.
Second algorithm for 1: Generate a path as follows: select K points on the grid, with the start and end being the first and last; then find optimal paths between them in sequence (notice that this guarantees an acyclic path). Next, generate a fresh path on an empty grid and overlay it on the previous one. Keep repeating for some M paths. Now fill in the non-path cells. Pick the best grid among these M steps. This is O(M*N^2).
Last algorithm for 1: Throw the grid into an ILP solver and optimize the heuristic (exponential time).
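The O(N^2) preliminaries above can be sketched as follows. Note my assumptions: BFS finds the shortest path in any maze in time linear in the number of cells, but counting *all* simple paths in a general grid is believed intractable, so the path-count DP below assumes movement is restricted to right/down (the function names and grid encoding are mine):

```python
from collections import deque


def shortest_path_length(grid):
    """BFS shortest path from (0,0) to (n-1,m-1); O(cells) = O(N^2) on an NxN grid.
    grid[r][c] == 0 means open, 1 means wall. Returns -1 if unreachable."""
    n, m = len(grid), len(grid[0])
    if grid[0][0] or grid[n - 1][m - 1]:
        return -1
    dist = [[-1] * m for _ in range(n)]
    dist[0][0] = 0
    q = deque([(0, 0)])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < n and 0 <= nc < m and not grid[nr][nc] and dist[nr][nc] < 0:
                dist[nr][nc] = dist[r][c] + 1
                q.append((nr, nc))
    return dist[n - 1][m - 1]


def count_monotone_paths(grid):
    """DP count of right/down-only paths from (0,0) to (n-1,m-1); O(N^2)."""
    n, m = len(grid), len(grid[0])
    ways = [[0] * m for _ in range(n)]
    ways[0][0] = 0 if grid[0][0] else 1
    for r in range(n):
        for c in range(m):
            if grid[r][c] or (r == 0 and c == 0):
                continue  # walls contribute 0 ways; the start is already seeded
            ways[r][c] = (ways[r - 1][c] if r else 0) + (ways[r][c - 1] if c else 0)
    return ways[n - 1][m - 1]
```

A hardness heuristic in the spirit of point 2 could then be a weighted combination of `shortest_path_length` and `count_monotone_paths`, evaluated for each candidate maze.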
I guess he filmed through a pane of glass, and then flipped the video? It's trippy that he's physically writing backwards, but the text we see isn't reversed.
That's exactly how I managed to enhance the performance of my site. https://vashishthakapoor.com/
Keeping all the features while maintaining performance is a big challenge. The video does a great job of explaining how to fix performance issues.
cryo | 4 years ago
nuerow | 4 years ago
Zababa | 4 years ago
kjksf | 4 years ago
The fast renderer was written by one person in a few days. It's clearly a "few thousand LOC" codebase, not "millions of LOC".
The 100x-slower renderer was also written by one person (or at most a few).
It really is the difference between programmers.
It's also a good example that 10x programmers do exist, at least in certain situations.
The Microsoft programmers are not dumb. They certainly are not cheap.
And yet here we have one guy who can write code that is 10-100x faster, in just a few days.
anandoza | 4 years ago
Attummm | 4 years ago
tigerwash | 4 years ago
At least adding some major timestamps in the description would be great.
devnull3 | 4 years ago
omegalulw | 4 years ago
jgwil2 | 4 years ago
isaacimagine | 4 years ago
superjan | 4 years ago
archagon | 4 years ago
generichuman | 4 years ago
[0] https://www.lightboard.info/
gary_0 | 4 years ago
vashishthak | 4 years ago
VHRanger | 4 years ago
unknown | 4 years ago
[deleted]
aetherspawn | 4 years ago
He may have deleted 1,000 lines of code, but he'll need a 2-hour inline video (with a 1,000-line transcript) to explain how it works.
dralley | 4 years ago
I do think it would have been useful to demonstrate that before going straight to hard mode.
werner1886 | 4 years ago