item 29092037

MarblePillar | 4 years ago

Fundamentally, it comes down to the "human REPL". To a surprising and perhaps unsettling degree, no one really understands what's going on in the computer; we just have to infer it by making changes and observing. So the human being is sitting at his computer making changes and refreshing to see what's changed, and he can only see lag and performance issues that are apparent on his machine. Everything else (not UI) is invisible to him. If the computer did computation instantly, there would be no real way for him to know (much less incentive to know) the performance or purely "mechanical" difference between, say, subtracting 1 by counting down 1 and subtracting 1 by counting up from 0 until the "next" number is equal to the "current" number. Weird concept, huh?
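To make the example concrete, here's a minimal sketch of the two mechanically different ways to subtract 1 described above (function names are hypothetical; the search-based version is essentially the Peano-style predecessor, and assumes n >= 1):

```javascript
// Direct: just count down by one.
function decrementDirect(n) {
  return n - 1;
}

// Indirect: count up from 0 until the candidate's successor equals
// the current number; that candidate is the predecessor.
// Assumes n >= 1, otherwise the loop never terminates.
function decrementBySearch(n) {
  let candidate = 0;
  while (candidate + 1 !== n) {
    candidate += 1;
  }
  return candidate;
}

console.log(decrementDirect(5));   // 4
console.log(decrementBySearch(5)); // 4
```

Both return the same answer, but the second does n - 1 extra iterations of work — invisible to the user unless n gets large enough to cause lag.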

P.S. I've just replied to your very excellent post from 6 days ago.

noduerme | 4 years ago

>> (much less incentive to know)

Just riffing here, but I think a lot of the time we optimize simply because it feels like doing good work, polishing things up. If it would make the code cleaner and faster, and if we have time, we go back and improve it. Sometimes we even test that instinct by running some piece of code millions of times, even if, in practice, there is virtually zero speed difference. And I think we probably do that because we want to know our tools better and more closely than just having the divorced sense that the computer is doing something we don't completely understand under the hood. Shaving off the milliseconds of difference between .map and .forEach and for(let...), or reducing what you need to some arcane series of byte-array operations, is what gives us the sense of control — that we're not just functionaries in a big REPL.

I mean, I do take the time to optimize when I'm approaching a tricky problem, as a matter of principle, not as code golf. But the underlying principle holds: as you said, when lag for a given program approaches zero, so does the incentive to improve it. If I were still writing programs in BASIC on a TRS-80, you can be sure I'd optimize a whole lot more. So when you stick hundreds of such programs together into a framework, each with its own authors, the collection or platform will tend toward maximizing the available hardware until the very last program, the new one, the one you're trying to write, has to find a way to optimize. And then it will optimize just enough.
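The "run it millions of times" experiment described above can be sketched roughly like this (timeIt is a hypothetical helper; on a modern engine the three variants often land within noise of each other, and actual timings depend heavily on the JIT):

```javascript
// Build a million-element array to loop over.
const data = Array.from({ length: 1000000 }, (_, i) => i);

// Crude wall-clock timer; a real benchmark would warm up the JIT
// and take many samples, but this is the quick-and-dirty version.
function timeIt(label, fn) {
  const start = Date.now();
  const result = fn();
  console.log(`${label}: ${Date.now() - start} ms`);
  return result;
}

const viaMap = timeIt('map', () => data.map(x => x * 2));

const viaForEach = timeIt('forEach', () => {
  const out = new Array(data.length);
  data.forEach((x, i) => { out[i] = x * 2; });
  return out;
});

const viaFor = timeIt('for(let...)', () => {
  const out = new Array(data.length);
  for (let i = 0; i < data.length; i++) out[i] = data[i] * 2;
  return out;
});
```

All three produce identical output; whatever millisecond gap the timer shows is precisely the kind of difference a user refreshing a page would never perceive.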

I guess you could phrase this as: Software tends toward filling or exceeding hardware capacity over time.