peterthehacker|4 years ago
This is only true if immutability is enforced. In JS you see map used to mutate variables outside the scope of the map callback all the time.
const o = {}
const d = [1, 2, 3, 4]
// map's return value is discarded; it's called purely for the side effect
d.map(i => o[i] = i**2)
Which is equivalent to this Python:
o = {}
d = [1, 2, 3, 4]
for i in d:
    o[i] = i**2
The cognitive load is the same in both. The strength of pure FP languages comes from enforced immutability, but that constraint often adds cognitive load in other ways.
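For comparison, the same result without mutating anything outside the expression — a minimal sketch in Python, which is what enforced immutability pushes you toward:

```python
d = [1, 2, 3, 4]

# Build the mapping in a single expression instead of
# assigning into an outer dict from inside a loop or callback.
o = {i: i**2 for i in d}
```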
dnautics|4 years ago
In my experience (and others'), those constraints only reduce cognitive load; they can increase actual performance cost, and can make certain algorithms "basically impossible", but you're also never actually writing those algorithms. When was the last time you ACTUALLY used Dijkstra's algorithm or A*? Come on, most of us are writing SAASes, APIs/backends and basic frontends here (yes, the rest of you do exist), and even for shitty O(n^2) algos, your n is probably in the 10-20 range. Your bad algorithm will not take down the server.
peterthehacker|4 years ago
> Come on, most of us are writing SAASes, APIs/backends and basic frontends here (yes, the rest of you do exist), and even for shitty O(n^2) algos, your n is probably in the 10-20 range. Your bad algorithm will not take down the server.
This isn’t related to the earlier point, but I’ll bite. Assuming “Your bad algorithm will not take down the server” is a recipe for bad engineering.
For example, we had a bulk action (1-500 records) in our API where the first implementation pulled all the data into memory and processed the data in the request. This ended up being disastrous in prod. It took down our server many times and was tricky to track down because the process would be killed when it maxed out memory.
The solution wasn’t to switch languages or anything. It was just to move the operation to our async worker queue and stream through chunks of data to avoid running out of memory. It caused a lot of headaches for devops that never should have happened.
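The fix described above — walking the records in chunks from a worker instead of loading everything into memory in the request handler — might look roughly like this (the function names and chunk size are hypothetical, not from the actual codebase):

```python
def chunks(records, size=50):
    """Yield successive fixed-size slices so only one
    chunk needs to be resident in memory at a time."""
    for start in range(0, len(records), size):
        yield records[start:start + size]

def process_bulk_action(record_ids, handle_record):
    # Hypothetical async-worker entry point: instead of pulling
    # all 1-500 records into memory in the request, the worker
    # streams through them chunk by chunk.
    processed = 0
    for batch in chunks(record_ids, size=50):
        for record in batch:
            handle_record(record)
        processed += len(batch)
    return processed
```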
While you’re right that there are many cases where n is not large, engineers must consider how large n can be or explicitly restrict n before pushing a bad algo to prod.
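Explicitly restricting n can be as simple as a guard at the API boundary — a hypothetical sketch, with the 500-record cap taken from the bulk-action example:

```python
MAX_BULK_RECORDS = 500  # hypothetical cap matching the 1-500 bulk action

def validate_bulk_size(record_ids):
    # Reject oversized requests up front, instead of discovering
    # the limit via an out-of-memory kill in prod.
    if len(record_ids) > MAX_BULK_RECORDS:
        raise ValueError(f"bulk action limited to {MAX_BULK_RECORDS} records")
    return record_ids
```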