
piyush_soni | 1 year ago

> Keep data transforms and algorithmic calculations in functional style

What annoys me greatly though is kids coding various 'fancy' functional paradigms for data transformation without realizing their performance implications, still thinking they've actually done the 'smarter' thing by transforming the data multiple times, turning a simple loop into three or four loops. Example: Array.map().filter().map().reduce(). And when you talk to them about it, they've learned to respond with another fancy term: "that would be premature optimization". :|
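For concreteness, here is the shape being complained about next to its single-loop equivalent - a minimal TypeScript sketch with made-up data:

```typescript
// The chained style: each step walks an array and allocates a new one.
// Here that means three passes over the data and two temporary arrays.
const nums = [1, 2, 3, 4, 5, 6];

const chained = nums
  .map(n => n * 2)                  // pass 1, allocates [2, 4, 6, 8, 10, 12]
  .filter(n => n > 4)               // pass 2, allocates [6, 8, 10, 12]
  .reduce((acc, n) => acc + n, 0);  // pass 3, folds to 36

// The same computation as a single loop: one pass, no intermediate arrays.
let fused = 0;
for (const n of nums) {
  const doubled = n * 2;
  if (doubled > 4) fused += doubled;
}

console.log(chained, fused); // both 36
```

Both versions are O(n); the difference is constant factors - extra passes and allocations - which is exactly the distinction the rest of the thread argues about.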


eigenspace|1 year ago

This is just an unfortunate consequence of how map and filter are implemented via iterators.

If you work with transducers, the map-filter-map-reduce is still just one single loop.
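A rough sketch of the transducer idea in TypeScript (names and types are mine, not from any particular library): map and filter become functions that wrap the downstream reducer, so composing them never materializes intermediate arrays, and the whole chain runs in one loop.

```typescript
type Step = (acc: number, x: number) => number;

// A transducer is a function from reducer to reducer. `tmap` feeds the
// transformed value to the downstream step; `tfilter` only forwards
// values that pass the predicate.
const tmap = (f: (x: number) => number) => (step: Step): Step =>
  (acc, x) => step(acc, f(x));

const tfilter = (pred: (x: number) => boolean) => (step: Step): Step =>
  (acc, x) => (pred(x) ? step(acc, x) : acc);

// Compose map -> filter -> map around a summing reducer.
const sum: Step = (acc, x) => acc + x;
const xform = tmap(n => n * 2)(tfilter(n => n > 4)(tmap(n => n + 1)(sum)));

// One loop total: each element flows through all three stages in turn.
const input = [1, 2, 3, 4, 5];
let result = 0;
for (const n of input) result = xform(result, n);
// (1..5 doubled -> keep 6, 8, 10 -> add one -> 7 + 9 + 11 = 27)
```

The key property is that composition happens on the reducers, not on the collections, which is why the fused pipeline costs a single traversal regardless of how many stages are stacked.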

eru|1 year ago

A smart enough compiler can and will fuse your loops / iterators.

GHC does for example.

davedx|1 year ago

That kind of thing really depends on the language. Some of the stronger functional languages like Haskell have lazy evaluation, so that operation won't be as bad as it looks. But then you really need to fully understand the tradeoffs of lazy evaluation too.

DeathArrow|1 year ago

I don't know how things are implemented in other languages but in C# 9, these operations are optimized.

TeMPOraL|1 year ago

There are ways to keep functional transformations and immutable data structures efficient. Copy-on-write, unrolling expressions into loops, etc. Proper functional languages have them built into the runtime - your clean map-reduce chain will get translated to some gnarly, state-mutating imperative code during compilation. In non-FP or mixed-paradigm languages, where functional building blocks are just regular library functions (standard or otherwise), map-reduce is exactly what it says on the tin - two loops and a lot of copying; you want it fast, you have to mostly optimize it yourself.
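One manual version of "optimize it yourself" in a mixed-paradigm language is to make the stages lazy, so values are pulled one at a time through the pipeline instead of each stage allocating a full array. A sketch using TypeScript generators (helper names are hypothetical):

```typescript
// Lazy stages: nothing runs until something iterates the generator,
// and no intermediate arrays are allocated between stages.
function* lazyMap<T, U>(src: Iterable<T>, f: (x: T) => U): Generator<U> {
  for (const x of src) yield f(x);
}

function* lazyFilter<T>(src: Iterable<T>, pred: (x: T) => boolean): Generator<T> {
  for (const x of src) if (pred(x)) yield x;
}

const data = [1, 2, 3, 4, 5];

// Each value flows through map -> filter one at a time; the reduce is
// just a plain accumulating loop at the end.
let total = 0;
for (const n of lazyFilter(lazyMap(data, n => n * n), n => n % 2 === 1)) {
  total += n;
}
// squares: 1, 4, 9, 16, 25 -> odd ones: 1, 9, 25 -> total = 35
```

This is roughly what the "smart" runtimes do for you; in languages where map and filter are plain library functions over arrays, you have to reach for iterators (or write the loop) yourself.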

In other words, you need to know which things in your language are considered to be language/compiler/runtime primitives, and which are just regular code.

HdS84|1 year ago

Most languages don't have these facilities at all - so you need to be really careful about what you are doing. This works "fine" with test data, because your test data is usually a few hundred items max. A few years back, people at our firm built all the data filtering into the frontend, to keep the "backend clean". That worked fine in testing. In production with 100k rows? Not so much.

Even in C# it depends on the LINQ provider - if you are talking to a DB, your queries should be optimized. LINQ to Objects doesn't do that, and repeated scanning can kill your performance, e.g. repeated filtering on large lists.
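The repeated-scanning pitfall isn't specific to LINQ; here is the same shape in TypeScript terms (the data and names are invented for illustration): filtering the full list once per key is O(keys × items), while grouping once up front is a single pass.

```typescript
type Order = { customerId: number; total: number };

const orders: Order[] = [
  { customerId: 1, total: 10 },
  { customerId: 2, total: 20 },
  { customerId: 1, total: 5 },
];
const customerIds = [1, 2];

// Slow shape: one full scan of `orders` per customer id.
const slowTotals = customerIds.map(id =>
  orders
    .filter(o => o.customerId === id)
    .reduce((s, o) => s + o.total, 0));

// Fast shape: group in one pass, then O(1) lookups per customer.
const byCustomer = new Map<number, number>();
for (const o of orders) {
  byCustomer.set(o.customerId, (byCustomer.get(o.customerId) ?? 0) + o.total);
}
const fastTotals = customerIds.map(id => byCustomer.get(id) ?? 0);
// both are [15, 20]
```

With a few hundred test rows the slow shape looks fine; with 100k rows and many keys, the quadratic scanning is what kills it.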

high_na_euv|1 year ago

Are they? LINQ is usually slower than for loop

Gunax|1 year ago

Aren't those all linear operations?

piyush_soni|1 year ago

Yes, wrote quickly without thinking. Even if it doesn't change the complexity, it's still three or four times the operations.

kosmozaut|1 year ago

Why would your example be O(n³)?

piyush_soni|1 year ago

Oh yes, sorry, I meant to write 3 * O(n), which, though it doesn't change the order, is still three times the operations. The example I was remembering was doing filters 'inside' maps.

bluGill|1 year ago

That is premature pessimization. They have no idea what premature optimization is, as nobody has done that kind of optimization by hand since 1990 or so. Premature optimization is about manually unrolling loops or doing inline assembly - things any modern compiler can do for you automatically.

sgarland|1 year ago

> Premature optimization is about manually unrolling loops or doing inline assembly - things any modern compiler can do for you automatically.

If compilers consistently did this, projects like ffmpeg wouldn’t need to sprinkle assembly into their code base. And yet.