top | item 21553840

h_r|6 years ago

Can you explain why you think it makes O(n) blocks trivial to identify?

How is such identification made easier by manually writing out a `for` loop applying a function foo to each element rather than writing `map foo myCollection`?

kjksf|6 years ago

In every mainstream language I can think of `map foo myCollection` creates an intermediary map.

Memory allocation is so expensive that making that copy is often more expensive than calling `foo` on each element.

Sometimes making a copy is exactly what you need and there's no way around that cost (but hold that thought).

But I've also seen `sum map foo myCollection` so many times (especially in JavaScript).

Here you have a short, neat, but also extremely wasteful way of doing things. I see it so frequently that I assume many people are unaware of this cost (or completely disregard performance concerns).

If you were to write this imperatively, it would be obvious that you're making a copy and maybe you would stop and re-think your approach.

But there's more.

If you're paying attention to performance, an easy way to improve performance of `map foo myCollection` in e.g. Go is to pre-allocate the resulting array.

For large arrays, this avoids re-allocating the underlying memory over and over again, which is the best a generic `map` implementation can do anyway.

In imperative code those costs are more visible. When you have to type that code that does memory allocation, you suddenly realize that the equivalent of `map` is extremely wasteful.

h_r|6 years ago

I don't want to discuss this forever but I have a couple of comments:

Your points about efficiency are a separate topic entirely from the original claim that manually writing out an imperative solution makes it easier to see the algorithmic complexity. That was a surprising claim to me because, in my experience, if I understand what some HOF is doing, reading the code is even easier because there is less of it to wade through (and mental exhaustion doesn't make one style easier to read than the other).

> In every mainstream language I can think of `map foo myCollection` creates an intermediary map

You need to build up the final result, sure. Not an intermediary map, but whatever structure (or functor) you're mapping over. That is the whole point of immutable data structures. Also, when you're using persistent data structures, which all modern FP languages do, the cost of constructing the result can be far less than you expect, especially if the result is lazy and you need only some of the results. There is a cost to immutability, and if it's unbearable in some situation, fall back to in-place mutation; but the semantics of these two approaches are definitely not the same.

> But I've also seen `sum map foo myCollection` so many times

Yeah... that should be a fold (reduce, whatever). :)

LandR|6 years ago

>> In every mainstream language I can think of `map foo myCollection` creates an intermediary map.

Language support for transducers can fix this: you can compose functions like map/filter/reduce over a collection and only touch each item once.

Gaelan|6 years ago

Is Rust mainstream? If so, this code:

    numbers.iter().map(|n| { ... }).sum();
Compiles to a plain loop, with no allocation.

tfha|6 years ago

The way many people read code, it's much easier to overlook three characters than it is to overlook three lines with indentation.

h_r|6 years ago

That's quite an interesting, if frightening, take on it. Assuming a developer knows what `map` does, then I wouldn't have very much confidence in their reading comprehension if they somehow mentally skipped over the main higher order function being called on a three word line of code.

Would anyone expect such a developer to read and parse several more lines of code more reliably, in order to understand the algorithmic complexity? Seems unwarranted to me...