_bsless | 4 years ago
Two things in this implementation are performance killers: laziness and boxed math.
- Baseline: taking your original implementation and running it on a sequence of 1e6 elements I generated, I start off at 1.2 seconds.
- Naive transducer: needs a sliding-window transducer, which doesn't exist in core yet [0]: 470ms
- Throw away function composition, use peek and nth to access the vector: 54ms
- Map & filter -> keep: 49ms
- Vectors -> arrays: 29ms
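The actual code for each step is in the linked gists; as one illustration of the "Map & filter -> keep" step, here is a minimal sketch on a hypothetical pipeline (squaring elements and keeping the odd results — the real computation is whatever the gist does):

```clojure
;; Two passes, two intermediate lazy sequences:
(defn map-filter-version [xs]
  (->> xs
       (map #(* % %))
       (filter odd?)))

;; One pass with `keep`: returning nil drops the element,
;; so the map and filter fuse into a single traversal.
(defn keep-version [xs]
  (keep (fn [x]
          (let [y (* x x)]
            (when (odd? y) y)))
        xs))
```

Both produce the same result; `keep` just avoids the second traversal and the intermediate sequence.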
I'd argue only the last step makes the code slightly less idiomatic. I might even say that aggressively using juxt, partial, and apply is less idiomatic than simple lambdas.
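To make that concrete, here is a hypothetical pair of equivalent functions (summing the first and last elements of a vector — not the code from the gists), one in the point-free juxt/partial/apply style and one as a plain lambda:

```clojure
;; Point-free style: builds and applies an intermediate vector on every call.
(def sum-ends-pointfree
  (comp (partial apply +) (juxt first peek)))

;; Plain lambda: direct, and arguably easier to read.
(def sum-ends-lambda
  (fn [v] (+ (first v) (peek v))))
```

Both return the same value; the lambda version avoids the intermediate vector and the `apply` call, and reads more directly.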
You can see the implementation here [1].
[0] https://gist.github.com/nornagon/03b85fbc22b3613087f6
[1] https://gist.github.com/bsless/0d9863424a00abbd325327cff1ea0...
Edit: formatting