While this is a cool demo, I don't really like it, even as a Haskell enthusiast. Anything that relies on the compiler's eta-reduction/expansion rules for performance is too subtle for my taste. I'd personally argue that more explicit code could be clearer.
Eta reduction/expansion has nothing to do with performance. It's a form of point-free programming that lets you abstract out intermediate variables and use partial application instead.
f x = reverse (sort x)
Becomes:
f = reverse . sort
The compiler reaches the same conclusion either way.
Perhaps you are thinking of RULES pragmas? Which honestly you wouldn't usually reach for unless you really need to force the compiler to optimize in a particular way that is non-trivial.
Maybe you mean using lazy evaluation as a memoization hack? That is a well-known trick in Haskell, though I agree it is in some ways cringy, relying on the evaluation strategy like that. But there are some Hackage packages that make it fairly convenient to use. I did a naive implementation myself and benchmarked it against an explicit STRef as a cache cell. I don't remember the result, but it wasn't a huge (order-of-magnitude-plus) difference, IIRC. Rather than using an array as the memo cache (which means you must know its size in advance), it's nicer to use an infinite branching tree.
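For anyone who hasn't seen it, the list-based version of that trick looks roughly like this (a sketch of the general idea, not any particular package; the branching-tree version described above replaces the O(n) list lookup with something logarithmic):

```haskell
-- Memoization via lazy evaluation: `table` is an infinite lazy list
-- shared across all calls, so each value is computed at most once.
fib :: Int -> Integer
fib = (table !!)          -- must stay point-free: writing `fib n = table !! n`
  where                   -- may rebuild the table on every call
    table = map f [0 ..]  -- the lazy memo "cache"
    f 0 = 0
    f 1 = 1
    f n = fib (n - 1) + fib (n - 2)
```

The fact that sharing hinges on whether the definition is eta-reduced is exactly the kind of subtlety the thread is arguing about.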
No disrespect to the dude that I learned Haskell from, but a) he doesn’t achieve his original goal and b) the goal is much less practically useful than he implies.
In practice an awful lot of the functions you want to memoize a) involve continuations and b) want some way of evicting things from the cache.
Haskell has no end of cute tricks for doing deterministic functions, it’s just that deterministic functions are programming’s easy mode.
Memoization is also entry-level dynamic programming, and too easily confused with caching (if I had a dollar for every time someone equated the two, I could buy a baseball bat to knock heads).
If you want the power of dynamic programming you need to get at least as far as tabulation.
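In Haskell, tabulation is often written as a lazy array whose cells refer back to the array itself; a minimal sketch using the standard Data.Array (the names here are mine):

```haskell
import Data.Array

-- Tabulation: an explicit table of fixed size. Laziness fills the
-- cells on demand; each cell reads earlier cells directly, so there
-- is no exponential recursion tree and each entry is computed once.
fibTable :: Int -> Array Int Integer
fibTable n = table
  where
    table = listArray (0, n) [f i | i <- [0 .. n]]
    f 0 = 0
    f 1 = 1
    f i = table ! (i - 1) + table ! (i - 2)
```

For example, `fibTable 30 ! 30` forces every entry up to 30 exactly once, with O(1) lookups.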
"Wouldn’t it be cool if we could just write the recursive function, and then have some generic machinery make it fast for us by automatically generating a memo table?"
"In other words, we’d like a magic memoization function . . . Then we could just define our slow, recursive function normally, wave our magic memo wand over it, and get a fast version for free!"
I develop calculang, a language for calculations. It converts concise formulae into JavaScript.
Since calculation formulae are usually inherently pure, a `--memo` flag to the compiler memoizes the lot.
Many of my models are written in this recursive style and only work because of memoization. Sometimes other tricks are needed e.g. careful ordering of the calls, so that the memo is available before the stack breaks.
My first calculang example is a bouncing ball: the position at t depends on the speed and the position at the previous t. With loops we'd usually end up computing the same thing, but it can be so much harder to follow.
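The shape of that recurrence is easy to sketch outside calculang too. This is my own toy Haskell rendering, not the linked model, and every constant in it (initial height, restitution, gravity, time step) is a made-up assumption:

```haskell
-- Naive, unmemoized bouncing-ball recurrence: each quantity at step t
-- is defined purely in terms of step t-1. All constants are assumed.
dt :: Double
dt = 0.1

position, speed :: Int -> Double
position 0 = 10.0                                         -- assumed initial height
position t = position (t - 1) + speed (t - 1) * dt

speed 0 = 0.0
speed t
  | position (t - 1) <= 0 = negate (speed (t - 1)) * 0.8  -- bounce, assumed restitution
  | otherwise             = speed (t - 1) - 9.81 * dt     -- gravity
```

Because `position t` calls `speed (t - 1)` and vice versa, naive evaluation is exponential in t; memoizing both functions makes it linear, which is why models in this style "only work because of memoization".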
First-class memoization to proliferate this style is a really neat thing.
[3] if you can tolerate my slow-load WIP devtools and are on a Desktop, this is better, with buttons to navigate through the model development (only some of which are working now!): https://models-on-a-plane.pages.dev/stories/bounce/
Does this technique keep the memoization table in memory while the top-level function is in scope (so, essentially, forever)? Or does it become unreachable once the initial call to fib from somewhere else finishes?
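For what it's worth, the answer depends on where the table is bound rather than on the technique as such; a hedged sketch of the two cases (function names are mine, not the article's):

```haskell
-- Case 1: the table hangs off a top-level CAF, so it stays reachable
-- for as long as `fibShared` does -- typically the program's lifetime.
fibShared :: Int -> Integer
fibShared = (table !!)
  where
    table = map f [0 ..]
    f 0 = 0
    f 1 = 1
    f n = fibShared (n - 1) + fibShared (n - 2)

-- Case 2: the table is bound under the argument, so (unless GHC's
-- full-laziness optimization floats it out) a fresh table is built per
-- call and becomes garbage once that call returns.
fibLocal :: Int -> Integer
fibLocal n0 = table !! n0
  where
    table = map f [0 ..]
    f 0 = 0
    f 1 = 1
    f n = table !! (n - 1) + table !! (n - 2)
```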
I would also disagree with that, but maybe not as much as the other guy.
I had the pleasure of being able to use Haskell "in anger" for about a year. Still my favorite language to toy around with, but would not want to use it for work again.
One takeaway is that half of the time Haskell code "just works". You write algorithms the way you think they should be written, and often the resulting binary is "good enough™".
The other half you spend in the trenches hunting down memory leaks, because some thunk was not evaluated and your program is amassing unevaluated expressions until it dies :-)
I can't disagree more with this. Running any modern language is magic. If you don't want to use magic, you only get to write assembly, or maybe C. Not that I agree with that solution either.
kccqzy | 2 years ago
bojo | 2 years ago
lgas | 2 years ago
throwaway81523 | 2 years ago
moomin | 2 years ago
hinkley | 2 years ago
thesuperbigfrog | 2 years ago
"In other words, we’d like a magic memoization function . . . Then we could just define our slow, recursive function normally, wave our magic memo wand over it, and get a fast version for free!"
Perl has this exact functionality with Memoize:
https://perldoc.perl.org/Memoize
More programming languages should have this functionality built in.
dndn1 | 2 years ago
[0] https://github.com/calculang/calculang
[1] bouncing ball code: https://github.com/declann/calculang-miscellaneous-models/bl...
[2] bouncing ball initial post: https://observablehq.com/@declann/calculang-bouncing-ball?co...
mrfox321 | 2 years ago
Python has this built in as a decorator (functools.cache, 3.9+):

    import functools

    @functools.cache
    def fn(x):
        ...  # expensive pure computation
https://docs.python.org/3/library/functools.html#functools.c...
simiones | 2 years ago
byorgey | 2 years ago
lewtds | 2 years ago
[0]: https://www.swi-prolog.org/pldoc/man?section=tabling-memoize
[1]: http://retina.inf.ufsc.br/picat_guide/#x1-210001.5
thesz | 2 years ago
dermesser | 2 years ago
moi2388 | 2 years ago
EdwardDiego | 2 years ago
Any code you describe with "Magically" is going to bite you in the arse in about two months. Maybe three.
Leave magic to the guy who also does balloon animals at kids parties.
fho | 2 years ago
rowanG077 | 2 years ago