I built a Ruby gem for memoization with TTL + LRU cache. It’s thread-safe, and has been helpful in my own apps. Would love to get some feedback: https://github.com/mishalzaman/memo_ttl
Good call, but I'd like to make sure it remains thread-safe, since @store is a plain hash. I'll consider something like this in a future update, though. Thanks!
Nice! In Rails I end up using Rails.cache most of the time because it's always "right there", but I like how you break the cache out per method to minimize contention. Depending on your workload, it might make sense to use a read-write lock instead of a Monitor.
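To illustrate the suggestion: a read-preferring read-write lock lets many readers run concurrently and only serializes writers, which helps read-heavy caches. This is a generic sketch, not code from the gem; in a real app you'd probably reach for concurrent-ruby's Concurrent::ReadWriteLock rather than rolling your own.

```ruby
# Sketch of a read-preferring read-write lock using only stdlib primitives.
class RWLock
  def initialize
    @mutex   = Mutex.new
    @cond    = ConditionVariable.new
    @readers = 0
    @writer  = false
  end

  # Many readers may hold the lock at once, as long as no writer does.
  def with_read_lock
    @mutex.synchronize do
      @cond.wait(@mutex) while @writer
      @readers += 1
    end
    begin
      yield
    ensure
      @mutex.synchronize do
        @readers -= 1
        @cond.broadcast if @readers.zero?
      end
    end
  end

  # A writer waits until no readers and no other writer hold the lock.
  def with_write_lock
    @mutex.synchronize do
      @cond.wait(@mutex) while @writer || @readers.positive?
      @writer = true
    end
    begin
      yield
    ensure
      @mutex.synchronize do
        @writer = false
        @cond.broadcast
      end
    end
  end
end
```

Whether this beats a Monitor depends on the read/write ratio; under mostly-write workloads the extra bookkeeping buys you nothing.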
My only suggestion: don't wrap the caller's error in your memo wrapper.
> raise MemoTTL::Error, "Failed to execute memoized method '#{method_name}': #{e.message}"
It doesn't look like you need to catch this for any operational or state-tracking reason, so IMO you should not catch and wrap. When errors are wrapped with a string like this (and caught/re-raised) you lose the original stacktrace, which makes debugging challenging. Especially when your error is something like "pg condition failed for select" and you can't see where it failed in the driver.
I thought Ruby would auto-wrap the original exception as long as you are raising from a rescue block (i.e. as long as $! is non-nil). So in that case you can just
raise "Failed to execute memoized method '#{method_name}'"
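That's right: since Ruby 2.1, raising from inside a rescue block automatically sets the new exception's #cause to $!, so the original error survives. A small standalone demonstration (the method name here is invented for the example, not taken from the gem):

```ruby
# Simulates a failure deep inside a driver.
def risky
  raise ArgumentError, "pg condition failed for select"
end

begin
  begin
    risky
  rescue
    # Raising from inside a rescue block automatically chains the new
    # exception to $! (the ArgumentError); no explicit wrapping needed.
    raise "Failed to execute memoized method 'risky'"
  end
rescue => wrapped
  puts wrapped.message       # Failed to execute memoized method 'risky'
  puts wrapped.cause.class   # ArgumentError
  puts wrapped.cause.message # pg condition failed for select
end
```

The full original backtrace also remains reachable via wrapped.cause.backtrace, which is exactly what a string-interpolated wrapper message throws away.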
For those who don’t understand why you might want something like this: if you’re doing high enough throughput that eventual consistency is effectively the same as atomic consistency, and IO hurts (e.g. Redis calls), you may want to cache in memory with something like this.
My implementation above was born out of the need to adjust global state on-the-fly in a system processing hundreds of thousands of requests per second.
Can you pitch me on why I would want to use this, instead of Rails.cache.fetch (which supports TTL) powered by redis (with the "allkeys-lru" config option)?
I'm not OP nor have I read through all the code, but this gem has no external dependencies and runs in a single process (as does ActiveSupport::Cache::MemoryStore). Could be a "why you should," or a "why you should not" use this gem, depending on your use case.
Good question. I built this gem because I needed a few things that Rails.cache (and Redis) didn’t quite fit:
- Local and zero-dependency. It caches per object in memory, so no Redis setup, no serialization, no network latency.
- Isolated and self-managed. Caches aren’t global. Each object/method manages its own LRU + TTL lifecycle and can be cleared with instance helpers.
- Easy to use. You just declare the method, set the TTL and max size, and you're done. No key names, no block wrapping, no external config.
Redis is great for caching a customer config that's hit 2000 times/second by your services, but even then, an in-memory cache with a short TTL would make your system more tolerant of Redis failures. This would be great for the in-memory part.
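For anyone curious what that in-memory part looks like, here's a minimal sketch of a thread-safe TTL + LRU cache in plain Ruby. To be clear, this is the general idea only, not memo_ttl's actual implementation:

```ruby
require "monitor"

# Tiny thread-safe TTL + LRU cache sketch (NOT the gem's real code).
class TinyTTLCache
  Entry = Struct.new(:value, :expires_at)

  def initialize(ttl:, max_size:)
    @ttl      = ttl
    @max_size = max_size
    @store    = {}          # insertion-ordered: oldest entry is first
    @lock     = Monitor.new
  end

  def fetch(key)
    @lock.synchronize do
      entry = @store.delete(key)
      if entry && Time.now < entry.expires_at
        @store[key] = entry # re-insert: mark as most recently used
        return entry.value
      end
      value = yield         # miss or expired: recompute
      @store[key] = Entry.new(value, Time.now + @ttl)
      @store.shift if @store.size > @max_size # evict least recently used
      value
    end
  end
end
```

The eviction trick relies on Ruby hashes preserving insertion order: deleting and re-inserting a key on each hit moves it to the back, so Hash#shift always removes the least recently used entry.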
This is neat, thanks for posting. I am using memo_wise in my current project (TableTennis) in part because it allows memoization of module functions. This is a requirement for my library.
Anyway, I ended up with a hack like this, which works fine but didn't feel great.
def some_method(arg)
  # evict everything once this method's memo_wise store grows past 100 entries
  @_memo_wise[__method__].tap { _1.clear if _1.length > 100 }
  ...
end
memo_wise :some_method
https://pablofernandez.tech/2014/02/05/wrapped-exceptions-in...
I found this pretty easy to read through. I'd suggest setting a description on the repo too so it's easy to find.
https://github.com/mishalzaman/memo_ttl/blob/main/lib/memo_t...
```
return nil unless entry
```