item 43764122

Show HN: I built a Ruby gem that handles memoization with a ttl

48 points | hp_hovercraft84 | 10 months ago | github.com

I built a Ruby gem for memoization with TTL + LRU cache. It’s thread-safe, and has been helpful in my own apps. Would love to get some feedback: https://github.com/mishalzaman/memo_ttl

26 comments

[+] madsohm|10 months ago|reply
Since using `def` to create a method returns a symbol with the method name, you can do something like this too:

  memoize def expensive_calculation(arg)
    @calculation_count += 1
    arg * 2
  end, ttl: 10, max_size: 2

  memoize def nil_returning_method
    @calculation_count += 1
    nil
  end
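(For readers unfamiliar with the trick: a `def` expression itself evaluates to the new method's name as a Symbol, since Ruby 2.1, which is what lets the whole definition be passed as an argument to a class-level macro. A quick stdlib-only demonstration, independent of the gem:)

```ruby
# A `def` expression returns the defined method's name as a Symbol
# (Ruby 2.1+), so the entire definition can be used as an argument.
class Example
  RESULT = def double(x)
    x * 2
  end
end

puts Example::RESULT.inspect  # => :double
```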
[+] JamesSwift|10 months ago|reply
Looks good. I'd suggest making your `get` wait to acquire the lock until needed, e.g. instead of

  @lock.synchronize do
    entry = @store[key]
    return nil unless entry

    ...
you can do

  entry = @store[key]
  return nil unless entry

  @lock.synchronize do
    entry = @store[key]
And similarly for other codepaths
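(Spelled out, this is the double-checked locking pattern: an optimistic lock-free read, then a re-check under the lock on a miss. A minimal sketch with assumed names, `Cache` and `fetch`, not the gem's actual internals:)

```ruby
require 'monitor'

# Illustrative double-checked read path, not the gem's code.
class Cache
  def initialize
    @store = {}
    @lock = Monitor.new
  end

  def fetch(key)
    entry = @store[key]       # optimistic read, no lock taken
    return entry unless entry.nil?

    @lock.synchronize do
      # re-check under the lock: another thread may have filled it in
      @store[key] ||= yield
    end
  end
end
```

Note the caveat relevant to the nil-returning example upthread: this sketch treats a cached `nil` as a miss, which is one reason a real implementation stores entry records rather than raw values.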
[+] chowells|10 months ago|reply
Does the memory model guarantee that double-check locking will be correct? I don't actually know for ruby.
[+] hp_hovercraft84|10 months ago|reply
Good call, but for now I'd like to ensure it stays thread-safe, since @store is a plain Hash. I'll consider something like this in a future update. Thanks!
[+] film42|10 months ago|reply
Nice! In Rails I end up using Rails.cache most of the time because it's always "right there", but I like how you break the cache out per-method to minimize contention. Depending on your workload it might make sense to use a read-write lock instead of a Monitor.

Only suggestion is to not wrap the error of the caller in your memo wrapper.

> raise MemoTTL::Error, "Failed to execute memoized method '#{method_name}': #{e.message}"

It doesn't look like you need to catch this for any operational or state-tracking reason, so IMO you should not catch and wrap. When errors are wrapped with a string like this (and caught/re-raised) you lose the original stacktrace, which makes debugging challenging. Especially when your error is something like "pg condition failed for select" and you can't see where it failed in the driver.
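(The difference is easy to demonstrate: raising a new wrapped error surfaces a different class and backtrace, with the original only reachable via `Exception#cause`, while a bare `raise` inside `rescue` re-raises the original object untouched. The `risky` method here is a hypothetical stand-in:)

```ruby
def risky
  raise ArgumentError, "pg condition failed for select"
end

# Catch-and-wrap: callers see the wrapper; the original error and its
# backtrace are demoted to `cause`.
begin
  begin
    risky
  rescue => e
    raise RuntimeError, "Failed to execute memoized method: #{e.message}"
  end
rescue => wrapped
  puts wrapped.class        # RuntimeError
  puts wrapped.cause.class  # ArgumentError (the original)
end

# Letting it bubble: a bare `raise` re-raises the same object, with its
# class, message, and backtrace intact.
begin
  begin
    risky
  rescue
    raise  # no new exception object created
  end
rescue => original
  puts original.class       # ArgumentError
end
```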

[+] hp_hovercraft84|10 months ago|reply
Thanks for the feedback! That's a very good point, I'll update the gem and let it bubble up.
[+] deedubaya|10 months ago|reply
See https://github.com/huntresslabs/ttl_memoizeable for an alternative implementation.

For those who don’t understand why you might want something like this: if you’re doing high enough throughput where eventual consistency is effectively the same as atomic consistency and IO hurts (i.e. redis calls) you may want to cache in memory with something like this.

My implementation above was born out of the need to adjust global state on-the-fly in a system processing hundreds of thousands of requests per second.

[+] locofocos|10 months ago|reply
Can you pitch me on why I would want to use this, instead of Rails.cache.fetch (which supports TTL) powered by redis (with the "allkeys-lru" config option)?
[+] thomascountz|10 months ago|reply
I'm not OP, nor have I read through all the code, but this gem has no external dependencies and runs in a single process (as does ActiveSupport::Cache::MemoryStore). Could be a "why you should" or a "why you should not" use this gem, depending on your use case.
[+] hp_hovercraft84|10 months ago|reply
Good question. I built this gem because I needed a few things that Rails.cache (and Redis) didn’t quite fit:

- Local and zero-dependency. It caches per object in memory, so no Redis setup, no serialization, no network latency.

- Isolated and self-managed. Caches aren't global. Each object/method manages its own LRU + TTL lifecycle and can be cleared with instance helpers.

- Easy to use. You just declare the method, set the TTL and max size, and you're done. No key names, no block wrapping, no external config.
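(For the curious, the per-object LRU + TTL idea can be sketched in a few lines of stdlib Ruby. This is an illustration of the concept, not the gem's implementation; `TinyMemo` and its names are invented here. Ruby Hashes preserve insertion order, which gives a cheap LRU: delete + reinsert on access moves a key to the most-recent end.)

```ruby
# Minimal per-object TTL + LRU store (illustrative only).
class TinyMemo
  Entry = Struct.new(:value, :expires_at)

  def initialize(ttl:, max_size:)
    @ttl = ttl
    @max_size = max_size
    @store = {}
  end

  def fetch(key)
    entry = @store.delete(key)      # remove so reinsert refreshes LRU order
    if entry && Time.now < entry.expires_at
      @store[key] = entry           # still fresh: mark as most recently used
      return entry.value
    end
    value = yield
    @store[key] = Entry.new(value, Time.now + @ttl)
    @store.shift if @store.size > @max_size  # evict least recently used
    value
  end
end

memo = TinyMemo.new(ttl: 10, max_size: 2)
memo.fetch(:a) { 1 }  # computes and caches
memo.fetch(:a) { 9 }  # => 1, served from cache
```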

[+] film42|10 months ago|reply
Redis is great for caching a customer config that's hit 2000 times/second by your services, but even then, an in-mem cache with short TTL would make redis more tolerant to failure. This would be great for the in-mem part.
[+] gurgeous|10 months ago|reply
This is neat, thanks for posting. I am using memo_wise in my current project (TableTennis) in part because it allows memoization of module functions. This is a requirement for my library.

Anyway, I ended up with a hack like this, which works fine but didn't feel great.

   def some_method(arg)
     @_memo_wise[__method__].tap { _1.clear if _1.length > 100 }
     ...
   end
   memo_wise :some_method
[+] wood-porch|10 months ago|reply
Will this correctly retrieve 0 values? AFAIK 0 is falsey in Ruby

  return nil unless entry

[+] chowells|10 months ago|reply
No, Ruby is more strict than that. Only nil and false are falsy.
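(A one-liner confirms this: `0`, `""`, and `[]` all pass a truthiness test in Ruby; only `nil` and `false` fail.)

```ruby
# Only nil and false are falsy in Ruby; 0, "", and [] are all truthy.
values = [0, "", [], nil, false]
p values.select { |v| v }  # => [0, "", []]
```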