
Caching - Our biggest enemy

Comments
  • 3
    What a joke
  • 3
    I don't understand why caching would be an enemy 🤷🏻‍♀️
  • 1
    @Elyz data staleness and evictions have always been big problems, but classifying them as our biggest enemy makes this nothing but a joke
  • 2
    @asgs worst joke I've heard all day then, and that's saying something, because I'm currently under the same roof as my dad.
  • 0
    What's the "random" tag for?!?!?!?! Just default to it.
  • 1
    Every few months I lose a day or two to caching issues.
  • 1
    Caching is easier when you force yourself into a box.

    Every key should expire. No exceptions.

    Use LRU or (preferably) LFU eviction (Redis approximates LFU with probabilistic frequency counters rather than exact counts), combined with evicting the keys closest to expiry. Redis supports these policies out of the box via its maxmemory-policy setting.
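    To make the two rules above (every key expires, bounded size with LRU eviction) concrete, here is a minimal in-memory sketch in Python. The class and method names are hypothetical; in Redis itself this is just configuration, e.g. maxmemory plus maxmemory-policy volatile-lru or volatile-ttl.

```python
import time
from collections import OrderedDict

class TTLLRUCache:
    """Toy cache: every entry gets a TTL, and when the cache is full
    the least-recently-used entry is evicted first. Hypothetical names,
    for illustration only."""

    def __init__(self, max_entries, default_ttl):
        self.max_entries = max_entries
        self.default_ttl = default_ttl
        self._data = OrderedDict()  # key -> (value, expires_at)

    def set(self, key, value, ttl=None):
        expires_at = time.monotonic() + (ttl if ttl is not None else self.default_ttl)
        self._data[key] = (value, expires_at)
        self._data.move_to_end(key)  # mark as most recently used
        while len(self._data) > self.max_entries:
            self._data.popitem(last=False)  # evict least recently used

    def get(self, key):
        item = self._data.get(key)
        if item is None:
            return None
        value, expires_at = item
        if time.monotonic() >= expires_at:
            del self._data[key]  # lazily expire on read
            return None
        self._data.move_to_end(key)  # reads refresh recency
        return value
```

    The lazy expiry on read mirrors how Redis reclaims expired keys passively, backed by its active sampling cycle.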

    Everything you do should be atomic, except where it genuinely doesn't need to be. That sounds trivial and obvious, but my God, it's a concept I have to explain over and over again.

    Passive plus active optimization is, IMO, the best design. Have a guaranteed but possibly slower update loop that lazily refreshes and/or checks cache data against a source of truth. When the data changes, do a best-effort delete and then a best-effort update in the cache; worst case, the second step fails and forces a refetch, but at least the data isn't stale. Send invalidations after responding to the user, and log all failures, at the very least as time-series data points.
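    The active and passive paths described above can be sketched roughly like this; write_through, reconcile, and the dict-backed stores are hypothetical stand-ins for a real database and cache client.

```python
import logging

log = logging.getLogger("cache-invalidation")

def write_through(source, cache, key, value):
    """Active path: write to the source of truth first, then do a
    best-effort delete followed by a best-effort update in the cache.
    If the cache step fails, the next read refetches, so the worst
    case is a miss, never stale data."""
    source[key] = value                  # 1. source of truth is authoritative
    try:
        cache.pop(key, None)             # 2. best-effort delete
        cache[key] = value               # 3. best-effort update
    except Exception as exc:
        # Record the failure (ideally as a time-series point)
        # instead of failing the user's request.
        log.warning("cache update failed key=%s err=%s", key, exc)

def reconcile(source, cache):
    """Passive path: one pass of the guaranteed-but-slower loop that
    repairs any divergence from the source of truth. In production
    this would run on a timer."""
    for key, value in source.items():
        if cache.get(key) != value:
            cache[key] = value
```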

    Lastly, make sure your application can survive without the cache at all. This doesn't mean services must work to their full potential, but at least shoot for a read-only mode. Degraded is better than down.
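    Surviving without the cache mostly means treating every cache error as a miss. A minimal sketch, with read and fetch_from_source as hypothetical names:

```python
def read(key, cache, fetch_from_source):
    """Sketch: reads survive a dead cache. If the cache raises, fall
    back to the source of truth; degraded (slower) beats down."""
    try:
        value = cache.get(key)
        if value is not None:
            return value
    except Exception:
        pass  # cache is down; treat it as a miss
    value = fetch_from_source(key)
    try:
        cache[key] = value  # best-effort backfill for later reads
    except Exception:
        pass  # still fine: we already have the value from the source
    return value
```

    Writes are where read-only mode comes in: if the source of truth itself is unreachable, reject writes but keep serving whatever reads you can.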

    These are generalizations, but they help a lot.