Anvita Bajpai

Caching Strategies for Distributed Systems

Updated: Dec 14, 2020



What is Caching?


Caching is one of the easiest techniques for improving system performance. A cache is like a short-term memory: it has a limited amount of space, but is typically faster than the original data source and contains the most recently accessed items. Caches can exist at all levels of an architecture, but are often found at the level nearest to the front end, where they are implemented to return data quickly without taxing downstream levels.


What Is Distributed Caching?

A distributed cache is a cache whose data is spread across several nodes in a cluster, and often across several clusters in data centres located around the globe.
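To make the idea concrete, here is a minimal sketch in Python (with hypothetical node names) of how a cache client might spread keys across nodes; real systems typically use consistent hashing rather than this simple modulo sharding:

```python
import hashlib

# Hypothetical cache nodes; in a real deployment these would be host:port addresses.
NODES = ["cache-node-1", "cache-node-2", "cache-node-3"]

def node_for_key(key: str) -> str:
    # Hash the key and map it onto one of the nodes (simple modulo sharding).
    digest = hashlib.md5(key.encode()).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]

print(node_for_key("user:42"))     # the same key always routes to the same node
print(node_for_key("session:7"))
```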


Advantages of Distributed Caching

Distributed systems are designed to scale inherently: their capacity, be it compute, storage or anything else, can be augmented on the fly. The same goes for a distributed cache.

A traditional cache is hosted on a few instances and has a hard limit on its capacity. It is difficult to scale on the fly, and it is less available and fault-tolerant than a distributed cache design.


High Availability - Redundancy



High Availability - Replication



Three key benefits of distributed caching:

  • Scalability

  • High Availability

  • Fault-tolerance


Caching Strategies


While caching is fantastic, it does require some maintenance to keep the cache coherent with the source of truth (e.g., the database). If data is modified in the database, it should be invalidated in the cache; otherwise, the application can behave inconsistently.

Solving this problem is known as cache invalidation; the following five main schemes are used:

Cache-Aside

This is perhaps the most commonly used caching approach, at least in the projects that I worked on. The cache sits on the side and the application directly talks to both the cache and the database.


Here’s what’s happening:

  1. The application first checks the cache.

  2. If the data is found in the cache, we have a cache hit. The data is read and returned to the client.

  3. If the data is not found in the cache, we have a cache miss. The application has to do some extra work: it queries the database to read the data, returns it to the client, and stores the data in the cache so that subsequent reads for the same data result in a cache hit (see the sketch after this list).
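A minimal cache-aside sketch in Python; the dictionaries stand in for a real cache (e.g. Redis) and a real database, and the key name and TTL are illustrative assumptions:

```python
import time

cache = {}                                   # stand-in for e.g. Redis
database = {"user:42": {"name": "Ada"}}      # stand-in for the real database
TTL_SECONDS = 60

def get(key):
    # 1. Check the cache first.
    entry = cache.get(key)
    if entry is not None and entry["expires_at"] > time.time():
        return entry["value"]                # 2. cache hit

    # 3. Cache miss: read from the database and populate the cache
    #    so that subsequent reads for the same key hit.
    value = database.get(key)
    if value is not None:
        cache[key] = {"value": value, "expires_at": time.time() + TTL_SECONDS}
    return value

get("user:42")   # first call: miss, loaded from the database
get("user:42")   # second call: hit, served from the cache
```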




Read-Through Cache

Unlike cache-aside, a read-through cache sits in-line between the application and the database. On a cache miss, it loads the missing data from the database, hydrates the cache and returns it to the application.


Both cache-aside and read-through strategies load data lazily, that is, only when it is first read.


Read-through caches work best for read-heavy workloads where the same data is requested many times, such as a news story. The disadvantage is that the first request for any piece of data always results in a cache miss and incurs the extra penalty of loading it into the cache. Developers deal with this by ‘warming’ or ‘pre-heating’ the cache by issuing queries manually. Just like cache-aside, it is possible for data to become inconsistent between the cache and the database, and the solution lies in the write strategy, as we’ll see next.
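A minimal read-through sketch: the application only ever calls the cache, and the cache itself knows how to load from the database on a miss. The loader function below is a hypothetical stand-in for the real database query:

```python
class ReadThroughCache:
    """The application talks only to the cache; the cache loads from the source on a miss."""

    def __init__(self, loader):
        self._store = {}
        self._loader = loader            # function that reads the data from the database

    def get(self, key):
        if key in self._store:
            return self._store[key]      # cache hit
        value = self._loader(key)        # cache miss: the cache hydrates itself
        self._store[key] = value
        return value

# Hypothetical loader standing in for a real database query.
def load_article(article_id):
    return {"id": article_id, "title": "Some news story"}

articles = ReadThroughCache(load_article)
articles.get("article:1")   # first read: miss, loaded from the "database" and cached
articles.get("article:1")   # later reads: served straight from the cache
```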



Write-Through Cache

In this write strategy, data is first written to the cache and then to the database. The cache sits in-line with the database and writes always go through the cache to the main database.
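A minimal write-through sketch, again with dictionaries as stand-ins; the important detail is that the write is not acknowledged until both the cache and the database have been updated:

```python
cache = {}
database = {}

def write_through(key, value):
    # Write to the cache first, then synchronously persist to the database;
    # the call only returns once both copies are up to date.
    cache[key] = value
    database[key] = value     # stand-in for the real database write

write_through("user:42", {"name": "Ada"})
```

Because every write is persisted before it completes, reads served from the cache stay consistent with the database, at the cost of some extra write latency.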




Write-Around

Here, data is written directly to the database, and only the data that is read makes its way into the cache.

This technique is similar to write-through, but data is written directly to permanent storage, bypassing the cache. This keeps the cache from being flooded with writes that will never be re-read, but it has the disadvantage that a read request for recently written data results in a “cache miss”; the data must then be read from slower back-end storage, so the client experiences higher latency.
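A minimal write-around sketch; writes skip the cache (and invalidate any stale copy), so only keys that are actually read end up cached:

```python
cache = {}
database = {}

def write_around(key, value):
    database[key] = value      # the write goes straight to permanent storage
    cache.pop(key, None)       # drop any stale copy so readers don't see old data

def read(key):
    if key in cache:
        return cache[key]      # cache hit
    value = database.get(key)  # recently written data misses here the first time
    if value is not None:
        cache[key] = value
    return value

write_around("user:42", {"name": "Ada"})
read("user:42")   # the first read after the write is a miss and pays the extra latency
```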



Write-Back (aka Write-Behind)

Under this strategy, data is written to the cache alone and completion is immediately confirmed to the client. The write to permanent storage happens later, after a specified interval or under certain conditions. This results in low latency and high throughput for write-intensive applications; however, the speed comes with a risk of data loss in case of a crash or other adverse event, because the only copy of the written data is in the cache.
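A minimal write-back sketch; writes are acknowledged as soon as the cache is updated, and dirty entries are persisted later in a batch. A real implementation would flush on a background thread or timer, and has to accept that dirty entries can be lost in a crash:

```python
cache = {}
database = {}
dirty_keys = set()                # keys written to the cache but not yet persisted

def write_back(key, value):
    cache[key] = value            # write to the cache only...
    dirty_keys.add(key)           # ...and remember that it still needs to be persisted

def flush():
    # Persist dirty entries in a batch, e.g. every few seconds or when the set grows large.
    for key in list(dirty_keys):
        database[key] = cache[key]
        dirty_keys.discard(key)

write_back("user:42", {"name": "Ada"})   # returns immediately, before the database is touched
flush()                                  # later, the data reaches permanent storage
```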



Cache Eviction Policies

Following are some of the most common cache eviction policies:

  1. First In First Out (FIFO): The cache evicts the first block accessed first without any regard to how often or how many times it was accessed before.

  2. Last In First Out (LIFO): The cache evicts the block accessed most recently first without any regard to how often or how many times it was accessed before.

  3. Least Recently Used (LRU): Discards the least recently used items first (see the sketch after this list).

  4. Most Recently Used (MRU): Discards, in contrast to LRU, the most recently used items first.

  5. Least Frequently Used (LFU): Counts how often an item is needed. Those that are used least often are discarded first.

  6. Random Replacement (RR): Randomly selects a candidate item and discards it to make space when necessary.
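As an illustration, here is a minimal LRU cache in Python built on OrderedDict; the least recently used entry is evicted once the capacity is exceeded:

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self._store = OrderedDict()

    def get(self, key):
        if key not in self._store:
            return None                       # miss
        self._store.move_to_end(key)          # mark as most recently used
        return self._store[key]

    def put(self, key, value):
        if key in self._store:
            self._store.move_to_end(key)
        self._store[key] = value
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)   # evict the least recently used entry

lru = LRUCache(capacity=2)
lru.put("a", 1)
lru.put("b", 2)
lru.get("a")      # "a" becomes the most recently used entry
lru.put("c", 3)   # evicts "b"
```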

Content Distribution Network (CDN)


CDNs are a kind of cache that comes into play for sites serving large amounts of static media. In a typical CDN setup, a request will first ask the CDN for a piece of static media; the CDN will serve that content if it has it locally available. If it isn’t available, the CDN will query the back-end servers for the file, cache it locally, and serve it to the requesting user.
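The edge of a CDN behaves much like a read-through cache for static files. Here is a minimal sketch of that flow, where fetch_from_origin is a hypothetical stand-in for a request to the back-end servers:

```python
edge_cache = {}

def fetch_from_origin(path):
    # Hypothetical stand-in for an HTTP request to the back-end (origin) servers.
    return b"...static file bytes..."

def serve_static(path):
    if path in edge_cache:
        return edge_cache[path]        # served locally by the CDN edge
    content = fetch_from_origin(path)  # not available locally: ask the origin
    edge_cache[path] = content         # cache it at the edge for the next request
    return content

serve_static("/images/logo.png")   # first request: fetched from the origin
serve_static("/images/logo.png")   # subsequent requests: served from the edge cache
```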


Some of the popular caches used in the industry

The popular distributed caches used in the industry are:

  • Memcache

  • Redis

  • Riak

  • Hazelcast

  • Ehcache

Summary

In this post, we explored different caching strategies, primarily from the perspective of a distributed cache sitting in front of a database. In practice, carefully evaluate your goals, understand your data access (read/write) patterns, and choose the best strategy, or a combination of them.

It is essential to evaluate and pick the caching strategy best suited to your system because, if you choose one that doesn’t match your application’s goals or access patterns, you may end up introducing additional latency, or at the very least not see the full benefits of a good caching strategy.



