SoFunction
Updated on 2025-05-04

Detailed explanation of 6 cache update strategies in Redis

Introduction

As a high-performance in-memory database, Redis has become the go-to choice for the caching layer. The biggest challenge in using a cache, however, is keeping cached data consistent with the underlying data source. The cache update strategy directly affects the performance, reliability, and data consistency of the system, so choosing the right one is crucial.

This article introduces 6 cache update strategies for Redis.

Strategy 1: Cache-Aside (bypass cache) strategy

How it works

Cache-Aside is the most commonly used caching pattern; the application layer is responsible for coordinating the cache and the database:

  • Read data: Query the cache first, return directly if hit; query the database if missed, write the result to the cache and return
  • Update data: Update the database first, then delete the cache (or update the cache)

Code Example

@Service
public class UserServiceCacheAside {
    
    @Autowired
    private RedisTemplate<String, User> redisTemplate;
    
    @Autowired
    private UserRepository userRepository;
    
    private static final String CACHE_KEY_PREFIX = "user:";
    private static final long CACHE_EXPIRATION = 30; // Cache expiration time (minutes)
    
    public User getUserById(Long userId) {
        String cacheKey = CACHE_KEY_PREFIX + userId;
        
        // 1. Query the cache
        User user = redisTemplate.opsForValue().get(cacheKey);
        
        // 2. Cache hit: return directly
        if (user != null) {
            return user;
        }
        
        // 3. Cache miss: query the database
        user = userRepository.findById(userId).orElse(null);
        
        // 4. Write the database result to the cache (with an expiration time)
        if (user != null) {
            redisTemplate.opsForValue().set(cacheKey, user, CACHE_EXPIRATION, TimeUnit.MINUTES);
        }
        
        return user;
    }
    
    public void updateUser(User user) {
        // 1. Update the database first
        userRepository.save(user);
        
        // 2. Then delete the cache
        String cacheKey = CACHE_KEY_PREFIX + user.getId();
        redisTemplate.delete(cacheKey);
        
        // Or choose to update the cache instead:
        // redisTemplate.opsForValue().set(cacheKey, user, CACHE_EXPIRATION, TimeUnit.MINUTES);
    }
}

Pros and cons analysis

Advantages

  • Simple to implement and flexible to control
  • Well suited to read-heavy, write-light workloads
  • Only necessary data is cached, saving memory

Disadvantages

  • The first access (a cache miss) incurs some latency
  • There are concurrency pitfalls: deleting the cache first and then updating the database can leave stale data in the cache
  • Application code must maintain cache consistency, which adds development complexity
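A common mitigation for the concurrency issue above is "delayed double delete": delete the cache, update the database, then delete the cache again after a short delay so that any stale value written back by a concurrent reader is also evicted. Below is a minimal sketch of the idea (not from the original article): the in-memory maps standing in for Redis and the database, and the 500 ms delay, are illustrative assumptions.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class DelayedDoubleDelete {

    // Hypothetical in-memory stand-ins for the Redis cache and the database.
    static final Map<String, String> cache = new ConcurrentHashMap<>();
    static final Map<String, String> database = new ConcurrentHashMap<>();
    static final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r);
                t.setDaemon(true);
                return t;
            });

    // Update flow: delete the cache, update the database, then delete the
    // cache again after a short delay, evicting any stale value a concurrent
    // reader may have written back in between.
    public static void updateUser(String key, String value) {
        cache.remove(key);                 // 1. first delete
        database.put(key, value);          // 2. update the database
        scheduler.schedule(() -> cache.remove(key),
                500, TimeUnit.MILLISECONDS); // 3. delayed second delete
    }

    public static String getUser(String key) {
        // Read path: on a miss, load from the database and cache the result
        return cache.computeIfAbsent(key, database::get);
    }
}
```

The delay should be tuned to exceed the typical duration of a read-then-write-back race in your system.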

Applicable scenarios

  • Read-heavy, write-light business scenarios
  • Applications without very strict data-consistency requirements
  • Scenarios where caching strategies need to be flexibly controlled in distributed systems

Strategy 2: Read-through strategy

How it works

The Read-Through strategy uses the cache as a proxy for the primary data source, with the cache layer responsible for loading data:

  • The application only interacts with the cache layer
  • When the cache misses, the cache manager is responsible for loading data from the database and storing it into the cache
  • The application does not need to care about whether the cache exists, and the cache layer automatically processes the loading logic

Code Example

First define the cache loader interface:

public interface CacheLoader<K, V> {
    V load(K key);
}

Implement Read-Through cache manager:

@Component
public class ReadThroughCacheManager<K, V> {
    
    @Autowired
    private RedisTemplate<String, V> redisTemplate;
    
    private final ConcurrentHashMap<String, CacheLoader<K, V>> loaders = new ConcurrentHashMap<>();
    
    public void registerLoader(String cachePrefix, CacheLoader<K, V> loader) {
        loaders.put(cachePrefix, loader);
    }
    
    public V get(String cachePrefix, K key, long expiration, TimeUnit timeUnit) {
        String cacheKey = cachePrefix + key;
        
        // 1. Query the cache
        V value = redisTemplate.opsForValue().get(cacheKey);
        
        // 2. Cache hit: return directly
        if (value != null) {
            return value;
        }
        
        // 3. Cache miss: look up the registered loader
        CacheLoader<K, V> loader = loaders.get(cachePrefix);
        if (loader == null) {
            throw new IllegalStateException("No cache loader registered for prefix: " + cachePrefix);
        }
        
        // Load the data from the data source
        value = loader.load(key);
        
        // 4. Store the loaded data in the cache
        if (value != null) {
            redisTemplate.opsForValue().set(cacheKey, value, expiration, timeUnit);
        }
        
        return value;
    }
}

Example of usage:

@Service
public class UserServiceReadThrough {
    
    private static final String CACHE_PREFIX = "user:";
    private static final long CACHE_EXPIRATION = 30;
    
    @Autowired
    private ReadThroughCacheManager<Long, User> cacheManager;
    
    @Autowired
    private UserRepository userRepository;
    
    @PostConstruct
    public void init() {
        // Register the user data loader
        cacheManager.registerLoader(CACHE_PREFIX, this::loadUserFromDb);
    }
    
    private User loadUserFromDb(Long userId) {
        return userRepository.findById(userId).orElse(null);
    }
    
    public User getUserById(Long userId) {
        // Fetch through the cache manager; the caching logic is handled there
        return cacheManager.get(CACHE_PREFIX, userId, CACHE_EXPIRATION, TimeUnit.MINUTES);
    }
}

Pros and cons analysis

Advantages

  • Good encapsulation: application code does not need to care about caching logic
  • Cache loading is handled centrally, reducing duplicated code
  • Well suited to read-only or read-mostly data

Disadvantages

  • Cache misses trigger database requests, which can increase database load
  • Does not handle write operations; must be combined with other strategies
  • Requires an additional cache management layer

Applicable scenarios

  • Business systems with frequent read operations
  • Applications that require centralized management of cache loading logic
  • Complex cache warm-up and loading scenarios

Strategy 3: Write-Through (write-through) strategy

How it works

The Write-Through strategy has the cache layer synchronously update the underlying data source:

  • When the application updates data, it writes to the cache
  • The cache layer is then responsible for synchronously writing to the database
  • The update is considered successful only once the data has been written to the database

Code Example

First define the write interface:

public interface CacheWriter<K, V> {
    void write(K key, V value);
}

Implement Write-Through cache manager:

@Component
public class WriteThroughCacheManager<K, V> {
    
    @Autowired
    private RedisTemplate<String, V> redisTemplate;
    
    private final ConcurrentHashMap<String, CacheWriter<K, V>> writers = new ConcurrentHashMap<>();
    
    public void registerWriter(String cachePrefix, CacheWriter<K, V> writer) {
        writers.put(cachePrefix, writer);
    }
    
    public void put(String cachePrefix, K key, V value, long expiration, TimeUnit timeUnit) {
        String cacheKey = cachePrefix + key;
        
        // 1. Look up the registered cache writer
        CacheWriter<K, V> writer = writers.get(cachePrefix);
        if (writer == null) {
            throw new IllegalStateException("No cache writer registered for prefix: " + cachePrefix);
        }
        
        // 2. Synchronously write to the database
        writer.write(key, value);
        
        // 3. Then update the cache
        redisTemplate.opsForValue().set(cacheKey, value, expiration, timeUnit);
    }
}

Example of usage:

@Service
public class UserServiceWriteThrough {
    
    private static final String CACHE_PREFIX = "user:";
    private static final long CACHE_EXPIRATION = 30;
    
    @Autowired
    private WriteThroughCacheManager<Long, User> cacheManager;
    
    @Autowired
    private UserRepository userRepository;
    
    @PostConstruct
    public void init() {
        // Register the user data writer
        cacheManager.registerWriter(CACHE_PREFIX, this::saveUserToDb);
    }
    
    private void saveUserToDb(Long userId, User user) {
        userRepository.save(user);
    }
    
    public void updateUser(User user) {
        // Update through the cache manager; the database and cache are updated together
        cacheManager.put(CACHE_PREFIX, user.getId(), user, CACHE_EXPIRATION, TimeUnit.MINUTES);
    }
}

Pros and cons analysis

Advantages

  • Ensures strong consistency between the database and the cache
  • Encapsulates cache update logic in the cache layer, simplifying application code
  • High cache hit rate on reads, with no need to fall back to the database

Disadvantages

  • Synchronous database writes increase write latency
  • Adds system complexity; transactional consistency must be ensured
  • Under heavy write pressure, the database may become a performance bottleneck

Applicable scenarios

  • Systems with high data-consistency requirements
  • Applications where writes are not the performance bottleneck
  • Scenarios where the cache and database must stay synchronized in real time

Strategy 4: Write-Behind (write-back) strategy

How it works

The Write-Behind strategy handles write operations asynchronously:

  • When the application updates data, only the cache is updated
  • The cache layer maintains a write queue, and queued updates are written to the database asynchronously in batches
  • Batching reduces pressure on the database

Code Example

Implement asynchronous write queues and processors:

@Component
@Slf4j
public class WriteBehindCacheManager<K, V> {
    
    @Autowired
    private RedisTemplate<String, V> redisTemplate;
    
    private final BlockingQueue<CacheUpdate<K, V>> updateQueue = new LinkedBlockingQueue<>();
    private final ConcurrentHashMap<String, CacheWriter<K, V>> writers = new ConcurrentHashMap<>();
    
    public void registerWriter(String cachePrefix, CacheWriter<K, V> writer) {
        writers.put(cachePrefix, writer);
    }
    
    @PostConstruct
    public void init() {
        // Start the asynchronous write-behind thread
        Thread writerThread = new Thread(this::processWriteBehindQueue);
        writerThread.setDaemon(true);
        writerThread.start();
    }
    
    public void put(String cachePrefix, K key, V value, long expiration, TimeUnit timeUnit) {
        String cacheKey = cachePrefix + key;
        
        // 1. Update the cache immediately
        redisTemplate.opsForValue().set(cacheKey, value, expiration, timeUnit);
        
        // 2. Enqueue the update for asynchronous persistence to the database
        updateQueue.offer(new CacheUpdate<>(cachePrefix, key, value));
    }
    
    private void processWriteBehindQueue() {
        List<CacheUpdate<K, V>> batch = new ArrayList<>(100);
        
        while (true) {
            try {
                // Wait up to 100 ms for the next queued update
                CacheUpdate<K, V> update = updateQueue.poll(100, TimeUnit.MILLISECONDS);
                
                if (update != null) {
                    batch.add(update);
                }
                
                // Drain any additional queued updates, up to a batch size of 100
                updateQueue.drainTo(batch, 100 - batch.size());
                
                if (!batch.isEmpty()) {
                    // Group the batch by cache prefix
                    Map<String, List<CacheUpdate<K, V>>> groupedUpdates = batch.stream()
                            .collect(Collectors.groupingBy(CacheUpdate::getCachePrefix));
                    
                    for (Map.Entry<String, List<CacheUpdate<K, V>>> entry : groupedUpdates.entrySet()) {
                        String cachePrefix = entry.getKey();
                        List<CacheUpdate<K, V>> updates = entry.getValue();
                        
                        CacheWriter<K, V> writer = writers.get(cachePrefix);
                        if (writer != null) {
                            // Write the batch to the database
                            for (CacheUpdate<K, V> u : updates) {
                                try {
                                    writer.write(u.getKey(), u.getValue());
                                } catch (Exception e) {
                                    // Handle failures: retry or log
                                    log.error("Failed to write-behind for key {}: {}", u.getKey(), e.getMessage());
                                }
                            }
                        }
                    }
                    
                    batch.clear();
                }
                
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            } catch (Exception e) {
                log.error("Error in write-behind process", e);
            }
        }
    }
    
    @Data
    @AllArgsConstructor
    private static class CacheUpdate<K, V> {
        private String cachePrefix;
        private K key;
        private V value;
    }
}

Example of usage:

@Service
public class UserServiceWriteBehind {
    
    private static final String CACHE_PREFIX = "user:";
    private static final long CACHE_EXPIRATION = 30;
    
    @Autowired
    private WriteBehindCacheManager<Long, User> cacheManager;
    
    @Autowired
    private UserRepository userRepository;
    
    @PostConstruct
    public void init() {
        // Register the user data writer
        cacheManager.registerWriter(CACHE_PREFIX, this::saveUserToDb);
    }
    
    private void saveUserToDb(Long userId, User user) {
        userRepository.save(user);
    }
    
    public void updateUser(User user) {
        // The update only writes to the cache; the database is written asynchronously
        cacheManager.put(CACHE_PREFIX, user.getId(), user, CACHE_EXPIRATION, TimeUnit.MINUTES);
    }
}

Pros and cons analysis

Advantages

  • Significantly improves write performance and reduces response latency
  • Reduces database pressure through batched operations
  • Smooths out write peaks, improving system throughput

Disadvantages

  • There is a window of inconsistency; unsuitable where strong consistency is required
  • A system crash can lose updates that have not yet been written to the database
  • Implementation is complex, requiring retry-on-failure and conflict resolution

Applicable scenarios

  • High-concurrency write scenarios, such as logging and statistics
  • Applications sensitive to write latency but with relaxed consistency requirements
  • Scenarios where database writes are the system bottleneck

Strategy 5: Refresh-Ahead strategy

How it works

The Refresh-Ahead strategy proactively refreshes entries before they expire:

  • Cache entries are given a normal expiration time
  • Accessing an entry that is close to expiring triggers an asynchronous refresh
  • Requests are always served from the cache, avoiding the latency of querying the database directly

Code Example

@Component
@Slf4j
public class RefreshAheadCacheManager<K, V> {
    
    @Autowired
    private RedisTemplate<String, Object> redisTemplate;
    
    @Autowired
    private ThreadPoolTaskExecutor refreshExecutor;
    
    private final ConcurrentHashMap<String, CacheLoader<K, V>> loaders = new ConcurrentHashMap<>();
    
    // Refresh threshold: refresh once more than 75% of the lifetime has elapsed
    private final double refreshThreshold = 0.75; // 75%
    
    public void registerLoader(String cachePrefix, CacheLoader<K, V> loader) {
        loaders.put(cachePrefix, loader);
    }
    
    @SuppressWarnings("unchecked")
    public V get(String cachePrefix, K key, long expiration, TimeUnit timeUnit) {
        String cacheKey = cachePrefix + key;
        
        // 1. Fetch the cache entry and its remaining TTL
        V value = (V) redisTemplate.opsForValue().get(cacheKey);
        Long ttl = redisTemplate.getExpire(cacheKey, TimeUnit.MILLISECONDS);
        
        if (value != null) {
            // 2. If the entry exists but is close to expiring, trigger an asynchronous refresh
            if (ttl != null && ttl > 0) {
                long expirationMs = timeUnit.toMillis(expiration);
                if (ttl < expirationMs * (1 - refreshThreshold)) {
                    refreshAsync(cachePrefix, key, cacheKey, expiration, timeUnit);
                }
            }
            return value;
        }
        
        // 3. The entry does not exist: load synchronously
        return loadAndCache(cachePrefix, key, cacheKey, expiration, timeUnit);
    }
    
    private void refreshAsync(String cachePrefix, K key, String cacheKey, long expiration, TimeUnit timeUnit) {
        refreshExecutor.execute(() -> {
            try {
                loadAndCache(cachePrefix, key, cacheKey, expiration, timeUnit);
            } catch (Exception e) {
                // The asynchronous refresh failed: log it, but do not affect the current request
                log.warn("Failed to refresh cache for key {}: {}", cacheKey, e.getMessage());
            }
        });
    }
    
    private V loadAndCache(String cachePrefix, K key, String cacheKey, long expiration, TimeUnit timeUnit) {
        CacheLoader<K, V> loader = loaders.get(cachePrefix);
        if (loader == null) {
            throw new IllegalStateException("No cache loader registered for prefix: " + cachePrefix);
        }
        
        // Load from the data source
        V value = loader.load(key);
        
        // Update the cache
        if (value != null) {
            redisTemplate.opsForValue().set(cacheKey, value, expiration, timeUnit);
        }
        
        return value;
    }
}

Example of usage:

@Service
public class ProductServiceRefreshAhead {
    
    private static final String CACHE_PREFIX = "product:";
    private static final long CACHE_EXPIRATION = 60; // 60 minutes (1 hour)
    
    @Autowired
    private RefreshAheadCacheManager<String, Product> cacheManager;
    
    @Autowired
    private ProductRepository productRepository;
    
    @PostConstruct
    public void init() {
        // Register the product data loader
        cacheManager.registerLoader(CACHE_PREFIX, this::loadProductFromDb);
    }
    
    private Product loadProductFromDb(String productId) {
        return productRepository.findById(productId).orElse(null);
    }
    
    public Product getProduct(String productId) {
        return cacheManager.get(CACHE_PREFIX, productId, CACHE_EXPIRATION, TimeUnit.MINUTES);
    }
}

Thread pool configuration

@Configuration
public class ThreadPoolConfig {
    
    @Bean
    public ThreadPoolTaskExecutor refreshExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(5);
        executor.setMaxPoolSize(20);
        executor.setQueueCapacity(100);
        executor.setThreadNamePrefix("cache-refresh-");
        executor.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());
        executor.initialize();
        return executor;
    }
}

Pros and cons analysis

Advantages

  • Requests are always served from the cache, avoiding latency spikes when entries expire
  • Asynchronous refreshing flattens database load peaks
  • High cache hit rate and a better user experience

Disadvantages

  • Higher complexity, requiring additional thread pool management
  • The refresh heuristic may be inaccurate, causing unnecessary refreshes
  • Refreshing rarely accessed data wastes resources

Applicable scenarios

  • High-traffic systems with strict response-time requirements
  • Scenarios with a predictable data update frequency
  • Systems with limited database resources but ample cache capacity

Strategy 6: Eventual Consistency strategy

How it works

The eventual consistency strategy synchronizes data through a distributed event system:

  • When data changes, an event is published to a message queue
  • The cache service subscribes to the relevant events and updates the cache
  • Even if some operations fail temporarily, the system eventually converges to a consistent state

Code Example

First define the data change event:

@Data
@AllArgsConstructor
public class DataChangeEvent {
    private String entityType;
    private String entityId;
    private String operation; // CREATE, UPDATE, DELETE
    private String payload;   // Entity data as JSON
}

Implementation Event Publisher:

@Component
@Slf4j
public class DataChangePublisher {
    
    @Autowired
    private KafkaTemplate<String, DataChangeEvent> kafkaTemplate;
    
    private static final String TOPIC = "data-changes";
    
    public void publishChange(String entityType, String entityId, String operation, Object entity) {
        try {
            // Serialize the entity to JSON
            String payload = new ObjectMapper().writeValueAsString(entity);
            
            // Create the event
            DataChangeEvent event = new DataChangeEvent(entityType, entityId, operation, payload);
            
            // Publish to Kafka
            kafkaTemplate.send(TOPIC, entityId, event);
        } catch (Exception e) {
            log.error("Failed to publish data change event", e);
            throw new RuntimeException("Failed to publish event", e);
        }
    }
}

Implement event consumer update cache:

@Component
@Slf4j
public class CacheUpdateConsumer {
    
    @Autowired
    private RedisTemplate<String, Object> redisTemplate;
    
    private static final long CACHE_EXPIRATION = 30;
    
    @KafkaListener(topics = "data-changes")
    public void handleDataChangeEvent(DataChangeEvent event) {
        try {
            String cacheKey = buildCacheKey(event.getEntityType(), event.getEntityId());
            
            switch (event.getOperation()) {
                case "CREATE":
                case "UPDATE":
                    // Parse the JSON payload
                    Object entity = parseEntity(event.getPayload(), event.getEntityType());
                    // Update the cache
                    redisTemplate.opsForValue().set(
                            cacheKey, entity, CACHE_EXPIRATION, TimeUnit.MINUTES);
                    log.info("Updated cache for {}: {}", cacheKey, event.getOperation());
                    break;
                    
                case "DELETE":
                    // Delete the cache entry
                    redisTemplate.delete(cacheKey);
                    log.info("Deleted cache for {}", cacheKey);
                    break;
                    
                default:
                    log.warn("Unknown operation: {}", event.getOperation());
            }
        } catch (Exception e) {
            log.error("Error handling data change event: {}", event, e);
            // Failure handling: the failed event can be routed to a dead-letter queue, etc.
        }
    }
    
    private String buildCacheKey(String entityType, String entityId) {
        return entityType.toLowerCase() + ":" + entityId;
    }
    
    private Object parseEntity(String payload, String entityType) throws JsonProcessingException {
        // Choose the deserialization target class based on the entity type
        Class<?> targetClass = getClassForEntityType(entityType);
        return new ObjectMapper().readValue(payload, targetClass);
    }
    
    private Class<?> getClassForEntityType(String entityType) {
        switch (entityType) {
            case "User": return User.class;
            case "Product": return Product.class;
            // Other entity types...
            default: throw new IllegalArgumentException("Unknown entity type: " + entityType);
        }
    }
}

Example of usage:

@Service
@Transactional
public class UserServiceEventDriven {
    
    @Autowired
    private UserRepository userRepository;
    
    @Autowired
    private DataChangePublisher publisher;
    
    public User createUser(User user) {
        // 1. Save the user to the database
        User savedUser = userRepository.save(user);
        
        // 2. Publish a CREATE event
        publisher.publishChange("User", savedUser.getId().toString(), "CREATE", savedUser);
        
        return savedUser;
    }
    
    public User updateUser(User user) {
        // 1. Update the user in the database
        User updatedUser = userRepository.save(user);
        
        // 2. Publish an UPDATE event
        publisher.publishChange("User", updatedUser.getId().toString(), "UPDATE", updatedUser);
        
        return updatedUser;
    }
    
    public void deleteUser(Long userId) {
        // 1. Delete the user from the database
        userRepository.deleteById(userId);
        
        // 2. Publish a DELETE event
        publisher.publishChange("User", userId.toString(), "DELETE", null);
    }
}

Pros and cons analysis

Advantages

  • Supports data consistency across a distributed system
  • Smooths traffic peaks, reducing peak load on the system
  • Decouples services, improving flexibility and scalability

Disadvantages

  • Consistency is delayed; only eventual consistency can be guaranteed
  • More complex to implement and operate, requiring message queue infrastructure
  • Duplicate and out-of-order messages may need to be handled
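A common way to cope with the duplicated and out-of-order messages mentioned above is to attach a version (or timestamp) to each event and apply it only when it is newer than what the cache already holds. The sketch below is an illustration of that idea, not part of the original design; the in-memory map stands in for Redis.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Versioned cache entries: an incoming event is applied only if its version is
// newer than the cached one, so duplicates and reordered events are harmless.
public class VersionedCacheUpdater {

    public record Entry(long version, String payload) {}

    private final Map<String, Entry> cache = new ConcurrentHashMap<>();

    // Applies the event; returns true if the cache now holds this version's payload.
    public boolean apply(String key, long version, String payload) {
        Entry winner = cache.merge(key, new Entry(version, payload),
                (old, incoming) -> incoming.version() > old.version() ? incoming : old);
        return winner.version() == version;
    }

    public String get(String key) {
        Entry e = cache.get(key);
        return e == null ? null : e.payload();
    }
}
```

With Redis itself, the same check can be done atomically with a small Lua script that compares the stored version before overwriting.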

Applicable scenarios

  • Large distributed systems
  • Business scenarios that can tolerate short-lived inconsistency
  • Systems that need to decouple the data source from cache update logic

Cache Update Strategy Selection Guide

Consider the following factors when choosing a cache update strategy:

1. Business Characteristics Considerations

  • Read-heavy, write-light → Cache-Aside or Read-Through
  • Write-intensive → Write-Behind
  • High consistency requirements → Write-Through
  • Response-time sensitive → Refresh-Ahead
  • Distributed systems → Eventual Consistency

2. Resource limitation considerations

  • Limited memory → Cache-Aside (cache on demand)
  • High database load → Write-Behind (reduces write pressure)
  • Limited network bandwidth → Write-Behind or Refresh-Ahead

3. Development complexity considerations

  • Simple implementation → Cache-Aside
  • Medium complexity → Read-Through or Write-Through
  • Higher complexity, higher performance → Write-Behind or Eventual Consistency

Conclusion

Cache updating is a core challenge in Redis application design, and no single strategy fits every scenario. Best practice is to choose an appropriate cache update strategy, or combine several, based on business needs, data characteristics, and system resources.

In practice, different cache strategies can be chosen for different kinds of data, and multiple strategies can even be combined within the same system to achieve the best balance between performance and consistency.
