SoFunction
Updated on 2025-05-07

4 ways to implement client caching in Redis

As the most popular in-memory database and cache system today, Redis is widely used in various application scenarios. However, even if Redis itself performs well, network communication between applications and Redis servers may still become a performance bottleneck in high concurrency scenarios.

At this time, client caching technology becomes particularly important.

Client-side caching means keeping a local copy of Redis data in application memory to cut the number of network round trips, lower latency, and reduce the load on the Redis server.

This article covers four ways to implement Redis client-side caching, analyzing their principles, advantages and disadvantages, applicable scenarios, and best practices.

Method 1: Local In-Memory Cache

Technical Principles

A local in-memory cache is the most direct way to implement client-side caching. It stores data fetched from Redis in application memory using plain data structures (such as HashMap or ConcurrentHashMap) or dedicated caching libraries (such as Caffeine or Guava Cache). This approach is managed entirely by the application and is invisible to the Redis server.

Implementation example

Here is an example of a simple local cache implemented using Spring Boot and Caffeine:

import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.stats.CacheStats;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.stereotype.Service;

import java.time.Duration;
import java.util.HashMap;
import java.util.Map;

@Service
public class RedisLocalCacheService {

    private final StringRedisTemplate redisTemplate;
    private final Cache<String, String> localCache;

    public RedisLocalCacheService(StringRedisTemplate redisTemplate) {
        this.redisTemplate = redisTemplate;

        // Configure the Caffeine cache
        this.localCache = Caffeine.newBuilder()
                .maximumSize(10_000)                      // Maximum cache entries
                .expireAfterWrite(Duration.ofMinutes(5))  // Expiration time after writing
                .recordStats()                            // Record statistics
                .build();
    }

    public String get(String key) {
        // First try the local cache
        String value = localCache.getIfPresent(key);
        if (value != null) {
            // Local cache hit
            return value;
        }

        // Local cache miss, fetch from Redis
        value = redisTemplate.opsForValue().get(key);
        if (value != null) {
            // Put the value obtained from Redis into the local cache
            localCache.put(key, value);
        }
        return value;
    }

    public void set(String key, String value) {
        // Update Redis
        redisTemplate.opsForValue().set(key, value);

        // Update the local cache
        localCache.put(key, value);
    }

    public void delete(String key) {
        // Remove from Redis
        redisTemplate.delete(key);

        // Remove from the local cache
        localCache.invalidate(key);
    }

    // Get cache statistics
    public Map<String, Object> getCacheStats() {
        CacheStats stats = localCache.stats();
        Map<String, Object> statsMap = new HashMap<>();

        statsMap.put("hitCount", stats.hitCount());
        statsMap.put("missCount", stats.missCount());
        statsMap.put("hitRate", stats.hitRate());
        statsMap.put("evictionCount", stats.evictionCount());

        return statsMap;
    }
}

Pros and cons analysis

Advantages

  • Simple to implement and easy to integrate
  • No additional server-side support required
  • Full control over cache behavior (size, expiration policy, etc.)
  • Significantly reduces network requests
  • Completely transparent to the Redis server

Disadvantages

  • Cache consistency: when Redis data is updated by another application or service, the local cache cannot perceive the change
  • Memory usage: consumes the application's own memory resources
  • Cold start: the cache must be re-warmed after the application restarts
  • Cache inconsistency across multiple instances in a distributed environment

Applicable scenarios

  • Read-heavy, write-light data (such as configuration information and static data)
  • Scenarios with low real-time requirements for data
  • Distributed systems with low data-consistency requirements
  • As a supplementary means of other caching strategies

Best Practices

  • Set cache size and expiration time sensibly: avoid excessive memory usage
  • Choose the right eviction strategy: select LRU, LFU, etc. according to business characteristics
  • Refresh important data regularly: update proactively instead of always waiting passively for expiration
  • Add monitoring and statistics: Track hit rate, memory usage and other indicators
  • Consider cache preheating: Actively load common data when the application starts
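To make the eviction-strategy bullet concrete, classic LRU behavior can be shown without any cache library: `LinkedHashMap` in access-order mode evicts the least recently used entry once a size bound is exceeded. The `LruCache` below is a toy illustration of the policy, not production code; a real service would use Caffeine's `maximumSize`, which applies a more sophisticated variant (Window-TinyLFU).

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache: an access-order LinkedHashMap that evicts the eldest
// (least recently used) entry once the capacity is exceeded.
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruCache(int capacity) {
        super(16, 0.75f, true);  // accessOrder = true enables LRU ordering
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;
    }
}
```

Reading an entry with `get` moves it to the most-recently-used position, so under memory pressure the cache keeps hot keys and drops cold ones.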

Method 2: Redis server-assisted Client-Side Caching

Technical Principles

Redis 6.0 introduces server-assisted client caching, also known as tracking mode.

In this mode, the Redis server tracks the keys requested by the client, and when these keys are modified, the server sends an invalidation notification to the client. This mechanism ensures data consistency between the client cache and the Redis server.

Redis provides two tracking modes:

  • Default mode: the server tracks exactly which keys each client has read
  • Broadcast mode: the server broadcasts invalidations for all key changes (optionally restricted by prefix), and each client filters for the keys it cares about

Implementation example

Implement server-assisted client-side caching using Lettuce (the default Redis client in Spring Boot):

import io.lettuce.core.RedisClient;
import io.lettuce.core.TrackingArgs;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.sync.RedisCommands;
import io.lettuce.core.codec.StringCodec;
import jakarta.annotation.PreDestroy;
import org.springframework.stereotype.Service;

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

@Service
public class RedisTrackingCacheService {

    private final StatefulRedisConnection<String, String> connection;
    private final RedisCommands<String, String> commands;
    private final Map<String, String> localCache = new ConcurrentHashMap<>();
    private final Set<String> trackedKeys = ConcurrentHashMap.newKeySet();

    public RedisTrackingCacheService(RedisClient redisClient) {
        // Invalidation push messages require RESP3: configure the client with
        // ClientOptions.builder().protocolVersion(ProtocolVersion.RESP3)
        this.connection = redisClient.connect();
        this.commands = connection.sync();

        // Listen for "invalidate" push messages from the server
        connection.addListener(message -> {
            if ("invalidate".equals(message.getType())) {
                handleInvalidations(message.getContent(StringCodec.UTF8::decodeKey));
            }
        });

        // Enable client-side caching tracking for this connection
        commands.clientTracking(TrackingArgs.Builder.enabled());
    }

    public String get(String key) {
        // First try the local cache
        String value = localCache.get(key);
        if (value != null) {
            return value;
        }

        // Local cache miss: read from Redis. With tracking enabled, the server
        // records that this client now caches the key.
        value = commands.get(key);
        if (value != null) {
            localCache.put(key, value);
            trackedKeys.add(key);
        }
        return value;
    }

    public void set(String key, String value) {
        // Update Redis, then the local cache
        commands.set(key, value);
        localCache.put(key, value);
        trackedKeys.add(key);
    }

    @SuppressWarnings("unchecked")
    private void handleInvalidations(List<Object> content) {
        // Push message layout: ["invalidate", [key1, key2, ...]]
        if (content != null && content.size() >= 2 && content.get(1) instanceof List) {
            for (Object key : (List<Object>) content.get(1)) {
                String invalidatedKey = String.valueOf(key);
                localCache.remove(invalidatedKey);
                trackedKeys.remove(invalidatedKey);
            }
        }
    }

    // Get cache statistics
    public Map<String, Object> getCacheStats() {
        Map<String, Object> stats = new HashMap<>();
        stats.put("cacheSize", localCache.size());
        stats.put("trackedKeys", trackedKeys.size());
        return stats;
    }

    // Clear the local cache but keep tracking enabled
    public void clearLocalCache() {
        localCache.clear();
    }

    // Close the connection and release resources
    @PreDestroy
    public void cleanup() {
        if (connection != null) {
            connection.close();
        }
    }
}

Pros and cons analysis

advantage

  • Automatically maintains cache consistency without manual synchronization
  • The Redis server is aware of client cache state
  • Significantly reduces the number of network requests
  • Supports fine-grained (key-level) cache control
  • Perceives data changes in real time, giving strong consistency

shortcoming

  • Requires Redis 6.0 or above
  • Increases Redis server memory usage (tracking state)
  • The client connection must remain active
  • Broadcast mode may generate a large number of invalidation messages
  • More complex to implement than a simple local cache

Applicable scenarios

  • Scenarios with high data-consistency requirements
  • Read-heavy workloads where writes must still be reflected in real time
  • Multiple clients accessing the same data set in a distributed system
  • Large applications that need to reduce Redis load but cannot tolerate inconsistent data

Best Practices

  • Select the right tracking mode

    • Default mode: a small number of clients, each accessing a different data set
    • Broadcast mode: many clients, or unpredictable access patterns
  • Use prefix tracking: organize data by key prefix and track by prefix to reduce tracking overhead

  • Set the REDIRECT option appropriately: when multiple clients share one invalidation connection

  • Reconnect proactively: rebuild the connection and the cache as soon as possible after a disconnect

  • Set a reasonable local cache size: avoid excessive application memory usage

Method 3: TTL-based Cache Invalidation

Technical Principles

A cache invalidation strategy based on expiration time (Time-To-Live, TTL) is a simple and effective client-side caching solution.

It sets an expiration time for each entry in the local cache; once an entry expires, it is automatically deleted or refreshed.

This method does not rely on server notifications, but controls the freshness of the cache through a preset time window, balancing data consistency and system complexity.
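Stripped of any framework, the mechanism reduces to a few lines of plain Java: store each value together with its expiry deadline, and treat an entry whose deadline has passed as a miss. The `ExpiringCache` below is a hypothetical illustration of that idea (the injectable clock is just for testability); libraries like Caffeine implement the same concept via `expireAfterWrite`.

```java
import java.time.Duration;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.LongSupplier;

// Minimal TTL cache: each entry carries the deadline after which it is stale.
// Expired entries are treated as misses and evicted lazily on read.
class ExpiringCache<K, V> {

    private static final class Entry<T> {
        final T value;
        final long expiresAtMillis;

        Entry(T value, long expiresAtMillis) {
            this.value = value;
            this.expiresAtMillis = expiresAtMillis;
        }
    }

    private final ConcurrentHashMap<K, Entry<V>> map = new ConcurrentHashMap<>();
    private final long ttlMillis;
    private final LongSupplier clock;  // millisecond clock, injectable for tests

    ExpiringCache(Duration ttl) {
        this(ttl, System::currentTimeMillis);
    }

    ExpiringCache(Duration ttl, LongSupplier clock) {
        this.ttlMillis = ttl.toMillis();
        this.clock = clock;
    }

    void put(K key, V value) {
        map.put(key, new Entry<>(value, clock.getAsLong() + ttlMillis));
    }

    V get(K key) {
        Entry<V> e = map.get(key);
        if (e == null) {
            return null;
        }
        if (clock.getAsLong() > e.expiresAtMillis) {
            map.remove(key, e);  // lazily evict the stale entry
            return null;
        }
        return e.value;
    }
}
```

Lazy eviction on read keeps the structure simple; a production cache additionally bounds its size and sweeps expired entries in the background.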

Implementation example

Implement TTL caching using Spring Cache and Caffeine:

import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.CachePut;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.cache.caffeine.CaffeineCacheManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Service;

import java.time.Duration;
import java.util.Arrays;
import java.util.List;

@Configuration
public class CacheConfig {

    @Bean
    public CacheManager cacheManager() {
        CaffeineCacheManager cacheManager = new CaffeineCacheManager();
        cacheManager.setCacheSpecification(
                "maximumSize=10000,expireAfterWrite=300s,recordStats");
        return cacheManager;
    }
}

@Service
public class RedisTtlCacheService {

    private final StringRedisTemplate redisTemplate;

    // Two-level local cache: a small, short-TTL level 1 in front of a larger,
    // long-TTL level 2 (built once as fields, not per call)
    private final Cache<String, String> l1Cache = Caffeine.newBuilder()
            .maximumSize(1000)
            .expireAfterWrite(Duration.ofSeconds(10))
            .build();

    private final Cache<String, String> l2Cache = Caffeine.newBuilder()
            .maximumSize(10000)
            .expireAfterWrite(Duration.ofMinutes(5))
            .build();

    @Autowired
    public RedisTtlCacheService(StringRedisTemplate redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    @Cacheable(value = "redisCache", key = "#key")
    public String get(String key) {
        return redisTemplate.opsForValue().get(key);
    }

    @CachePut(value = "redisCache", key = "#key")
    public String set(String key, String value) {
        redisTemplate.opsForValue().set(key, value);
        return value;
    }

    @CacheEvict(value = "redisCache", key = "#key")
    public void delete(String key) {
        redisTemplate.delete(key);
    }

    // Tiered caches with different expiration times
    @Cacheable(value = "shortTermCache", key = "#key")
    public String getWithShortTtl(String key) {
        return redisTemplate.opsForValue().get(key);
    }

    @Cacheable(value = "longTermCache", key = "#key")
    public String getWithLongTtl(String key) {
        return redisTemplate.opsForValue().get(key);
    }

    // Control the expiration time in program logic; shown with a per-call
    // cache for brevity (in practice, keep one cache per TTL bucket)
    public String getWithDynamicTtl(String key, Duration ttl) {
        Cache<String, String> dynamicCache = Caffeine.newBuilder()
                .expireAfterWrite(ttl)
                .build();
        return dynamicCache.get(key, k -> redisTemplate.opsForValue().get(k));
    }

    // Refresh key entries on a schedule, before they expire
    @Scheduled(fixedRate = 60000) // every minute
    public void refreshCache() {
        // Get the list of keys that need refreshing
        List<String> keysToRefresh = getKeysToRefresh();
        for (String key : keysToRefresh) {
            // Reload the value (note: self-invocation bypasses the Spring cache
            // proxy; in practice, evict and reload through the CacheManager)
            get(key);
        }
    }

    private List<String> getKeysToRefresh() {
        // In a real application this might come from configuration or a Redis set
        return Arrays.asList("config:app", "config:features", "daily:stats");
    }

    // Two-level lookup: L1, then L2, then Redis
    public String getWithTwoLevelCache(String key) {
        // First query the level-1 cache (short TTL)
        String value = l1Cache.getIfPresent(key);
        if (value != null) {
            return value;
        }

        // Then query the level-2 cache (long TTL)
        value = l2Cache.getIfPresent(key);
        if (value != null) {
            // Promote hot data to the level-1 cache
            l1Cache.put(key, value);
            return value;
        }

        // Finally query Redis
        value = redisTemplate.opsForValue().get(key);
        if (value != null) {
            // Update both cache levels
            l1Cache.put(key, value);
            l2Cache.put(key, value);
        }
        return value;
    }
}

Pros and cons analysis

advantage

  • Simple to implement and easy to integrate into existing systems
  • Does not rely on special Redis server features
  • Works with any Redis version
  • Memory usage is controllable; expired entries are cleaned up automatically
  • Consistency and performance can be balanced by tuning the TTL

shortcoming

  • Cannot perceive data changes immediately; there is a consistency window
  • Too short a TTL reduces cache effectiveness
  • Too long a TTL increases the risk of stale data
  • A single TTL policy for all keys lacks flexibility
  • A "cache storm" may occur (many entries expiring at the same time, causing a traffic burst)

Applicable scenarios

  • Applications that can tolerate short-lived data inconsistency
  • Read-heavy, write-light access patterns
  • Data that updates at a relatively predictable frequency
  • Scenarios where serving slightly stale data has little impact
  • Simple applications, or as a supplement to other caching strategies

Best Practices

  • Set different TTLs based on data characteristics

    • Frequently changing data: short TTL
    • Relatively stable data: long TTL
  • Add a random factor: add a random offset to each TTL so entries do not all expire at the same time

  • Implement cache preheating mechanism: Actively load hotspot data when the application starts

  • Combine with background refresh: use scheduled tasks to proactively refresh key data before it expires

  • Monitor cache efficiency: Track hit rate, expiration rate and other indicators, and dynamically adjust TTL strategy
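The random-factor bullet above can be made concrete: instead of a fixed TTL, derive each entry's TTL from a base value plus a bounded random offset, so entries written in the same burst do not all expire together. The helper below is a small sketch; the 20% jitter fraction used in the example is an arbitrary illustrative choice.

```java
import java.time.Duration;
import java.util.concurrent.ThreadLocalRandom;

// TTL jitter: spread expirations over [base, base + base * jitterFraction]
// so entries cached at the same moment do not expire in the same instant.
final class TtlJitter {

    static Duration jittered(Duration baseTtl, double jitterFraction) {
        long baseMillis = baseTtl.toMillis();
        long maxOffset = (long) (baseMillis * jitterFraction);
        long offset = maxOffset == 0 ? 0
                : ThreadLocalRandom.current().nextLong(maxOffset + 1);
        return Duration.ofMillis(baseMillis + offset);
    }
}
```

With a local cache, per-entry jitter requires a cache that supports per-entry expiry (Caffeine's `expireAfter(Expiry)` rather than the cache-wide `expireAfterWrite`); with Redis itself, the jittered duration can be passed directly to each `EXPIRE` call.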

Method 4: Pub/Sub-based Cache Invalidation Notification

Technical Principles

Pub/Sub-based cache invalidation notifications use Redis's publish/subscribe capability to coordinate cache consistency across a distributed system.

When data changes, the application publishes an invalidation message to a specific channel through Redis. Every client subscribed to that channel clears the corresponding local cache entries when the message arrives.

This method implements active cache invalidation notifications without relying on the tracking function of Redis 6.0 or above.
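It helps to fix the shape of the invalidation message up front. The JSON below matches the fields used in this article's implementation (a `type` of `key`, `prefix`, or `all`; the key or prefix itself; a timestamp; and a source identifier for self-filtering). These field names are this article's convention, not any Redis standard.

```json
{"type": "prefix", "prefix": "user:", "timestamp": 1715000000000, "source": "app-instance-7c2f"}
```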

Implementation example

import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.connection.MessageListener;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.data.redis.listener.PatternTopic;
import org.springframework.data.redis.listener.RedisMessageListenerContainer;
import org.springframework.stereotype.Service;

import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

@Service
public class RedisPubSubCacheService {

    private final StringRedisTemplate redisTemplate;
    private final Map<String, String> localCache = new ConcurrentHashMap<>();

    // Unique identity of this application instance, used to ignore
    // notifications this instance published itself
    private final String instanceId = "app-instance-" + UUID.randomUUID();

    @Autowired
    public RedisPubSubCacheService(StringRedisTemplate redisTemplate) {
        this.redisTemplate = redisTemplate;

        // Subscribe to cache invalidation notifications
        subscribeToInvalidations();
    }

    private void subscribeToInvalidations() {
        // Use a dedicated Redis connection for the subscription
        RedisConnectionFactory connectionFactory = redisTemplate.getConnectionFactory();

        if (connectionFactory != null) {
            // Create the message listener container
            RedisMessageListenerContainer container = new RedisMessageListenerContainer();
            container.setConnectionFactory(connectionFactory);

            // Message listener that handles invalidation notifications
            MessageListener invalidationListener = (message, pattern) -> {
                String invalidationMessage = new String(message.getBody());
                handleCacheInvalidation(invalidationMessage);
            };

            // Subscribe to the invalidation channel
            container.addMessageListener(invalidationListener,
                    new PatternTopic("cache:invalidations"));
            container.afterPropertiesSet();
            container.start();
        }
    }

    private void handleCacheInvalidation(String invalidationMessage) {
        try {
            // Parse the invalidation message
            Map<String, Object> invalidation = new ObjectMapper().readValue(
                    invalidationMessage, new TypeReference<Map<String, Object>>() {});

            // Ignore messages published by this instance
            if (instanceId.equals(invalidation.get("source"))) {
                return;
            }

            String type = (String) invalidation.get("type");

            if ("key".equals(type)) {
                // Single-key invalidation
                String key = (String) invalidation.get("key");
                localCache.remove(key);
            } else if ("prefix".equals(type)) {
                // Prefix invalidation
                String prefix = (String) invalidation.get("prefix");
                localCache.keySet().removeIf(key -> key.startsWith(prefix));
            } else if ("all".equals(type)) {
                // Clear the entire cache
                localCache.clear();
            }
        } catch (Exception e) {
            // Handle parsing errors (log and continue)
        }
    }

    public String get(String key) {
        // First try the local cache
        String value = localCache.get(key);
        if (value != null) {
            return value;
        }

        // Local cache miss, fetch from Redis
        value = redisTemplate.opsForValue().get(key);
        if (value != null) {
            // Save it in the local cache
            localCache.put(key, value);
        }
        return value;
    }

    public void set(String key, String value) {
        // Update Redis and the local cache
        redisTemplate.opsForValue().set(key, value);
        localCache.put(key, value);

        // Publish a cache update notification
        publishInvalidation("key", key);
    }

    public void delete(String key) {
        // Remove from Redis and the local cache
        redisTemplate.delete(key);
        localCache.remove(key);

        // Publish a cache invalidation notification
        publishInvalidation("key", key);
    }

    public void deleteByPrefix(String prefix) {
        // Find and delete keys with the given prefix
        // (KEYS blocks the server; prefer SCAN in production)
        Set<String> keys = redisTemplate.keys(prefix + "*");
        if (keys != null && !keys.isEmpty()) {
            redisTemplate.delete(keys);
        }

        // Clear matching keys from the local cache
        localCache.keySet().removeIf(key -> key.startsWith(prefix));

        // Publish a prefix invalidation notification
        publishInvalidation("prefix", prefix);
    }

    public void clearAllCache() {
        // Clear the local cache
        localCache.clear();

        // Publish a global invalidation notification
        publishInvalidation("all", null);
    }

    private void publishInvalidation(String type, String key) {
        try {
            // Build the invalidation message
            Map<String, Object> invalidation = new HashMap<>();
            invalidation.put("type", type);
            if (key != null) {
                invalidation.put("key".equals(type) ? "key" : "prefix", key);
            }
            invalidation.put("timestamp", System.currentTimeMillis());

            // Source identifier so instances can filter out their own messages
            invalidation.put("source", instanceId);

            // Serialize and publish the message
            String message = new ObjectMapper().writeValueAsString(invalidation);
            redisTemplate.convertAndSend("cache:invalidations", message);
        } catch (Exception e) {
            // Handle serialization errors (log and continue)
        }
    }

    // Get cache statistics
    public Map<String, Object> getCacheStats() {
        Map<String, Object> stats = new HashMap<>();
        stats.put("cacheSize", localCache.size());
        return stats;
    }
}

Pros and cons analysis

advantage

  • Does not depend on version-specific Redis features
  • Can achieve near real-time cache consistency
  • Suited to coordinating multiple instances in a distributed system
  • Highly flexible: supports key-level, prefix-level and global invalidation
  • Scalable to handle complex cache dependencies

shortcoming

  • Messages may be lost, leading to inconsistent caches
  • Pub/Sub guarantees neither persistence nor ordered delivery
  • System complexity increases; extra message-handling logic is required
  • A poor implementation can cause a message storm
  • A network partition can cause notifications to fail

Applicable scenarios

  • Multi-instance distributed applications that need coordinated cache state
  • High cache-consistency requirements without relying on Redis 6.0+ tracking
  • Systems that need cross-service cache coordination
  • Data-change propagation in a microservice architecture
  • Applications requiring fine-grained control over cache invalidation

Best Practices

  • Avoid processing your own messages: filter by a source identifier
  • Make message handling idempotent: the same message may be received more than once
  • Set a message expiration time: ignore messages that are delayed too long
  • Batch intensive updates: merge multiple invalidation notices issued within a short period
  • Combine with a TTL strategy: as a safety net, set a maximum cache lifetime
  • Monitor the subscription connection: make sure invalidation notifications can actually be received
  • Consider message reliability: critical scenarios can add a message queue for more reliable delivery
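Two of the bullets above, idempotent handling and ignoring stale messages, can be combined into one small guard placed in front of the cache-eviction logic. The `InvalidationGuard` below is a hypothetical sketch: it assumes each publisher stamps messages with a monotonically increasing timestamp (if clocks can move backwards, a per-source sequence number is the safer key), and it deliberately drops any message not strictly newer than the last one from the same source.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Guard in front of invalidation handling: drop messages older than
// maxAgeMillis, and process messages from each source only when they are
// strictly newer than the last one processed from that source.
final class InvalidationGuard {

    private final long maxAgeMillis;
    private final Map<String, Long> lastSeenBySource = new ConcurrentHashMap<>();

    InvalidationGuard(long maxAgeMillis) {
        this.maxAgeMillis = maxAgeMillis;
    }

    /** Returns true if the message should be applied to the local cache. */
    boolean shouldProcess(String source, long timestampMillis, long nowMillis) {
        if (nowMillis - timestampMillis > maxAgeMillis) {
            return false;  // too stale: let the TTL safety net catch up instead
        }
        boolean[] fresh = {false};
        // Atomically keep the newest timestamp per source; duplicates and
        // out-of-order messages leave the stored value unchanged.
        lastSeenBySource.compute(source, (s, prev) -> {
            if (prev == null || timestampMillis > prev) {
                fresh[0] = true;
                return timestampMillis;
            }
            return prev;
        });
        return fresh[0];
    }
}
```

In the service above, a guard like this would wrap the body of `handleCacheInvalidation`, called with the message's `source` and `timestamp` fields before any cache entries are removed.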

Performance comparison and selection guide

Performance comparison of various caching strategies:

| Implementation method | Real-time | Complexity | Memory usage | Network overhead | Consistency guarantee | Redis version |
|---|---|---|---|---|---|---|
| Local in-memory cache | Low | Low | High | Low | Weak | Any |
| Server-assisted cache | High | High | Medium | Medium | Strong | 6.0+ |
| TTL expiry strategy | Medium | Low | Medium | Medium | Medium | Any |
| Pub/Sub notifications | High | Medium | Medium | High | Medium-strong | Any |

Selection Guide

Choose the appropriate caching strategy based on the following factors:

  • Data consistency requirements

    • Strict consistency requirements: choose server-assisted caching
    • Transient inconsistency acceptable: consider the TTL or Pub/Sub schemes
    • Low consistency requirements: a simple local cache is sufficient
  • Application Architecture

    • Monolithic application: a local cache or TTL scheme is simple and effective
    • Microservice architecture: Pub/Sub or server-assisted caching fits better
    • High scalability requirements: avoid a purely local cache
  • Redis version

    • Redis 6.0+: server-assisted caching is an option
    • Older Redis versions: use one of the other three approaches
  • Read and write ratio

    • Read-heavy, write-light: all four approaches apply
    • Write-heavy: use a pure local cache with caution; consider TTL or server-assisted schemes
  • Resource limitations

    • Memory-constrained: use TTL to bound the cache size
    • Network-constrained: prefer a local cache
    • Redis already heavily loaded: a local cache relieves the pressure

Summary

Redis client caching is a powerful tool to improve application performance, which can significantly reduce latency and improve throughput by reducing network requests and database access.

In practical applications, these strategies are often not mutually exclusive, but can be used in combination, using different caching strategies for different types of data to achieve optimal performance and data consistency balance.

No matter which caching strategy you choose, the key is to understand the data access patterns and consistency requirements of your application and design the most appropriate caching solution based on it.

By correctly applying client caching technology, system performance and user experience can be significantly improved while maintaining data consistency.
