A hot key is one of a small number of keys in the cache or database that receive a disproportionate share of accesses; these keys often carry most of a system's traffic.
According to the 80/20 rule, roughly 20% of the data typically carries 80% of the traffic, and in extreme cases a single key can attract more than 50% of a system's total traffic.
When these hot keys are not handled properly, the consequences can include:
- CPU usage spikes on cache nodes
- Network bandwidth contention
- Increased cache response latency
- Cache penetration that sharply increases database load
- In extreme cases, a full system avalanche
This article takes an in-depth look at two mainstream hot-key cache optimization strategies for Spring Boot, aimed at keeping system performance stable in the face of hot keys.
1. Tiered Caching Strategy
1.1 Principle Analysis
The tiered caching strategy uses a multi-level cache architecture, typically a local cache (L1) plus a distributed cache (L2). When a hot key is accessed, the system first queries the local in-memory cache, avoiding network overhead; only on a local miss does it fall back to the distributed cache.
Open-source implementations include JetCache and J2Cache.
This strategy effectively reduces the pressure hot keys place on the distributed cache while greatly improving access speed for hot data.
Core workflow of tiered caching:
- A request checks the local cache (e.g., Caffeine) first
- On a local hit, the data is returned directly (nanosecond-level latency)
- On a local miss, the distributed cache (e.g., Redis) is queried
- On a distributed-cache hit, the value is returned and backfilled into the local cache
- On a distributed-cache miss, the data source is queried and both cache levels are updated
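The workflow above can be sketched independently of Spring, with two maps standing in for the L1 and L2 layers. The class and method names here (`TieredLookup`, `get`) are illustrative assumptions, not part of the article's implementation:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Minimal two-level lookup sketch: l1 = local cache, l2 stands in for Redis.
public class TieredLookup {
    private final Map<String, Object> l1 = new ConcurrentHashMap<>();
    private final Map<String, Object> l2 = new ConcurrentHashMap<>();

    public Object get(String key, Supplier<Object> loader) {
        Object v = l1.get(key);     // 1. check the local cache first
        if (v != null) return v;
        v = l2.get(key);            // 2. local miss -> check the distributed cache
        if (v != null) {
            l1.put(key, v);         // 3. backfill the local cache
            return v;
        }
        v = loader.get();           // 4. both miss -> load from the data source
        if (v != null) {
            l1.put(key, v);         // 5. populate both levels
            l2.put(key, v);
        }
        return v;
    }
}
```

In production the L2 map would be a Redis client and the L1 map a bounded cache such as Caffeine; this sketch only shows the lookup-and-backfill order.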
1.2 Implementation
Step 1: Add the required dependencies
```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-cache</artifactId>
</dependency>
<dependency>
    <groupId>com.github.ben-manes.caffeine</groupId>
    <artifactId>caffeine</artifactId>
</dependency>
```
Step 2: Configure the tiered cache manager
```java
import java.time.Duration;
import java.util.concurrent.TimeUnit;

import com.github.benmanes.caffeine.cache.Caffeine;
import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.cache.caffeine.CaffeineCacheManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.cache.RedisCacheConfiguration;
import org.springframework.data.redis.cache.RedisCacheManager;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.serializer.GenericJackson2JsonRedisSerializer;
import org.springframework.data.redis.serializer.RedisSerializationContext;
import org.springframework.data.redis.serializer.StringRedisSerializer;

@Configuration
@EnableCaching
public class LayeredCacheConfig {

    @Bean
    public CacheManager cacheManager(RedisConnectionFactory redisConnectionFactory) {
        return new LayeredCacheManager(
                createLocalCacheManager(),
                createRedisCacheManager(redisConnectionFactory));
    }

    private CacheManager createLocalCacheManager() {
        CaffeineCacheManager caffeineCacheManager = new CaffeineCacheManager();
        // Local cache configuration - tuned for hot keys
        caffeineCacheManager.setCaffeine(Caffeine.newBuilder()
                .initialCapacity(100)                   // initial size
                .maximumSize(1000)                      // max number of cached entries
                .expireAfterWrite(1, TimeUnit.MINUTES)  // expire 1 minute after write
                .recordStats());                        // enable hit/miss statistics
        return caffeineCacheManager;
    }

    private CacheManager createRedisCacheManager(RedisConnectionFactory redisConnectionFactory) {
        RedisCacheConfiguration config = RedisCacheConfiguration.defaultCacheConfig()
                .entryTtl(Duration.ofMinutes(10))       // Redis entries expire after 10 minutes
                .serializeKeysWith(RedisSerializationContext.SerializationPair
                        .fromSerializer(new StringRedisSerializer()))
                .serializeValuesWith(RedisSerializationContext.SerializationPair
                        .fromSerializer(new GenericJackson2JsonRedisSerializer()));
        return RedisCacheManager.builder(redisConnectionFactory)
                .cacheDefaults(config)
                .build();
    }
}
```
Step 3: Implement the custom tiered cache manager
```java
import java.util.Collection;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;

import org.springframework.cache.Cache;
import org.springframework.cache.CacheManager;

public class LayeredCacheManager implements CacheManager {

    private final CacheManager localCacheManager;   // local cache (L1)
    private final CacheManager remoteCacheManager;  // distributed cache (L2)
    private final Map<String, Cache> cacheMap = new ConcurrentHashMap<>();

    public LayeredCacheManager(CacheManager localCacheManager, CacheManager remoteCacheManager) {
        this.localCacheManager = localCacheManager;
        this.remoteCacheManager = remoteCacheManager;
    }

    @Override
    public Cache getCache(String name) {
        return cacheMap.computeIfAbsent(name, this::createLayeredCache);
    }

    @Override
    public Collection<String> getCacheNames() {
        Set<String> names = new LinkedHashSet<>();
        names.addAll(localCacheManager.getCacheNames());
        names.addAll(remoteCacheManager.getCacheNames());
        return names;
    }

    private Cache createLayeredCache(String name) {
        Cache localCache = localCacheManager.getCache(name);
        Cache remoteCache = remoteCacheManager.getCache(name);
        return new LayeredCache(name, localCache, remoteCache);
    }

    // Two-level cache implementation
    static class LayeredCache implements Cache {

        private final String name;
        private final Cache localCache;
        private final Cache remoteCache;

        public LayeredCache(String name, Cache localCache, Cache remoteCache) {
            this.name = name;
            this.localCache = localCache;
            this.remoteCache = remoteCache;
        }

        @Override
        public String getName() {
            return name;
        }

        @Override
        public Object getNativeCache() {
            return this;
        }

        @Override
        public ValueWrapper get(Object key) {
            // Check the local cache first
            ValueWrapper localValue = localCache.get(key);
            if (localValue != null) {
                return localValue;
            }
            // Local miss - check the remote cache
            ValueWrapper remoteValue = remoteCache.get(key);
            if (remoteValue != null) {
                // Backfill the local cache
                localCache.put(key, remoteValue.get());
                return remoteValue;
            }
            return null;
        }

        @Override
        public <T> T get(Object key, Class<T> type) {
            // Check the local cache first
            T localValue = localCache.get(key, type);
            if (localValue != null) {
                return localValue;
            }
            // Local miss - check the remote cache
            T remoteValue = remoteCache.get(key, type);
            if (remoteValue != null) {
                // Backfill the local cache
                localCache.put(key, remoteValue);
                return remoteValue;
            }
            return null;
        }

        @SuppressWarnings("unchecked")
        @Override
        public <T> T get(Object key, Callable<T> valueLoader) {
            // Check the local cache first
            ValueWrapper localValue = localCache.get(key);
            if (localValue != null) {
                return (T) localValue.get();
            }
            // Local miss - check the remote cache
            ValueWrapper remoteValue = remoteCache.get(key);
            if (remoteValue != null) {
                // Backfill the local cache
                T value = (T) remoteValue.get();
                localCache.put(key, value);
                return value;
            }
            // Remote miss too - invoke the value loader
            try {
                T value = valueLoader.call();
                if (value != null) {
                    // Update both local and remote caches
                    put(key, value);
                }
                return value;
            } catch (Exception e) {
                throw new ValueRetrievalException(key, valueLoader, e);
            }
        }

        @Override
        public void put(Object key, Object value) {
            localCache.put(key, value);
            remoteCache.put(key, value);
        }

        @Override
        public void evict(Object key) {
            localCache.evict(key);
            remoteCache.evict(key);
        }

        @Override
        public void clear() {
            localCache.clear();
            remoteCache.clear();
        }
    }
}
```
Step 4: Use the tiered cache in a service
```java
import java.util.List;

import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.CachePut;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class ProductService {

    private final ProductRepository productRepository;

    public ProductService(ProductRepository productRepository) {
        this.productRepository = productRepository;
    }

    // Use the custom cache manager for hot product data
    @Cacheable(value = "products", key = "#id", cacheManager = "cacheManager")
    public Product getProductById(Long id) {
        // Simulate database access latency
        try {
            Thread.sleep(200);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return productRepository.findById(id)
                .orElseThrow(() -> new ProductNotFoundException("Product not found: " + id));
    }

    // Serve the hot-product list
    @Cacheable(value = "hotProducts", key = "'top' + #limit", cacheManager = "cacheManager")
    public List<Product> getHotProducts(int limit) {
        // Fetch hot products via a complex query (repository method name assumed)
        return productRepository.findTopProducts(limit);
    }

    // Update product info - refresh the cache as well
    @CachePut(value = "products", key = "#product.id", cacheManager = "cacheManager")
    public Product updateProduct(Product product) {
        return productRepository.save(product);
    }

    // Delete the product - evict the cache entry as well
    @CacheEvict(value = "products", key = "#id", cacheManager = "cacheManager")
    public void deleteProduct(Long id) {
        productRepository.deleteById(id);
    }
}
```
1.3 Pros and cons
Advantages
- Dramatically lower hot-key access latency; local cache hits can be served in nanoseconds
- Significantly reduced load on the distributed cache, raising overall system throughput
- Less network I/O, saving bandwidth
- Even if the distributed cache is temporarily unavailable, the local cache can keep serving, improving resilience
Disadvantages
- Higher system complexity: two cache layers must be managed
- Data-consistency challenges: local caches on different nodes can be out of sync
- The local cache consumes application-server memory
- Best suited to read-heavy workloads; the benefit is limited when writes are frequent
Applicable scenarios
- Frequently accessed, relatively stable hot data (e.g., product details, user configuration)
- Read-heavy, write-light business scenarios
- Latency-sensitive critical paths
- Systems whose distributed cache is under heavy load
2. Cache sharding strategy
2.1 Principle Analysis
The cache sharding strategy targets the single-point pressure that one hot key can put on a cache node. By splitting a hot key into multiple physical sub-keys, it spreads the access load evenly across multiple cache nodes or instances, improving the system's ability to absorb hot-key traffic without changing business logic.
Its core principle:
- Map one logical hot key to multiple physical sub-keys
- On reads, pick a sub-key at random (or by some rule) to query
- On writes, update all sub-keys synchronously to keep the copies consistent
- Spread the access pressure so no single cache node becomes a bottleneck
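These rules can be sketched in a few lines; the class and method names below (`ShardKeys`, `readKey`, `writeKeys`) are illustrative assumptions, not the article's API:

```java
import java.util.Random;

// Sketch of shard-key fan-out for one logical hot key.
public class ShardKeys {
    private static final int SHARDS = 10;
    private static final Random RANDOM = new Random();

    // A logical key maps to SHARDS physical sub-keys: "key:0" .. "key:9".
    static String shardKey(String key, int index) {
        return key + ":" + index;
    }

    // Reads pick one sub-key at random, spreading load across nodes.
    static String readKey(String key) {
        return shardKey(key, RANDOM.nextInt(SHARDS));
    }

    // Writes must touch every sub-key to keep all copies consistent.
    static String[] writeKeys(String key) {
        String[] keys = new String[SHARDS];
        for (int i = 0; i < SHARDS; i++) {
            keys[i] = shardKey(key, i);
        }
        return keys;
    }
}
```

In a Redis cluster, the distinct sub-keys hash to different slots, which is what moves the load off a single node.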
2.2 Implementation
Step 1: Create a cache shard manager
```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.concurrent.TimeUnit;

import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.stereotype.Component;

@Component
public class ShardedCacheManager {

    private final RedisTemplate<String, Object> redisTemplate;
    private final Random random = new Random();

    // Number of shards per hot key
    private static final int DEFAULT_SHARDS = 10;
    // Shard TTLs vary slightly so they do not all expire at once
    private static final int BASE_TTL_MINUTES = 30;
    private static final int TTL_VARIATION_MINUTES = 10;

    public ShardedCacheManager(RedisTemplate<String, Object> redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    /** Read a value from a randomly chosen shard. */
    @SuppressWarnings("unchecked")
    public <T> T getValue(String key, Class<T> type) {
        String shardKey = generateShardKey(key, random.nextInt(DEFAULT_SHARDS));
        return (T) redisTemplate.opsForValue().get(shardKey);
    }

    /** Write a value to all shards. */
    public void setValue(String key, Object value) {
        for (int i = 0; i < DEFAULT_SHARDS; i++) {
            String shardKey = generateShardKey(key, i);
            // Slightly different TTL per shard to avoid simultaneous expiry
            int ttlMinutes = BASE_TTL_MINUTES + random.nextInt(TTL_VARIATION_MINUTES);
            redisTemplate.opsForValue().set(shardKey, value, ttlMinutes, TimeUnit.MINUTES);
        }
    }

    /** Delete all shards of a key. */
    public void deleteValue(String key) {
        List<String> keys = new ArrayList<>(DEFAULT_SHARDS);
        for (int i = 0; i < DEFAULT_SHARDS; i++) {
            keys.add(generateShardKey(key, i));
        }
        redisTemplate.delete(keys);
    }

    /** Build the physical shard key. */
    private String generateShardKey(String key, int shardIndex) {
        return String.format("%s:%d", key, shardIndex);
    }
}
```
Step 2: Create the hot-key detection and handling component
```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.stereotype.Component;

@Component
public class HotKeyDetector {

    private final RedisTemplate<String, Object> redisTemplate;
    private final ShardedCacheManager shardedCacheManager;

    // Redis hash that holds per-key access counters
    private static final String HOT_KEY_COUNTER = "hotkey:counter";
    // Hot-key threshold - accesses per minute
    private static final int HOT_KEY_THRESHOLD = 1000;

    // Currently detected hot keys
    private final Set<String> detectedHotKeys = ConcurrentHashMap.newKeySet();

    public HotKeyDetector(RedisTemplate<String, Object> redisTemplate,
                          ShardedCacheManager shardedCacheManager) {
        this.redisTemplate = redisTemplate;
        this.shardedCacheManager = shardedCacheManager;
        // Start the periodic hot-key detection task
        scheduleHotKeyDetection();
    }

    /** Record one access to a key. */
    public void recordKeyAccess(String key) {
        redisTemplate.opsForHash().increment(HOT_KEY_COUNTER, key, 1);
    }

    /** Check whether a key is currently considered hot. */
    public boolean isHotKey(String key) {
        return detectedHotKeys.contains(key);
    }

    /** Fetch a value using the appropriate caching strategy. */
    @SuppressWarnings("unchecked")
    public <T> T getValue(String key, Class<T> type, Supplier<T> dataLoader) {
        if (isHotKey(key)) {
            // Hot key: use the sharded cache
            T value = shardedCacheManager.getValue(key, type);
            if (value != null) {
                return value;
            }
            // Not found in the shards - load from the data source and populate them
            value = dataLoader.get();
            if (value != null) {
                shardedCacheManager.setValue(key, value);
            }
            return value;
        } else {
            // Ordinary key: use the regular cache path
            T value = (T) redisTemplate.opsForValue().get(key);
            if (value != null) {
                return value;
            }
            // Cache miss: record the access and load from the data source
            recordKeyAccess(key);
            value = dataLoader.get();
            if (value != null) {
                redisTemplate.opsForValue().set(key, value, 30, TimeUnit.MINUTES);
            }
            return value;
        }
    }

    /** Periodic hot-key detection task. */
    private void scheduleHotKeyDetection() {
        ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();
        executor.scheduleAtFixedRate(() -> {
            try {
                // Read the access counts for all keys
                Map<Object, Object> counts = redisTemplate.opsForHash().entries(HOT_KEY_COUNTER);
                // Build the next hot-key set from scratch
                Set<String> newHotKeys = new HashSet<>();
                for (Map.Entry<Object, Object> entry : counts.entrySet()) {
                    String key = (String) entry.getKey();
                    int count = ((Number) entry.getValue()).intValue();
                    if (count > HOT_KEY_THRESHOLD) {
                        newHotKeys.add(key);
                        // Pre-warm the shard cache for newly discovered hot keys
                        if (!detectedHotKeys.contains(key)) {
                            preloadHotKeyToShards(key);
                        }
                    }
                }
                // Swap in the new hot-key set
                detectedHotKeys.clear();
                detectedHotKeys.addAll(newHotKeys);
                // Reset the counters for the next round
                redisTemplate.delete(HOT_KEY_COUNTER);
            } catch (Exception e) {
                e.printStackTrace(); // replace with proper logging in production
            }
        }, 1, 1, TimeUnit.MINUTES);
    }

    /** Copy a hot key's current value into all shards. */
    private void preloadHotKeyToShards(String key) {
        // Read the value from the regular cache
        Object value = redisTemplate.opsForValue().get(key);
        if (value != null) {
            // Copy it to all shards
            shardedCacheManager.setValue(key, value);
        }
    }
}
```
Step 3: Integrate hot-key handling into the service
```java
import java.util.List;

import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.stereotype.Service;

@Service
public class EnhancedProductService {

    private final ProductRepository productRepository;
    private final HotKeyDetector hotKeyDetector;
    // Also injected so the update path can clear caches directly (assumed wiring)
    private final ShardedCacheManager shardedCacheManager;
    private final RedisTemplate<String, Object> redisTemplate;

    public EnhancedProductService(ProductRepository productRepository,
                                  HotKeyDetector hotKeyDetector,
                                  ShardedCacheManager shardedCacheManager,
                                  RedisTemplate<String, Object> redisTemplate) {
        this.productRepository = productRepository;
        this.hotKeyDetector = hotKeyDetector;
        this.shardedCacheManager = shardedCacheManager;
        this.redisTemplate = redisTemplate;
    }

    /** Fetch product info with automatic hot-key handling. */
    public Product getProductById(Long id) {
        String cacheKey = "product:" + id;
        return hotKeyDetector.getValue(cacheKey, Product.class, () ->
                // Load product info from the database
                productRepository.findById(id)
                        .orElseThrow(() -> new ProductNotFoundException("Product not found: " + id)));
    }

    /** Fetch the hot-product list with automatic hot-key handling. */
    @SuppressWarnings("unchecked")
    public List<Product> getHotProducts(int limit) {
        String cacheKey = "products:hot:" + limit;
        return (List<Product>) hotKeyDetector.getValue(cacheKey, List.class, () ->
                // Load hot products from the database (repository method name assumed)
                productRepository.findTopProducts(limit));
    }

    /** Update product info and clean up the cache. */
    public Product updateProduct(Product product) {
        Product savedProduct = productRepository.save(product);
        String cacheKey = "product:" + savedProduct.getId();
        if (hotKeyDetector.isHotKey(cacheKey)) {
            // Hot key: clear all shard copies
            shardedCacheManager.deleteValue(cacheKey);
        } else {
            // Ordinary key: delete the single cache entry
            redisTemplate.delete(cacheKey);
        }
        return savedProduct;
    }
}
```
2.3 Pros and cons
Advantages
- Effectively disperses the access pressure on a single hot key
- Not tied to a specific cache architecture; usable with multiple cache systems
- Transparent to clients; caller code needs no changes
- Hot keys can be detected and their handling adjusted dynamically
- Staggered expiration times help avoid cache avalanches
Disadvantages
- Higher write overhead: multiple shards must be updated synchronously
- More complex implementation: hot-key detection and sharding logic must be maintained
- Extra memory footprint (multiple copies of one value are stored)
- A brief data-inconsistency window may be introduced
Applicable scenarios
- Scenarios where a few keys are accessed far more often than the rest
- Read-heavy, write-light data (product details, event information, etc.)
- Foreseeable traffic surges, such as flash sales and trending products
- Redis-cluster deployments facing a single-key access hot spot
Comparison of the two strategies

| Characteristic | Tiered caching strategy | Cache sharding strategy |
|---|---|---|
| Main problem addressed | Hot-key access latency | Hot-key single-point pressure |
| Implementation complexity | Medium | High |
| Extra storage overhead | Medium | High |
| Write-performance impact | Medium | Large |
| Consistency guarantee | Eventual consistency | Eventual consistency |
| Changes to existing code | Medium | Large |
| Suitable hot-spot type | General hot keys | Extreme hot keys |
Summary
In practice, choose the appropriate strategy based on business characteristics and system architecture, or even combine strategies to build a more robust caching layer.
Whichever strategy you choose, pair it with best practices such as monitoring, cache pre-warming, and graceful degradation to get real value from caching and keep the system performant and stable under hot-key traffic.
Finally, cache optimization is a process of continuous improvement: as the business grows and traffic patterns change, cache policies must be adjusted and tuned to keep the system highly performant and highly available.
That concludes this detailed look at the two mainstream strategies for hot-key cache optimization in Spring Boot. For more on hot-key cache optimization in Spring Boot, see my other related articles!