In distributed systems, coordinating access to a shared resource across multiple service instances is a classic challenge. Traditional in-process locks (e.g. `sync.Mutex`) cannot work across processes, which is where distributed locks come in. This article introduces `redsync`, a Redis-based distributed lock library in the Go ecosystem, and discusses how to use it and how it works.
Distributed lock
First, why do we need distributed locks? When the programs we write contend for a shared resource, we need mutex locks to guarantee concurrency safety. But our services are rarely deployed as a single instance; they usually run as a multi-replica cluster. Whichever way the program runs, we need the right tool to solve the concurrency problem. For contention between multiple goroutines within a single process, we typically use `sync.Mutex`; for contention between multiple processes, we need a distributed lock, which brings us to today's topic: `redsync`.
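As a point of reference, here is a minimal single-process example of my own (not from the original text) that protects a shared counter with `sync.Mutex`:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var (
		mu      sync.Mutex
		counter int
		wg      sync.WaitGroup
	)

	// 100 goroutines increment the same counter concurrently
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock() // only one goroutine may enter at a time
			counter++
			mu.Unlock()
		}()
	}
	wg.Wait()
	fmt.Println(counter) // always 100
}
```

This works because all goroutines share one process's memory; once replicas run as separate processes, a `sync.Mutex` in one replica is invisible to the others.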
Why redsync
There are many open-source distributed lock implementations in Go, so why introduce and use `redsync`? In one sentence: `redsync` is the Go Redis distributed lock solution recommended on the official Redis site, and it follows the Redlock algorithm. It can create highly available locks across multiple independent Redis nodes, making it suitable for distributed scenarios that require strong consistency.
We can compare `sync.Mutex` and `redsync` to get an intuitive sense of the difference.
Characteristic | sync.Mutex | redsync
---|---|---
Scope of application | Multiple goroutines within a single process | Multiple processes (across machines)
Dependencies | None | Redis
Performance | High (no network overhead) | Lower (involves network communication)
Implementation complexity | Simple | More complex (must handle network, timeouts, etc.)
Typical scenario | Protecting shared in-memory resources | Protecting shared resources in a distributed system
The two suit different concurrency scenarios; choose based on your actual needs (single machine vs. distributed).
redsync quick start
Although `redsync`'s internal implementation is fairly complex, don't be intimidated: using it is very simple.
The sample code is as follows:
```go
package main

import (
	"context"

	"github.com/go-redsync/redsync/v4"                  // redsync library: Redis-based distributed locks
	"github.com/go-redsync/redsync/v4/redis/goredis/v9" // redsync's goredis connection pool
	goredislib "github.com/redis/go-redis/v9"           // go-redis library for communicating with the Redis server
)

func main() {
	// Create a Redis client
	client := goredislib.NewClient(&goredislib.Options{
		Addr:     "localhost:36379", // Redis server address
		Password: "nightwatch",
	})

	// Create a redsync connection pool from the go-redis client
	pool := goredis.NewPool(client)

	// Create a redsync instance to manage distributed locks
	rs := redsync.New(pool)

	// Create a mutex named "test-redsync"
	mutex := rs.NewMutex("test-redsync")

	// Create a context, generally used to control timeout and cancellation
	ctx := context.Background()

	// Acquire the lock; if acquisition fails (e.g. the lock is already
	// held by another process), an error is returned
	if err := mutex.LockContext(ctx); err != nil {
		panic(err) // panic if the lock cannot be acquired
	}

	// TODO: execute business logic
	// ...

	// Release the lock; if it fails (e.g. the lock expired or does not
	// belong to the current process), an error is returned
	if _, err := mutex.UnlockContext(ctx); err != nil {
		panic(err) // panic if unlocking fails
	}
}
```
Because `redsync` depends on Redis, we first need to create a Redis client object `client`. Calling `goredis.NewPool(client)` creates a `redsync` connection pool based on this client. With the pool in hand, we can call `redsync.New(pool)` to create a `redsync` instance, which is what we use to request distributed locks.
`redsync` provides the `NewMutex` method for creating a distributed lock. It takes a `name` parameter, the lock's name, which will be used as the key in Redis.
Once we have the lock object `mutex`, calling `mutex.LockContext(ctx)` acquires the lock, after which we can access the contended resource. When the resource access is complete, calling `mutex.UnlockContext(ctx)` releases the lock.
As you can see, `redsync` usage is very similar to `sync.Mutex`: the core is the two operations `Lock`/`Unlock`. Using `redsync` is little more than the extra step of connecting to Redis.
Configuration Options
Have you ever thought about this: with `sync.Mutex`, if a goroutine takes the lock and never releases it, no other goroutine can acquire it. In a distributed scenario, if a process acquires the Redis distributed lock and then crashes before releasing it, how can other processes ever acquire the lock? Do they have to wait forever?
This brings up a very important issue when using distributed locks: we must set an expiration time, so that even if the lock-holding process crashes, the lock is automatically released once it expires. Only then do other processes get a chance to acquire it.
In our example above, the lock has an expiration time because `redsync` sets a default value internally. Here is the source code of `redsync`'s `NewMutex` method:
```go
// NewMutex returns a new distributed mutex with given name.
func (r *Redsync) NewMutex(name string, options ...Option) *Mutex {
	m := &Mutex{
		name:   name,
		expiry: 8 * time.Second,
		tries:  32,
		delayFunc: func(tries int) time.Duration {
			return time.Duration(rand.Intn(maxRetryDelayMilliSec-minRetryDelayMilliSec)+minRetryDelayMilliSec) * time.Millisecond
		},
		genValueFunc:  genValue,
		driftFactor:   0.01,
		timeoutFactor: 0.05,
		quorum:        len(r.pools)/2 + 1,
		pools:         r.pools,
	}
	for _, o := range options {
		o.Apply(m)
	}
	if m.shuffle {
		randomPools(m.pools)
	}
	return m
}
```
Here the `expiry` field of the `Mutex` object is the distributed lock's expiration time, set to 8 seconds by default. The `tries` field is the number of attempts to acquire the lock: after 32 failed attempts, acquisition returns failure. Since failures are normal in distributed scenarios, 32 is not an exaggerated value. The `delayFunc` field controls the delay between retries after each failure. I won't go through the other fields one by one; most of them you will rarely need.
As the code suggests, these fields are set via the options pattern:

- `WithExpiry(time.Duration)`: sets the lock's automatic expiration time (recommended to be longer than the business execution time).
- `WithTries(int)`: sets the maximum number of attempts.
- `WithRetryDelay(time.Duration)`: sets the retry interval.
Example of usage:

```go
mutex := rs.NewMutex(
	"test-redsync",
	redsync.WithExpiry(30*time.Second),
	redsync.WithTries(3),
	redsync.WithRetryDelay(500*time.Millisecond),
)
```
Watchdog
We now know that a distributed lock needs an expiration time, but that creates another problem: if our business code hasn't finished executing when the lock expires and is automatically released, another process can successfully acquire the lock and access the contended resource too. Wouldn't the distributed lock lose its meaning?
This leads to another important issue with distributed locks: automatically renewing the lock. A code example will make it clear:
```go
package main

import (
	"context"
	"log/slog"
	"time"

	"github.com/go-redsync/redsync/v4"                  // redsync library: Redis-based distributed locks
	"github.com/go-redsync/redsync/v4/redis/goredis/v9" // redsync's goredis connection pool
	goredislib "github.com/redis/go-redis/v9"           // go-redis library for communicating with the Redis server
)

func main() {
	// Create a Redis client
	client := goredislib.NewClient(&goredislib.Options{
		Addr:     "localhost:36379", // Redis server address
		Password: "nightwatch",
	})

	// Create a redsync connection pool using the go-redis client
	pool := goredis.NewPool(client)

	// Create a redsync instance to manage distributed locks
	rs := redsync.New(pool)

	// Create a mutex named "test-redsync" with a 5s expiry
	mutex := rs.NewMutex("test-redsync", redsync.WithExpiry(5*time.Second))

	// Create a context, generally used to control timeout and cancellation
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	// Acquire the lock; if acquisition fails (e.g. the lock is already
	// held by another process), an error is returned
	if err := mutex.LockContext(ctx); err != nil {
		panic(err) // panic if the lock cannot be acquired
	}

	// Watchdog: automatic lock renewal
	stopCh := make(chan struct{})
	ticker := time.NewTicker(2 * time.Second) // renew every 2s
	defer ticker.Stop()
	go func() {
		for {
			select {
			case <-ticker.C:
				// Renew the lock, extending its expiration time
				if ok, err := mutex.ExtendContext(ctx); !ok || err != nil {
					slog.Error("Failed to extend mutex", "err", err, "status", ok)
				} else {
					slog.Info("Successfully extended mutex")
				}
			case <-stopCh:
				slog.Info("Exiting mutex watchdog")
				return
			}
		}
	}()

	// Execute business logic
	time.Sleep(6 * time.Second)

	// Notify the watchdog to stop renewing
	stopCh <- struct{}{}

	// Release the lock; if it fails (e.g. the lock expired or does not
	// belong to the current process), an error is returned
	if _, err := mutex.UnlockContext(ctx); err != nil {
		panic(err) // panic if unlocking fails
	}
}
```
This example extends the earlier sample code. The part to focus on is the following logic:
```go
// Watchdog: automatic lock renewal
stopCh := make(chan struct{})
ticker := time.NewTicker(2 * time.Second) // renew every 2s
defer ticker.Stop()
go func() {
	for {
		select {
		case <-ticker.C:
			// Renew the lock, extending its expiration time
			if ok, err := mutex.ExtendContext(ctx); !ok || err != nil {
				slog.Error("Failed to extend mutex", "err", err, "status", ok)
			} else {
				slog.Info("Successfully extended mutex")
			}
		case <-stopCh:
			slog.Info("Exiting mutex watchdog")
			return
		}
	}
}()
```
`redsync` provides the `ExtendContext(ctx)` method to extend the lock's expiration time. Suppose the distributed lock we requested expires after 5 seconds but the business code's execution time is unknown. After acquiring the lock, we can start a separate goroutine that periodically extends the lock's expiration. Once the business code finishes, the main goroutine sends a stop signal to the child goroutine via `stopCh <- struct{}{}`; the child goroutine receives it in the `<-stopCh` case and exits, stopping the automatic renewal.
By setting an expiration time on the distributed lock and pairing it with a child goroutine that renews it automatically, we ensure that a crashed lock holder does not block other processes from acquiring the lock, while the lock is still released only after the business logic completes. A program that implements this automatic renewal of a distributed lock is commonly called a "watchdog".
One extra note on the renewal amount and interval: generally, each renewal can extend the lock by its original expiration time, i.e. if the lock expires after 5 seconds, each renewal extends it by another 5 seconds; this is what `redsync` does internally. As for the renewal interval, it must be shorter than the 5-second expiration time, and is usually set to 1/3 to 1/2 of the lock's expiry.
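As a small sketch of that rule of thumb (my own illustration, reusing the `rs` instance from the quick-start example, not code from `redsync` itself), you might derive the watchdog interval from the expiry like this:

```go
expiry := 5 * time.Second
mutex := rs.NewMutex("test-redsync", redsync.WithExpiry(expiry))

// Tick at 1/3 of the expiry so a renewal attempt can fail once or twice
// (e.g. due to network jitter) before the lock actually expires.
ticker := time.NewTicker(expiry / 3)
defer ticker.Stop()
```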
How redsync works
The `redsync` usage explained above covers most scenarios in business development, so I won't introduce more of its features; with what you know now, you can consult the documentation and explore on your own when problems come up. Next I want to cover something more valuable: we'll implement a miniature Redis distributed lock ourselves to deepen your understanding of `redsync`.
How to implement a Redis distributed lock
To implement a minimal Redis-based distributed lock, we can define a struct `MiniRedisMutex` as the lock object:
```go
type MiniRedisMutex struct {
	name   string        // lock name, used as the key in Redis
	expiry time.Duration // lock expiration time
	conn   *redis.Client // Redis client
}
```
It contains only the necessary fields: `name` is the lock's name, `expiry` is the expiration time that every distributed lock must have, and `conn` holds the Redis client connection.
We can define a constructor `NewMutex` to create a distributed lock object:
```go
func NewMutex(name string, expiry time.Duration, conn *redis.Client) *MiniRedisMutex {
	return &MiniRedisMutex{name, expiry, conn}
}
```
Next, we need to implement the two functions of locking and unlocking.
The locking method `Lock` is implemented as follows:
```go
func (m *MiniRedisMutex) Lock(ctx context.Context, value string) (bool, error) {
	reply, err := m.conn.SetNX(ctx, m.name, value, m.expiry).Result()
	if err != nil {
		return false, err
	}
	return reply, nil
}
```
The `Lock` method receives two parameters: `ctx`, used to control cancellation, and `value`, which is used as the lock's value.
The internal logic of `Lock` is very simple: it directly calls Redis's `SetNX` command to set a key-value pair exclusively. The lock name `name` is used as the Redis key, the lock value `value` as the Redis value, and the expiration time is set to `expiry`. This is the locking principle of a distributed lock.
Here are two key points to pay attention to:
- Using the `SetNX` command: we use `SetNX` here rather than the ordinary `Set` command because the lock operation must be exclusive. The full name of `SetNX` is `SET if Not eXists`: when setting a key-value pair, if the key does not exist, it sets the value; if the key already exists, it does nothing. This exactly matches mutual exclusion and is the key to implementing a distributed mutex (a small demonstration follows this list).
- `value` uniqueness: although `SetNX` gives us mutual exclusion, the Redis value must still be unique. Keep reading and you'll see why.
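To make the exclusivity concrete, here is a small standalone demonstration of my own (not from the original text; it assumes the same local Redis as the earlier examples): the second `SetNX` on the same key fails because the key already exists.

```go
package main

import (
	"context"
	"fmt"
	"time"

	goredislib "github.com/redis/go-redis/v9"
)

func main() {
	client := goredislib.NewClient(&goredislib.Options{
		Addr:     "localhost:36379",
		Password: "nightwatch",
	})
	defer client.Close()
	ctx := context.Background()

	// First SetNX: the key does not exist yet, so it succeeds
	ok1, _ := client.SetNX(ctx, "demo-lock", "a", 10*time.Second).Result()
	// Second SetNX on the same key: it already exists, so it fails
	ok2, _ := client.SetNX(ctx, "demo-lock", "b", 10*time.Second).Result()

	fmt.Println(ok1, ok2) // true false
}
```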
The lock-release method `Unlock` is implemented as follows:
```go
// Lua script to release the lock, ensuring concurrency safety
var deleteScript = `
local val = redis.call("GET", KEYS[1])
if val == ARGV[1] then
	return redis.call("DEL", KEYS[1])
elseif val == false then
	return -1
else
	return 0
end
`

// Unlock releases the lock
func (m *MiniRedisMutex) Unlock(ctx context.Context, value string) (bool, error) {
	// Execute the Lua script; Redis runs it atomically, guaranteeing concurrency safety
	status, err := m.conn.Eval(ctx, deleteScript, []string{m.name}, value).Result()
	if err != nil {
		return false, err
	}
	if status == int64(-1) {
		return false, ErrLockAlreadyExpired
	}
	return status != int64(0), nil
}
```
In the release logic, we do not simply delete the specified Redis key-value pair; instead we call the `Eval` method to run a Lua script that releases the lock.
In this Lua script, we first get the key-value pair for the given key from Redis, then check whether its value equals the `value` argument passed to `Unlock`. If they are equal, we delete the key-value pair from Redis to release the lock; otherwise we do nothing.
The reason for checking `value` is that we must make sure the lock we release is held by the current process, not by some other process. And what identifies the lock as held by the current process? This is exactly why `value` must be unique: when each process acquires the lock, it generates a random `value` as the identifier of its own lock, and on release it uses this `value` to decide whether the lock is really its own. This prevents a lock from being released by another process while the holder is still executing its business logic.
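Here is a minimal sketch of generating such a random value (my own illustration; `redsync` uses a similar crypto/rand-based approach internally, though the exact encoding here is an assumption):

```go
package miniredislock

import (
	"crypto/rand"
	"encoding/base64"
)

// genValue returns a random string suitable for use as a lock's value.
func genValue() (string, error) {
	b := make([]byte, 16)
	// crypto/rand makes collisions between two processes practically impossible
	if _, err := rand.Read(b); err != nil {
		return "", err
	}
	return base64.StdEncoding.EncodeToString(b), nil
}
```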
Unfortunately, Redis does not provide a shortcut command for this release logic the way `SetNX` does for locking, so we have to run it as a Lua script, which Redis executes atomically, to guarantee concurrency safety.
At this point, we have explained the core functions of a miniature Redis distributed lock.
Here is the complete code of the `MiniRedisMutex` distributed lock:
```go
package miniredislock

import (
	"context"
	"errors"
	"time"

	"github.com/redis/go-redis/v9"
)

var ErrLockAlreadyExpired = errors.New("miniredislock: failed to unlock, lock was already expired")

// MiniRedisMutex is a miniature Redis distributed lock
type MiniRedisMutex struct {
	name   string        // lock name, used as the key in Redis
	expiry time.Duration // lock expiration time
	conn   *redis.Client // Redis client
}

// NewMutex creates a Redis distributed lock
func NewMutex(name string, expiry time.Duration, conn *redis.Client) *MiniRedisMutex {
	return &MiniRedisMutex{name, expiry, conn}
}

// Lock acquires the lock
func (m *MiniRedisMutex) Lock(ctx context.Context, value string) (bool, error) {
	reply, err := m.conn.SetNX(ctx, m.name, value, m.expiry).Result()
	if err != nil {
		return false, err
	}
	return reply, nil
}

// Lua script to release the lock, ensuring concurrency safety
var deleteScript = `
local val = redis.call("GET", KEYS[1])
if val == ARGV[1] then
	return redis.call("DEL", KEYS[1])
elseif val == false then
	return -1
else
	return 0
end
`

// Unlock releases the lock
func (m *MiniRedisMutex) Unlock(ctx context.Context, value string) (bool, error) {
	// Execute the Lua script; Redis runs it atomically, guaranteeing concurrency safety
	status, err := m.conn.Eval(ctx, deleteScript, []string{m.name}, value).Result()
	if err != nil {
		return false, err
	}
	if status == int64(-1) {
		return false, ErrLockAlreadyExpired
	}
	return status != int64(0), nil
}
```
In fact, the main logic of this code is extracted from the `redsync` source. At its core, `redsync` works the same way; it just adds a lot of code for reliability and edge cases, while the essential locking and unlocking logic is identical.
Using the mini distributed lock
Let's write a sample program to demonstrate how to use this mini distributed lock:
```go
package main

import (
	"context"
	"fmt"
	"time"

	goredislib "github.com/redis/go-redis/v9"

	"github.com/jianghushinian/blog-go-example/redsync/miniredislock"
)

func main() {
	// Create a Redis client
	client := goredislib.NewClient(&goredislib.Options{
		Addr:     "localhost:36379", // Redis server address
		Password: "nightwatch",
	})
	defer client.Close()

	// Create a mutex named "test-miniredislock"
	mutex := miniredislock.NewMutex("test-miniredislock", 5*time.Second, client)

	ctx := context.Background()

	// The value of the mutex should be a random value
	value := "random-string"

	// Acquire the lock
	_, err := mutex.Lock(ctx, value)
	if err != nil {
		panic(err)
	}

	// Execute business logic
	fmt.Println("do something...")
	time.Sleep(3 * time.Second)

	// Release the lock we hold
	_, err = mutex.Unlock(ctx, value)
	if err != nil {
		panic(err)
	}
}
```
I won't explain this example line by line; I believe you'll understand it at a glance. I also hope you run the code on your own machine and exercise the distributed lock yourself to deepen your understanding.
Finally, some homework: try implementing the lock-renewal method `Extend` yourself.
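If you want to check your work afterwards, here is one possible sketch of my own, added to the `miniredislock` package above and modeled on the same GET-then-act Lua pattern as `Unlock` (`redsync` uses a similar `PEXPIRE`-based script internally):

```go
// touchScript extends the lock's TTL only if we still hold it.
var touchScript = `
if redis.call("GET", KEYS[1]) == ARGV[1] then
	return redis.call("PEXPIRE", KEYS[1], ARGV[2])
else
	return 0
end
`

// Extend renews the lock's expiration time if the lock is still held by us.
func (m *MiniRedisMutex) Extend(ctx context.Context, value string) (bool, error) {
	expiryMs := int(m.expiry / time.Millisecond)
	status, err := m.conn.Eval(ctx, touchScript, []string{m.name}, value, expiryMs).Result()
	if err != nil {
		return false, err
	}
	return status != int64(0), nil
}
```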
Summary
Distributed locks guarantee concurrency-safe access to contended resources in distributed systems. As the most popular Redis distributed lock solution in Go, `redsync` is well worth learning and using.
`redsync` is very simple to use; its locking and unlocking operations closely resemble `sync.Mutex`, so there is little learning cost. However, so that other processes still get a chance to acquire the lock if the holder crashes, we need to set an expiration time and implement the watchdog function.
That's all for this article on using distributed locks to solve concurrency problems in Go. For more on Go distributed locks and concurrency, see my previous articles or the related articles below. I hope you'll continue to support me!