A Redis Distributed Lock Pattern enhanced by Client Side Caching.
```go
package main

import (
	"context"
	"github.com/redis/rueidis"
	"github.com/redis/rueidis/rueidislock"
)

func main() {
	locker, err := rueidislock.NewLocker(rueidislock.LockerOption{
		ClientOption: rueidis.ClientOption{InitAddress: []string{"node1:6379", "node2:6380", "node3:6379"}},
		KeyMajority:  2, // please make sure that all your `Locker`s share the same KeyMajority
	})
	if err != nil {
		panic(err)
	}
	defer locker.Close()

	// acquire the lock "my_lock"
	ctx, cancel, err := locker.WithContext(context.Background(), "my_lock")
	if err != nil {
		panic(err)
	}
	// "my_lock" is acquired. use the ctx as normal.
	doSomething(ctx)
	// invoke cancel() to release the lock.
	cancel()
}
```
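The example leaves `doSomething` undefined. Below is a minimal sketch of what such a function might look like, assuming the protected work should stop as soon as the lock's `ctx` is canceled; the loop bound and the work itself are placeholders, not part of the library.

```go
// doSomething is a sketch of the placeholder used in the example above: it
// checks the lock's ctx between units of work so it stops promptly if the
// KeyMajority is lost (for example, Redis is down or the keys were deleted).
func doSomething(ctx context.Context) {
	for i := 0; i < 100; i++ {
		if ctx.Err() != nil {
			// the lock is no longer held; stop touching the protected resource
			return
		}
		// ... one unit of the protected work goes here ...
	}
}
```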
- The returned `ctx` will be canceled automatically and immediately once the `KeyMajority` is not held anymore, for example:
  - Redis is down.
  - The related keys have been deleted by another program or an administrator.
- A waiting `Locker.WithContext` will try acquiring the lock again automatically and immediately once it has been released by someone, even from another program (see the sketch after this list).
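The sketch below illustrates that retry behavior with two goroutines competing for the same lock name: the second `WithContext` call blocks until the first holder invokes `cancel()`, and is then woken up by the client-side caching notification and acquires the lock. The addresses and `KeyMajority` mirror the example above; adjust them to your deployment.

```go
package main

import (
	"context"
	"fmt"
	"sync"

	"github.com/redis/rueidis"
	"github.com/redis/rueidis/rueidislock"
)

func main() {
	locker, err := rueidislock.NewLocker(rueidislock.LockerOption{
		ClientOption: rueidis.ClientOption{InitAddress: []string{"node1:6379", "node2:6380", "node3:6379"}},
		KeyMajority:  2,
	})
	if err != nil {
		panic(err)
	}
	defer locker.Close()

	var wg sync.WaitGroup
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			// the second caller blocks here until the first caller releases the lock
			ctx, cancel, err := locker.WithContext(context.Background(), "my_lock")
			if err != nil {
				panic(err)
			}
			fmt.Println("goroutine", id, "holds the lock")
			_ = ctx // the protected work would go here
			cancel() // releasing the lock lets the waiting caller acquire it immediately
		}(i)
	}
	wg.Wait()
}
```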
When `locker.WithContext` is invoked, it will:

- Try acquiring 3 keys (given that `KeyMajority` is 2), which are `rueidislock:0:my_lock`, `rueidislock:1:my_lock` and `rueidislock:2:my_lock`, by sending the Redis command `SET NX PXAT`, or `SET NX PX` if `FallbackSETPX` is set (see the configuration sketch after this list).
- If the `KeyMajority` is satisfied within the `KeyValidity` duration, the invocation is successful and a `ctx` is returned as the lock.
- If the invocation is not successful, it will wait for a client-side caching notification before retrying.
- If the invocation is successful, the `Locker` will extend the `ctx` validity periodically and also watch client-side caching notifications in order to cancel the `ctx` if the `KeyMajority` is not held anymore.
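The options referenced above can be combined in a single `LockerOption`. The sketch below assumes, based on the wording of this section, that `KeyValidity` is a `time.Duration` and `FallbackSETPX` is a boolean; the values shown are illustrative, not library defaults.

```go
package main

import (
	"time"

	"github.com/redis/rueidis"
	"github.com/redis/rueidis/rueidislock"
)

func main() {
	locker, err := rueidislock.NewLocker(rueidislock.LockerOption{
		ClientOption:  rueidis.ClientOption{InitAddress: []string{"node1:6379", "node2:6380", "node3:6379"}},
		KeyMajority:   2,               // acquires rueidislock:0, :1 and :2 of the lock name; 2 must succeed
		KeyValidity:   5 * time.Second, // illustrative validity window for each acquired key
		FallbackSETPX: true,            // send SET NX PX instead of SET NX PXAT, e.g. for servers without PXAT support
	})
	if err != nil {
		panic(err)
	}
	defer locker.Close()
	_ = locker
}
```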