etcd provides linearizable consistency, and on top of linearizability it offers a leader-election recipe.

This post discusses the implementation details of etcd's election mechanism, and how to use etcd elections to avoid split-brain.
If a node only needs to know whether it is the leader, a single Campaign call suffices: when Campaign returns, the caller has become the leader.

In a distributed system, however, there is no such thing as "at the same moment". Suppose the TCP response to a Campaign call is retried repeatedly due to packet loss and reaches the campaigner a minute late; by the time Campaign "returns", the result may already be stale.
The key points are therefore: the heartbeat interval must be shorter than the value's TTL, heartbeats must not keep failing for longer than the TTL, and after several consecutive heartbeat failures the node should re-enter the election. While the value's TTL has not expired, no other node can become leader, and the current leader gives up leadership on heartbeat failure. This guards against a delayed Campaign response, and more generally against any situation where a campaigner's value has expired but the campaigner still believes it is the leader. In other words, it avoids split-brain.
See https://github.com/etcd-io/et... 8ee1dd9e23bce4d9770816edf5816b13767ac51d
waitDeletes (wait until every value under the prefix older than one's own has been deleted, i.e. until one's own key is first in the queue)
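A toy version of that waiting logic can be written in a few lines. This is an illustration only, with invented names (`waitOlderDeleted`, `deletions`); the real `waitDeletes` in etcd watches the newest key whose create revision is below its own and re-checks on every delete event.

```go
package main

import (
	"fmt"
	"sync"
)

// waitOlderDeleted blocks until no key with a create revision strictly
// smaller than myRev remains under the prefix. live maps create
// revisions of currently present keys; deletions delivers the revision
// of each deleted key.
func waitOlderDeleted(myRev int64, live map[int64]bool, deletions <-chan int64) {
	older := func() bool {
		for rev := range live {
			if rev < myRev {
				return true
			}
		}
		return false
	}
	for older() {
		rev := <-deletions // block until some key is deleted
		delete(live, rev)
	}
}

func main() {
	live := map[int64]bool{10000: true, 10002: true} // two campaigners
	deletions := make(chan int64)

	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done()
		waitOlderDeleted(10002, live, deletions) // campaigner at 10002 waits
		fmt.Println("i'm leader")
	}()

	deletions <- 10000 // the older campaigner resigns / its lease expires
	wg.Wait()
}
```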
```go
type Election struct {
	session *Session

	keyPrefix string

	leaderKey     string
	leaderRev     int64
	leaderSession *Session
	hdr           *pb.ResponseHeader
}
```
election.go:59
```go
// Campaign puts a value as eligible for the election on the prefix
// key.
// Multiple sessions can participate in the election for the
// same prefix, but only one can be the leader at a time.
//
// If the context is 'context.TODO()/context.Background()', the Campaign
// will continue to be blocked for other keys to be deleted, unless server
// returns a non-recoverable error (e.g. ErrCompacted).
// Otherwise, until the context is not cancelled or timed-out, Campaign will
// continue to be blocked until it becomes the leader.
func (e *Election) Campaign(ctx context.Context, val string) error {
	s := e.session
	client := e.session.Client()

	k := fmt.Sprintf("%s%x", e.keyPrefix, s.Lease())
	// Put a key/value pair; normally there is no conflict.
	// The txn only fails when the same session puts the key again,
	// so that the key's create revision is no longer 0.
	txn := client.Txn(ctx).If(v3.Compare(v3.CreateRevision(k), "=", 0))
	txn = txn.Then(v3.OpPut(k, val, v3.WithLease(s.Lease())))
	txn = txn.Else(v3.OpGet(k))
	resp, err := txn.Commit()
	if err != nil {
		return err
	}
	e.leaderKey, e.leaderRev, e.leaderSession = k, resp.Header.Revision, s
	// The put found that the key's create revision was not 0.
	if !resp.Succeeded {
		kv := resp.Responses[0].GetResponseRange().Kvs[0]
		// Update leaderRev.
		e.leaderRev = kv.CreateRevision
		if string(kv.Value) != val {
			// Values differ: update the value.
			if err = e.Proclaim(ctx, val); err != nil {
				// On failure, resign by deleting our own key,
				// restart the election, and return the error.
				// What if resigning fails as well? The heartbeat
				// timeout covers that case.
				e.Resign(ctx)
				return err
			}
		}
	}

	_, err = waitDeletes(ctx, client, e.keyPrefix, e.leaderRev-1)
	if err != nil {
		// clean up in case of context cancel
		select {
		case <-ctx.Done():
			// An error occurred: delete our own key so we are not
			// mistaken for the leader.
			e.Resign(client.Ctx())
		default:
			e.leaderSession = nil
		}
		return err
	}
	// We are now the leader.
	e.hdr = resp.Header
	return nil
}
```
```
@startuml
autonumber
participant "actor1" as actor1
participant "actor2" as actor2
participant "etcd" as etcd
activate actor1
activate actor2
activate etcd
actor1 -> etcd: put "prefix/lease_id_a" value1
actor2 -> etcd: put "prefix/lease_id_b" value2
etcd -> actor2: key_revision:1 total_revision: 10000
etcd -> actor1: key_revision:1 total_revision: 10002
actor1 -> etcd: wait for "prefix" delete with revision 10001
actor2 -> actor2: i'm leader
note right: the queue is now [actor2_lease_id_b, actor1_lease_id_a]; actor2 is at the head, so actor2 becomes leader and actor1 must wait
deactivate actor2
actor2 -> etcd: put "prefix/lease_id_c" value2
etcd -> actor1: 10000 delete
actor1 -> actor1: i'm leader
note right: the queue is now [actor1_lease_id_a, actor2_lease_id_c]; actor1 is at the head, so actor1 becomes leader and actor2 must wait
etcd -> actor2: key_revision:1 total_revision: 10003
actor2 -> etcd: wait for "prefix" delete with revision 10002
@enduml
```
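The queue behavior in the diagram can be modeled with a toy in-memory store. This illustrates only the ordering rule (every put gets a monotonically increasing create revision, and the leader is the live key with the smallest one); it is not the real etcd clientv3 API, and all names here are invented.

```go
package main

import (
	"fmt"
	"sort"
)

// toyStore models the one property the election relies on: puts get
// increasing create revisions, and the leader is the oldest live key.
type toyStore struct {
	rev  int64
	keys map[string]int64 // key -> create revision
}

func newToyStore() *toyStore {
	return &toyStore{rev: 9999, keys: map[string]int64{}}
}

// campaign puts the campaigner's key and returns its create revision.
func (s *toyStore) campaign(key string) int64 {
	s.rev++
	s.keys[key] = s.rev
	return s.rev
}

// resign deletes the campaigner's key (leaving the election).
func (s *toyStore) resign(key string) { delete(s.keys, key) }

// leader returns the key with the smallest create revision, i.e. the
// head of the queue that waitDeletes lets through.
func (s *toyStore) leader() string {
	type kv struct {
		k string
		r int64
	}
	var all []kv
	for k, r := range s.keys {
		all = append(all, kv{k, r})
	}
	if len(all) == 0 {
		return ""
	}
	sort.Slice(all, func(i, j int) bool { return all[i].r < all[j].r })
	return all[0].k
}

func main() {
	s := newToyStore()
	s.campaign("prefix/lease_id_b") // actor2 campaigns first
	s.campaign("prefix/lease_id_a") // actor1 campaigns second
	fmt.Println(s.leader())         // prefix/lease_id_b

	// actor2 resigns and re-campaigns; actor1 moves to the head.
	s.resign("prefix/lease_id_b")
	s.campaign("prefix/lease_id_c")
	fmt.Println(s.leader()) // prefix/lease_id_a
}
```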
You can create a campaigner object that takes part in the election and maintains its own heartbeat. As analyzed above, the system will never contain two nodes that both believe they are the leader at the same time. The election process is encapsulated; the user only needs to implement the following events: