Verifying ZooKeeper Distributed Transaction Locks: Code and Principles

First, a quick review of how ZooKeeper implements a distributed lock:

When a client wants to lock a given method, it creates a unique ephemeral sequential node under the znode designated for that method. Deciding who holds the lock is simple: the client whose node has the smallest sequence number owns it. Releasing the lock just means deleting that ephemeral node. Because the node is ephemeral, this also avoids the deadlock that would otherwise occur when a client crashes without releasing the lock.
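The "smallest sequence number wins" rule can be sketched as a pure function. This is a hypothetical helper for illustration only; a real client would fetch the child names via `getChildren` on the lock znode, where ZooKeeper appends a zero-padded 10-digit sequence number to each EPHEMERAL_SEQUENTIAL node it creates:

```java
import java.util.List;

public class LockOrder {
	// Parse the 10-digit sequence suffix that ZooKeeper appends
	// to EPHEMERAL_SEQUENTIAL znodes (e.g. "lock-0000000003").
	static int sequenceOf(String node) {
		return Integer.parseInt(node.substring(node.length() - 10));
	}

	// True if ourNode holds the lock, i.e. has the smallest sequence number
	// among all children of the lock znode.
	static boolean holdsLock(List<String> children, String ourNode) {
		int min = Integer.MAX_VALUE;
		for (String child : children) {
			min = Math.min(min, sequenceOf(child));
		}
		return sequenceOf(ourNode) == min;
	}

	public static void main(String[] args) {
		List<String> children = List.of("lock-0000000002", "lock-0000000000", "lock-0000000001");
		System.out.println(holdsLock(children, "lock-0000000000")); // true
		System.out.println(holdsLock(children, "lock-0000000002")); // false
	}
}
```

In the real recipe, a client that is not the smallest does not poll; it sets a watch on the node immediately before its own and waits to be notified, which avoids a herd effect.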

Now let's verify with code that ephemeral sequential nodes really are created.

package com.jv.zookeeper.curator;

import java.util.concurrent.TimeUnit;

import org.apache.curator.RetryPolicy;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class TestInterProcessMutex {
	public static void main(String[] args) throws Exception {
		RetryPolicy retryPolicy = new ExponentialBackoffRetry(1000, 3);
		CuratorFramework client = CuratorFrameworkFactory.newClient("192.168.245.101:2181", retryPolicy);
		client.start();
		InterProcessMutex lock = new InterProcessMutex(client, "/mylock");
		//lock.acquire(timeout, unit) tries to acquire the lock within the timeout (here 1000 ms) and returns true on success
		if ( lock.acquire(1000, TimeUnit.MILLISECONDS) ) 
		{
		    try 
		    {
		       System.out.println("Acquired lock, executing");
		       //simulate a long-running task so we can observe the ephemeral znodes under /mylock in ZK
		       Thread.sleep(10000000);
		    }
		    finally
		    {
		        lock.release();
		        System.out.println("Released lock");
		    }
		}
	}
}
package com.jv.zookeeper.curator;

import java.util.concurrent.TimeUnit;

import org.apache.curator.RetryPolicy;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class TestInterProcessMutex2 {
	public static void main(String[] args) throws Exception {
		RetryPolicy retryPolicy = new ExponentialBackoffRetry(1000, 3);
		CuratorFramework client = CuratorFrameworkFactory.newClient("192.168.245.101:2181", retryPolicy);
		client.start();
		InterProcessMutex lock = new InterProcessMutex(client, "/mylock");
		//set the timeout long enough, then inspect the znodes in ZK to verify that the lock really is implemented with ephemeral sequential znodes
		if ( lock.acquire(1000000, TimeUnit.MILLISECONDS) ) 
		{
		    try 
		    {
		       System.out.println("Acquired lock, executing");
		       Thread.sleep(10000000);
		    }
		    finally
		    {
		        lock.release();
		        System.out.println("Released lock");
		    }
		}
	}
}

To run the code, add the following dependencies to pom.xml:

<dependency>
    <groupId>org.apache.zookeeper</groupId>
    <artifactId>zookeeper</artifactId>
    <version>3.4.6</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.curator/curator-recipes -->
<dependency>
    <groupId>org.apache.curator</groupId>
    <artifactId>curator-recipes</artifactId>
    <version>4.0.0</version>
</dependency>
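One caveat worth checking against Curator's ZooKeeper-compatibility documentation: Curator 4.x compiles against ZooKeeper 3.5.x, so when targeting a 3.4.x server as above, the usual advice is to exclude Curator's transitive zookeeper dependency so that the explicit 3.4.6 artifact is the one on the classpath:

```xml
<dependency>
    <groupId>org.apache.curator</groupId>
    <artifactId>curator-recipes</artifactId>
    <version>4.0.0</version>
    <exclusions>
        <exclusion>
            <groupId>org.apache.zookeeper</groupId>
            <artifactId>zookeeper</artifactId>
        </exclusion>
    </exclusions>
</dependency>
```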

 

Run TestInterProcessMutex first, then run TestInterProcessMutex2.

Log in to the ZooKeeper host with Xshell or SecureCRT and change into the bin directory under the ZooKeeper installation, then run:

./zkCli.sh

ls /mylock

You can see that two ephemeral sequential nodes were indeed created, and that the client with the smaller sequence number acquired the lock.

Curator's wrapper really is convenient to use.

 

One more thing: Curator also makes leader election easy to implement:

LeaderSelectorListener listener = new LeaderSelectorListenerAdapter()
{
    public void takeLeadership(CuratorFramework client) throws Exception
    {
        // this method runs when you become leader; do all of your leader work here
        // to relinquish leadership, simply return from this method
    }
};

LeaderSelector selector = new LeaderSelector(client, path, listener);
selector.autoRequeue();  // not required, but this is behavior that you will probably expect
selector.start();

Under the hood it wraps InterProcessMutex: once the LeaderSelector is started it tries to acquire the lock, and as soon as it succeeds it invokes listener.takeLeadership.
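That acquire-callback-release-requeue loop can be simulated locally. The sketch below is purely illustrative (it uses an in-process ReentrantLock where LeaderSelector would use an InterProcessMutex backed by ZooKeeper), but it shows the shape of the pattern: each candidate blocks on the mutex, runs its leadership body while holding it, and gives up leadership by releasing:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.locks.ReentrantLock;

public class MiniSelector {
	// Runs two "candidates"; returns the order in which they led.
	static String run() throws Exception {
		StringBuilder log = new StringBuilder();
		ReentrantLock mutex = new ReentrantLock();   // stands in for InterProcessMutex
		CountDownLatch done = new CountDownLatch(2);
		for (String name : new String[] {"A", "B"}) {
			new Thread(() -> {
				mutex.lock();                        // block until we become leader
				try {
					log.append(name);                // the takeLeadership body
				} finally {
					mutex.unlock();                  // returning relinquishes leadership
					done.countDown();
				}
			}).start();
		}
		done.await();
		return log.toString();
	}

	public static void main(String[] args) throws Exception {
		System.out.println(run());                   // each candidate led exactly once
	}
}
```

With autoRequeue(), a real LeaderSelector re-enters this queue after takeLeadership returns, so leadership rotates among live candidates.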

This style of election is still fairly simplistic, though; it takes no account of resources or data. ZooKeeper's own internal leader election has to consider transaction IDs: only the server holding the largest transaction ID (zxid) may become leader, and the followers then sync from the leader whatever transactions are newer than their own, bringing the data into consistency.
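The ordering ZooKeeper's internal election uses can be sketched as a comparison function. This is a simplified illustration of the vote comparison in FastLeaderElection, not the real implementation: a vote wins if it carries a newer election epoch, or the same epoch and a larger zxid, or an exact tie broken by the larger server id:

```java
public class VoteOrder {
	// True if the proposed vote (newEpoch, newZxid, newId) beats the
	// current vote (curEpoch, curZxid, curId).
	static boolean wins(long newEpoch, long newZxid, long newId,
	                    long curEpoch, long curZxid, long curId) {
		if (newEpoch != curEpoch) return newEpoch > curEpoch; // newer epoch first
		if (newZxid != curZxid) return newZxid > curZxid;     // then larger zxid
		return newId > curId;                                 // server id breaks ties
	}

	public static void main(String[] args) {
		// a server with the larger zxid beats one with a smaller zxid...
		System.out.println(wins(1, 100, 1, 1, 99, 3)); // true
		// ...and server id only matters on an exact tie
		System.out.println(wins(1, 100, 1, 1, 100, 3)); // false
	}
}
```

This is why the server that has seen the most transactions ends up as leader: electing it means no committed transaction is lost, and followers only ever catch up, never roll forward past the leader.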

In a real application, weigh what your distributed components actually need before deciding whether ZK's simple election strategy is sufficient.
