Author: 独笔孤行 @ TaoCloud
DRBD (Distributed Replicated Block Device) is a software-based, shared-nothing storage replication solution that mirrors the contents of block devices between servers. It can be loosely thought of as RAID over the network.
DRBD's core functionality is implemented in the Linux kernel, right next to the system's I/O stack: it sits below the filesystem, closer to the kernel and the I/O path than the filesystem itself.
| Node | Hostname | IP address | Disks | OS |
|---|---|---|---|---|
| Node 1 | node1 | 172.16.201.53 | sda, sdb | CentOS 7.6 |
| Node 2 | node2 | 172.16.201.54 | sda, sdb | CentOS 7.6 |
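drbdmanage contacts peer nodes by hostname over SSH when adding them to the cluster, so before starting, make sure the two hosts can resolve each other's names and that root SSH from node1 to node2 works (the "Host key verification failed" message in step 2 below is what you see when it does not). A minimal sketch, assuming no DNS is available; the /etc/hosts entries and key-exchange commands are not part of the original steps:

```
# Run on both nodes: map the hostnames to the addresses in the table above
cat >> /etc/hosts <<'EOF'
172.16.201.53 node1
172.16.201.54 node2
EOF

# Run on node1: drbdmanage add-node assumes passwordless root SSH to the peer
ssh-keygen -t rsa
ssh-copy-id root@node2
```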
Disable the firewall and SELinux
```
# Run on both nodes
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
```
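If you would rather keep firewalld running than disable it, an alternative sketch is to open only the ports DRBD needs. The drbdmanage control port 6999 appears later in the join command; the 7000-7999 range for per-resource ports is an assumption you should verify against your configuration:

```
# Run on both nodes (alternative to disabling firewalld)
firewall-cmd --permanent --add-port=6999/tcp       # drbdmanage control volume port
firewall-cmd --permanent --add-port=7000-7999/tcp  # assumed range for DRBD resource ports
firewall-cmd --reload
```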
Configure the EPEL repository
```
# Run on both nodes
yum install epel-release
```
If your yum repositories carry the complete set of DRBD packages, you can install them directly with yum; if yum cannot find some of the packages, build them from source instead. Choose one of the two methods below.
1. Install via yum

```
yum install drbd drbd-bash-completion drbd-udev drbd-utils kmod-drbd
```
2. Build and install from source

Installing via yum may fail to locate the kmod-drbd package; in that case, build the packages from source as follows.
2.1 Prepare the build environment
```
yum update
yum -y install gcc gcc-c++ make automake autoconf help2man libxslt libxslt-devel flex rpm-build kernel-devel pygobject2 pygobject2-devel
reboot
```
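The reboot matters because `yum update` may pull in a newer kernel, and the kmod build in step 2.3 compiles against the running kernel. A quick sanity check after rebooting; these are standard commands, not part of the original steps:

```
# The running kernel and the installed kernel-devel headers should match
uname -r
rpm -q kernel-devel
```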
2.2 Download the source packages
Obtain the source package URLs from the official download page at https://www.linbit.com/en/drbd-community/drbd-download/ and download them.
```
wget https://www.linbit.com/downloads/drbd/9.0/drbd-9.0.21-1.tar.gz
wget https://www.linbit.com/downloads/drbd/utils/drbd-utils-9.13.0.tar.gz
wget https://www.linbit.com/downloads/drbdmanage/drbdmanage-0.99.18.tar.gz
mkdir -p rpmbuild/{BUILD,BUILDROOT,RPMS,SOURCES,SPECS,SRPMS}
mkdir DRBD9
```
2.3 Build the rpm packages
```
tar xvf drbd-9.0.21-1.tar.gz
cd drbd-9.0.21-1
make kmp-rpm
cp /root/rpmbuild/RPMS/x86_64/*.rpm /root/DRBD9/
```
```
tar xvf drbdmanage-0.99.18.tar.gz
cd drbdmanage-0.99.18
make rpm
cp dist/drbdmanage-0.99.18*.rpm /root/DRBD9/
```
2.4 Install DRBD
```
# Run on both nodes
cd /root/DRBD9
yum install drbd-kernel-debuginfo-9.0.21-1.x86_64.rpm drbdmanage-0.99.18-1.noarch.rpm drbdmanage-0.99.18-1.src.rpm kmod-drbd-9.0.21_3.10.0_1160.6.1-1.x86_64.rpm
```
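Before initializing the cluster, it is worth confirming on both nodes that the kernel module loads and the userland tools are in place. A hedged check: these are standard DRBD commands, but this verification step is not in the original walkthrough:

```
# Run on both nodes
modprobe drbd
cat /proc/drbd        # should report a 9.0.x version
drbdadm --version
```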
1. Create the VG on the primary node
```
# Run on node1
pvcreate /dev/sdb1
vgcreate drbdpool /dev/sdb1
```
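`drbdpool` is the volume group name that drbdmanage's default LVM storage plugin looks for, which is why the VG must be created with exactly this name. A quick verification, assumed here and not part of the original steps:

```
# Run on node1: confirm the PV and VG exist
pvs /dev/sdb1
vgs drbdpool
```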
2. Initialize the DRBD cluster and add a node
```
# Run on node1
[root@node1 ~]# drbdmanage init 172.16.201.53
You are going to initialize a new drbdmanage cluster.
CAUTION! Note that:
  * Any previous drbdmanage cluster information may be removed
  * Any remaining resources managed by a previous drbdmanage installation
    that still exist on this system will no longer be managed by drbdmanage
Confirm:
  yes/no: yes
Empty drbdmanage control volume initialized on '/dev/drbd0'.
Empty drbdmanage control volume initialized on '/dev/drbd1'.
Waiting for server: .
Operation completed successfully

# Add node2
[root@node1 ~]# drbdmanage add-node node2 172.16.201.54
Operation completed successfully
Operation completed successfully
Host key verification failed.
Give leader time to contact the new node
Operation completed successfully
Operation completed successfully

Join command for node node2:
drbdmanage join -p 6999 172.16.201.54 1 node1 172.16.201.53 0 G3F1h/pAcGwV1LnlxhFE
```
Record the last line of the output, "drbdmanage join -p 6999 172.16.201.54 1 node1 172.16.201.53 0 G3F1h/pAcGwV1LnlxhFE", and run it on node2 to join the cluster.
3. Create the VG on the secondary node
```
# Run on node2
pvcreate /dev/sdb
vgcreate drbdpool /dev/sdb
```
4. Join the secondary node to the cluster
```
# Run on node2
[root@node2 ~]# drbdmanage join -p 6999 172.16.201.54 1 node1 172.16.201.53 0 G3F1h/pAcGwV1LnlxhFE
You are going to join an existing drbdmanage cluster.
CAUTION! Note that:
  * Any previous drbdmanage cluster information may be removed
  * Any remaining resources managed by a previous drbdmanage installation
    that still exist on this system will no longer be managed by drbdmanage
Confirm:
  yes/no: yes
Waiting for server to start up (can take up to 1 min)
Operation completed successfully
```
5. Check the cluster status
```
# Run on node1; the following output indicates a healthy state
[root@node1 ~]# drbdadm status
.drbdctrl role:Primary
  volume:0 disk:UpToDate
  volume:1 disk:UpToDate
  node2 role:Secondary
    volume:0 peer-disk:UpToDate
    volume:1 peer-disk:UpToDate
```
6. Create a resource
```
# Run on node1
# Create resource test01
[root@node1 ~]# drbdmanage add-resource test01
Operation completed successfully
[root@node1 ~]# drbdmanage list-resources
+----------------+
| Name   | State |
|----------------|
| test01 | ok    |
+----------------+
```
7. Create a volume
```
# Run on node1
# Create a 5 GB volume on resource test01
[root@node1 ~]# drbdmanage add-volume test01 5GB
Operation completed successfully
[root@node1 ~]# drbdmanage list-volumes
+-----------------------------------------------------------------------------+
| Name   | Vol ID |     Size | Minor | | State |
|-----------------------------------------------------------------------------|
| test01 |      0 | 4.66 GiB |   100 | |    ok |
+-----------------------------------------------------------------------------+
```

Note that the requested 5 GB is decimal, so it is reported as 4.66 GiB in binary units (5 x 10^9 / 2^30 is roughly 4.66).
8. Deploy the resource
The trailing "2" is the number of nodes to deploy the resource to.
```
# Run on node1
[root@node1 ~]# drbdmanage deploy-resource test01 2
Operation completed successfully

# Right after deployment, the peer disk shows Inconsistent while the initial sync runs
[root@node1 ~]# drbdadm status
.drbdctrl role:Primary
  volume:0 disk:UpToDate
  volume:1 disk:UpToDate
  node2 role:Secondary
    volume:0 peer-disk:UpToDate
    volume:1 peer-disk:UpToDate
test01 role:Secondary
  disk:UpToDate
  node2 role:Secondary
    replication:SyncSource peer-disk:Inconsistent done:5.70

# After the sync completes, the status looks like this
[root@node1 ~]# drbdadm status
.drbdctrl role:Primary
  volume:0 disk:UpToDate
  volume:1 disk:UpToDate
  node2 role:Secondary
    volume:0 peer-disk:UpToDate
    volume:1 peer-disk:UpToDate
test01 role:Secondary
  disk:UpToDate
  node2 role:Secondary
    peer-disk:UpToDate
```
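To follow the initial synchronization until the peer disk leaves the Inconsistent state, you can simply poll the resource status; a small sketch (the refresh interval is arbitrary):

```
# Run on node1: refresh the resource status every 2 seconds; the done: percentage
# climbs until node2 reports peer-disk:UpToDate
watch -n 2 drbdadm status test01
```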
9. Once the DRBD device is configured, create a filesystem and mount it
```
# Run on node1
# The number in /dev/drbd100 is the Minor value from `drbdmanage list-volumes`
[root@node1 ~]# mkfs.xfs /dev/drbd100
meta-data=/dev/drbd100           isize=512    agcount=4, agsize=305176 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=1220703, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@node1 ~]# mount /dev/drbd100 /mnt/
[root@node1 ~]# echo "Hello World" > /mnt/test.txt
[root@node1 ~]# ll /mnt/
total 4
-rw-r--r-- 1 root root 12 Nov 26 15:43 test.txt
[root@node1 ~]# cat /mnt/test.txt
Hello World
```
10. To mount the DRBD device on node 2, proceed as follows:
```
# On node1: unmount /mnt and demote to Secondary
[root@node1 ~]# umount /mnt/
[root@node1 ~]# drbdadm secondary test01

# On node2: promote to Primary and mount
[root@node2 ~]# drbdadm primary test01
[root@node2 ~]# mount /dev/drbd100 /mnt/
[root@node2 ~]# df -hT
Filesystem              Type      Size  Used Avail Use% Mounted on
devtmpfs                devtmpfs  3.9G     0  3.9G   0% /dev
tmpfs                   tmpfs     3.9G     0  3.9G   0% /dev/shm
tmpfs                   tmpfs     3.9G  8.9M  3.9G   1% /run
tmpfs                   tmpfs     3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/mapper/centos-root xfs        35G  1.5G   34G   5% /
/dev/sda1               xfs      1014M  190M  825M  19% /boot
tmpfs                   tmpfs     783M     0  783M   0% /run/user/0
/dev/drbd100            xfs       4.7G   33M  4.7G   1% /mnt
[root@node2 ~]# ls -l /mnt/
total 4
-rw-r--r-- 1 root root 12 Nov 26 15:43 test.txt
[root@node2 ~]# cat /mnt/test.txt
Hello World
```
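Switching back to node1 is the same sequence in reverse; a sketch built from the commands above:

```
# On node2: unmount and demote
umount /mnt/
drbdadm secondary test01

# On node1: promote and mount again
drbdadm primary test01
mount /dev/drbd100 /mnt/
```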