Using GlusterFS as a KVM Storage Backend

1. Test environment

CentOS 6.4 x86-64

gluster-3.4

qemu-1.5.2

Machines:

192.168.1.100: glusterfs + kvm

192.168.1.101-103: glusterfs

2. Deploying the GlusterFS storage cluster

First deploy the GlusterFS cluster; for a deployment tutorial, see http://my.oschina.net/kisops/blog/151073

 

Once the GlusterFS cluster is up, create a volume to hold the VM images:

gluster volume create vm-images stripe 2 replica 2 192.168.1.{100,101,102,103}:/data/vm-images
gluster volume start vm-images
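Note that the compact brick list in the create command relies on bash brace expansion: the shell, not the gluster CLI, generates the four host:path pairs. A quick way to preview what the CLI actually receives:

```shell
# Bash brace expansion turns the compact brick spec into four
# host:path brick arguments before gluster ever sees them.
echo 192.168.1.{100,101,102,103}:/data/vm-images
```

This prints the four bricks, one per node: 192.168.1.100:/data/vm-images through 192.168.1.103:/data/vm-images.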

3. Installing QEMU on the virtualization host

The stock QEMU shipped with CentOS 6 does not support GlusterFS, so a newer QEMU has to be built from source.

 

Install glusterfs-devel before building QEMU:

rpm -ivh http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.0/CentOS/epel-6Server/x86_64/glusterfs-devel-3.4.0-8.el6.x86_64.rpm

Install the build dependencies:

yum install zlib-devel glib2-devel -y

Build QEMU, passing the --enable-glusterfs option:

wget http://wiki.qemu-project.org/download/qemu-1.5.2.tar.bz2
tar jxvf qemu-1.5.2.tar.bz2
cd qemu-1.5.2
./configure --enable-glusterfs   # enable GlusterFS support here
make; make install

4. Usage

With the environment in place, create a VM disk with qemu-img:

/usr/local/bin/qemu-img create -f qcow2 gluster://192.168.1.100/vm-images/disk1 10G

Create a virtual machine:

qemu-system-x86_64 --enable-kvm -m 1024 -drive file=gluster://192.168.1.100/vm-images/disk1 -vnc :15 -cdrom /data/CentOS-6.4-i386-minimal.iso
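For longer command lines it can help to compose the -drive argument separately. The sketch below uses the IP and volume from this article, plus the if=virtio,cache=none options that appear in the benchmark section later in this post:

```shell
# Sketch: build the -drive argument for a Gluster-backed disk.
# IMG uses this article's IP and volume; if=virtio,cache=none are
# the performance options used in the benchmarks below.
IMG="gluster://192.168.1.100/vm-images/disk1"
DRIVE="file=${IMG},if=virtio,cache=none"
echo "qemu-system-x86_64 --enable-kvm -m 1024 -drive ${DRIVE} -vnc :15"
```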

You can now connect with VNC and install the operating system.

5. Postscript

QEMU supports several URI formats for accessing GlusterFS images:

gluster://1.2.3.4/testvol/a.img
gluster+tcp://1.2.3.4/testvol/a.img
gluster+tcp://1.2.3.4:24007/testvol/dir/a.img
gluster+tcp://[1:2:3:4:5:6:7:8]/testvol/dir/a.img
gluster+tcp://[1:2:3:4:5:6:7:8]:24007/testvol/dir/a.img
gluster+tcp://server.domain.com:24007/testvol/dir/a.img
gluster+unix:///testvol/dir/a.img?socket=/tmp/glusterd.socket
gluster+rdma://1.2.3.4:24007/testvol/a.img
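These URIs all follow the pattern gluster[+transport]://host[:port]/volname/path. A small (hypothetical) helper to assemble one from its parts:

```shell
# Hypothetical helper: assemble a gluster+tcp image URI from its parts.
# Arguments: host, port, volume name, image path within the volume.
gluster_uri() {
  echo "gluster+tcp://$1:$2/$3/$4"
}

gluster_uri server.domain.com 24007 testvol dir/a.img
```

This prints gluster+tcp://server.domain.com:24007/testvol/dir/a.img, matching the sixth format above.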

 

Testing shows that QEMU's GlusterFS access is highly available: with gluster://1.2.3.4/testvol/a.img, the VM keeps running even if 1.2.3.4 goes down.

We also found during testing that GlusterFS does not work well with XFS bricks: the reported used space can be incorrect. The workaround is:

gluster volume set <volname> cluster.stripe-coalesce enable

6. Appendix: upstream performance benchmarks

The results show that accessing GlusterFS through the native API is nearly twice as fast as going through FUSE, approaching local-disk speed.

The following numbers from FIO benchmark are to show the performance advantage of using QEMU’s GlusterFS block driver instead of the usual FUSE mount while accessing the VM image.

Test setup

Host: dual-core x86_64 system running Fedora 17 (kernel 3.5.6-1.fc17.x86_64)
Guest: Fedora 17 image, 4-way SMP, 2GB RAM, using virtio and cache=none QEMU options

QEMU options

FUSE mount: qemu-system-x86_64 --enable-kvm --nographic -smp 4 -m 2048 -drive file=/mnt/F17,if=virtio,cache=none (/mnt is the GlusterFS FUSE mount point)
GlusterFS block driver in QEMU (FUSE bypass): qemu-system-x86_64 --enable-kvm --nographic -smp 4 -m 2048 -drive file=gluster://bharata/test/F17,if=virtio,cache=none
Base (VM image accessed directly from the brick): qemu-system-x86_64 --enable-kvm --nographic -smp 4 -m 2048 -drive file=/test/F17,if=virtio,cache=none (/test is the brick directory)

FIO load files

Sequential read, direct IO; read 4 files with aio at different depths
[global]
ioengine=libaio

direct=1
rw=read
bs=128k
size=512m
directory=/data1
[file1]
iodepth=4
[file2]
iodepth=32
[file3]
iodepth=8
[file4]
iodepth=16
Sequential write, direct IO; write 4 files with aio at different depths
[global]
ioengine=libaio

direct=1
rw=write
bs=128k
size=512m
directory=/data1
[file1]
iodepth=4
[file2]
iodepth=32
[file3]
iodepth=8
[file4]
iodepth=16


FIO READ numbers

                                             aggrb (KB/s)  minb (KB/s)  maxb (KB/s)
FUSE mount                                          15219         3804         5792
QEMU's GlusterFS block driver (FUSE bypass)         39357         9839        12946
Base                                                43802        10950        12918

FIO WRITE numbers

                                             aggrb (KB/s)  minb (KB/s)  maxb (KB/s)
FUSE mount                                          24579         6144         8423
QEMU's GlusterFS block driver (FUSE bypass)         42707        10676        17262
Base                                                42393        10598        15646

Updated numbers

Here are the recent FIO numbers, averaged over 5 runs, using the latest QEMU (git commit: 03a36f17d77) and GlusterFS (git commit: cee1b62d01). The test environment remains the same as above, with the following two changes:

  • The GlusterFS volume has write-behind translator turned off
  • The host kernel is upgraded to 3.6.7-4.fc17.x86_64

FIO READ numbers

                                             aggrb (KB/s)  % Reduction from Base
Base                                                44464                      0
FUSE mount                                          21637                    -51
QEMU's GlusterFS block driver (FUSE bypass)         38847                  -12.6
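The "% Reduction from Base" column is simply the relative change in aggrb. For example, the FUSE-mount read row can be recomputed from the raw numbers:

```shell
# Recompute the FUSE-mount read row's reduction from Base:
# (21637 - 44464) / 44464 * 100 gives roughly -51%.
awk 'BEGIN { base = 44464; fuse = 21637; printf "%.1f\n", (fuse - base) / base * 100 }'
```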

FIO WRITE numbers

                                             aggrb (KB/s)  % Reduction from Base
Base                                                45824                      0
FUSE mount                                          40919                  -10.7
QEMU's GlusterFS block driver (FUSE bypass)         45627                  -0.43