MFS (MooseFS) is a semi-distributed file system developed in Poland. It provides RAID-like functionality, so it can cut storage costs while being no worse than dedicated storage systems, and it also supports online capacity expansion.
A distributed file system is one in which the physical storage resources managed by the file system are not necessarily attached to the local node, but are connected to it over a computer network. The advantages of a distributed file system are centralized access, simplified operation, data disaster recovery, and improved file access performance.
- Metadata server (Master): manages the file system for the whole cluster and maintains the metadata;
- Metadata logger (Metalogger): backs up the Master server's changelog files, whose names follow the pattern changelog_ml.*.mfs. If the Master's data is lost or corrupted, the files can be retrieved from the logger and used for recovery;
- Data storage servers (Chunk Servers): the servers that actually store the data. Files are split into chunks when stored, and the chunks are replicated between chunk servers. The more chunk servers there are, the larger the usable capacity, and the higher the reliability and performance;
- Client: mounts the MFS file system just like an NFS share, and operates on it in the same way.
Read process:

- The client sends a read request to the metadata server;
- The metadata server tells the client where the requested data is stored (the chunk server's IP address and the chunk ID);
- The client requests the data from that chunk server;
- The chunk server sends the data to the client.
Write process:

- The client sends a write request to the metadata server;
- The metadata server interacts with the chunk servers: it creates new chunks only on selected servers, and once creation succeeds the chunk servers report success back to the metadata server;
- The metadata server tells the client which chunks on which chunk server it may write data to;
- The client writes the data to the specified chunk server;
- That chunk server synchronizes the data with the other chunk servers; once synchronization succeeds, it tells the client the write succeeded;
- The client informs the metadata server that this write is complete.
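Once a client has mounted the file system (the mount point /mfs/data used later in this article), both flows above are triggered transparently by ordinary file operations. A minimal sketch:

```shell
# Writing a file exercises the write flow: the master allocates chunks,
# the client writes to a chunk server, and replicas are synchronized.
echo "hello moosefs" > /mfs/data/test.txt

# Reading it back exercises the read flow: the master returns the chunk
# locations and the client fetches the data from a chunk server.
cat /mfs/data/test.txt
```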
| Host | OS | IP address |
| --- | --- | --- |
| Master Server | CentOS 7.3 x86_64 | 192.168.1.11 |
| Metalogger | CentOS 7.3 x86_64 | 192.168.1.12 |
| Chunk1 | CentOS 7.3 x86_64 | 192.168.1.13 |
| Chunk2 | CentOS 7.3 x86_64 | 192.168.1.14 |
| Chunk3 | CentOS 7.3 x86_64 | 192.168.1.15 |
| Client | CentOS 7.3 x86_64 | 192.168.1.22 |
# curl "https://ppa.moosefs.com/RPM-GPG-KEY-MooseFS" > /etc/pki/rpm-gpg/RPM-GPG-KEY-MooseFS
# curl "http://ppa.moosefs.com/MooseFS-3-el7.repo" > /etc/yum.repos.d/MooseFS.repo
yum -y install moosefs-master moosefs-cgi moosefs-cgiserv moosefs-cli
Verify that the related configuration files (mfsexports.cfg, mfsmaster.cfg, etc.) have been generated under /etc/mfs.
The following configuration files keep their default values and need no changes: mfsmaster.cfg, mfsexports.cfg, mfstopology.cfg
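If you later want to restrict which clients may mount the share, mfsexports.cfg takes one rule per line (client address, exported path, options). A sketch of an entry, assuming the 192.168.1.0/24 network used in this article; it is not required for this setup:

```shell
# /etc/mfs/mfsexports.cfg (illustrative entry)
# <client address>   <path>   <options>
192.168.1.0/24       /        rw,alldirs,maproot=0
```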
mfsmaster start
ps -ef | grep mfs
# curl "https://ppa.moosefs.com/RPM-GPG-KEY-MooseFS" > /etc/pki/rpm-gpg/RPM-GPG-KEY-MooseFS
# curl "http://ppa.moosefs.com/MooseFS-3-el7.repo" > /etc/yum.repos.d/MooseFS.repo
yum -y install moosefs-metalogger
## vim /etc/mfs/mfsmetalogger.cfg
…… (some lines omitted)
###############################################
# RUNTIME OPTIONS                             #
###############################################

# user to run daemon as (default is mfs)
# WORKING_USER = mfs

# group to run daemon as (optional - if empty then default user group will be used)
# WORKING_GROUP = mfs

# name of process to place in syslog messages (default is mfsmetalogger)
# SYSLOG_IDENT = mfsmetalogger

# whether to perform mlockall() to avoid swapping out mfsmetalogger process (default is 0, i.e. no)
# LOCK_MEMORY = 0

# Linux only: limit malloc arenas to given value - prevents server from using huge amount of virtual memory (default is 4)
# LIMIT_GLIBC_MALLOC_ARENAS = 4

# Linux only: disable out of memory killer (default is 1)
# DISABLE_OOM_KILLER = 1

# nice level to run daemon with (default is -19; note: process must be started as root to increase priority, if setting of priority fails, process retains the nice level it started with)
# NICE_LEVEL = -19

# set default umask for group and others (user has always 0, default is 027 - block write for group and block all for others)
# FILE_UMASK = 027

# where to store daemon lock file (default is /var/lib/mfs)
# DATA_PATH = /var/lib/mfs

# number of metadata change log files (default is 50)
# BACK_LOGS = 50

# number of previous metadata files to be kept (default is 3)
# BACK_META_KEEP_PREVIOUS = 3

# metadata download frequency in hours (default is 24, should be at least BACK_LOGS/2)
# META_DOWNLOAD_FREQ = 24

###############################################
# MASTER CONNECTION OPTIONS                   #
###############################################

# delay in seconds before next try to reconnect to master if not connected (default is 5)
# MASTER_RECONNECTION_DELAY = 5

# local address to use for connecting with master (default is *, i.e. default local address)
# BIND_HOST = *

# MooseFS master host, IP is allowed only in single-master installations (default is mfsmaster)
# changed to the Master's IP address:
MASTER_HOST = 192.168.1.11

# MooseFS master supervisor port (default is 9419)
# MASTER_PORT = 9419

# timeout in seconds for master connections (default is 10)
# MASTER_TIMEOUT = 10
mfsmetalogger start
ps -ef | grep mfs
All three data storage servers below are configured identically.
# curl "https://ppa.moosefs.com/RPM-GPG-KEY-MooseFS" > /etc/pki/rpm-gpg/RPM-GPG-KEY-MooseFS
# curl "http://ppa.moosefs.com/MooseFS-3-el7.repo" > /etc/yum.repos.d/MooseFS.repo
yum -y install moosefs-chunkserver
## vim /etc/mfs/mfschunkserver.cfg
…… (some lines omitted)
###############################################
# MASTER CONNECTION OPTIONS                   #
###############################################

# labels string (default is empty - no labels)
# LABELS =

# local address to use for master connections (default is *, i.e. default local address)
# BIND_HOST = *

# MooseFS master host, IP is allowed only in single-master installations (default is mfsmaster)
# changed to the Master's IP address:
MASTER_HOST = 192.168.1.11

# MooseFS master command port (default is 9420)
# MASTER_PORT = 9420

# timeout in seconds for master connections. Value >0 forces given timeout, but when value is 0 then CS asks master for timeout (default is 0 - ask master)
# MASTER_TIMEOUT = 0

# delay in seconds before next try to reconnect to master if not connected
# MASTER_RECONNECTION_DELAY = 5

# authentication string (used only when master requires authorization)
# AUTH_CODE = mfspassword
## vim /etc/mfs/mfshdd.cfg
…… (some lines omitted)
# This file keeps definitions of mounting points (paths) of hard drives to use with chunk server.
# A path may begin with extra characters which switches additional options:
#  - '*' means that this hard drive is 'marked for removal' and all data will be replicated to other hard drives (usually on other chunkservers)
#  - '<' means that all data from this hard drive should be moved to other hard drives
#  - '>' means that all data from other hard drives should be moved to this hard drive
#  - '~' means that significant change of total blocks count will not mark this drive as damaged
# If there are both '<' and '>' drives then data will be moved only between these drives
# It is possible to specify optional space limit (after each mounting point), there are two ways of doing that:
#  - set space to be left unused on a hard drive (this overrides the default setting from mfschunkserver.cfg)
#  - limit space to be used on a hard drive
# Space limit definition: [0-9]*(.[0-9]*)?([kMGTPE]|[KMGTPE]i)?B?, add minus in front for the first option.
#
# Examples:
#
# use hard drive '/mnt/hd1' with default options:
#/mnt/hd1
#
# use hard drive '/mnt/hd2', but replicate all data from it:
#*/mnt/hd2
#
# use hard drive '/mnt/hd3', but try to leave 5GiB on it:
#/mnt/hd3 -5GiB
#
# use hard drive '/mnt/hd4', but use only 1.5TiB on it:
#/mnt/hd4 1.5TiB
#
# use hard drive '/mnt/hd5', but fill it up using data from other drives
#>/mnt/hd5
#
# use hard drive '/mnt/hd6', but move all data to other hard drives
#</mnt/hd6
#
# use hard drive '/mnt/hd7', but ignore significant change of hard drive total size (e.g. compressed file systems)
#~/mnt/hd7

# directory provided to MFS for storage
/data
mkdir /data
chown -R mfs:mfs /data
mfschunkserver start
ps -ef | grep mfs
# curl "https://ppa.moosefs.com/RPM-GPG-KEY-MooseFS" > /etc/pki/rpm-gpg/RPM-GPG-KEY-MooseFS
# curl "http://ppa.moosefs.com/MooseFS-3-el7.repo" > /etc/yum.repos.d/MooseFS.repo
yum -y install moosefs-client
mkdir -p /mfs/data
modprobe fuse
mfsmount /mfs/data -H 192.168.1.11
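If the mount succeeds, the MFS share shows up as a FUSE file system on the client. A quick check:

```shell
# The filesystem type for the mount point should report as fuse.mfs
df -hT /mfs/data
mount | grep mfs
```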
The yum installation already includes mfscgiserv by default. It is a small web server written in Python that listens on port 9425; start it on the Master Server with the mfscgiserv command, then open it in a browser to get a full view of all client mounts, the Chunk Servers, the Master Server, and the various client operations.
The meaning of each section is as follows:
- Info: basic information about the MFS cluster
- Servers: lists the existing Chunk Servers
- Disks: lists each Chunk Server's disk directories and their usage
- Exports: lists the shared directories, i.e. the directories that can be mounted
- Mounts: shows the current mounts
- Operations: shows the operations currently being executed
- Master Charts: shows the Master Server's activity, including reads, writes, directory creation, directory deletion, and so on
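A minimal sketch of starting the monitoring interface on the Master (the URL assumes the Master address used in this article):

```shell
mfscgiserv start      # starts the monitoring web server on port 9425
# then browse to:
#   http://192.168.1.11:9425
```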
The mfsgetgoal and mfssetgoal commands

The goal is the number of copies a file is replicated into. After setting the number of copies, it can be verified with mfsgetgoal, and changed with mfssetgoal.
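For example, to keep 3 copies of a file (the path below is just the mount point used in this article), a sketch:

```shell
mfssetgoal 3 /mfs/data/test.txt    # request 3 copies of the file
mfsgetgoal /mfs/data/test.txt      # confirm the goal that was set
mfssetgoal -r 3 /mfs/data          # apply recursively to a whole tree
```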
The mfscheckfile and mfsfileinfo commands

The actual number of copies can be verified with mfscheckfile and mfsfileinfo.
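A sketch of checking the real replica counts for the test file created earlier:

```shell
mfscheckfile /mfs/data/test.txt   # summarizes how many chunks have how many copies
mfsfileinfo  /mfs/data/test.txt   # lists each chunk and the chunk servers holding it
```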
The mfsdirinfo command

A summary of an entire directory tree can be shown with mfsdirinfo, an enhanced equivalent of "du -s".
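For example, on the mount point used in this article:

```shell
mfsdirinfo /mfs/data    # inodes, directories, files, chunks and total size of the tree
```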
The most important maintenance task is looking after the metadata server, and its most important directory is /var/lib/mfs/: every store, modify, and update operation on MFS data is recorded in a file under this directory, so as long as the data in this directory is kept safe, the security and reliability of the whole MFS file system is guaranteed. The data under /var/lib/mfs/ consists of two parts: one is the metadata server's changelogs, with file names like changelog.*.mfs; the other is the metadata file metadata.mfs, which is renamed metadata.mfs.back while mfsmaster is running. As long as these two sets of files are kept safe, even if the metadata server suffers a fatal failure, a new metadata server can be rebuilt from the backed-up metadata files.
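One simple way to protect that directory is a periodic archive of the changelogs and metadata file. A hedged sketch, assuming the default paths of this install and a hypothetical /backup directory:

```shell
# On the Master: archive the metadata directory (changelogs + metadata.mfs.back)
tar -czf /backup/mfs-meta-$(date +%F).tar.gz /var/lib/mfs/

# After a crash, MooseFS 3.x can rebuild metadata.mfs from the changelogs
# using the master's automatic-recovery option:
mfsmaster -a
```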