RAID5: requires at least 3 disks. It is a compromise between RAID0 and RAID1: data is striped across different disks together with parity information, and each data chunk and its corresponding parity are kept on different disks. The array tolerates at most one failed disk; after the failed disk is replaced, the parity information can be used to rebuild the lost data.
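The parity scheme itself is just XOR. A minimal sketch (my own illustration, not mdadm code): each stripe stores its data chunks on different disks plus one parity chunk, the XOR of the data chunks, kept on yet another disk.

```python
def xor_parity(chunks):
    """XOR equal-length data chunks together to produce the parity chunk."""
    out = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, byte in enumerate(chunk):
            out[i] ^= byte
    return bytes(out)

# One stripe: data chunks destined for two disks; the parity chunk
# below would be written to a third disk.
stripe = [b"\x41\x41", b"\x42\x42"]
print(xor_parity(stripe).hex())   # 0x41 ^ 0x42 = 0x03 per byte
```

Because XOR is its own inverse, XOR-ing the surviving chunks with the parity chunk reproduces any single lost chunk, which is why exactly one disk failure is recoverable.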
In this lab we will build a software RAID5 array from 4 partitions (standing in for 4 disks), with one partition acting as a spare. Software RAID is RAID implemented at the operating-system level.
1. Create the 4 partitions that the RAID5 array will be built from.
[root@vm ~]# fdisk /dev/sdb //create 4 partitions
The number of cylinders for this disk is set to 2610.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Command (m for help): n //create a new partition
Command action
e extended
p primary partition (1-4)
p //primary partition
Partition number (1-4): 1 //partition number 1
First cylinder (1-2610, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-2610, default 2610): 100
......
Command (m for help): t //change the partition type to "Linux raid autodetect"
Partition number (1-4): 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
Command (m for help): t
Partition number (1-4): 2
Hex code (type L to list codes): fd
Changed system type of partition 2 to fd (Linux raid autodetect)
Command (m for help): t
Partition number (1-4): 3
Hex code (type L to list codes): fd
Changed system type of partition 3 to fd (Linux raid autodetect)
Command (m for help): t
Partition number (1-4): 4
Hex code (type L to list codes): fd
Changed system type of partition 4 to fd (Linux raid autodetect)
Command (m for help): p //print the partition table
Disk /dev/sdb: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdb1 1 100 803218+ fd Linux raid autodetect
/dev/sdb2 101 200 803250 fd Linux raid autodetect
/dev/sdb3 201 300 803250 fd Linux raid autodetect
/dev/sdb4 301 400 803250 fd Linux raid autodetect
Command (m for help): w //write the table and exit
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
[root@vm ~]#
2. Create the software RAID5 array.
[root@vm ~]# mdadm -C /dev/md0 -l 5 -n 3 -x 1 -c 128 /dev/sdb{1,2,3,4}
-C      create an array; followed by the RAID block device name
-l 5    RAID level 5
-n 3    number of active devices in the array; RAID5 needs at least 3
-x 1    number of spare devices
-c 128  chunk size of 128K (the default is 64K)
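Those numbers determine the usable capacity: RAID5 stores one device's worth of parity across the stripe, and the spare contributes nothing until it is promoted. A quick sanity check (a sketch using the per-device size that `mdadm --detail` reports for this lab):

```python
def raid5_capacity_kib(active_disks, dev_size_kib):
    """Usable RAID5 capacity: one device's worth of space goes to parity."""
    assert active_disks >= 3, "RAID5 needs at least 3 active disks"
    return (active_disks - 1) * dev_size_kib

# This lab: -n 3 active devices of 803072 KiB each
# ("Used Dev Size" in the mdadm --detail output).
print(raid5_capacity_kib(3, 803072))   # 1606144, the reported Array Size
```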
3. Format the RAID array and mount it.
[root@vm /]# mkfs.ext3 /dev/md0 //create an ext3 filesystem
[root@vm /]# mount /dev/md0 /mnt/ //mount the filesystem
[root@vm /]# mdadm --detail /dev/md0 //show detailed array information
/dev/md0:
Version : 0.90
Creation Time : Fri Jul 30 15:14:09 2010
Raid Level : raid5
Array Size : 1606144 (1568.76 MiB 1644.69 MB)
Used Dev Size : 803072 (784.38 MiB 822.35 MB)
Raid Devices : 3
Total Devices : 4
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Fri Jul 30 15:19:14 2010
State : clean
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 128K
UUID : 7035b6e4:31c6f22f:cb44717b:a34273bf
Events : 0.2
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1 //note the "active" state
1 8 18 1 active sync /dev/sdb2
2 8 19 2 active sync /dev/sdb3
3 8 20 - spare /dev/sdb4 //spare partition
[root@vm /]#
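The `Layout : left-symmetric` line describes how the parity chunk rotates across the member disks from stripe to stripe. A sketch of that rotation (my own illustration of the standard left-symmetric scheme, not mdadm code): for stripe s on n disks, parity sits on disk (n-1) - (s mod n), and the data chunks follow it in order, wrapping around.

```python
def left_symmetric_stripe(stripe, n_disks):
    """Return per-disk contents ('P' or a data-chunk label) for one stripe."""
    parity_disk = (n_disks - 1) - (stripe % n_disks)
    layout = [None] * n_disks
    layout[parity_disk] = "P"
    for d in range(n_disks - 1):          # data chunks placed after parity
        layout[(parity_disk + 1 + d) % n_disks] = f"D{d}"
    return layout

for s in range(3):
    print(s, left_symmetric_stripe(s, 3))
# 0 ['D0', 'D1', 'P']   parity starts on the last disk,
# 1 ['D1', 'P', 'D0']   then rotates one disk to the left
# 2 ['P', 'D0', 'D1']   on each successive stripe
```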
4. Simulate the failure of one partition in the array.
[root@vm /]# mdadm /dev/md0 -f /dev/sdb3 //simulate failure of the sdb3 member of the RAID5 array
mdadm: set /dev/sdb3 faulty in /dev/md0
[root@vm /]#
[root@vm /]# mdadm --detail /dev/md0 //check the RAID5 information again
/dev/md0:
Version : 0.90
Creation Time : Fri Jul 30 15:14:09 2010
Raid Level : raid5
Array Size : 1606144 (1568.76 MiB 1644.69 MB)
Used Dev Size : 803072 (784.38 MiB 822.35 MB)
Raid Devices : 3
Total Devices : 4
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Fri Jul 30 15:27:50 2010
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 1
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 128K
UUID : 7035b6e4:31c6f22f:cb44717b:a34273bf
Events : 0.6
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 18 1 active sync /dev/sdb2
2 8 20 2 active sync /dev/sdb4
3 8 19 - faulty spare /dev/sdb3
//The spare sdb4 has automatically become active, and sdb3 is now marked faulty.
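What the rebuild onto the spare amounts to can be sketched in a few lines (an illustration, not the kernel's md code): for each stripe, the missing chunk is recomputed as the XOR of the surviving chunks and the parity, then written to the spare.

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte blocks together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

# One stripe before the failure: data chunks on sdb1..sdb3 plus parity.
chunks = {"sdb1": b"\x01\x02", "sdb2": b"\x03\x04", "sdb3": b"\x05\x06"}
parity = xor_blocks(list(chunks.values()))

# sdb3 fails: rebuild its chunk onto the spare from the survivors.
spare = xor_blocks([chunks["sdb1"], chunks["sdb2"], parity])
assert spare == chunks["sdb3"]
```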
5. Remove the failed partition.
[root@vm /]# mdadm /dev/md0 --remove /dev/sdb3 //remove sdb3
mdadm: hot removed /dev/sdb3
[root@vm /]#
[root@vm /]# mdadm --detail /dev/md0
/dev/md0:
Version : 0.90
Creation Time : Fri Jul 30 15:14:09 2010
Raid Level : raid5
Array Size : 1606144 (1568.76 MiB 1644.69 MB)
Used Dev Size : 803072 (784.38 MiB 822.35 MB)
Raid Devices : 3
Total Devices : 3
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Fri Jul 30 15:30:44 2010
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 128K
UUID : 7035b6e4:31c6f22f:cb44717b:a34273bf
Events : 0.8
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 18 1 active sync /dev/sdb2
2 8 20 2 active sync /dev/sdb4
[root@vm /]#
//sdb3 has now been removed.
6. Re-add the partition.
[root@vm /]# mdadm /dev/md0 -a /dev/sdb3 //add sdb3 back with the "-a" option
mdadm: added /dev/sdb3
[root@vm /]#
[root@vm /]# mdadm --detail /dev/md0
/dev/md0:
Version : 0.90
Creation Time : Fri Jul 30 15:14:09 2010
Raid Level : raid5
Array Size : 1606144 (1568.76 MiB 1644.69 MB)
Used Dev Size : 803072 (784.38 MiB 822.35 MB)
Raid Devices : 3
Total Devices : 4
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Fri Jul 30 15:30:44 2010
State : clean
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 128K
UUID : 7035b6e4:31c6f22f:cb44717b:a34273bf
Events : 0.8
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 18 1 active sync /dev/sdb2
2 8 20 2 active sync /dev/sdb4
3 8 19 - spare /dev/sdb3
[root@vm /]#
7. Create the RAID configuration file.
Without a configuration file, the array cannot be reactivated after it has been stopped.
[root@mylab ~]# echo DEVICE /dev/sdb{1,2,3,4} > /etc/mdadm.conf
[root@mylab ~]# mdadm -Ds >> /etc/mdadm.conf
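The resulting file should contain just a DEVICE line and an ARRAY line, roughly as below (a sketch; the exact fields `mdadm -Ds` emits vary by version, and the UUID is the one reported for this lab's array, so yours will differ):

```
DEVICE /dev/sdb1 /dev/sdb2 /dev/sdb3 /dev/sdb4
ARRAY /dev/md0 level=raid5 num-devices=3 UUID=7035b6e4:31c6f22f:cb44717b:a34273bf
```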
8. Stop, start, or remove the RAID array.
Step 7 must be completed before doing this.
First unmount the array, then stop the RAID:
[root@vm ~]# umount /dev/md0
[root@vm ~]# mdadm --stop /dev/md0
Start the RAID array again:
[root@vm ~]# mdadm --assemble --scan /dev/md0