Reference: http://www.javashuo.com/article/p-rgdtbnha-bm.html
Software environment: VMware, RHEL 6.6, Oracle 12c (linuxx64_12201_database.zip), 12c Grid (linuxx64_12201_grid_home.zip)
Configure only one node in the virtual machines first; the second node is cloned from the first and then has the relevant parameters changed (the SID in the environment variables, the network settings, and so on).
(operating system, installation packages, network, users, environment variables)
1.1.1. Install the operating system on the servers
A minimal install is sufficient. Disk: 35 GB; memory: 4 GB (2 GB is probably the bare minimum); swap: 8 GB.
Disable the firewall and SELinux.
Disable ntpd (mv /etc/ntp.conf /etc/ntp.conf_bak).
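One way to do all of this on RHEL 6 (a minimal sketch, assuming the stock iptables and SELinux setup):

service iptables stop
chkconfig iptables off
setenforce 0                                                   # immediate, lasts until reboot
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config   # permanent, takes effect after reboot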
Add four NICs: two for the public network (host-only mode, bonded together, as sketched below) and two for the private network (carve out a VLAN, e.g. vlan1, to simulate the private network; these two also serve as the dual paths to the storage).
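A minimal active-backup bonding sketch for the two public NICs (the file contents and interface names eth0/eth1 are assumptions; adapt to your environment):

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
TYPE=Bond
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.137.11
NETMASK=255.255.255.0
BONDING_OPTS="mode=1 miimon=100"   # mode=1 = active-backup

# /etc/sysconfig/network-scripts/ifcfg-eth0 (repeat for the second public NIC)
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes

Then restart networking with: service network restart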
1.1.2. Check and install the RPM packages required by Oracle 12c
Check:
rpm -q binutils compat-libcap1 compat-libstdc++-33 gcc gcc-c++ \
  e2fsprogs e2fsprogs-libs glibc glibc-devel ksh libaio-devel libaio libgcc libstdc++ libstdc++-devel \
  libxcb libX11 libXau libXi libXtst make \
  net-tools nfs-utils smartmontools sysstat
// These are essentially all the packages needed; if the installer's prerequisite check later reports other missing packages, install them at that point.
Install whatever the query reports as missing (attach the installation ISO in VMware and configure a local yum repository):
[root@jydb1 ~]# mount /dev/cdrom /mnt
[root@jydb1 ~]# cat /etc/yum.repos.d/rhel-source.repo
[ISO]
name=iso
baseurl=file:///mnt
enabled=1
gpgcheck=0
Install with yum:
yum install binutils compat-libcap1 compat-libstdc++-33 \
  e2fsprogs e2fsprogs-libs glibc glibc-devel ksh libaio-devel libaio libgcc libstdc++ libstdc++-devel \
  libxcb libX11 libXau libXi libXtst make \
  net-tools nfs-utils smartmontools sysstat
Also install the cvuqdisk package (required by the Grid installer's cluster verification; it ships inside the Grid installation media):
rpm -qi cvuqdisk
CVUQDISK_GRP=oinstall; export CVUQDISK_GRP   # the oinstall group must exist before installing; it is created later in this guide, so run this step after that
rpm -iv cvuqdisk-1.0.10-1.rpm
1.1.3. Configure /etc/hosts on each node
[root@jydb1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4 jydb1.rac
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6 jydb1.rac
#eth0 public
192.168.137.11 jydb1
192.168.137.12 jydb2
#eth0 vip
192.168.137.21 jydb1-vip
192.168.137.22 jydb2-vip
#eth1 private
10.0.0.1  jydb1-priv
10.0.0.2  jydb2-priv
10.0.0.11 jydb1-priv2
10.0.0.22 jydb2-priv2
#scan ip
192.168.137.137 jydb-cluster-scan
1.1.4. Create the required users and groups on each node
Create the groups & users:
groupadd -g 54321 oinstall
groupadd -g 54322 dba
groupadd -g 54323 oper
groupadd -g 54324 backupdba
groupadd -g 54325 dgdba
groupadd -g 54326 kmdba
groupadd -g 54327 asmdba
groupadd -g 54328 asmoper
groupadd -g 54329 asmadmin
groupadd -g 54330 racdba
useradd -u 54321 -g oinstall -G dba,asmdba,backupdba,dgdba,kmdba,racdba,oper oracle
useradd -u 54322 -g oinstall -G asmadmin,asmdba,asmoper,dba grid
Set the oracle and grid passwords yourself.
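For example (a sketch; pick your own passwords):

passwd oracle
passwd grid

Or non-interactively on RHEL: echo 'YourPassword' | passwd --stdin oracle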
1.1.5. Create the installation directories on each node (as root)
mkdir -p /u01/app/12.2.0/grid
mkdir -p /u01/app/grid
mkdir -p /u01/app/oracle
chown -R grid:oinstall /u01
chown oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/
1.1.6. Modify the configuration files on each node
Kernel parameters: vi /etc/sysctl.conf
# vi /etc/sysctl.conf, appending the following:
fs.file-max = 6815744
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
kernel.shmall = 1073741824
kernel.shmmax = 6597069766656
kernel.panic_on_oops = 1
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
#net.ipv4.conf.eth3.rp_filter = 2
#net.ipv4.conf.eth2.rp_filter = 2
#net.ipv4.conf.eth0.rp_filter = 1
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
Apply the changes: sysctl -p
User shell limits: vi /etc/security/limits.conf
# Append the following to /etc/security/limits.conf:
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
Load the pam_limits.so pluggable authentication module by adding the following line to /etc/pam.d/login:
session required pam_limits.so
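To confirm the limits are picked up, log in again as each user and check them (a quick sanity check, not part of the original post; values per the limits.conf entries above):

su - grid -c 'ulimit -n -u -s'    # expect soft nofile 1024, soft nproc 2047, soft stack 10240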
1.1.7. Configure the user environment variables on each node
[root@jydb1 ~]# cat /home/grid/.bash_profile
export ORACLE_SID=+ASM1
export ORACLE_HOME=/u01/app/12.2.0/grid
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
export DISPLAY=192.168.88.121:0.0
[root@jydb1 ~]# cat /home/oracle/.bash_profile
export ORACLE_SID=racdb1
export ORACLE_HOME=/u01/app/oracle/product/12.2.0/db_1
export ORACLE_HOSTNAME=jydb1
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
export DISPLAY=192.168.88.121:0.0
Once the steps above are complete, node 2 can be cloned; after cloning, adjust the environment variables on the second machine, presumably as sketched below.
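Based on the naming scheme above, the only values that should change on node 2 are the SIDs and the hostname (an assumption; the original post marks the changed values only by color):

# /home/grid/.bash_profile on jydb2 (assumed)
export ORACLE_SID=+ASM2
# /home/oracle/.bash_profile on jydb2 (assumed)
export ORACLE_SID=racdb2
export ORACLE_HOSTNAME=jydb2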
1.1.8. Configure SSH equivalence between the nodes
After the second node has been cloned and its network changes verified:
The grid user is used as the example here; the oracle user must be given the same equivalence:
(1) Generate the grid user's key pair on node 1:
[grid@jydb1 ~]$ ssh-keygen -t rsa -P ''
Generating public/private rsa key pair.
Enter file in which to save the key (/home/grid/.ssh/id_rsa):
Your identification has been saved in /home/grid/.ssh/id_rsa.
Your public key has been saved in /home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
b6:07:65:3f:a2:e8:75:14:33:26:c0:de:47:73:5b:95 grid@jydb1.rac
The key's randomart image is:
+--[ RSA 2048]----+
|            .. .o|
|       .. o . .E |
|     . ...Bo o   |
|      . .=.=.    |
|        S.o o    |
|       o = . .   |
|        . + o    |
|         . . o   |
|            .    |
+-----------------+

Copy the public key to node 2:
[grid@jydb1 ~]$ ssh-copy-id -i .ssh/id_rsa.pub grid@10.0.0.2
grid@10.0.0.2's password:
Now try logging into the machine, with "ssh 'grid@10.0.0.2'", and check in:
  .ssh/authorized_keys
to make sure we haven't added extra keys that you weren't expecting.

(2) Generate a key pair on node 2 as well, and append its public key to authorized_keys:
[grid@jydb2 .ssh]$ ssh-keygen -t rsa -P ''
......
[grid@jydb2 .ssh]$ cat id_rsa.pub >> authorized_keys
[grid@jydb2 .ssh]$ scp authorized_keys grid@10.0.0.1:.ssh/
The authenticity of host '10.0.0.1 (10.0.0.1)' can't be established.
RSA key fingerprint is d1:21:03:35:9d:f2:a2:81:e7:e1:7b:d0:79:f4:d3:be.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.0.0.1' (RSA) to the list of known hosts.
grid@10.0.0.1's password:
authorized_keys                          100%  792   0.8KB/s   00:00

(3) Verify:
[grid@jydb1 .ssh]$ ssh jydb1 date
Fri Mar 30 08:01:20 CST 2018
[grid@jydb1 .ssh]$ ssh jydb2 date
Fri Mar 30 08:01:20 CST 2018
[grid@jydb1 .ssh]$ ssh jydb1-priv date
Fri Mar 30 08:01:20 CST 2018
[grid@jydb2 .ssh]$ ssh jydb2-priv date
Fri Mar 30 08:01:20 CST 2018
On jydb2, only the parts marked in blue in the original post need to be changed.
Add one more server to simulate a storage server: give it two private addresses, connect the RAC clients to it over multipath, and carve up and configure its disks.
Goal: present shared LUNs from the storage that both hosts can see, six in total: three 1 GB disks for OCR and the Voting Disk, one 40 GB disk for the GIMR, and the rest planned for DATA and FRA.
Note: since this is a lab environment, the point is to illustrate what each disk is for; in production, DATA should be sized much larger.
Add a 63 GB disk to the storage server.
//LV layout (used in 2.3)
asmdisk1 1G
asmdisk2 1G
asmdisk3 1G
asmdisk4 40G
asmdisk5 10G
asmdisk6 10G
1.2.1. Check the storage network
The RAC nodes are the storage clients.
In VMware, create vlan1 and place the two storage-facing NICs on each RAC node and the two NICs on the storage server into it, so the clients can reach the storage over both paths.
Storage (server side): 10.0.0.111, 10.0.0.222
rac-jydb1 (client): 10.0.0.1, 10.0.0.2
rac-jydb2 (client): 10.0.0.11, 10.0.0.22
Finally, verify that all of these addresses can reach each other (for example, as below), then move on to the next step.
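Simple reachability checks from each RAC node, using the storage addresses listed above:

ping -c 3 10.0.0.111
ping -c 3 10.0.0.222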
1.2.2. Install the iSCSI packages
--Server side
Install scsi-target-utils with yum:
yum install scsi-target-utils
--Client side
Install iscsi-initiator-utils with yum:
yum install iscsi-initiator-utils
1.2.3. Simulate adding disks to the storage
--On the server side
Add a 63 GB disk; this simulates adding a real disk to the storage array.
Here the new disk shows up as /dev/sdb; I put it under LVM:
# pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created
# vgcreate vg_storage /dev/sdb
  Volume group "vg_storage" successfully created
# lvcreate -L 10g -n lv_lun1 vg_storage    //size each LV according to the disk plan above
  Logical volume "lv_lun1" created
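Only the first lvcreate is shown above; the remaining five LVs would presumably be created the same way, with the sizes from the 63 GB plan (a sketch; names follow the lv_lunN pattern used above):

lvcreate -L 1g  -n lv_lun2 vg_storage
lvcreate -L 1g  -n lv_lun3 vg_storage
lvcreate -L 40g -n lv_lun4 vg_storage
lvcreate -L 10g -n lv_lun5 vg_storage
lvcreate -L 10g -n lv_lun6 vg_storage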
1.2.4. Configure the iSCSI target (server side)
The main iSCSI target configuration file is /etc/tgt/targets.conf.
So, using a name that follows the IQN convention, add the following configuration:
<target iqn.2018-03.com.cnblogs.test:alfreddisk>
    backing-store /dev/vg_storage/lv_lun1  # Becomes LUN 1
    backing-store /dev/vg_storage/lv_lun2  # Becomes LUN 2
    backing-store /dev/vg_storage/lv_lun3  # Becomes LUN 3
    backing-store /dev/vg_storage/lv_lun4  # Becomes LUN 4
    backing-store /dev/vg_storage/lv_lun5  # Becomes LUN 5
    backing-store /dev/vg_storage/lv_lun6  # Becomes LUN 6
</target>
Once the configuration is in place, start the service and enable it at boot:
[root@Storage ~]# service tgtd start
Starting SCSI target daemon:                               [  OK  ]
[root@Storage ~]# chkconfig tgtd on
[root@Storage ~]# chkconfig --list|grep tgtd
tgtd            0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@Storage ~]# service tgtd status
tgtd (pid 1763 1760) is running...
Then check the relevant details, such as the listening port and the LUN information (Type: disk):
[root@Storage ~]# netstat -tlunp |grep tgt
tcp        0      0 0.0.0.0:3260        0.0.0.0:*      LISTEN      1760/tgtd
tcp        0      0 :::3260             :::*           LISTEN      1760/tgtd
[root@Storage ~]# tgt-admin --show
Target 1: iqn.2018-03.com.cnblogs.test:alfreddisk
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET 00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: null
            Backing store path: None
            Backing store flags:
        LUN: 1
            Type: disk
            SCSI ID: IET 00010001
            SCSI SN: beaf11
            Size: 10737 MB, Block size: 512
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: rdwr
            Backing store path: /dev/vg_storage/lv_lun1
            Backing store flags:
    Account information:
    ACL information:
        ALL
1.2.5. Configure the iSCSI initiators (client side)
Confirm that the boot-time services are enabled:
# chkconfig --list|grep scsi
iscsi           0:off   1:off   2:off   3:on    4:on    5:on    6:off
iscsid          0:off   1:off   2:off   3:on    4:on    5:on    6:off
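If the initiator services are not running yet, they can be started by hand (an optional extra step; on RHEL 6 the discovery command below will normally start iscsid on demand):

service iscsid start
service iscsi start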
Use iscsiadm to scan the server's LUNs (discover the iSCSI target):
iscsiadm -m discovery -t sendtargets -p 10.0.1.99
[root@jydb1 ~]# iscsiadm -m discovery -t sendtargets -p 10.0.1.99
10.0.1.99:3260,1 iqn.2018-03.com.cnblogs.test:alfreddisk
[root@jydb1 ~]# iscsiadm -m discovery -t sendtargets -p 10.0.2.99
10.0.2.99:3260,1 iqn.2018-03.com.cnblogs.test:alfreddisk
Check with iscsiadm -m node:
[root@jydb1 ~]# iscsiadm -m node
10.0.1.99:3260,1 iqn.2018-03.com.cnblogs.test:alfreddisk
10.0.2.99:3260,1 iqn.2018-03.com.cnblogs.test:alfreddisk
Look at the files under /var/lib/iscsi/nodes/:
[root@jydb1 ~]# ll -R /var/lib/iscsi/nodes/
/var/lib/iscsi/nodes/:
total 4
drw------- 4 root root 4096 Mar 29 00:59 iqn.2018-03.com.cnblogs.test:alfreddisk

/var/lib/iscsi/nodes/iqn.2018-03.com.cnblogs.test:alfreddisk:
total 8
drw------- 2 root root 4096 Mar 29 00:59 10.0.1.99,3260,1
drw------- 2 root root 4096 Mar 29 00:59 10.0.2.99,3260,1

/var/lib/iscsi/nodes/iqn.2018-03.com.cnblogs.test:alfreddisk/10.0.1.99,3260,1:
total 4
-rw------- 1 root root 2049 Mar 29 00:59 default

/var/lib/iscsi/nodes/iqn.2018-03.com.cnblogs.test:alfreddisk/10.0.2.99,3260,1:
total 4
-rw------- 1 root root 2049 Mar 29 00:59 default
Log in to and attach the iSCSI disks
Based on the discovery results above, run the following command to log in and attach the shared disks:
iscsiadm -m node -T iqn.2018-03.com.cnblogs.test:alfreddisk --login
[root@jydb1 ~]# iscsiadm -m node -T iqn.2018-03.com.cnblogs.test:alfreddisk --login
Logging in to [iface: default, target: iqn.2018-03.com.cnblogs.test:alfreddisk, portal: 10.0.2.99,3260] (multiple)
Logging in to [iface: default, target: iqn.2018-03.com.cnblogs.test:alfreddisk, portal: 10.0.1.99,3260] (multiple)
Login to [iface: default, target: iqn.2018-03.com.cnblogs.test:alfreddisk, portal: 10.0.2.99,3260] successful.
Login to [iface: default, target: iqn.2018-03.com.cnblogs.test:alfreddisk, portal: 10.0.1.99,3260] successful.
The login succeeded.
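To have the sessions re-established automatically after a reboot, the node records can be marked for automatic startup (on RHEL 6 this is usually the default already; shown as an optional extra step):

iscsiadm -m node -T iqn.2018-03.com.cnblogs.test:alfreddisk --op update -n node.startup -v automatic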
Check the attached iSCSI disks with fdisk -l or lsblk:
[root@jydb1 ~]# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0   35G  0 disk
├─sda1   8:1    0  200M  0 part /boot
├─sda2   8:2    0  7.8G  0 part [SWAP]
└─sda3   8:3    0   27G  0 part /
sr0     11:0    1  3.5G  0 rom  /mnt
sdb      8:16   0    1G  0 disk
sdc      8:32   0    1G  0 disk
sdd      8:48   0    1G  0 disk
sde      8:64   0    1G  0 disk
sdf      8:80   0    1G  0 disk
sdg      8:96   0    1G  0 disk
sdi      8:128  0   40G  0 disk
sdk      8:160  0   10G  0 disk
sdm      8:192  0   10G  0 disk
sdj      8:144  0   10G  0 disk
sdh      8:112  0   40G  0 disk
sdl      8:176  0   10G  0 disk
1.2.6. Configure multipath
Check whether the multipath package is installed:
rpm -qa |grep device-mapper-multipath
If it is not installed, install it with yum:
#yum install -y device-mapper-multipath
Or download and install these two RPMs:
device-mapper-multipath-libs-0.4.9-72.el6.x86_64.rpm
device-mapper-multipath-0.4.9-72.el6.x86_64.rpm
Enable it at boot:
chkconfig multipathd on
Generate the multipath configuration file:
--Generate the multipath configuration file
/sbin/mpathconf --enable
--Show the multipath topology
multipath -ll
--Re-scan the paths
multipath -v2   (or -v3)
--Flush all multipath maps
multipath -F
The output of my run is below, for reference:
[root@jydb1 ~]# multipath -ll
Mar 29 03:40:10 | multipath.conf line 109, invalid keyword: multipaths
Mar 29 03:40:10 | multipath.conf line 115, invalid keyword: multipaths
Mar 29 03:40:10 | multipath.conf line 121, invalid keyword: multipaths
Mar 29 03:40:10 | multipath.conf line 127, invalid keyword: multipaths
Mar 29 03:40:10 | multipath.conf line 133, invalid keyword: multipaths
Mar 29 03:40:10 | multipath.conf line 139, invalid keyword: multipaths
asmdisk6 (1IET 00010006) dm-5 IET,VIRTUAL-DISK    //wwid
size=10.0G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 33:0:0:6 sdj 8:144 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 34:0:0:6 sdm 8:192 active ready running
asmdisk5 (1IET 00010005) dm-2 IET,VIRTUAL-DISK
size=10G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 33:0:0:5 sdh 8:112 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 34:0:0:5 sdl 8:176 active ready running
asmdisk4 (1IET 00010004) dm-4 IET,VIRTUAL-DISK
size=40G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 33:0:0:4 sdf 8:80 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 34:0:0:4 sdk 8:160 active ready running
asmdisk3 (1IET 00010003) dm-3 IET,VIRTUAL-DISK
size=1.0G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 33:0:0:3 sdd 8:48 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 34:0:0:3 sdi 8:128 active ready running
asmdisk2 (1IET 00010002) dm-1 IET,VIRTUAL-DISK
size=1.0G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 33:0:0:2 sdc 8:32 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 34:0:0:2 sdg 8:96 active ready running
asmdisk1 (1IET 00010001) dm-0 IET,VIRTUAL-DISK
size=1.0G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 33:0:0:1 sdb 8:16 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 34:0:0:1 sde 8:64 active ready running
Start the multipathd service:
#service multipathd start
Configure multipath:
First change: it is recommended to set user_friendly_names to no. When it is no, the WWID is used as the alias of the multipath device. When it is yes, the system uses the names in /etc/multipath/mpathn as aliases. With user_friendly_names set to yes, a multipath device's name is unique on a given node but is not guaranteed to be the same on every node that uses the device; that is, mpath1 on node 1 and mpath1 on node 2 may not be the same LUN. The WWID that each server sees for the same LUN, however, is identical, so set the option to no and use WWID-based aliases.

defaults {
    user_friendly_names no
    path_grouping_policy failover    //failover = active/standby; use path_grouping_policy multibus for active/active
}

Second change: bind the WWIDs to aliases (the WWIDs are the ones shown by multipath -ll). Note that all six multipath blocks belong inside a single multipaths section:

multipaths {
    multipath {
        wwid "1IET 00010001"
        alias asmdisk1
    }
    multipath {
        wwid "1IET 00010002"
        alias asmdisk2
    }
    multipath {
        wwid "1IET 00010003"
        alias asmdisk3
    }
    multipath {
        wwid "1IET 00010004"
        alias asmdisk4
    }
    multipath {
        wwid "1IET 00010005"
        alias asmdisk5
    }
    multipath {
        wwid "1IET 00010006"
        alias asmdisk6
    }
}
The configuration does not take effect until multipathd is restarted; a possible sequence is shown below.
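A minimal restart-and-verify sequence (a sketch; service names per RHEL 6):

service multipathd restart
multipath -F                  # flush the old maps
multipath -v2                 # rebuild them from the new configuration
multipath -ll | grep asmdisk  # the asmdisk aliases should now appear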
After binding, check the multipath aliases:
[root@jydb1 ~]# cd /dev/mapper/
[root@jydb1 mapper]# ls
asmdisk1 asmdisk2 asmdisk3 asmdisk4 asmdisk5 asmdisk6 control
Bind raw devices with udev
First fix the device ownership with udev; otherwise the permissions will be wrong and the installer will not be able to see the shared disks.
Before the change:
[root@jydb1 ~]# ls -lh /dev/dm*
brw-rw---- 1 root disk  253, 0 Apr  2 16:18 /dev/dm-0
brw-rw---- 1 root disk  253, 1 Apr  2 16:18 /dev/dm-1
brw-rw---- 1 root disk  253, 2 Apr  2 16:18 /dev/dm-2
brw-rw---- 1 root disk  253, 3 Apr  2 16:18 /dev/dm-3
brw-rw---- 1 root disk  253, 4 Apr  2 16:18 /dev/dm-4
brw-rw---- 1 root disk  253, 5 Apr  2 16:18 /dev/dm-5
crw-rw---- 1 root audio  14, 9 Apr  2 16:18 /dev/dmmidi
This system is RHEL 6.6; if you change the permissions of the multipath devices by hand, they revert to root within a few seconds, so udev has to be used to pin the ownership.
Search for the template rules file:
[root@jyrac1 ~]# find / -name 12-*
/usr/share/doc/device-mapper-1.02.79/12-dm-permissions.rules
Based on the template, create a 12-dm-permissions.rules file under /etc/udev/rules.d/:
vi /etc/udev/rules.d/12-dm-permissions.rules

# MULTIPATH DEVICES
#
# Set permissions for all multipath devices
ENV{DM_UUID}=="mpath-?*", OWNER:="grid", GROUP:="asmadmin", MODE:="660"    //change this line

# Set permissions for first two partitions created on a multipath device (and detected by kpartx)
# ENV{DM_UUID}=="part[1-2]-mpath-?*", OWNER:="root", GROUP:="root", MODE:="660"
After that, run start_udev; if the permissions are still correct 30 seconds later, it worked:
[root@jydb1 ~]# start_udev
Starting udev:                                             [  OK  ]
[root@jydb1 ~]# ls -lh /dev/dm*
brw-rw---- 1 grid asmadmin 253, 0 Apr  2 16:25 /dev/dm-0
brw-rw---- 1 grid asmadmin 253, 1 Apr  2 16:25 /dev/dm-1
brw-rw---- 1 grid asmadmin 253, 2 Apr  2 16:25 /dev/dm-2
brw-rw---- 1 grid asmadmin 253, 3 Apr  2 16:25 /dev/dm-3
brw-rw---- 1 grid asmadmin 253, 4 Apr  2 16:25 /dev/dm-4
brw-rw---- 1 grid asmadmin 253, 5 Apr  2 16:25 /dev/dm-5
crw-rw---- 1 root audio     14, 9 Apr  2 16:24 /dev/dmmidi
Bind the disks to raw devices
Look up the major and minor numbers of the devices:
[root@jydb1 ~]# ls -lt /dev/dm-*
brw-rw---- 1 grid asmadmin 253, 5 Mar 29 04:00 /dev/dm-5
brw-rw---- 1 grid asmadmin 253, 3 Mar 29 04:00 /dev/dm-3
brw-rw---- 1 grid asmadmin 253, 2 Mar 29 04:00 /dev/dm-2
brw-rw---- 1 grid asmadmin 253, 4 Mar 29 04:00 /dev/dm-4
brw-rw---- 1 grid asmadmin 253, 1 Mar 29 04:00 /dev/dm-1
brw-rw---- 1 grid asmadmin 253, 0 Mar 29 04:00 /dev/dm-0
[root@jydb1 ~]# dmsetup ls|sort
asmdisk1        (253:0)
asmdisk2        (253:1)
asmdisk3        (253:3)
asmdisk4        (253:4)
asmdisk5        (253:2)
asmdisk6        (253:5)

Bind the raw devices according to this mapping:

vi /etc/udev/rules.d/60-raw.rules
# Enter raw device bindings here.
#
# An example would be:
#   ACTION=="add", KERNEL=="sda", RUN+="/bin/raw /dev/raw/raw1 %N"
# to bind /dev/raw/raw1 to /dev/sda, or
#   ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
# to bind /dev/raw/raw2 to the device with major 8, minor 1.
ACTION=="add", ENV{MAJOR}=="253", ENV{MINOR}=="0", RUN+="/bin/raw /dev/raw/raw1 %M %m"
ACTION=="add", ENV{MAJOR}=="253", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
ACTION=="add", ENV{MAJOR}=="253", ENV{MINOR}=="2", RUN+="/bin/raw /dev/raw/raw3 %M %m"
ACTION=="add", ENV{MAJOR}=="253", ENV{MINOR}=="3", RUN+="/bin/raw /dev/raw/raw4 %M %m"
ACTION=="add", ENV{MAJOR}=="253", ENV{MINOR}=="4", RUN+="/bin/raw /dev/raw/raw5 %M %m"
ACTION=="add", ENV{MAJOR}=="253", ENV{MINOR}=="5", RUN+="/bin/raw /dev/raw/raw6 %M %m"
ACTION=="add", KERNEL=="raw1", OWNER="grid", GROUP="asmadmin", MODE="660"
ACTION=="add", KERNEL=="raw2", OWNER="grid", GROUP="asmadmin", MODE="660"
ACTION=="add", KERNEL=="raw3", OWNER="grid", GROUP="asmadmin", MODE="660"
ACTION=="add", KERNEL=="raw4", OWNER="grid", GROUP="asmadmin", MODE="660"
ACTION=="add", KERNEL=="raw5", OWNER="grid", GROUP="asmadmin", MODE="660"
ACTION=="add", KERNEL=="raw6", OWNER="grid", GROUP="asmadmin", MODE="660"
When that is done, check:
[root@jydb1 ~]# start_udev
Starting udev:                                             [  OK  ]
[root@jydb1 ~]# ll /dev/raw/raw*
crw-rw---- 1 grid asmadmin 162, 1 May 25 05:03 /dev/raw/raw1
crw-rw---- 1 grid asmadmin 162, 2 May 25 05:03 /dev/raw/raw2
crw-rw---- 1 grid asmadmin 162, 3 May 25 05:03 /dev/raw/raw3
crw-rw---- 1 grid asmadmin 162, 4 May 25 05:03 /dev/raw/raw4
crw-rw---- 1 grid asmadmin 162, 5 May 25 05:03 /dev/raw/raw5
crw-rw---- 1 grid asmadmin 162, 6 May 25 05:03 /dev/raw/raw6
crw-rw---- 1 root disk     162, 0 May 25 05:03 /dev/raw/rawctl
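As a final sanity check (not part of the original post), the raw bindings can also be listed directly:

raw -qa    # should show raw1 through raw6 bound to major 253, minors 0-5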