Item | Database 1 | Database 2 |
---|---|---|
Hostname | rac1 | rac2 |
Operating system | Red Hat Enterprise Linux Server release 7.2 (Maipo) | Red Hat Enterprise Linux Server release 7.2 (Maipo) |
Kernel | 3.10.0-327.el7.x86_64 | 3.10.0-327.el7.x86_64 |
Public NIC | enp0s9 | enp0s9 |
Public IP | 192.168.56.101 | 192.168.56.102 |
Private NIC | enp0s8 | enp0s8 |
Private IP | 10.0.0.7 | 10.0.0.8 |
Virtual IP | 192.168.56.103 | 192.168.56.104 |
SCAN name | definescan | |
SCAN IP | 192.168.56.105 | |
Unless otherwise noted, the following configuration must be performed on all nodes.
rac1(root)
```
# hostnamectl set-hostname rac1
# su
# hostname
rac1
```
rac2(root)
```
# hostnamectl set-hostname rac2
# su
# hostname
rac2
```
rac1(root), rac2(root)

Add the following entries to `/etc/hosts` on both nodes:

```
# public IP
192.168.56.101 rac1
192.168.56.102 rac2
# virtual IP
192.168.56.103 rac1-vip
192.168.56.104 rac2-vip
# rac scan IP
192.168.56.106 definescan
# private IP
10.0.0.7 rac1-priv
10.0.0.8 rac2-priv
```
!> All NICs must be configured with static IPs.
List all NICs with `nmcli con show`:

```
[root@rac1 /]# nmcli con show
NAME        UUID                                  TYPE            DEVICE
enp0s8      e849869a-de19-4824-bafd-4491e66e8ca4  802-3-ethernet  enp0s8
enp0s3      86db33b5-ea89-47aa-a038-98f6029fa608  802-3-ethernet  enp0s3
enp0s9      706ffc32-e82c-4a01-8b8f-eefbf92950ff  802-3-ethernet  --
virbr0-nic  1ac00d88-3f52-4dad-8da7-006b9073469f  802-3-ethernet  virbr0-nic
virbr0      00facfd9-5460-4846-8e94-1a12de673348  bridge          virbr0
```
Then go to the directory `/etc/sysconfig/network-scripts` and locate each NIC's configuration file by its name; the files are normally named `ifcfg-NAME`.
rac1(root)
ifcfg-enp0s9 (public IP)
```
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPADDR=192.168.56.101
IPV4_FAILURE_FATAL=no
NAME=enp0s9
UUID=706ffc32-e82c-4a01-8b8f-eefbf92950ff
DEVICE=enp0s9
ONBOOT=yes
```
ifcfg-enp0s8 (private IP)
```
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPADDR=10.0.0.7
NAME=enp0s8
UUID=e849869a-de19-4824-bafd-4491e66e8ca4
DEVICE=enp0s8
ONBOOT=yes
PEERDNS=yes
PEERROUTES=yes
```
rac2(root)
ifcfg-enp0s9 (public IP)
```
HWADDR=08:00:27:26:72:E5
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPADDR=192.168.56.102
IPV4_FAILURE_FATAL=no
NAME=enp0s9
UUID=bc89e1c6-2457-41ce-a366-5a505c5d1cd3
ONBOOT=yes
```
ifcfg-enp0s8 (private IP)
```
HWADDR=08:00:27:F9:1B:62
TYPE=Ethernet
IPADDR=10.0.0.8
BOOTPROTO=none
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
NAME=enp0s8
UUID=0d68de3e-74ab-4e0d-af99-12a68f5f7525
ONBOOT=yes
```
On both nodes, verify that the two nodes can reach each other with ping:

```
ping rac1
ping rac2
ping rac1-priv
ping rac2-priv
```
If no configuration file exists for a NIC under the `network-scripts` directory, create one yourself; the NIC's UUID can be obtained with `nmcli con show`.
Stop and disable the firewall:

```
# systemctl status firewalld.service
# systemctl stop firewalld.service
# systemctl disable firewalld.service
```
Edit the file `/etc/selinux/config` and set:

```
SELINUX=disabled
```
Disable SELinux for the current session:

```
# setenforce 0
# getenforce
```
The RAC installation depends on quite a few packages, all of which are included on the OS installation image. Mount the image and configure it as a local yum repository to install them. The way to attach the image differs between virtualization environments, but the steps are simple; VirtualBox is used as the example here.
The attached image shows up as the device `/dev/sr0`, which can be mounted under `/mnt` with the `mount` command:

```
# mount /dev/sr0 /mnt
# cd /mnt
# ll
total 872
dr-xr-xr-x.  4 root root   2048 Oct 30  2015 addons
dr-xr-xr-x.  3 root root   2048 Oct 30  2015 EFI
-r--r--r--.  1 root root   8266 Apr  4  2014 EULA
-r--r--r--.  1 root root  18092 Mar  6  2012 GPL
dr-xr-xr-x.  3 root root   2048 Oct 30  2015 images
dr-xr-xr-x.  2 root root   2048 Oct 30  2015 isolinux
dr-xr-xr-x.  2 root root   2048 Oct 30  2015 LiveOS
-r--r--r--.  1 root root    114 Oct 30  2015 media.repo
dr-xr-xr-x.  2 root root 835584 Oct 30  2015 Packages
dr-xr-xr-x. 24 root root   6144 Oct 30  2015 release-notes
dr-xr-xr-x.  2 root root   4096 Oct 30  2015 repodata
-r--r--r--.  1 root root   3375 Oct 23  2015 RPM-GPG-KEY-redhat-beta
-r--r--r--.  1 root root   3211 Oct 23  2015 RPM-GPG-KEY-redhat-release
-r--r--r--.  1 root root   1568 Oct 30  2015 TRANS.TBL
```
```
# cd /etc/yum.repos.d
# cat <<EOF > redhat7.2iso.repo
[rhel7]
name = Red Hat Enterprise Linux 7.2
baseurl=file:///mnt/
gpgcheck=0
enabled=1
EOF
# yum clean all
# yum grouplist
# yum makecache
```
If these commands produce normal output, the repository is configured correctly.
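As an additional quick check (a sketch assuming the repository id `rhel7` defined in the file above), confirm the local repo shows up in yum:

```
# the local ISO repo should be listed among the enabled repositories
yum repolist enabled | grep rhel7
```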
Red Hat's default yum repository requires a registered subscription. If your system is not registered, yum reports the following error:

```
Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
```
The fix is to remove the bundled repository: simply delete the file `/etc/yum.repos.d/redhat.repo`.
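A one-liner for that, doing exactly what the text above describes:

```
# remove the unregistered subscription repo so yum stops warning about it
rm -f /etc/yum.repos.d/redhat.repo
```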
This RAC installation is performed through the GUI, so VNC must be installed first; the database installers are then run from a VNC session. Before installing VNC, make sure the image-mount and local-yum-repository steps above have been completed.
```
# yum install tigervnc-server
```
Edit the file `/lib/systemd/system/vncserver@.service` and replace `<USER>` with the login user; here root is used directly.
```
[Unit]
Description=Remote desktop service (VNC)
After=syslog.target network.target

[Service]
Type=forking
# Clean any existing files in /tmp/.X11-unix environment
ExecStartPre=/bin/sh -c '/usr/bin/vncserver -kill %i > /dev/null 2>&1 || :'
ExecStart=/usr/sbin/runuser -l root -c "/usr/bin/vncserver %i"
PIDFile=/root/.vnc/%H%i.pid
ExecStop=/bin/sh -c '/usr/bin/vncserver -kill %i > /dev/null 2>&1 || :'

[Install]
WantedBy=multi-user.target
```
After editing, run the following command to reload systemd:

```
# systemctl daemon-reload
```
Start the VNC server:

```
# vncserver
```

The first start prompts you to set a password. Once started, the default port is 5901; the port can also be checked with the following command:
```
# netstat -npl | grep vnc
tcp        0      0 0.0.0.0:5901      0.0.0.0:*      LISTEN      7048/Xvnc
tcp6       0      0 :::5901           :::*           LISTEN      7048/Xvnc
```
All subsequent GUI steps are performed through a VNC client.
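For example, with the TigerVNC client you can connect to display :1 on node 1 (the IP below is rac1's public IP from the plan above; adjust to your environment):

```
# connect to the VNC desktop on rac1, display :1 (TCP port 5901)
vncviewer 192.168.56.101:1
```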
Create the OS groups and users:

```
groupadd -g 1204 oinstall
groupadd -g 1200 dba
groupadd -g 1203 asmadmin
groupadd -g 1201 asmdba
groupadd -g 1202 asmoper

useradd -u 1100 -g oinstall -G dba,asmdba,asmadmin -d /home/oracle oracle
useradd -u 1200 -g oinstall -G asmadmin,asmdba,asmoper -d /home/grid grid
passwd oracle
passwd grid
```
Check that the `nobody` user exists; if it does not, create it manually and make sure its ID is the same on both nodes.

```
# id nobody
```
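A small sketch of that check; creating `nobody` with the distribution defaults is an assumption here, so compare the `id` output on rac1 and rac2 afterwards:

```
# create nobody only if it is missing, then verify the UID/GID match on both nodes
id nobody >/dev/null 2>&1 || useradd nobody
id nobody
```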
```
cat 1>> /etc/sysctl.conf <<EOF
fs.file-max = 6815744
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
kernel.shmall = 1073741824
kernel.shmmax = 858993459200
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
EOF
```
Run the following command to apply the settings:

```
sysctl -p
```
limits.conf
```
cat 1>>/etc/security/limits.conf <<EOF
grid soft nofile 1024
grid hard nofile 65536
grid soft nproc 4096
grid hard nproc 16384
grid soft stack 10240
grid hard stack 32768
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft nproc 4096
oracle hard nproc 16384
oracle soft stack 10240
oracle hard stack 32768
grid soft memlock -1
grid hard memlock -1
oracle soft memlock -1
oracle hard memlock -1
EOF
```
/etc/pam.d/login
```
cat 1>>/etc/pam.d/login <<EOF
session required pam_limits.so
EOF
```
/etc/profile
```
cat 1>>/etc/profile <<EOF
if [ \$USER = "oracle" ] || [ \$USER = "grid" ]; then
    if [ \$SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
    umask 022
fi
EOF
```
Environment variables for the grid user

Add the following to `.bash_profile`. Note that RAC node 1 is used as the example; on node 2 set ORACLE_SID=+ASM2, and on node 3 set ORACLE_SID=+ASM3.
rac1
```
# su - grid
# vi ~/.bash_profile
umask 022
export ORACLE_SID=+ASM1
export ORACLE_HOME=/u01/app/11.2.0/grid
export ORACLE_BASE=/u01/app/oracle
export PATH=/u01/app/11.2.0/grid/bin:$PATH
# source ~/.bash_profile
```
rac2
```
# su - grid
# vi ~/.bash_profile
umask 022
export ORACLE_SID=+ASM2
export ORACLE_HOME=/u01/app/11.2.0/grid
export ORACLE_BASE=/u01/app/oracle
export PATH=/u01/app/11.2.0/grid/bin:$PATH
# source ~/.bash_profile
```
Environment variables for the oracle user

Add the following to `.bash_profile`. Note that RAC node 1 is used as the example; on node 2 set ORACLE_SID=db2, and on node 3 set ORACLE_SID=db3.
rac1
```
# su - oracle
# vi ~/.bash_profile
umask 022
export ORACLE_SID=db1
export ORACLE_BASE=/u01/app/oracledb
export ORACLE_HOME=/u01/app/oracledb/product/11.2.0/db_1
export PATH=$ORACLE_HOME/bin:$PATH
# source ~/.bash_profile
```
rac2
```
# su - oracle
# vi ~/.bash_profile
umask 022
export ORACLE_SID=db2
export ORACLE_BASE=/u01/app/oracledb
export ORACLE_HOME=/u01/app/oracledb/product/11.2.0/db_1
export PATH=$ORACLE_HOME/bin:$PATH
# source ~/.bash_profile
```
As root, run the following commands on all nodes to create the directories.
```
su - root
mkdir -p /u01/app/11.2.0/grid
mkdir -p /u01/app/oracle/product/11.2.0/db_1
mkdir -p /u01/soft
mkdir -p /u01/app/oracledb
mkdir -p /u01/app/oracledb/product/11.2.0/db_1
chown -R grid:oinstall /u01/app/oracle
chown -R grid:oinstall /u01
chown -R oracle:oinstall /u01/app/oracle/product/11.2.0/db_1
chown -R oracle:oinstall /u01/app/oracledb
chown -R oracle:oinstall /u01/app/oracledb/product/11.2.0/db_1
chown -R grid:oinstall /u01/app/11.2.0/grid
chmod -R 775 /u01/
```
Disable transparent huge pages, both through rc.local and on the kernel command line:

```
# chmod +x /etc/rc.d/rc.local
# cat >>/etc/rc.local <<EOF
if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
   echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
EOF
# cat >/etc/default/grub <<EOF
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet transparent_hugepage=never"
GRUB_DISABLE_RECOVERY="true"
EOF
# grub2-mkconfig -o /boot/grub2/grub.cfg
```
Check the swap size and extend it if needed (a 2 GB swap file is added here):

```
# grep SwapTotal /proc/meminfo
SwapTotal:       2723836 kB
# mkdir -p /usr/swap
# dd if=/dev/zero of=/usr/swap/swapfile bs=1G count=2
# mkswap /usr/swap/swapfile
# swapon /usr/swap/swapfile
# grep SwapTotal /proc/meminfo
SwapTotal:       4820984 kB
```
To mount the swap file at boot, edit `/etc/fstab` and add the following line at the end:

```
/usr/swap/swapfile swap swap defaults 0 0
```
Edit the file `/etc/systemd/logind.conf` and set RemoveIPC to no:

```
RemoveIPC=no
```
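One non-interactive way to make that change (a sketch; it assumes the stock commented `#RemoveIPC=...` entry shipped in logind.conf):

```
# set RemoveIPC=no, uncommenting the line if needed, then verify
sed -i 's/^#\?RemoveIPC=.*/RemoveIPC=no/' /etc/systemd/logind.conf
grep RemoveIPC /etc/systemd/logind.conf
```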
Then reload systemd and restart logind:

```
systemctl daemon-reload
systemctl restart systemd-logind
```
Edit the file `/etc/sysconfig/network` and add:

```
NOZEROCONF=yes
```
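For example, simply appending the line works (a sketch):

```
# suppress zeroconf route configuration
echo "NOZEROCONF=yes" >> /etc/sysconfig/network
```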
Run the following commands to install the dependency packages; ignore any errors they report.
```
yum clean all
yum install -y binutils*
yum install -y compat-libcap1*
yum install -y compat-libstdc++*
yum install -y compat-libstdc++*686*
yum install -y e2fsprogs*
yum install -y e2fsprogs-libs*
yum install -y glibc*686*
yum install -y glibc*
yum install -y glibc-devel*
yum install -y glibc-devel*686*
yum install -y ksh*
yum install -y libgcc*686*
yum install -y libgcc*
yum install -y libs*
yum install -y libstdc++*
yum install -y libstdc++*686*
yum install -y libstdc++-devel*
yum install -y libstdc++*686*
yum install -y libaio*
yum install -y libaio*686*
yum install -y libaio-devel*
yum install -y libaio-devel*686*
yum install -y libXtst*
yum install -y libXtst*686*
yum install -y libX11*686*
yum install -y libX11*
yum install -y libXau*686*
yum install -y libXau*
yum install -y libxcb*686*
yum install -y libxcb*
yum install -y libXi*
yum install -y libXi*686*
yum install -y make*
yum install -y net-tools*
yum install -y nfs-utils*
yum install -y sysstat*
yum install -y smartmontools*
yum install -y unixODBC*
yum install -y unixODBC-devel*
yum install -y unixODBC*686*
yum install -y unixODBC-devel*686*
yum install -y gcc-*
yum install -y gcc-c++*
yum install -y elfutils-libelf-devel
```
Special note: RHEL 7.2's certification for Oracle 11.2.0.4 came after the fact (11.2.0.4 was released before Red Hat 7.2). The compat-libstdc++-33 package is required by the 11.2.0.4 installation but is not included in the Red Hat 7.2 media, so it must be obtained from another release and installed manually.
Both packages can be obtained from the following address.
After downloading, run the following commands to install them:

```
rpm -ivh compat-libstdc++-33-3.2.3-72.el7.x86_64.rpm
rpm -ivh compat-libstdc++-33-3.2.3-72.el7.i686.rpm
```
Run the following command to check the installation status of the dependency packages:

```
rpm -q binutils compat-libcap1 compat-libstdc++-33 e2fsprogs e2fsprogs-libs glibc glibc glibc-devel glibc-devel ksh libgcc libgcc libs libstdc++ libstdc++ libstdc++-devel libstdc++ libaio libaio libaio-devel libaio-devel libXtst libXtst libX11 libX11 libXau libXau libxcb libxcb libXi libXi make net-tools nfs-utils sysstat smartmontools unixODBC unixODBC-devel unixODBC unixODBC-devel gcc gcc-c++ elfutils-libelf-devel
```
If any package is reported as "is not installed", install it before moving on to the next step.
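A quick way to surface only the missing ones (a sketch that checks a subset of the list above):

```
# rpm -q prints "package X is not installed" for anything that is missing
rpm -q binutils compat-libcap1 compat-libstdc++-33 ksh libaio libaio-devel \
  libXtst libstdc++-devel gcc gcc-c++ elfutils-libelf-devel | grep "not installed"
```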
A total of five shared storage disks are attached to the systems, used as follows:
Path | Size | Purpose |
---|---|---|
/dev/sdb | 2G | vote (voting disk) |
/dev/sdc | 2G | vote (voting disk) |
/dev/sdd | 2G | vote (voting disk) |
/dev/sde | 20G | arch (archived logs) |
/dev/sdf | 40G | data (data files) |
Use the `fdisk -l` command to see the details.
Because the storage is shared, partitioning only needs to be performed on any one node.
Partition each disk. Taking `/dev/sde` as an example, run `fdisk /dev/sde` and enter n -> p -> (accept every default) -> w in sequence.
```
# fdisk /dev/sde
Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-43548671, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-43548671, default 43548671):
Using default value 43548671
Partition 1 of type Linux and of size 20.8 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
```
Use `/usr/lib/udev/scsi_id -g -u /dev/sdX` to look up each disk's WWID. Because the storage is shared, every node sees the same WWIDs. udev then uses a rules file to set permissions on the disks so that the grid user can operate on them.
```
# /usr/lib/udev/scsi_id -g -u /dev/sdb
1ATA_VBOX_HARDDISK_VB54ce865f-e65a7d00
# /usr/lib/udev/scsi_id -g -u /dev/sdc
1ATA_VBOX_HARDDISK_VB8f9429ee-32f50530
# /usr/lib/udev/scsi_id -g -u /dev/sdd
1ATA_VBOX_HARDDISK_VBc92cde00-a564f90e
# /usr/lib/udev/scsi_id -g -u /dev/sde
1ATA_VBOX_HARDDISK_VBcc226ad4-aee5f903
# /usr/lib/udev/scsi_id -g -u /dev/sdf
1ATA_VBOX_HARDDISK_VB3fd31e1a-a035187e
```
Based on the WWIDs found above, create the rules file `/etc/udev/rules.d/99-asmdevices.rules` with the following content. RESULT is the WWID queried above; create one entry per disk.
```
ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="1ATA_VBOX_HARDDISK_VB54ce865f-e65a7d00", SYMLINK+="asmdisk001", OWNER="grid", GROUP="asmadmin", MODE="0660"
ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="1ATA_VBOX_HARDDISK_VB8f9429ee-32f50530", SYMLINK+="asmdisk002", OWNER="grid", GROUP="asmadmin", MODE="0660"
ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="1ATA_VBOX_HARDDISK_VBc92cde00-a564f90e", SYMLINK+="asmdisk003", OWNER="grid", GROUP="asmadmin", MODE="0660"
ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="1ATA_VBOX_HARDDISK_VBcc226ad4-aee5f903", SYMLINK+="asmdisk004", OWNER="grid", GROUP="asmadmin", MODE="0660"
ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="1ATA_VBOX_HARDDISK_VB3fd31e1a-a035187e", SYMLINK+="asmdisk005", OWNER="grid", GROUP="asmadmin", MODE="0660"
```
Apply the rules:

```
partprobe
udevadm control --reload-rules
udevadm trigger --type=devices --action=change
```
If you bind by device name, be sure to run `partprobe` to refresh the disk information before applying the rules file.
If the `asmdisk*` symlinks appear under `/dev/`, the rules were applied successfully.
```
# cd /dev
# ll asmdisk*
lrwxrwxrwx 1 root root 3 Sep 26 00:55 asmdisk001 -> sdb
lrwxrwxrwx 1 root root 3 Sep 26 00:55 asmdisk002 -> sdc
lrwxrwxrwx 1 root root 3 Sep 26 00:55 asmdisk003 -> sdd
lrwxrwxrwx 1 root root 3 Sep 26 00:55 asmdisk004 -> sde
lrwxrwxrwx 1 root root 3 Sep 26 00:55 asmdisk005 -> sdf
```
Now check the device permissions: if everything worked, they change to `660 (rw-rw----)` and the owner becomes `grid:asmadmin`.
```
# ls -l /dev/sd*
brw-rw----. 1 grid asmadmin 8, 16 Sep 26 10:38 /dev/sdb
brw-rw----. 1 grid asmadmin 8, 32 Sep 26 10:38 /dev/sdc
brw-rw----. 1 grid asmadmin 8, 48 Sep 26 10:38 /dev/sdd
brw-rw----. 1 grid asmadmin 8, 64 Sep 26 10:38 /dev/sde
brw-rw----. 1 grid asmadmin 8, 80 Sep 26 10:38 /dev/sdf
```
Upload the installation packages to `/u01/soft`:

```
# ll /u01/soft/
total 9797256
-rw-r--r--@ 1 grid oinstall 1395582860  9 26 14:13 p13390677_112040_Linux-x86-64_1of7.zip
-rw-r--r--@ 1 grid oinstall 1151304589  9 26 13:52 p13390677_112040_Linux-x86-64_2of7.zip
-rw-r--r--@ 1 grid oinstall 1205251894  9 20 01:57 p13390677_112040_Linux-x86-64_3of7.zip
-rw-r--r--@ 1 grid oinstall 1133472011  9 26 09:59 p29255947_112040_Linux-x86-64.zip
-rw-r--r--@ 1 grid oinstall  113112960  9 17 13:02 p6880880_112000_Linux-x86-64.zip
```
```
# su - grid
# cd /u01/soft
# unzip *.zip
```
Install the cvuqdisk package as root:

```
# su - root
# cd /u01/soft/grid/rpm
# rpm -ivh cvuqdisk-1.0.9-1.rpm
```
Copy the file `/u01/soft/grid/rpm/cvuqdisk-1.0.9-1.rpm` to the /tmp directory on the other nodes; scp can be used for the copy.
On the other nodes, run the following:
```
# su - root
# cd /tmp
# scp grid@rac1:/u01/soft/grid/rpm/cvuqdisk-1.0.9-1.rpm .
# rpm -ivh cvuqdisk-1.0.9-1.rpm
```
Once the cvuqdisk package is installed, log in as the grid user on node 1 and start the installation of the database cluster (Grid Infrastructure) software:
```
# su - grid
$ export DISPLAY=:1.0
$ xhost +
$ cd /u01/soft/grid
$ ./runInstaller -jreLoc /etc/alternatives/jre_1.8.0
```
`export DISPLAY=:1.0` controls which display the GUI appears on; :1.0 is the VNC display started earlier. Launching the installer as `./runInstaller -jreLoc /etc/alternatives/jre_1.8.0` works around display bugs in the installer that would otherwise leave dialogs truncated or buttons unclickable.
Language: Simplified Chinese. SCAN port: 1521 (it can be changed); do not configure GNS. Click Add to add the cluster nodes. At this point, if node trust has not been configured yet, clicking Next reports an [INS-30132] error.
Click SSH Connectivity on the screen to configure mutual trust between the nodes.
Enter the grid user's password for rac2 and click Setup. If it reports success, you can click Test to confirm the nodes now trust each other; clicking Test before Setup produces an error.
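After Setup succeeds, a quick manual check (a sketch; run as grid on rac1) is that these commands return a date without prompting for a password:

```
# passwordless SSH equivalency should now work between the nodes
ssh rac2 date
ssh rac2-priv date
```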
Do Not Use
Oracle ASM
Disk Group Name
Enter CRS, then click Change Discovery Path and enter `/dev/asmdisk*`; this path is the one specified earlier when the storage udev rules were configured. Following the earlier plan, select the three 2 GB disks asmdisk001/002/003.
Because the system already ships ksh, the missing pdksh package can be ignored; the ASM device warning can also be ignored, since raw shared disks are used and their sharing has been confirmed.
If errors are reported here, resolve them as prompted and click Check Again to re-run the checks. If you have confirmed that an error can be ignored, tick the Ignore All option and continue to the next step.
Running the first script, `/u01/app/oraInventory/orainstRoot.sh`, normally causes no problems.
Running the second script, `/u01/app/11.2.0/grid/root.sh`, fails with the following error:
```
Adding Clusterware entries to inittab
ohasd failed to start
Failed to start the Clusterware. Last 20 lines of the alert log follow:
2019-09-27 12:54:19.483:
```
This is a compatibility problem between RHEL 7.x and 11.2.0.4.0. RHEL 7 uses systemd rather than initd to run and restart processes, while root.sh starts the ohasd process the traditional initd way. The fix is to define ohasd as a service on RHEL 7 and start it before running the root.sh script.
Stop the root.sh script and run the following as root:
```
# touch /usr/lib/systemd/system/ohas.service
# chmod 777 /usr/lib/systemd/system/ohas.service
# cat >>/usr/lib/systemd/system/ohas.service <<EOF
[Unit]
Description=Oracle High Availability Services
After=syslog.target

[Service]
ExecStart=/etc/init.d/init.ohasd run >/dev/null 2>&1 Type=simple
Restart=always

[Install]
WantedBy=multi-user.target
EOF
# systemctl daemon-reload
# systemctl enable ohas.service
# systemctl start ohas.service
# systemctl status ohas.service
● ohas.service - Oracle High Availability Services
   Loaded: loaded (/usr/lib/systemd/system/ohas.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2019-09-27 00:36:06 CST; 4s ago
 Main PID: 5730 (init.ohasd)
   CGroup: /system.slice/ohas.service
           └─5730 /bin/sh /etc/init.d/init.ohasd run >/dev/null 2>&1 Type=simple

Sep 27 00:36:06 rac2 systemd[1]: Started Oracle High Availability Services.
Sep 27 00:36:06 rac2 systemd[1]: Starting Oracle High Availability Services...
```
Re-run `root.sh`. If it still fails, the likely cause is that ohas.service was not started immediately after root.sh created init.ohasd. The workaround: while root.sh is running, keep checking /etc/init.d until the init.ohasd file appears, then immediately start the service manually:

```
systemctl start ohas.service
```
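A minimal sketch of that polling, assuming the paths used above (run it in a second root shell while root.sh is executing):

```
# wait for root.sh to create init.ohasd, then start the ohas service immediately
while [ ! -f /etc/init.d/init.ohasd ]; do
  sleep 1
done
systemctl start ohas.service
```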
When both nodes show the following output, the script has run successfully:
```
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.CRS.dg' on 'rac1'
CRS-2676: Start of 'ora.CRS.dg' on 'rac1' succeeded
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
```
After the script has finished on all nodes, run the following command on any node to check the status of each node:
```
# /u01/app/11.2.0/grid/bin/crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.asm
               ONLINE  ONLINE       rac1                     Started
               ONLINE  ONLINE       rac2                     Started
ora.gsd
               OFFLINE OFFLINE      rac1
               OFFLINE OFFLINE      rac2
ora.net1.network
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.ons
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac1
ora.cvu
      1        ONLINE  ONLINE       rac1
ora.oc4j
      1        ONLINE  ONLINE       rac1
ora.rac1.vip
      1        ONLINE  ONLINE       rac1
ora.rac2.vip
      1        ONLINE  ONLINE       rac2
ora.scan1.vip
      1        ONLINE  ONLINE       rac1
```
During cluster verification a SCAN verification issue is reported. This is because hosts-file resolution is used; it is enough to make sure all nodes resolve definescan correctly.
Click Next and choose to ignore the error.
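A quick way to confirm that a node resolves the SCAN name from /etc/hosts (a sketch):

```
# both should return the SCAN entry configured in /etc/hosts
getent hosts definescan
ping -c 1 definescan
```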
If the grid installation needs to be cleaned up and redone, the following resets the directory layout and removes the previous configuration:

```
cd /u01/app
rm -rf *
mkdir -p /u01/app/11.2.0/grid
mkdir -p /u01/app/oracle/product/11.2.0/db_1
mkdir -p /u01/soft
mkdir -p /u01/app/oracledb
mkdir -p /u01/app/oracledb/product/11.2.0/db_1
chown -R grid:oinstall /u01/app/oracle
chown -R grid:oinstall /u01
chown -R oracle:oinstall /u01/app/oracle/product/11.2.0/db_1
chown -R oracle:oinstall /u01/app/oracledb
chown -R oracle:oinstall /u01/app/oracledb/product/11.2.0/db_1
chown -R grid:oinstall /u01/app/11.2.0/grid
chmod -R 775 /u01/
cd /etc/
rm -rf ora*    # remove the previous configuration
/u01/app/11.2.0/grid/perl/bin/perl /u01/app/11.2.0/grid/crs/install/roothas.pl -deconfig -force
```
Log in as the oracle user on node 1 and install the database software:
```
# su - oracle
$ export DISPLAY=:1.0
$ xhost +
$ cd /u01/soft/database
$ ./runInstaller -jreLoc /etc/alternatives/jre_1.8.0
```
As with the cluster installation, mutual trust must be configured here: enter the oracle user's password for rac2, click Setup, and when it finishes click Test; once the test passes, continue to the next step.
dba
The installer then reports the following error:

```
Error in invoking target 'agent nmhs' of makefile....
```
This is also caused by a compatibility bug between RHEL 7.x and 11.2.0.4. The fix (run it on the installing node only):
```
su - oracle
cd $ORACLE_HOME/sysman/lib
vi ins_emagent.mk
# search for the keyword MK_EMAGENT_NMECTL and append -lnnz11, as follows
#===========================
#  emdctl
#===========================
$(SYSMANBIN)emdctl:
        $(MK_EMAGENT_NMECTL) -lnnz11
```
After making the change, go back to the installer screen and click Retry.
Run asmca and click Create to create the disk groups:

```
su -
export DISPLAY=:1.0
xhost +
su - grid
export DISPLAY=:1.0
xhost +
asmca
```
Choose External redundancy for each disk group. If the disk list is not visible here, the window may be too small; drag its bottom-right corner to enlarge it.
The final state of the ASM disk groups is shown below. Click Mount All, then click Exit to leave.
Run dbca as the oracle user to create the database:

```
su -
export DISPLAY=:1.0
xhost +
su - oracle
export DISPLAY=:1.0
xhost +
dbca
```
Place the archive logs in +ARCH.
Adjust the process count to 1000.
Choose UTF-8 as the character set.
The default connection mode is fine.
The following steps must be performed on all nodes.
```
# ll /u01/soft
-rw-r--r--@ 1 grid oinstall 1133472011  9 26 09:59 p29255947_112040_Linux-x86-64.zip
-rw-r--r--@ 1 grid oinstall  113112960  9 17 13:02 p6880880_112000_Linux-x86-64.zip
```
Replace OPatch in both the grid home and the database home with the newer version:

```
export GRID_HOME=/u01/app/11.2.0/grid
export ORACLE_HOME=/u01/app/oracledb/product/11.2.0/db_1
mv $GRID_HOME/OPatch $GRID_HOME/OPatch_bak
mv $ORACLE_HOME/OPatch $ORACLE_HOME/OPatch_bak
unzip p6880880_112000_Linux-x86-64.zip -d $GRID_HOME
unzip p6880880_112000_Linux-x86-64.zip -d $ORACLE_HOME
chown -R grid:oinstall $GRID_HOME/OPatch
chown -R oracle:oinstall $ORACLE_HOME/OPatch
```
As grid, unzip the patch and generate the OCM response file:

```
# su - grid
$ cd /u01/soft
$ unzip p29255947_112040_Linux-x86-64.zip
$ /u01/app/11.2.0/grid/OPatch/ocm/bin/emocmrsp -no_banner -output /tmp/ocm.rsp
Provide your email address to be informed of security issues, install and
initiate Oracle Configuration Manager. Easier for you if you use your My
Oracle Support Email address/User Name.
Visit http://www.oracle.com/support/policies.html for details.
Email address/User Name:
You have not provided an email address for notification of security issues.
Do you wish to remain uninformed of security issues ([Y]es, [N]o) [N]:  y
The OCM configuration response file (/tmp/ocm.rsp) was successfully created.
$ ll /tmp/ocm.rsp
-rw-r--r-- 1 grid oinstall 621 Sep 28 16:27 /tmp/ocm.rsp
```
Apply the patch with `opatch auto` as root:

```
# su -
# export PATH=/u01/app/11.2.0/grid/OPatch:$PATH
# opatch auto ./29255947/ -ocmrf /tmp/ocm.rsp
Executing /u01/app/11.2.0/grid/perl/bin/perl /u01/app/11.2.0/grid/OPatch/crs/patch11203.pl -patchdir . -patchn 29255947 -ocmrf /tmp/ocm.rsp -paramfile /u01/app/11.2.0/grid/crs/install/crsconfig_params
This is the main log file: /u01/app/11.2.0/grid/cfgtoollogs/opatchauto2019-09-28_16-32-59.log
This file will show your detected configuration and all the steps that opatchauto attempted to do on your system:
/u01/app/11.2.0/grid/cfgtoollogs/opatchauto2019-09-28_16-32-59.report.log
2019-09-28 16:32:59: Starting Clusterware Patch Setup
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Stopping RAC /u01/app/oracle_base/product/11.2.0/db_1 ...
Stopped RAC /u01/app/oracle_base/product/11.2.0/db_1 successfully

patch ./29255947/29141201/custom/server/29141201  apply successful for home  /u01/app/oracle_base/product/11.2.0/db_1
patch ./29255947/29141056  apply successful for home  /u01/app/oracle_base/product/11.2.0/db_1

Stopping CRS...
Stopped CRS successfully

patch ./29255947/29141201  apply successful for home  /u01/app/11.2.0/grid
patch ./29255947/29141056  apply successful for home  /u01/app/11.2.0/grid
patch ./29255947/28729245  apply successful for home  /u01/app/11.2.0/grid

Starting CRS...
Installing Trace File Analyzer
CRS-4123: Oracle High Availability Services has been started.

Starting RAC /u01/app/oracle_base/product/11.2.0/db_1 ...
Started RAC /u01/app/oracle_base/product/11.2.0/db_1 successfully

opatch auto succeeded.
```
When `opatch auto succeeded` appears, the patch has been applied successfully. If the patch fails, the log file can be found from the console output; in the example above it is `/u01/app/11.2.0/grid/cfgtoollogs/opatchauto2019-09-28_16-32-59.log`.
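After patching, one way to double-check that the patch is registered in a home (a sketch; the same can be run against $ORACLE_HOME/OPatch for the database home):

```
# list the grid home inventory and look for the patch number applied above
/u01/app/11.2.0/grid/OPatch/opatch lsinventory | grep 29255947
```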
This completes the RAC database installation.