Oracle 12c R2 RAC on Red Hat 7

1  Preparation

1.1   Changes in Grid Infrastructure

1.1.1  Simplified Image-Based Oracle Grid Infrastructure Installation

Starting with Oracle Grid Infrastructure 12c Release 2 (12.2), the Oracle Grid Infrastructure software is distributed as an image file that you download and install.

This feature greatly simplifies the Oracle Grid Infrastructure installation process.

Note: you must extract the GRID software into the directory where you want the Grid home to be, and then run the gridSetup.sh script to start the Oracle Grid Infrastructure installation.
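A minimal sketch of this new workflow, assuming the Grid home path used later in this guide and an arbitrary download location (/tmp here); the same steps are carried out for real in section 2.1 below:

[grid@rac1 ~]$ cd /u01/app/12.2.0/grid                        # the directory that will become the Grid home
[grid@rac1 grid]$ unzip -q /tmp/linuxx64_12201_grid_home.zip  # unzip the image directly into the Grid home
[grid@rac1 grid]$ ./gridSetup.sh                              # launch the configuration wizard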

1.1.2  Support for Oracle Domain Services Clusters and Oracle Member Clusters

Starting with Oracle Grid Infrastructure 12c Release 2 (12.2), the Oracle Grid Infrastructure installer supports the options of deploying Oracle Domain Services Clusters and Oracle Member Clusters.

For more details, see the official documentation:

http://docs.oracle.com/database/122/CWLIN/understanding-cluster-configuration-options.htm#GUID-4D6C2B52-9845-48E2-AD68-F0586AA20F48

1.1.3  Support for Oracle Extended Clusters

Starting with Oracle Grid Infrastructure 12c Release 2 (12.2), the installer supports the option of configuring cluster nodes in different locations as an Oracle Extended Cluster. An Oracle Extended Cluster consists of nodes located at multiple sites.

1.1.4  Global Grid Infrastructure Management Repository (GIMR)

Oracle Grid Infrastructure deployments now support a global, off-cluster Grid Infrastructure Management Repository (GIMR). This repository is a multitenant database with a pluggable database (PDB) holding the GIMR of each cluster. The global GIMR runs in an Oracle Domain Services Cluster. A global GIMR frees local clusters from dedicating storage in their disk groups for this data, and allows long-term historical data to be kept for diagnostics and performance analysis.

Later, during the GRID installation, you will be prompted whether to create a separate disk group for the GIMR data.

1.2   Minimum Hardware Requirements

No.   Component                                   Memory
1     Oracle Grid Infrastructure installations    At least 4 GB
2     Oracle Database installations               Minimum 1 GB; 2 GB or more recommended


1.3   RAC Planning

Item                      rac1                 rac2
Public IP (eth0)          192.168.56.121       192.168.56.123
Virtual IP (eth0)         192.168.56.122       192.168.56.124
Private IP (eth1)         192.168.57.121       192.168.57.123
ORACLE RAC SID            cndba1               cndba2
Cluster database name     cndba
SCAN IP                   192.168.56.125
Operating system          Red Hat 7.3
Oracle version            12.2.0.1

 

1.4   Disk Layout

12c R2 has larger disk group space requirements: the OCR disk group needs at least 40 GB with EXTERNAL redundancy and at least 80 GB with NORMAL redundancy.

Disk group   Disk          Size   Redundancy
DATAFILE     data01        40G    NORMAL
             data02        40G
OCR          OCRVOTING01   30G    NORMAL
             OCRVOTING02   30G
             OCRVOTING03   30G


1.5   Operating System Installation

The detailed steps are omitted.....

Note how host names and IP addresses are handled on Red Hat 7.3.

For the related procedures, see:

Changing the Hostname on Linux 7.2

http://www.cndba.cn/dave/article/1795

Linux 7 Firewall Configuration and Management

http://www.cndba.cn/dave/article/153
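On RHEL 7 the host name is managed with hostnamectl. A minimal sketch, using the names from the planning table (run the matching command on each node):

[root@rac1 ~]# hostnamectl set-hostname rac1
[root@rac1 ~]# hostnamectl status | grep "Static hostname"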

1.6   Configure /etc/hosts

Modify the hosts file on all nodes:

[root@rac1 ~]# cat /etc/hosts
 
127.0.0.1   localhost
 
 
 
192.168.56.121 rac1
 
192.168.57.121 rac1-priv
 
192.168.56.122 rac1-vip
 
 
 
 
192.168.56.123 rac2
 
192.168.57.123 rac2-priv
 
192.168.56.124 rac2-vip
 
 
 
 
192.168.56.125 rac-scan

1.7   Create Users and Groups

/usr/sbin/groupadd -g 54321 oinstall
 
/usr/sbin/groupadd -g 54322 dba
 
/usr/sbin/groupadd -g 54323 oper
 
/usr/sbin/groupadd -g 54324 backupdba
 
/usr/sbin/groupadd -g 54325 dgdba
 
/usr/sbin/groupadd -g 54326 kmdba
 
/usr/sbin/groupadd -g 54327 asmdba
 
/usr/sbin/groupadd -g 54328 asmoper
 
/usr/sbin/groupadd -g 54329 asmadmin
 
/usr/sbin/groupadd -g 54330 racdba
 
/usr/sbin/useradd -u 54321 -g oinstall -G dba,asmdba,oper oracle
 
/usr/sbin/useradd -u 54322 -g oinstall -G dba,oper,backupdba,dgdba,kmdba,asmdba,asmoper,asmadmin,racdba grid

Set the user passwords:

[root@rac1 ~]# passwd grid
 
[root@rac1 ~]# passwd oracle

Verify the user information:

[root@rac1 ~]# id oracle
 
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(oper),54327(asmdba)
 
[root@rac1 ~]# id grid
 
uid=54322(grid) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(oper),54324(backupdba),54325(dgdba),54326(kmdba),54327(asmdba),54328(asmoper),54329(asmadmin),54330(racdba)
 
[root@rac1 ~]#

1.8   Disable the Firewall and SELinux

Firewall:

[root@rac1 ~]# systemctl stop firewalld.service
 
[root@rac1 ~]# systemctl disable firewalld.service
 
rm '/etc/systemd/system/basic.target.wants/firewalld.service'
 
rm '/etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service'

SELINUX:

[root@rac1 ~]# cat /etc/selinux/config
 
# This file controls the state of SELinux on the system.
 
# SELINUX= can take one of these three values:
 
#     enforcing - SELinux security policy is enforced.
 
#     permissive - SELinux prints warnings instead of enforcing.
 
#     disabled - No SELinux policy is loaded.
 
SELINUX=disabled
 
# SELINUXTYPE= can take one of these two values:
 
#     targeted - Targeted processes are protected,
 
#     mls - Multi Level Security protection.
 
SELINUXTYPE=targeted
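Editing /etc/selinux/config only takes effect after a reboot. If SELinux is currently enforcing, you can also switch the running system to permissive mode immediately (a minimal sketch):

[root@rac1 ~]# setenforce 0          # permissive for the current boot only
[root@rac1 ~]# getenforce
Permissive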
 
 
 
 

Change the following systemd setting to avoid the ASM ORA-27157 error:

1) Set RemoveIPC=no in /etc/systemd/logind.conf
2) Reboot the server, or restart systemd-logind
To restart systemd-logind:


# systemctl daemon-reload

# systemctl restart systemd-logind
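A minimal sketch of making the RemoveIPC change non-interactively and confirming it (this assumes the stock logind.conf, where the parameter exists but is commented out; if the line is missing entirely, append RemoveIPC=no instead):

[root@rac1 ~]# sed -i 's/^#\?RemoveIPC=.*/RemoveIPC=no/' /etc/systemd/logind.conf
[root@rac1 ~]# grep RemoveIPC /etc/systemd/logind.conf
RemoveIPC=no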

 

1.9   Configure Time Synchronization

Disable NTP and chronyd:

[root@rac1 ~]# systemctl stop ntpd.service
 
[root@rac1 ~]# systemctl disable ntpd.service
[root@rac1 etc]# systemctl stop chronyd.service
 
[root@rac1 etc]# systemctl disable chronyd.service
 
Removed symlink /etc/systemd/system/multi-user.target.wants/chronyd.service.
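With ntpd and chronyd both disabled, the Oracle Cluster Time Synchronization Service (CTSS) will run in active mode and keep the node clocks in sync. Since the prerequisite checks also look at the configuration files, it is common to move those aside as well; after the Grid installation you can confirm that CTSS is active. A hedged sketch:

[root@rac1 ~]# mv /etc/ntp.conf /etc/ntp.conf.bak 2>/dev/null
[root@rac1 ~]# mv /etc/chrony.conf /etc/chrony.conf.bak 2>/dev/null

# after the Grid installation, as the grid user:
[grid@rac1 ~]$ crsctl check ctss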

1.10   Create Directories

mkdir -p /u01/app/12.2.0/grid
 
mkdir -p /u01/app/grid
 
mkdir -p /u01/app/oracle/product/12.2.0/dbhome_1
 
chown -R grid:oinstall /u01
 
chown -R oracle:oinstall /u01/app/oracle
 
chmod -R 775 /u01/
 
 
 

Create the Inventory directory:

mkdir -p /u01/app/oraInventory

chown -R grid:oinstall /u01/app/oraInventory

chmod -R 775 /u01/app/oraInventory

1.11   Configure User Environment Variables

1.11.1  oracle User

[root@rac1 ~]# cat /home/oracle/.bash_profile
 
# .bash_profile
 
 
 
 
# Get the aliases and functions
 
if [ -f ~/.bashrc ]; then
 
. ~/.bashrc
 
fi
 
 
 
 
# User specific environment and startup programs
 
 
 
 
ORACLE_SID=cndba1;export ORACLE_SID 
 
#ORACLE_SID=cndba2;export ORACLE_SID 
 
ORACLE_UNQNAME=cndba;export ORACLE_UNQNAME
 
JAVA_HOME=/usr/local/java; export JAVA_HOME
 
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
 
ORACLE_HOME=$ORACLE_BASE/product/12.2.0/dbhome_1; export ORACLE_HOME
 
ORACLE_TERM=xterm; export ORACLE_TERM
 
NLS_DATE_FORMAT="YYYY:MM:DD HH24:MI:SS"; export NLS_DATE_FORMAT
 
NLS_LANG=american_america.ZHS16GBK; export NLS_LANG
 
TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN
 
ORA_NLS11=$ORACLE_HOME/nls/data; export ORA_NLS11
 
PATH=.:${JAVA_HOME}/bin:${PATH}:$HOME/bin:$ORACLE_HOME/bin:$ORA_CRS_HOME/bin
 
PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
 
export PATH
 
LD_LIBRARY_PATH=$ORACLE_HOME/lib
 
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
 
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
 
export LD_LIBRARY_PATH
 
CLASSPATH=$ORACLE_HOME/JRE
 
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
 
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
 
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
 
export CLASSPATH
 
THREADS_FLAG=native; export THREADS_FLAG
 
export TEMP=/tmp
 
export TMPDIR=/tmp
 
umask 022

1.11.2  grid User

[root@rac1 ~]# cat /home/grid/.bash_profile
 
# .bash_profile
 
 
 
 
# Get the aliases and functions
 
if [ -f ~/.bashrc ]; then
 
. ~/.bashrc
 
fi
 
 
 
 
# User specific environment and startup programs
 
 
 
 
PATH=$PATH:$HOME/bin
 
 
 
 
export ORACLE_SID=+ASM1 
 
#export ORACLE_SID=+ASM2 
 
export ORACLE_BASE=/u01/app/grid
 
export ORACLE_HOME=/u01/app/12.2.0/grid
 
export PATH=$ORACLE_HOME/bin:$PATH:/usr/local/bin/:.
 
export TEMP=/tmp
 
export TMP=/tmp
 
export TMPDIR=/tmp
 
umask 022
 
export PATH

1.12   Modify Resource Limits

1.12.1  Edit /etc/security/limits.conf

[root@rac1 ~]# cat >> /etc/security/limits.conf <<EOF
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
EOF

Add the following to /etc/profile:

if [ $USER = "oracle" ] || [ $USER = "grid" ]; then

if [ $SHELL = "/bin/ksh" ]; then

ulimit -p 16384

ulimit -n 65536

else

ulimit -u 16384 -n 65536

fi

umask 022

fi
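To confirm the limits are picked up for the installation owners, check them from a fresh login shell (a minimal sketch; after the /etc/profile block above, expect nofile 65536, nproc 16384 and stack 10240):

[root@rac1 ~]# su - grid -c "ulimit -n -u -s"
[root@rac1 ~]# su - oracle -c "ulimit -n -u -s"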
 
  <="" code="">

1.13   Configure NOZEROCONF

Add the following to the /etc/sysconfig/network file:

[root@rac1 ~]# cat >> /etc/sysconfig/network <<EOF
NOZEROCONF=YES
EOF

1.14   Modify Kernel Parameters

[root@rac1 ~]# vim /etc/sysctl.conf
fs.file-max = 6815744
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
kernel.shmall = 1073741824
kernel.shmmax = 4398046511104
kernel.panic_on_oops = 1
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
net.ipv4.conf.all.rp_filter = 2
net.ipv4.conf.default.rp_filter = 2
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
[root@rac1 ~]# sysctl -p
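A quick read-back to confirm the values are active (the expected output simply mirrors the settings above):

[root@rac1 ~]# sysctl fs.file-max kernel.sem fs.aio-max-nr
fs.file-max = 6815744
kernel.sem = 250        32000   100     128
fs.aio-max-nr = 1048576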

1.15   Install Required Packages

For yum configuration, refer to the following article:

YUM Repository Configuration Guide for Linux

 http://www.cndba.cn/dave/article/154

yum install binutils compat-libstdc++-33 gcc gcc-c++ glibc glibc.i686 glibc-devel ksh libgcc.i686 libstdc++-devel libaio libaio.i686 libaio-devel libaio-devel.i686 libXext libXext.i686 libXtst libXtst.i686 libX11 libX11.i686 libXau libXau.i686 libxcb libxcb.i686 libXi libXi.i686 make sysstat unixODBC unixODBC-devel zlib-devel zlib-devel.i686 compat-libcap1 -y
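A quick way to spot anything yum failed to install is to query the packages back (a minimal sketch; the .i686 variants are not covered here and can be checked the same way):

for p in binutils compat-libstdc++-33 gcc gcc-c++ glibc glibc-devel ksh libaio libaio-devel \
         libXext libXtst libX11 libXau libxcb libXi make sysstat unixODBC unixODBC-devel zlib-devel compat-libcap1
do
  rpm -q $p > /dev/null || echo "MISSING: $p"
done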

1.16   Install cvuqdisk

The cvuqdisk package ships with the Oracle installation media; after unzipping the database installation media you will find it under the rpm directory:

export CVUQDISK_GRP=asmadmin
 
[root@rac1 rpm]# pwd
 
/software/database/rpm
 
[root@rac1 rpm]# ll
 
total 12
 
-rwxr-xr-x 1 root root 8860 Jan  5 17:36 cvuqdisk-1.0.10-1.rpm
 
[root@rac1 rpm]# rpm -ivh cvuqdisk-1.0.10-1.rpm
 
Preparing...                          ################################# [100%]
 
Using default group oinstall to install package
 
Updating / installing...
 
   1:cvuqdisk-1.0.10-1                ################################# [100%]
 
[root@rac1 rpm]#

Copy the package to the other node and install it there as well.
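A minimal sketch of getting the package onto node 2 and installing it there, assuming password-less root ssh between the nodes (otherwise copy the file and run the rpm command on rac2 manually):

[root@rac1 rpm]# scp cvuqdisk-1.0.10-1.rpm rac2:/tmp/
[root@rac1 rpm]# ssh rac2 "export CVUQDISK_GRP=asmadmin; rpm -ivh /tmp/cvuqdisk-1.0.10-1.rpm"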

1.17   Configure Shared Disks

Run the following script:

[root@rac1 ~]# for i in b c d e f ;
do
echo "KERNEL==\"sd*\",ENV{DEVTYPE}==\"disk\",SUBSYSTEM==\"block\",PROGRAM==\"/usr/lib/udev/scsi_id -g -u -d \$devnode\",RESULT==\"`/usr/lib/udev/scsi_id -g -u /dev/sd$i`\", RUN+=\"/bin/sh -c 'mknod /dev/asmdisk$i b  \$major \$minor; chown grid:asmadmin /dev/asmdisk$i; chmod 0660 /dev/asmdisk$i'\""
done

Output:

KERNEL=="sd*",ENV{DEVTYPE}=="disk",SUBSYSTEM=="block",PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode",RESULT=="1ATA_VBOX_HARDDISK_VB90ea2842-3d5cfe18", RUN+="/bin/sh -c 'mknod /dev/asmdiskb b  $major $minor; chown grid:asmadmin /dev/asmdiskb; chmod 0660 /dev/asmdiskb'"
 
KERNEL=="sd*",ENV{DEVTYPE}=="disk",SUBSYSTEM=="block",PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode",RESULT=="1ATA_VBOX_HARDDISK_VB0c31ed82-ca3c7a2f", RUN+="/bin/sh -c 'mknod /dev/asmdiskc b  $major $minor; chown grid:asmadmin /dev/asmdiskc; chmod 0660 /dev/asmdiskc'"
 
KERNEL=="sd*",ENV{DEVTYPE}=="disk",SUBSYSTEM=="block",PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode",RESULT=="1ATA_VBOX_HARDDISK_VBd2eba70f-9707444e", RUN+="/bin/sh -c 'mknod /dev/asmdiskd b  $major $minor; chown grid:asmadmin /dev/asmdiskd; chmod 0660 /dev/asmdiskd'"
 
KERNEL=="sd*",ENV{DEVTYPE}=="disk",SUBSYSTEM=="block",PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode",RESULT=="1ATA_VBOX_HARDDISK_VB15946091-75f9c0f4", RUN+="/bin/sh -c 'mknod /dev/asmdiske b  $major $minor; chown grid:asmadmin /dev/asmdiske; chmod 0660 /dev/asmdiske'"
 
KERNEL=="sd*",ENV{DEVTYPE}=="disk",SUBSYSTEM=="block",PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode",RESULT=="1ATA_VBOX_HARDDISK_VBac950c6b-de84431c", RUN+="/bin/sh -c 'mknod /dev/asmdiskf b  $major $minor; chown grid:asmadmin /dev/asmdiskf; chmod 0660 /dev/asmdiskf'"

 

Create the rules file /etc/udev/rules.d/99-oracle-asmdevices.rules and add the above output to it.

[root@rac1 rules.d]# cat 99-oracle-asmdevices.rules
 
KERNEL=="sd*",ENV{DEVTYPE}=="disk",SUBSYSTEM=="block",PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode",RESULT=="1ATA_VBOX_HARDDISK_VB90ea2842-3d5cfe18", RUN+="/bin/sh -c 'mknod /dev/asmdiskb b  $major $minor; chown grid:asmadmin /dev/asmdiskb; chmod 0660 /dev/asmdiskb'"
 
KERNEL=="sd*",ENV{DEVTYPE}=="disk",SUBSYSTEM=="block",PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode",RESULT=="1ATA_VBOX_HARDDISK_VB0c31ed82-ca3c7a2f", RUN+="/bin/sh -c 'mknod /dev/asmdiskc b  $major $minor; chown grid:asmadmin /dev/asmdiskc; chmod 0660 /dev/asmdiskc'"
 
KERNEL=="sd*",ENV{DEVTYPE}=="disk",SUBSYSTEM=="block",PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode",RESULT=="1ATA_VBOX_HARDDISK_VBd2eba70f-9707444e", RUN+="/bin/sh -c 'mknod /dev/asmdiskd b  $major $minor; chown grid:asmadmin /dev/asmdiskd; chmod 0660 /dev/asmdiskd'"
 
KERNEL=="sd*",ENV{DEVTYPE}=="disk",SUBSYSTEM=="block",PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode",RESULT=="1ATA_VBOX_HARDDISK_VB15946091-75f9c0f4", RUN+="/bin/sh -c 'mknod /dev/asmdiske b  $major $minor; chown grid:asmadmin /dev/asmdiske; chmod 0660 /dev/asmdiske'"
 
KERNEL=="sd*",ENV{DEVTYPE}=="disk",SUBSYSTEM=="block",PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode",RESULT=="1ATA_VBOX_HARDDISK_VBac950c6b-de84431c", RUN+="/bin/sh -c 'mknod /dev/asmdiskf b  $major $minor; chown grid:asmadmin /dev/asmdiskf; chmod 0660 /dev/asmdiskf'"

Apply the rules:

[root@rac1 ~]# /sbin/udevadm trigger --type=devices --action=change
systemctl restart systemd-udev-trigger.service

If the device ownership and permissions do not change, try rebooting.

[root@rac1 rules.d]# ll /dev/asm*
 
brw-rw---- 1 grid asmadmin 8, 16 Mar 21 22:01 /dev/asmdiskb
 
brw-rw---- 1 grid asmadmin 8, 32 Mar 21 22:01 /dev/asmdiskc
 
brw-rw---- 1 grid asmadmin 8, 48 Mar 21 22:01 /dev/asmdiskd
 
brw-rw---- 1 grid asmadmin 8, 64 Mar 21 22:01 /dev/asmdiske
 
brw-rw---- 1 grid asmadmin 8, 80 Mar 21 22:01 /dev/asmdiskf
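The same rules file must also exist on rac2 so that the ASM devices appear there with identical names and ownership (the scsi_id values match on both nodes because the disks are shared). A minimal sketch:

[root@rac1 rules.d]# scp /etc/udev/rules.d/99-oracle-asmdevices.rules rac2:/etc/udev/rules.d/
[root@rac1 rules.d]# ssh rac2 "/sbin/udevadm control --reload-rules; /sbin/udevadm trigger --type=devices --action=change"
[root@rac1 rules.d]# ssh rac2 "ls -l /dev/asm*"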

1.17.1  Set the Disk I/O Scheduler

(1) Set the scheduler to deadline:

echo deadline >/sys/block/sdb/queue/scheduler
 
echo deadline > /sys/block/sdc/queue/scheduler
 
echo deadline >/sys/block/sdd/queue/scheduler
 
echo deadline > /sys/block/sde/queue/scheduler
 
echo deadline >/sys/block/sdf/queue/scheduler

(2) Verify the change:

For example:

[root@rac1 dev]#  more /sys/block/sdb/queue/scheduler
 
noop anticipatory [deadline] cfq
 
[root@rac1 dev]#  more /sys/block/sdc/queue/scheduler
 
noop anticipatory [deadline] cfq
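Note that values echoed into /sys are lost at reboot. One hedged way to make the deadline scheduler persistent is a small udev rule of your own (the file name 60-oracle-schedulers.rules is arbitrary); setting elevator=deadline on the kernel boot line is another common option:

[root@rac1 ~]# cat > /etc/udev/rules.d/60-oracle-schedulers.rules <<EOF
ACTION=="add|change", KERNEL=="sd[b-f]", ATTR{queue/scheduler}="deadline"
EOF
[root@rac1 ~]# /sbin/udevadm trigger --type=devices --action=change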

2  Install GRID

Download:

http://www.oracle.com/technetwork/database/enterprise-edition/downloads/oracle12c-linux-12201-3608234.html

2.1   Upload and Unzip the Installation Media

Note: unlike earlier releases, the 12c R2 GRID installation uses a direct-unzip (image-based) model. Copy the installation media into the GRID HOME first, and then unzip it in place; it must be unzipped inside the GRID HOME.

About Image-Based Oracle Grid Infrastructure Installation

Starting with Oracle Grid Infrastructure 12c Release 2 (12.2), installation and configuration of Oracle Grid Infrastructure software is simplified with image-based installation.

[grid@rac1 ~]$ echo $ORACLE_HOME
 
/u01/app/12.2.0/grid
 
[grid@rac1 ~]$ cd $ORACLE_HOME
 
[grid@rac1 grid]$ ll linuxx64_12201_grid_home.zip
 
-rw-r--r-- 1 grid oinstall 2994687209 Mar 21 22:10 linuxx64_12201_grid_home.zip
 
[grid@rac1 grid]$
 
[grid@rac1 grid]$ unzip linuxx64_12201_grid_home.zip

Once the zip file is extracted, the Grid home is already fully populated; all that remains is to run the setup script. There is no separate software-copy phase anymore.

[grid@rac1 grid]$ ll
 
total 2924572
 
drwxr-xr-x  2 grid oinstall        102 Jan 27 00:12 addnode
 
drwxr-xr-x 11 grid oinstall        118 Jan 27 00:10 assistants
 
drwxr-xr-x  2 grid oinstall       8192 Jan 27 00:12 bin
 
drwxr-xr-x  3 grid oinstall         23 Jan 27 00:12 cdata
 
drwxr-xr-x  3 grid oinstall         19 Jan 27 00:10 cha
 
drwxr-xr-x  4 grid oinstall         87 Jan 27 00:12 clone
 
drwxr-xr-x 16 grid oinstall        191 Jan 27 00:12 crs
 
drwxr-xr-x  6 grid oinstall         53 Jan 27 00:12 css
 
drwxr-xr-x  7 grid oinstall         71 Jan 27 00:10 cv
 
drwxr-xr-x  3 grid oinstall         19 Jan 27 00:10 dbjava
 
drwxr-xr-x  2 grid oinstall         22 Jan 27 00:11 dbs
 
drwxr-xr-x  2 grid oinstall         32 Jan 27 00:12 dc_ocm
 
drwxr-xr-x  5 grid oinstall        191 Jan 27 00:12 deinstall
 
drwxr-xr-x  3 grid oinstall         20 Jan 27 00:10 demo
 
drwxr-xr-x  3 grid oinstall         20 Jan 27 00:10 diagnostics
 
drwxr-xr-x  8 grid oinstall        179 Jan 27 00:11 dmu
 
-rw-r--r--  1 grid oinstall        852 Aug 19  2015 env.ora
 
drwxr-xr-x  7 grid oinstall         65 Jan 27 00:12 evm
 
drwxr-xr-x  5 grid oinstall         49 Jan 27 00:10 gpnp

2.2   Run the Installer

Run the setup script on node 1. A graphical display is required; you can use Xshell (with X11 forwarding) or VNC.

Linux VNC Installation and Configuration

http://www.cndba.cn/dave/article/1814
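Before launching the installer, it can be worth running the Cluster Verification Utility that ships in the Grid home, so that prerequisite problems show up early (a minimal sketch; the node list matches this guide's planning table):

[grid@rac1 grid]$ ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose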

[grid@rac1 grid]$ pwd
 
/u01/app/12.2.0/grid
 
[grid@rac1 grid]$ ll *.sh
 
-rwxr-x--- 1 grid oinstall 5395 Jul 21  2016 gridSetup.sh
 
-rwx------ 1 grid oinstall  603 Jan 27 00:12 root.sh
 
-rwx------ 1 grid oinstall  612 Jan 27 00:12 rootupgrade.sh
 
-rwxr-x--- 1 grid oinstall  628 Sep  5  2015 runcluvfy.sh
[grid@rac1 grid]$ ./gridSetup.sh
 
Launching Oracle Grid Infrastructure Setup Wizard...

 

 

 

 

Add the cluster nodes and configure SSH connectivity.

 

 

 

Note: a new redundancy type, FLEX, has been added, and the disk group space requirements are also higher than before.

The official documentation explains:

A FLEX REDUNDANCY disk group allows databases to specify their own redundancy after the disk group is created, and the redundancy of a file can also be changed after creation. This disk group type supports Oracle ASM file groups and quota groups. A flex disk group requires at least three failure groups. If a flex disk group has fewer than five failure groups, it can tolerate the loss of one; otherwise it can tolerate the loss of two failure groups. To create a flex disk group, the COMPATIBLE.ASM and COMPATIBLE.RDBMS disk group attributes must be set to 12.2 or higher.

 

If the prerequisite checks show warnings about NTP, memory, or the avahi-daemon, they can be ignored.

 

Start the installation.

 

Run the root scripts:

 

[root@rac1 etc]# /u01/app/12.2.0/grid/root.sh
 
Performing root user operation.
 
 
 
 
The following environment variables are set as:
 
    ORACLE_OWNER= grid
 
    ORACLE_HOME=  /u01/app/12.2.0/grid
 
 
 
 
Enter the full pathname of the local bin directory: [/usr/local/bin]:
 
   Copying dbhome to /usr/local/bin ...
 
   Copying oraenv to /usr/local/bin ...
 
   Copying coraenv to /usr/local/bin ...
 
 
Creating /etc/oratab file...
 
Entries will be added to the /etc/oratab file as needed by
 
Database Configuration Assistant when a database is created
 
Finished running generic part of root script.
 
Now product-specific root actions will be performed.
 
Relinking oracle with rac_on option
 
Using configuration parameter file: /u01/app/12.2.0/grid/crs/install/crsconfig_params
 
The log of current session can be found at:
 
  /u01/app/grid/crsdata/rac1/crsconfig/rootcrs_rac1_2017-03-21_11-50-15PM.log
 
2017/03/21 23:50:20 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
 
2017/03/21 23:50:20 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.
 
2017/03/21 23:50:53 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
 
2017/03/21 23:50:53 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
 
2017/03/21 23:50:57 CLSRSC-363: User ignored prerequisites during installation
 
2017/03/21 23:50:58 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
 
2017/03/21 23:51:00 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
 
2017/03/21 23:51:01 CLSRSC-594: Executing installation step 5 of 19: 'SaveParamFile'.
 
2017/03/21 23:51:12 CLSRSC-594: Executing installation step 6 of 19: 'SetupOSD'.
 
2017/03/21 23:51:13 CLSRSC-594: Executing installation step 7 of 19: 'CheckCRSConfig'.
 
2017/03/21 23:51:13 CLSRSC-594: Executing installation step 8 of 19: 'SetupLocalGPNP'.
 
2017/03/21 23:51:49 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
 
2017/03/21 23:51:58 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
 
2017/03/21 23:51:58 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
 
2017/03/21 23:52:04 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
 
2017/03/21 23:52:19 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
 
2017/03/21 23:52:42 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
 
2017/03/21 23:52:48 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
 
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'
 
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed
 
CRS-4133: Oracle High Availability Services has been stopped.
 
CRS-4123: Oracle High Availability Services has been started.
2017/03/21 23:53:17 CLSRSC-400: A system reboot is required to continue installing. The command '/u01/app/12.2.0/grid/perl/bin/perl -I/u01/app/12.2.0/grid/perl/lib -I/u01/app/12.2.0/grid/crs/install /u01/app/12.2.0/grid/crs/install/rootcrs.pl ' execution failed
 
[root@rac1 etc]#

 

While running the root.sh script, the following message appeared:

2017/03/21 23:53:17 CLSRSC-400: A system reboot is required to continue installing. The command '/u01/app/12.2.0/grid/perl/bin/perl -I/u01/app/12.2.0/grid/perl/lib -I/u01/app/12.2.0/grid/crs/install /u01/app/12.2.0/grid/crs/install/rootcrs.pl ' execution failed

According to the official documentation, the server must be rebooted and the scripts then executed again. This takes quite a while....

 

If you hit the error CLSRSC-1102: failed to start resource 'qosmserver', it may be caused by insufficient memory, which leaves too few resources to start that service. Increase the memory and re-run the root.sh script.

At the end of root.sh:

CRS-6016: Resource auto-start has completed for server rac1
 
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
 
CRS-4123: Oracle High Availability Services has been started.
 
2017/03/21 14:12:39 CLSRSC-343: Successfully started Oracle Clusterware stack
 
2017/03/21 14:12:39 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
 
2017/03/21 14:16:10 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
 
2017/03/21 14:17:55 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

This indicates success.

The log is quite long; the full output is included below for reference:

[root@rac1 ~]# /u01/app/12.2.0/grid/root.sh
 
Performing root user operation.
 
 
The following environment variables are set as:
 
    ORACLE_OWNER= grid
 
    ORACLE_HOME=  /u01/app/12.2.0/grid
 
 
 
 
Enter the full pathname of the local bin directory: [/usr/local/bin]:
 
The contents of "dbhome" have not changed. No need to overwrite.
 
The contents of "oraenv" have not changed. No need to overwrite.
 
The contents of "coraenv" have not changed. No need to overwrite.
 
 
 
 
Entries will be added to the /etc/oratab file as needed by
 
Database Configuration Assistant when a database is created
 
Finished running generic part of root script.
 
Now product-specific root actions will be performed.
 
Relinking oracle with rac_on option
 
Using configuration parameter file: /u01/app/12.2.0/grid/crs/install/crsconfig_params
 
The log of current session can be found at:
 
  /u01/app/grid/crsdata/rac1/crsconfig/rootcrs_rac1_2017-03-22_00-00-32AM.log
 
2017/03/22 00:00:37 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
 
2017/03/22 00:00:37 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.
 
2017/03/22 00:00:37 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
 
2017/03/22 00:00:37 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
 
2017/03/22 00:00:40 CLSRSC-363: User ignored prerequisites during installation
 
2017/03/22 00:00:40 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
 
2017/03/22 00:00:42 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
 
2017/03/22 00:00:43 CLSRSC-594: Executing installation step 5 of 19: 'SaveParamFile'.
 
2017/03/22 00:00:45 CLSRSC-594: Executing installation step 6 of 19: 'SetupOSD'.
 
2017/03/22 00:00:47 CLSRSC-594: Executing installation step 7 of 19: 'CheckCRSConfig'.
 
2017/03/22 00:00:47 CLSRSC-594: Executing installation step 8 of 19: 'SetupLocalGPNP'.
 
2017/03/22 00:00:49 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
 
2017/03/22 00:00:51 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
 
2017/03/22 00:01:37 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
 
2017/03/22 00:01:38 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
 
2017/03/22 00:01:53 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
 
2017/03/22 00:02:16 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
 
2017/03/22 00:02:20 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
 
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'
 
CRS-2673: Attempting to stop 'ora.evmd' on 'rac1'
 
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
 
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
 
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
 
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
 
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
 
CRS-2677: Stop of 'ora.evmd' on 'rac1' succeeded
 
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
 
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed
 
CRS-4133: Oracle High Availability Services has been stopped.
 
CRS-4123: Oracle High Availability Services has been started.
 
2017/03/22 00:02:52 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
 
2017/03/22 00:02:57 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.
 
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'
 
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed
 
CRS-4133: Oracle High Availability Services has been stopped.
 
CRS-4123: Oracle High Availability Services has been started.
 
CRS-2672: Attempting to start 'ora.evmd' on 'rac1'
 
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
 
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
 
CRS-2676: Start of 'ora.evmd' on 'rac1' succeeded
 
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
 
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
 
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
 
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
 
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
 
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
 
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
 
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
 
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
 
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
 
 
 
 
Disk groups created successfully. Check /u01/app/grid/cfgtoollogs/asmca/asmca-170322AM120336.log for details.
 
 
 
 
2017/03/22 00:04:39 CLSRSC-482: Running command: '/u01/app/12.2.0/grid/bin/ocrconfig -upgrade grid oinstall'
 
CRS-2672: Attempting to start 'ora.crf' on 'rac1'
 
CRS-2672: Attempting to start 'ora.storage' on 'rac1'
 
CRS-2676: Start of 'ora.storage' on 'rac1' succeeded
 
CRS-2676: Start of 'ora.crf' on 'rac1' succeeded
 
CRS-2672: Attempting to start 'ora.crsd' on 'rac1'
 
CRS-2676: Start of 'ora.crsd' on 'rac1' succeeded
 
CRS-4256: Updating the profile
 
Successful addition of voting disk 07f57bf9f7634f5abfb849735e86d3aa.
 
Successful addition of voting disk 3c930c3a19f34f25bfddc3a5a41bbb4e.
 
Successful addition of voting disk 4fab95ab67ed4f07bf4e9aa67e3e095e.
 
Successfully replaced voting disk group with +OCR.
 
CRS-4256: Updating the profile
 
CRS-4266: Voting file(s) successfully replaced
 
##  STATE    File Universal Id                File Name Disk group
 
--  -----    -----------------                --------- ---------
 
1. ONLINE   07f57bf9f7634f5abfb849735e86d3aa (/dev/asmdiskb) [OCR]
 
2. ONLINE   3c930c3a19f34f25bfddc3a5a41bbb4e (/dev/asmdiskd) [OCR]
 
3. ONLINE   4fab95ab67ed4f07bf4e9aa67e3e095e (/dev/asmdiskc) [OCR]
 
Located 3 voting disk(s).
 
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'
 
CRS-2673: Attempting to stop 'ora.crsd' on 'rac1'
 
CRS-2677: Stop of 'ora.crsd' on 'rac1' succeeded
 
CRS-2673: Attempting to stop 'ora.storage' on 'rac1'
 
CRS-2673: Attempting to stop 'ora.crf' on 'rac1'
 
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
 
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
 
CRS-2677: Stop of 'ora.crf' on 'rac1' succeeded
 
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
 
CRS-2677: Stop of 'ora.storage' on 'rac1' succeeded
 
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
 
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
 
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
 
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac1'
 
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
 
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'
 
CRS-2673: Attempting to stop 'ora.evmd' on 'rac1'
 
CRS-2677: Stop of 'ora.evmd' on 'rac1' succeeded
 
CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded
 
CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'
 
CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded
 
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
 
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
 
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed
 
CRS-4133: Oracle High Availability Services has been stopped.
 
2017/03/22 00:06:15 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
 
CRS-4123: Starting Oracle High Availability Services-managed resources
 
CRS-2672: Attempting to start 'ora.evmd' on 'rac1'
 
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
 
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
 
CRS-2676: Start of 'ora.evmd' on 'rac1' succeeded
 
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
 
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
 
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
 
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
 
CRS-2672: Attempting to start 'ora.drivers.acfs' on 'rac1'
 
CRS-2674: Start of 'ora.drivers.acfs' on 'rac1' failed
 
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
 
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
 
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
 
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
 
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
 
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
 
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac1'
 
CRS-2672: Attempting to start 'ora.ctssd' on 'rac1'
 
CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded
 
CRS-2672: Attempting to start 'ora.drivers.acfs' on 'rac1'
 
CRS-2674: Start of 'ora.drivers.acfs' on 'rac1' failed
 
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
 
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
 
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
 
CRS-2672: Attempting to start 'ora.storage' on 'rac1'
 
CRS-2676: Start of 'ora.storage' on 'rac1' succeeded
 
CRS-2672: Attempting to start 'ora.crf' on 'rac1'
 
CRS-2676: Start of 'ora.crf' on 'rac1' succeeded
 
CRS-2672: Attempting to start 'ora.crsd' on 'rac1'
 
CRS-2676: Start of 'ora.crsd' on 'rac1' succeeded
 
CRS-6023: Starting Oracle Cluster Ready Services-managed resources
 
CRS-6017: Processing resource auto-start for servers: rac1
 
CRS-6016: Resource auto-start has completed for server rac1
 
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
 
CRS-4123: Oracle High Availability Services has been started.
 
2017/03/22 00:09:08 CLSRSC-343: Successfully started Oracle Clusterware stack
 
2017/03/22 00:09:08 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
 
 
 
 
CRS-2672: Attempting to start 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac1'
 
CRS-2676: Start of 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac1' succeeded
 
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
 
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
 
CRS-2672: Attempting to start 'ora.OCR.dg' on 'rac1'
 
CRS-2676: Start of 'ora.OCR.dg' on 'rac1' succeeded
 
2017/03/22 00:14:18 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
 
2017/03/22 00:17:02 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
 
[root@rac1 ~]#

 

2.3   Verify That the Cluster Is Healthy

[grid@rac1 ~]$ crsctl stat res -t
 
--------------------------------------------------------------------------------
 
Name           Target  State        Server                   State details      
 
--------------------------------------------------------------------------------
 
Local Resources
 
--------------------------------------------------------------------------------
 
ora.ASMNET1LSNR_ASM.lsnr
 
               ONLINE  ONLINE       rac1                     STABLE
 
               ONLINE  ONLINE       rac2                     STABLE
 
ora.LISTENER.lsnr
 
               ONLINE  ONLINE       rac1                     STABLE
 
               ONLINE  ONLINE       rac2                     STABLE
 
ora.OCR_VOTE.dg
 
               ONLINE  ONLINE       rac1                     STABLE
 
               ONLINE  ONLINE       rac2                     STABLE
 
ora.net1.network
 
               ONLINE  ONLINE       rac1                     STABLE
 
               ONLINE  ONLINE       rac2                     STABLE
 
ora.ons
 
               ONLINE  ONLINE       rac1                     STABLE
 
               ONLINE  ONLINE       rac2                     STABLE
 
--------------------------------------------------------------------------------
 
Cluster Resources
 
--------------------------------------------------------------------------------
 
ora.LISTENER_SCAN1.lsnr
 
      1        ONLINE  ONLINE       rac1                     STABLE
 
ora.MGMTLSNR
 
      1        OFFLINE OFFLINE                               STABLE
 
ora.asm
 
      1        ONLINE  ONLINE       rac1                     Started,STABLE
 
      2        ONLINE  ONLINE       rac2                     Started,STABLE
 
      3        OFFLINE OFFLINE                               STABLE
 
ora.cvu
 
      1        ONLINE  ONLINE       rac1                     STABLE
 
ora.qosmserver
 
      1        ONLINE  ONLINE       rac1                     STABLE
 
ora.rac1.vip
 
      1        ONLINE  ONLINE       rac1                     STABLE
 
ora.rac2.vip
 
      1        ONLINE  ONLINE       rac2                     STABLE
 
ora.scan1.vip
 
      1        ONLINE  ONLINE       rac1                     STABLE
 
--------------------------------------------------------------------------------
 
[grid@rac1 ~]$
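A few additional health checks can be run as the grid user (a minimal sketch):

[grid@rac1 ~]$ crsctl check cluster -all
[grid@rac1 ~]$ olsnodes -n -s -t
[grid@rac1 ~]$ srvctl config scan
[grid@rac1 ~]$ srvctl status asm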

3  Create Disk Groups with ASMCA

The interface is much cleaner now.

 

4  Install the Database Software

The installation procedure is the same as in previous releases; nothing has changed.

./runInstaller

The installation steps are omitted here; they basically consist of configuring SSH, selecting disk groups, and so on.

 

The OS groups are divided more finely now, with a clearer separation of duties.

 

 

5  Create the Database with DBCA

Omitted....

6  Verification

6.1   Check the Container Database

SQL> select name,cdb from v$database;
 
NAME      CDB
 
--------  ---------
 
CNDBA     YES

6.2   Check the Pluggable Databases

SQL> col pdb_name for a30
 
SQL> select pdb_id,pdb_name,dbid,status,creation_scn from dba_pdbs;
 
 
 
 
    PDB_ID PDB_NAME                             DBID STATUS     CREATION_SCN
---------- ------------------------------ ---------- ---------- ------------
         3 lei                            3459708341 NORMAL          1456419
         2 PDB$SEED                       3422473700 NORMAL          1408778
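As a cross-check, the same information can be pulled from V$PDBS in SQL*Plus (a minimal sketch; lei is the PDB created in this environment):

SQL> show pdbs
SQL> select con_id, name, open_mode from v$pdbs;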
