2019-03-28 09:27:00
Environment: CentOS 7.4, two virtual machines, three shared disks (shared-disk creation is covered later).
==============1. Network setup========
The two hostnames here are rac1 and rac2.
First check the gateway and DNS:
Subnet 1: 192.168.145.xxx
Subnet 2: 192.168.89.xxx
Host DNS1: 223.5.5.5
Host DNS2: 223.6.6.6
Now set up name resolution for the nodes:
vim /etc/hosts
Add:
#public ip
192.168.145.132 rac1
192.168.145.130 rac2
#priv ip
192.168.89.132 rac1-priv
192.168.89.130 rac2-priv
#vip ip
192.168.145.210 rac1-vip
192.168.145.220 rac2-vip
#scan ip
192.168.145.230 rac-scan
192.168.145.231 rac-scan
192.168.145.232 rac-scan
Note: the public and private IPs must be your VMs' actual IPs; it is best to configure them as static IPs.
The public, VIP, and SCAN IPs must be in the same subnet; the VIP and SCAN addresses can be any unused addresses in that subnet, but they must not collide with each other.
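The same-subnet rule above can be sanity-checked with a small script before running the installer. A minimal sketch, assuming /24 networks as in this setup; `network_24` and `same_subnet_24` are made-up helper names:

```shell
# Return the first three octets of a dotted-quad IPv4 address (its /24 network).
network_24() {
  echo "${1%.*}"
}

# Succeed only if every argument falls in the same /24 as the first.
same_subnet_24() {
  local ref; ref=$(network_24 "$1"); shift
  local ip
  for ip in "$@"; do
    [ "$(network_24 "$ip")" = "$ref" ] || return 1
  done
  return 0
}

# Public, VIP and SCAN addresses from the /etc/hosts entries above:
same_subnet_24 192.168.145.132 192.168.145.130 \
               192.168.145.210 192.168.145.220 \
               192.168.145.230 && echo "same /24: ok"
```

Running this with one of the private 192.168.89.x addresses mixed in would fail, which is exactly the misconfiguration the note warns about.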
======Disable SELinux and the firewall=====
vim /etc/selinux/config
Set SELINUX to disabled:
SELINUX=disabled
A reboot is required for the SELinux change to take effect — this step is very important.
After rebooting, verify with sestatus -v; the status must be disabled.
Disable the firewall:
systemctl stop firewalld
systemctl disable firewalld    (keep it off across the reboots below)
Check the firewall status:
systemctl status firewalld
Next, create the groups and users:
groupadd oinstall
groupadd dba
groupadd oper
groupadd asmdba
groupadd asmoper
groupadd asmadmin
useradd -g oinstall -G dba,oper,asmdba oracle
useradd -g oinstall -G asmdba,dba,asmadmin,asmoper grid
Create the directories:
mkdir /u01
mkdir /u01/grid
mkdir /u01/oracle
mkdir /u01/gridbase
Grant permissions:
chown -R grid:oinstall /u01
chown -R oracle:oinstall /u01/oracle
chmod -R g+w /u01
Add the environment variables:
vim /home/grid/.bash_profile
Add:
ORACLE_BASE=/u01/gridbase
ORACLE_HOME=/u01/grid
PATH=$ORACLE_HOME/bin:$PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
DISPLAY=192.168.145.207:0.0
export ORACLE_BASE ORACLE_HOME PATH LD_LIBRARY_PATH DISPLAY
vim /home/oracle/.bash_profile
Add (note: ORACLE_SID must differ between the two nodes, e.g. cludb1 and cludb2):
ORACLE_BASE=/u01/oracle
ORACLE_HOME=/u01/oracle/db
ORACLE_SID=cludb1
PATH=$ORACLE_HOME/bin:$PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
DISPLAY=192.168.145.207:0.0
export ORACLE_BASE ORACLE_HOME ORACLE_SID PATH LD_LIBRARY_PATH DISPLAY
source /home/oracle/.bash_profile
source /home/grid/.bash_profile
Verify that the variables took effect:
echo $ORACLE_HOME
=====================
Tune the Linux kernel parameters:
vim /etc/sysctl.conf
Add (see the official documentation for what each parameter means):
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 1073741824
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
Apply the changes:
# sysctl -p
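As a worked example of where values like kernel.shmmax come from: a common rule of thumb (not an Oracle mandate) is about half of physical RAM, in bytes, for shmmax, and the same amount expressed in 4 KB pages for shmall. The helper below only does the arithmetic; `suggest_shm` is a made-up name:

```shell
# Print suggested kernel.shmmax/kernel.shmall for a given RAM size in MB,
# using the half-of-RAM rule of thumb and a 4096-byte page size.
suggest_shm() {
  local ram_mb=$1 page=4096
  local shmmax=$(( ram_mb * 1024 * 1024 / 2 ))   # half of RAM, in bytes
  local shmall=$(( shmmax / page ))              # same amount, in pages
  echo "kernel.shmmax = $shmmax"
  echo "kernel.shmall = $shmall"
}

suggest_shm 2048   # a 2 GB VM -> kernel.shmmax = 1073741824
```

For a 2 GB VM this reproduces the shmmax value used in the sysctl.conf fragment above.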
Configure /etc/security/limits.conf:
# vim /etc/security/limits.conf
Add:
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
Set up time synchronization between the nodes.
As root: time sync can come either from Linux's ntpd or from Oracle's own cluster time synchronization. We use Oracle's, so the ntpd service must be disabled:
/bin/systemctl stop ntpd
systemctl disable ntpd.service
mv /etc/ntp.conf /etc/ntp.conf.original
Also delete the following file (if it exists):
# rm /var/run/ntpd.pid
====Create the shared disks (important)========
First create a shared folder: on both rac1 and rac2 point it at F:\vm\sharedisk (any directory will do).
In VMware: Virtual Machine Settings - Options - Shared Folders - Always enabled - add the directory F:\vm\sharedisk (set this on both rac1 and rac2).
rac1: Virtual Machine Settings - Add - Hard Disk - Next - SCSI - Next - Create a new virtual disk - size 8 GB for now - allocate all disk space now, store virtual disk as a single file - choose the storage location, which must be the folder shared by rac1 and rac2 (here F:\vm\sharedisk) - Finish - click Advanced - set the virtual device node to SCSI 1:1, mode Independent - Persistent, then confirm.
Create three disks this way; the steps are identical except that the virtual device node is SCSI 1:2 for the second disk and SCSI 1:3 for the third.
Do the same on rac2, but at the "Select a Disk" step choose "Use an existing virtual disk" and point at the three disks rac1 created in F:\vm\sharedisk.
Power on rac1.
As root:
# cd /dev/
# ll sd*
You should see sdb, sdc and sdd — the three disks just added.
Partition each disk:
# fdisk sdb
Press m to list the available commands
Press n (create a new partition)
Press p (primary partition; e would create an extended partition)
Press 1 (partition numbers start at 1, giving sdb1, sdc1, etc.)
Press Enter (start at the first cylinder, the default)
Press Enter (accept the last cylinder, i.e. one partition spanning the whole disk)
Press w (write the changes to the partition table)
Note: partition sdb, sdc and sdd in turn.
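The keystroke sequence above can also be fed to fdisk non-interactively, so all three disks get identical partitioning. This sketch only builds and prints the command for each disk — it is destructive if actually run, so the commands are printed rather than executed; `partition_cmd` is a made-up helper:

```shell
# Build (but do not run) a non-interactive fdisk invocation for one disk.
# The piped keystrokes mirror the interactive session: n, p, 1, Enter, Enter, w.
partition_cmd() {
  local dev=$1
  printf 'printf "n\\np\\n1\\n\\n\\nw\\n" | fdisk %s' "$dev"
}

for d in /dev/sdb /dev/sdc /dev/sdd; do
  partition_cmd "$d"; echo    # one ready-to-run command per line
done
```

Copy a printed line and run it as root once you are sure the device name is right.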
Next, create the ASM disks with oracleasm.
oracleasm needs three packages: kmod-oracleasm, oracleasm-support and oracleasmlib. I have provided them here:
My netdisk: https://pan.baidu.com/s/11oRoWwmbsG5KG-HXU-k99Q
They can also be downloaded as RPMs:
RPM site: http://rpmfind.net
====
As root:
# oracleasm configure -i
Enter the user: grid
Enter the group: dba
Enter y
Enter y
# oracleasm init
Create the disks DISK01, DISK02 and DISK03:
# oracleasm createdisk DISK01 sdb1
# oracleasm createdisk DISK02 sdc1
# oracleasm createdisk DISK03 sdd1
List the disks:
# oracleasm listdisks
DISK01
DISK02
DISK03
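The three createdisk calls follow one pattern, so they can be generated from a list instead of typed one by one. This sketch only prints the commands (run the printed lines as root on the node that owns the partitions):

```shell
# Print the oracleasm createdisk commands for a list of partitions,
# numbering the ASM disk names DISK01, DISK02, ...
disks=(sdb1 sdc1 sdd1)
for i in "${!disks[@]}"; do
  printf 'oracleasm createdisk DISK%02d %s\n' "$((i+1))" "${disks[$i]}"
done
```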
Power on node rac2.
Note: if it is already running, reboot it.
As root:
# oracleasm configure -i
Enter grid
Enter dba
Enter y
Enter y
# oracleasm init
Reboot:
# reboot
# oracleasm listdisks
Node 2 can now see DISK01, DISK02 and DISK03 as well.
Configure mutual trust between the nodes.
Note: both the grid and oracle users need mutual trust; grid is used as the example below.
As the grid user, run on both rac1 and rac2:
ssh-keygen -t rsa    [just press Enter through all the prompts]
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh rac2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys    [answer yes if prompted yes/no]
ssh rac1 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys    [answer yes if prompted yes/no]
If the keys were generated in the default path as above, you can instead use ssh-copy-id (on rac2: ssh-copy-id grid@rac1; on rac1: ssh-copy-id grid@rac2) to append the local public key to authorized_keys, but this is optional.
Test it — on both rac1 and rac2 run:
$ ssh rac1 date
$ ssh rac2 date
As soon as no password prompt appears, the trust is working.
======
Next comes the cluster installation.
Install the grid clusterware.
As the grid user:
Upload the clusterware package linux.x64_11gR2_grid.zip to /u01 and unzip it.
It unpacks into a grid directory; change the ownership of the extracted files:
chown -R grid:oinstall /u01/grid
chown -R grid:oinstall /tmp/bootstrap    [the pre-install check may otherwise report an error, hence this permission]
Enter the extracted directory:
# cd /u01/grid
Run the pre-installation checks:
# ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -fixup -verbose
A key point: the check reports many missing dependency packages. I provide them below; the list may not be complete, but it basically covers everything the pre-check flags.
Apart from that, essentially every other check item should come back passed.
=====Possible error: insufficient swap====
1. Not enough swap space:
# dd if=/dev/zero of=/home/swapfile bs=1024 count=1500000
bs is in bytes, so this adds roughly 1.5 GB of swap.
# mkswap /home/swapfile
# swapon /home/swapfile
To mount the swap file automatically at boot:
# vim /etc/fstab
Add:
/home/swapfile swap swap defaults 0 0
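For the record, dd's count is in units of bs, so the arithmetic for a target swap size is simple: with bs=1024, 1.5 GB is exactly 1572864 blocks (the 1500000 above is a rounded-down approximation). `swap_count_1k` below is an illustrative helper:

```shell
# Convert a swap size in MB to a dd count for bs=1024 (1 KiB blocks).
swap_count_1k() {
  echo $(( $1 * 1024 ))
}

swap_count_1k 1536   # 1.5 GB -> prints 1572864
```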
If the checks succeed, start the installation.
======Possible error: grid/oracle user IDs differ between nodes===
# id grid      check that the output is identical on both machines; if not, fix it with usermod
# id oracle    same as above
======Possible error: the last kernel.sem value is not recognized===
On some CentOS 7 systems the check complains about the kernel.sem parameters set earlier: the last value was set to 128, yet the check reports that 128 is required but 0 was provided. This can be ignored.
=========Install the packages the environment needs=====
Install the 64-bit packages:
yum -y install compat-libstdc++-33 glibc.i686 gcc elfutils-libelf-devel glibc-devel gcc-c++ libaio-devel unixODBC unixODBC-devel
The 64-bit packages must be installed before the 32-bit ones, otherwise installation fails; --force forces the install.
The 32-bit packages are on my Baidu netdisk, or download them yourself from the RPM site:
My netdisk: https://pan.baidu.com/s/11oRoWwmbsG5KG-HXU-k99Q
RPM site: http://rpmfind.net
Copy the 32-bit packages onto both CentOS machines and install them (64-bit first, as above):
rpm -ivh libaio-0.3.105-2.i386.rpm --force
rpm -ivh libgcc-3.4.6-8.i386.rpm --force
rpm -ivh compat-libstdc++-33-3.2.3-47.3.i386.rpm --force
rpm -ivh libaio-devel-0.3.105-2.i386.rpm --force
rpm -ivh libstdc++-3.4.6-11.i386.rpm --force
rpm -ivh unixODBC-2.2.11-7.1.i386.rpm --force
rpm -ivh unixODBC-devel-2.2.11-1.i386.rpm --force
CentOS 7 ships ksh in place of pdksh, so installing pdksh also needs --force:
rpm -ivh pdksh-5.2.14-21.x86_64.rpm --force
========Now install grid====
First allow the oracle and grid users to use the display:
xhost +SI:localuser:oracle
xhost +SI:localuser:grid
cd /u01/grid
./runInstaller
The installer GUI now appears on the host. (A response file can be used for a silent install instead; my netdisk includes a grid.rsp response file, and the silent-install command line is easy to look up. Note that the interface names ens32/ens33 in grid.rsp match my NICs and must be changed to yours.)
step 1: choose the first option: Install and Configure Grid Infrastructure for a Cluster
step 2: choose the second option: Advanced Installation
step 3: keep the default, English
step 4:
Cluster Name: rac-cluster
SCAN Name: rac-scan — chosen because rac-scan was configured in /etc/hosts earlier
SCAN Port: 1521
untick Configure GNS
step 5: click Add and fill in
hostname: rac2, vip name: rac2-vip — matching the rac1 entry already listed
step 6: keep the defaults
step 7: the first option (Auto), which is also the default
step 8: choose External redundancy and tick all the disks shown below
step 9: set the passwords — using one identical password is recommended; if it is weak, a dialog asks whether to ignore the warning, choose Yes
step 10: choose the second option, Do not use Intelligent Platform Management Interface (IPMI)
step 11: keep the defaults
step 12: Oracle Base: /u01/gridbase; the second field (software location): /u01/grid
A warning appears that the Oracle home contains directories.
Click Next; when asked whether to ignore it, choose Yes.
step 13: /u01/oraInventory — keep the default
step 14: the prerequisite checks run here
step 15: click Finish
The installation now starts and takes a while.
You will then be prompted to run two scripts as root
on all nodes — the master node first (the one where ./runInstaller was run), then the other node:
# /u01/oraInventory/orainstRoot.sh
# /u01/grid/root.sh
root.sh may hit a bug:
Oracle 11.2.0.1 has a known bug where root.sh hangs at "Adding daemon to inittab", or errors out directly. It is fixed in later releases, but since this version is used here the workaround is needed (it also appears later in the GUI install; the fix is provided below).
Workaround:
While root.sh is running, as soon as "Adding daemon to inittab" appears, run the following in another window:
/bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1
===================Possible bug========
While running root.sh, the following error appears:
/u01/grid/bin/clscfg.bin: error while loading shared libraries: libcap.so.1: cannot open shared object file: No such file or directory
Failed to create keys in the OLR, rc = 127, 32512
OLR configuration failed
The shared library libcap.so.1 is missing because the package compat-libcap1.x86_64 is absent (it is among the dependencies above; if already installed this error will not occur). Install it with yum:
# yum install -y compat-libcap1.x86_64
Afterwards, delete the failed configuration and rerun the script:
# /u01/grid/crs/install/rootcrs.pl -delete -force -verbose
Once the scripts finish, click OK in the installer and continue. If the following error appears, it can be ignored — it is caused by resolving scan-cluster through /etc/hosts; skip it, it has no impact. With DNS-based resolution this error should not occur:
[INS-20802] Oracle Cluster Verification Utility failed
When the installation completes, click Close.
Next, install the database software. Again there are two ways, silent install or GUI; the silent-install response file is not provided here, look up the procedure online if needed.
Install from either node:
# cd /u01
Upload linux.x64_11gR2_database.zip to /u01, unzip it, and change the ownership of the extracted directory.
The extracted directory is database:
# chown -R oracle:oinstall database
# chmod 777 database/database/runInstaller
# chmod 777 database/database/install/unzip
# chmod 777 database/database/install/.oui
Start the installation:
# su - oracle
$ cd /u01/database/database/
$ ./runInstaller -ignoreInternalDriverError
The -ignoreInternalDriverError flag ignores a cluster verification failure caused by an internal driver; the error message is shown further below.
The process is much like a single-instance install, except that at step 3 the second node must be added, i.e. both rac1 and rac2 appear in the list.
At step 8 many checks may show as failed; this can be ignored.
With the software installed, create the database. Again either method works; the GUI is used here:
# dbca
In the dialog that opens,
choose the first, cluster option, Oracle Real Application Clusters database, then Next.
step 1: keep the default, Create a Database, then Next
step 2: keep the default, General Purpose, then Next
step 3: set Global Database Name and SID Prefix to cludb — the prefix of the cludb1/cludb2 values configured in the node environment variables earlier — then Select All nodes, then Next
step 4: keep the defaults, then Next
step 5: set the passwords — the same password for all users avoids mix-ups, though different ones are fine — then Next
step 6: keep the default ASM storage type, then Next; a dialog asks for the ASM password, fill it in and continue
step 7: keep the default flash recovery area settings (or skip it and change the spfile later), then Next
step 8: the sample schemas are optional, then Next
step 9: for the character set you can choose ZHS16GBK - GBK 16-bit Simplified Chinese; leave the rest at defaults, then Next
step 10: keep the defaults
step 11: start creating the database, then Finish, and wait for the installation to complete…
When creation finishes, Password Management lets you unlock the default Oracle accounts, e.g. scott (optional). Click Exit.
Check the cluster CRS status:
# su - grid
$ crs_stat -t
Check the cluster database:
$ srvctl config database -d cludb
Check the database instance status:
$ srvctl status database -d cludb
Check the locally configured environment variables in case the SID does not match:
$ env | grep ORACLE
// note: the SID must match the Instance shown above
Log in to the database:
# su - oracle
$ sqlplus / as sysdba
Working around the Oracle 11.2.0.1 bug where clients cannot connect to the database through the SCAN IP.
The fix is as follows.
Note: the IP below is the SCAN IP currently bound to this node; do the same on the other nodes. If a server reboots, the SCAN IP may be reassigned, and local_listener must then be set again, otherwise clients fail with ORA-12520, TNS:listener could not find available handler for requested type of server.
Change local_listener:
SQL> alter system set local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS= (PROTOCOL = TCP)(HOST =scan ip)(PORT = 1521))))' sid='cludb1';
System altered.
SQL> alter system register;
System altered.
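Since this statement has to be reissued per node, and again after any reboot that moves the SCAN IP, it can help to generate it from the node's current SCAN IP and instance name. A hedged sketch — `local_listener_sql` is a made-up helper, and the ADDRESS string mirrors the statement above:

```shell
# Build the ALTER SYSTEM local_listener statement for one instance.
local_listener_sql() {
  local host=$1 sid=$2
  printf "alter system set local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=%s)(PORT=1521))))' sid='%s';" "$host" "$sid"
}

# Example: the SCAN IP from /etc/hosts and the node-1 instance name.
local_listener_sql 192.168.145.230 cludb1
```

The printed statement can be piped into sqlplus / as sysdba on the matching node.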
Cluster maintenance:
Instance management:
$ srvctl status instance -d cludb -i cludb1
$ srvctl stop instance -d cludb -i cludb1
$ srvctl start instance -d cludb -i cludb1
Listener management:
$ srvctl status listener -n CentOS7Srv01
$ srvctl stop listener -n CentOS7Srv01
$ srvctl start listener -n CentOS7Srv01
Cluster management with crsctl.
Stop the cluster:
$ su - grid
$ srvctl stop database -d cludb
$ su - root
# /u01/grid/bin/crsctl stop crs    // stops CRS and the ASM disk groups with it; run as root
Start the cluster:
# /u01/grid/bin/crsctl start crs
$ /u01/grid/bin/srvctl start database -d cludb
Note: if starting CRS reports
CRS-4124: Oracle High Availability Services startup failed.
CRS-4000: Command Start failed, or completed with errors.
run the following on each node:
/bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1
(this bug exists only in 11.2.0.1)
Running out of shared disk space (added later)
During testing the shared disks turned out to be too small, so more had to be added; creating the shared disks themselves is the same as described earlier.
Then log in (as grid):
$ sqlplus / as sysasm
This reports:
ERROR:
ORA-01031: insufficient privileges
The environment variables need to be set first (as the grid user):
$ export ORACLE_SID=+ASM1
$ export ORACLE_HOME=/u01/grid
$ export PATH=$ORACLE_HOME/bin:$PATH
Logging in again now works:
$ sqlplus / as sysasm
Connected.
Check the users:
SQL> select * from v$pwfile_users;
USERNAME  SYSDB SYSOP SYSAS
--------- ----- ----- -----
SYS       TRUE  TRUE  TRUE
ASMSNMP   TRUE  FALSE FALSE
Check the disk groups:
SQL> select STATE,REDUNDANCY,TOTAL_MB,FREE_MB,NAME,FAILGROUP from v$asm_disk;
Add a disk:
SQL> alter diskgroup DATA add disk 'ORCL:DISK5' name DISK5;
Diskgroup altered.    // added successfully
Note: DATA is the name of the disk group created during the ASM cluster install; unless it was changed, the default is DATA.
That's all for now.
Setting up Oracle 11g RAC on CentOS 7
2021-06-08 14:29:34
1. Background theory
RAC, short for Real Application Clusters, is a technology adopted in newer Oracle database releases. It is one form of high availability and the core technology by which Oracle databases support grid computing environments.
● VIP — virtual IP address. TCP is a connection-oriented, reliable transport protocol; it guarantees delivery via its timeout-and-retransmission mechanism: after sending a segment a timer is started, and if no ACK for that segment arrives in time it is retransmitted, until after a certain number of failures the connection is reset.
Without a virtual IP, applications connect to the host's own IP; when that host fails, the address stops answering ping and clients only see a connection error after the TCP timeout expires. With a VIP, the address automatically floats over to a surviving node, so clients are not left waiting out the TCP timeout.
● OCR (Oracle Cluster Registry) records information about each node and manages the configuration of the Oracle clusterware and the Oracle RAC databases. Related to it is the Oracle Local Registry (OLR), which exists on every node of the cluster and manages that node's cluster configuration.
● Voting Disk: an arbitration mechanism that adjudicates simultaneous writes by multiple nodes to shared storage, avoiding conflicts. Before Oracle 10g, the cluster software required for RAC came from hardware vendors: implementing RAC on each platform meant installing and configuring the vendor's clusterware, and Oracle itself supplied clusterware only for Linux and Windows,
called Oracle Cluster Manager. From release 10.1 Oracle shipped a platform-independent cluster product, Cluster Ready Services (CRS),
so RAC deployments no longer depended on the hardware vendors. From 10.2 the product was renamed Oracle Clusterware,
and in 11g it became GI (Oracle Grid Infrastructure), though out of habit it is still widely called CRS.
The product is not limited to database clusters; other applications can use its API to add clustering easily.
ASM disk group redundancy levels:
1. External redundancy: Oracle does not manage mirroring for you; that is left to the external storage system, e.g. RAID. Usable space is the sum of all disk capacities.
2. Normal redundancy: Oracle keeps 2 copies of the data (mirrored once). Usable space is 1/2 of the total disk capacity (the most widely used level).
3. High redundancy: Oracle keeps 3 copies of the data (mirrored twice) for performance and safety, and requires at least three disks (three failure groups). Usable space is 1/3 of the total; the redundancy is higher, but so is the hardware cost.
Instance Recovery: if an instance is killed with SHUTDOWN ABORT or crashes, e.g. after a power failure, while the data files, control files and online redo logs are intact, the database recovers from the online redo logs at the next startup.
Crash Recovery: RAC environments likewise have Media Recovery and Instance Recovery. Crash recovery means the recovery performed on another instance after one instance crashes; the key difference is that it happens not on the failed node but on a healthy one. It carries a special requirement: while a healthy node performs crash recovery, the failed node must be prevented from touching the shared data — it must be I/O-fenced (IO Fencing) — which the CSS service guarantees.
Media Recovery: if data files are lost or corrupted, recovery uses backups, archived logs and online logs. This is called media recovery, and it splits into complete and incomplete recovery.
Coarse striping spreads files across all disks in 1 MB units; it suits systems with highly concurrent small I/O requests, such as OLTP environments.
Fine striping spreads files in 128 KB units; it suits traditional data warehouse environments or OLTP systems with low concurrency, and minimizes the response time of individual I/O requests.
Disk group fast mirror resync: from Oracle 11.1 on, a failed disk in a disk group is not immediately taken offline; it goes offline only after the time set by disk_repair_time elapses. This avoids the slow full resync of an entire disk that would otherwise follow when, say, a disk knocked offline by a failed controller comes back online.
Public network: carries all regular traffic to and from the nodes and servers.
Interconnect (private network): carries communication between cluster nodes, such as node status information and the actual data blocks shared between nodes. This interface should be as fast as possible and should carry no other kind of traffic, or RAC database performance suffers.
Virtual IP address: the address assigned to the Oracle listener. It enables rapid connect-time failover, switching network traffic and Oracle connections to a different instance of the RAC database far faster than third-party HA solutions can. Each virtual address must be in the same subnet as the public network address.
SCAN IP (Single Client Access Name): introduced in Oracle 11g R2 so that clients can connect to the RAC database with load balancing. SCAN is a domain name that resolves to at least 1 and at most 3 SCAN IPs; clients access the database through this SCAN name. The SCAN IPs must be in the same subnet as the public IPs and VIPs.
On the shared disk subsystem, two special partitions are needed: one for the voting disk and one for the Oracle Cluster Registry (OCR).
When the private network fails, Oracle's clusterware, Cluster Ready Services (CRS), uses the voting disk to arbitrate ownership of the cluster. The OCR disk maintains all cluster-related metadata: the cluster configuration and the cluster database configuration.
Multipathing (multipath): an ordinary host has one disk attached to one bus — a one-to-one relationship. In a fibre-based SAN (Storage Area Network),
hosts and storage are connected through fibre switches, forming a many-to-many relationship: a host can reach its storage over several paths, i.e. its I/O has multiple routes to choose from.
Given several paths to the same storage, how should I/O traffic be distributed across them, and what happens when one path fails?
Also, from the operating system's point of view each path appears as a separate physical disk, even though they are merely different routes to the same disk, which confuses users.
Multipath software exists to solve exactly these problems: working with the storage hardware, its main functions are path failover and recovery, I/O load balancing, and disk virtualization.
Combining RAID with ASM redundancy:
1. RAID10 + external redundancy
2. RAID5  + normal redundancy
3. RAID10 + normal redundancy
4. RAID50 + normal redundancy
If write performance matters most: 1 > 3 > 2.
If safety matters most: 3 >= 2 > 1.
2. Installation requirements
Two machines running CentOS 7.
At least 2 GB of RAM and at least 4 GB of swap each.
At least 10 GB of free disk space.
Check the current values before configuring /etc/sysctl.conf:
[root@ocl ~]# sysctl -a | egrep 'sem|shm|file-max|ip_local|rmem|wmem'
Then apply the new settings with sysctl -p.
Users and groups must be added.
Passwordless SSH must be configured.
ORACLE_HOME, CRS_HOME and the other environment variables must be set.
Public IP, SCAN IP and VIP must be in the same subnet.
The private IPs must not be in the same subnet as the above.

Item           Node 1             Node 2
Hostname       rac1               rac2
Public NIC     eth33              eth33
Public IP      192.168.100.111    192.168.100.112
Virtual IP     192.168.100.211    192.168.100.212
SCAN IP        192.168.100.11
Private NIC    eth32              eth32
Private IP     10.10.10.11        10.10.10.12

With default (normal) redundancy chosen for everything, the following logical disks are needed:
data1 — data, 5 GB
recov1 — archive logs, 5 GB
ocr1 — Oracle Cluster Registry files, 1 GB (note: from 12c on, the OCR disk group needs at least 5 GB)
3. Installation steps
1. Install the two CentOS 7 virtual machines rac1 and rac2 (details omitted). After the OS install, add two basic tools:
yum -y install vim net-tools
2. In the Virtual Network Editor add network VMNET2 and change its subnet to 10.10.10.0.
3. Add Network Adapter 2 to both rac1 and rac2: the first adapter stays on VMNET8, Network Adapter 2 is set to VMNET2.
4. Add the shared virtual disks to rac1 and rac2: allocate all disk space now and mark them Independent - Persistent; under Advanced, set the virtual device nodes to 1:0, 1:1 and 1:2:
data1 — data, 5 GB
recov1 — archive logs, 5 GB
ocr1 — Oracle Cluster Registry files, 1 GB
Edit the VM configuration files rac1.vmx and rac2.vmx, appending the following at the end. This prevents errors when both machines start up using the same disks:
diskLib.dataCacheMaxSize = "0"
diskLib.dataCacheMaxReadAheadSize = "0"
diskLib.dataCacheMinReadAheadSize = "0"
diskLib.dataCachePageSize = "4096"
diskLib.maxUnsyncedWrites = "0"
disk.locking = "FALSE"
disk.enableUUID = "TRUE"
scsi1:0.sharedBus = "virtual"
scsi1:1.sharedBus = "virtual"
scsi1:2.sharedBus = "virtual"
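Pasting these lines into both .vmx files by hand is easy to botch on a re-run. A sketch of an idempotent append — `add_vmx_sharing` is a made-up helper, and the heredoc is shortened to a few essential keys (use the full list above on a real system):

```shell
# Append shared-disk settings to a .vmx file only if not already present.
add_vmx_sharing() {
  local vmx=$1
  if grep -q '^disk.locking' "$vmx" 2>/dev/null; then
    return 0    # already configured, nothing to do
  fi
  cat >> "$vmx" <<'EOF'
diskLib.dataCacheMaxSize = "0"
disk.locking = "FALSE"
disk.enableUUID = "TRUE"
scsi1:0.sharedBus = "virtual"
scsi1:1.sharedBus = "virtual"
scsi1:2.sharedBus = "virtual"
EOF
}
```

Run it once per file, e.g. `add_vmx_sharing /path/to/rac1.vmx`; a second run leaves the file unchanged.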
5. rac1 network configuration:
[root@rac1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
UUID=1b76ee0d-ab10-4e46-88e1-1bf94ee3a4bc
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.100.111
PREFIX=24
GATEWAY=192.168.100.2
DNS1=8.8.8.8
IPV6_PRIVACY=no
rac2 configuration:
[root@rac2 ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
UUID=209a4763-fadb-413b-b4f2-afa3e230d41f
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.100.112
PREFIX=24
GATEWAY=192.168.100.2
DNS1=8.8.8.8
IPV6_PRIVACY=no
Private (virtual) network configuration on rac1 (note: this dump sets NAME/DEVICE to ens34 while the file is ifcfg-ens37 — make sure they match the actual interface name on your system):
cp /etc/sysconfig/network-scripts/ifcfg-ens33 /etc/sysconfig/network-scripts/ifcfg-ens37
[root@rac1 network-scripts]# cat /etc/sysconfig/network-scripts/ifcfg-ens37
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens34
UUID=6a068162-6ea0-4d60-a588-580424a1d978
DEVICE=ens34
ONBOOT=yes
IPADDR=10.10.10.11
PREFIX=24
IPV6_PRIVACY=no
Private (virtual) network configuration on rac2:
cp /etc/sysconfig/network-scripts/ifcfg-ens33 /etc/sysconfig/network-scripts/ifcfg-ens37
[root@rac2 network-scripts]# cat /etc/sysconfig/network-scripts/ifcfg-ens37
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens34
UUID=5df21d9e-2fe0-480b-8055-f3d85651093b
DEVICE=ens34
ONBOOT=yes
IPADDR=10.10.10.12
PREFIX=24
IPV6_PRIVACY=no
After both NIC configuration files are in place, restart the networking service:
systemctl restart network
Run on both machines:
yum -y install binutils compat-libstdc++-33 glibc ksh libaio libgcc libstdc++ make compat-libcap1 gcc gcc-c++ glibc-devel libaio-devel libstdc++-devel sysstat elfutils-libelf-devel smartmontools
Tune the kernel parameters:
vi /etc/sysctl.conf
kernel.shmall = 2097152
kernel.shmmax = 1661249126
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 6815744
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
vm.swappiness = 0
vm.dirty_background_ratio = 3
vm.dirty_ratio = 80
vm.dirty_expire_centisecs = 500
vm.dirty_writeback_centisecs = 100
Run sysctl -p to apply the parameters.
Configure the shell limits:
vim /etc/security/limits.conf
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
Disable SELinux:
[root@rac1 network-scripts]# vim /etc/selinux/config

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
Disable the firewall:
[root@rac1 network-scripts]# systemctl stop firewalld
[root@rac1 network-scripts]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@rac1 network-scripts]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Docs: man:firewalld(1)

Mar 26 21:36:48 rac1 systemd[1]: Starting firewalld - dynamic firewall daemon...
Mar 26 21:36:48 rac1 systemd[1]: Started firewalld - dynamic firewall daemon.
Mar 26 21:36:49 rac1 firewalld[741]: WARNING: AllowZoneDrifting is enabled. This is considered an insecure configuration option. It will be removed in a future release. Please consider disabling it now.
Mar 26 22:03:33 rac1 systemd[1]: Stopping firewalld - dynamic firewall daemon...
Mar 26 22:03:35 rac1 systemd[1]: Stopped firewalld - dynamic firewall daemon.
Disable the NetworkManager service:
[root@rac1 network-scripts]# systemctl stop NetworkManager
[root@rac1 network-scripts]# systemctl disable NetworkManager
Removed symlink /etc/systemd/system/multi-user.target.wants/NetworkManager.service.
Removed symlink /etc/systemd/system/dbus-org.freedesktop.nm-dispatcher.service.
Removed symlink /etc/systemd/system/network-online.target.wants/NetworkManager-wait-online.service.
[root@rac1 network-scripts]# systemctl status NetworkManager
● NetworkManager.service - Network Manager
Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; disabled; vendor preset: enabled)
Active: inactive (dead) since Fri 2021-03-26 22:15:54 CST; 13s ago
Docs: man:NetworkManager(8)
Main PID: 774 (code=exited, status=0/SUCCESS)

Mar 26 21:46:21 rac1 NetworkManager[774]: <info>  [1616766381.3219] device (ens37): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Mar 26 21:46:21 rac1 NetworkManager[774]: <info>  [1616766381.3362] device (ens37): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Mar 26 21:46:21 rac1 NetworkManager[774]: <info>  [1616766381.3402] device (ens37): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed')
Mar 26 21:46:21 rac1 NetworkManager[774]: <info>  [1616766381.3408] device (ens37): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed')
Mar 26 21:46:21 rac1 NetworkManager[774]: <info>  [1616766381.3411] device (ens37): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed')
Mar 26 21:46:21 rac1 NetworkManager[774]: <info>  [1616766381.3464] device (ens37): Activation: successful, device activated.
Mar 26 22:15:54 rac1 NetworkManager[774]: <info>  [1616768154.9202] caught SIGTERM, shutting down normally.
Mar 26 22:15:54 rac1 systemd[1]: Stopping Network Manager...
Mar 26 22:15:54 rac1 NetworkManager[774]: <info>  [1616768154.9244] manager: NetworkManager state is now CONNECTED_SITE
Mar 26 22:15:54 rac1 systemd[1]: Stopped Network Manager.
Set the hostnames on both machines:
hostnamectl set-hostname rac1
hostnamectl set-hostname rac2
Add the address resolution entries:
vim /etc/hosts
#PUBLIC
192.168.100.111 rac1
192.168.100.112 rac2

#PRIVATE
10.10.10.11 rac1-priv
10.10.10.12 rac2-priv

#VIP
192.168.100.211 rac1-vip
192.168.100.212 rac2-vip

#scan ip
192.168.100.11 rac-cluster-scan
Create the users and groups:
groupadd -g 1000 oinstall
groupadd -g 1001 dba
groupadd -g 1002 oper
groupadd -g 1003 asmadmin
groupadd -g 1004 asmdba
groupadd -g 1005 asmoper
useradd -u 1001 -g oinstall -G dba,asmadmin,asmdba,asmoper grid
useradd -u 1002 -g oinstall -G dba,oper,asmadmin,asmdba oracle

passwd grid
passwd oracle

Edit /etc/profile:
vi /etc/profile
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
    umask 022
fi
Create the directories:
mkdir -p /opt/app/oraInventory
chown -R grid:oinstall /opt/app/oraInventory
chmod -R 775 /opt/app/oraInventory

mkdir -p /opt/app/grid
chown -R grid:oinstall /opt/app/grid
chmod -R 775 /opt/app/grid

mkdir -p /opt/app/11.2.0.4/grid
chown -R grid:oinstall /opt/app/11.2.0.4/grid
chmod -R 775 /opt/app/11.2.0.4/grid

mkdir -p /opt/app/oracle
chown -R oracle:oinstall /opt/app/oracle
chmod -R 775 /opt/app/oracle

mkdir -p /opt/app/oracle/product/11.2.0.4/db_1
chown -R oracle:oinstall /opt/app/oracle/product/11.2.0.4/db_1
chmod -R 775 /opt/app/oracle/product/11.2.0.4/db_1
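The five mkdir/chown/chmod triplets can be collapsed into one loop. A sketch — `make_oracle_dirs` is a made-up name; chown is skipped when the target user or group does not exist, so the function can be dry-run under a scratch directory:

```shell
# Create the grid/oracle directory tree under a base (default /opt/app),
# setting 775 permissions and grid/oracle ownership where possible.
make_oracle_dirs() {
  local base=${1:-/opt/app}
  local spec d owner
  for spec in \
      "oraInventory:grid" \
      "grid:grid" \
      "11.2.0.4/grid:grid" \
      "oracle:oracle" \
      "oracle/product/11.2.0.4/db_1:oracle"; do
    d=$base/${spec%%:*}; owner=${spec##*:}
    mkdir -p "$d"
    chmod -R 775 "$d"
    if id "$owner" >/dev/null 2>&1 && getent group oinstall >/dev/null 2>&1; then
      chown -R "$owner:oinstall" "$d"    # only when user and group exist
    fi
  done
}
```

On the real nodes run it as root with no argument; run `make_oracle_dirs /tmp/dryrun` first to see the resulting tree.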
Set the root environment variables:
vi .bash_profile
export GRID_BASE=/opt/app/grid
export GRID_SID=+ASM1    # differs per node (+ASM1 / +ASM2)
export GRID_HOME=/opt/app/11.2.0.4/grid
export PATH=$GRID_HOME/bin:$GRID_HOME/OPatch:$PATH

export ORACLE_BASE=/opt/app/oracle
export ORACLE_SID=orcl1    # differs per node (orcl1 / orcl2)
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0.4/db_1
export PATH=$ORACLE_HOME/bin:$PATH
The values marked above differ between the two nodes.
Set the grid user's environment variables:
su - grid
vi .bash_profile
export ORACLE_BASE=/opt/app/grid
export ORACLE_SID=+ASM1    # differs per node
export ORACLE_HOME=/opt/app/11.2.0.4/grid
export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$PATH
umask 022
Set the oracle user's environment variables:
su - oracle
vi .bash_profile
export ORACLE_BASE=/opt/app/oracle
export ORACLE_SID=orcl1    # differs per node
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0.4/db_1
export PATH=$ORACLE_HOME/bin:$PATH
umask 022

Configure SSH mutual trust (passwordless login):
su - grid
ssh-keygen -t rsa    # generate the key pair
ssh-copy-id rac2     # passwordless login from this machine to rac2
ssh-copy-id rac1     # this machine also needs passwordless login to itself

su - oracle
ssh-keygen -t rsa    # generate the key pair
ssh-copy-id rac2     # passwordless login from this machine to rac2
ssh-copy-id rac1     # this machine also needs passwordless login to itself
-----------------------------------------------start: creating ASM disks with udev --------------------------------------------------
Why I strongly advise against ASMLIB in ASM environments: https://blog.csdn.net/askmaclean/article/details/7192487
[root@rac1 ~]# lsscsi --scsi_id -g
[0:0:0:0] disk VMware, VMware Virtual S 1.0 /dev/sda 36000c297f03deecc6d1d513106f44cb9 /dev/sg0
[2:0:0:0] cd/dvd NECVMWar VMware IDE CDR10 1.00 /dev/sr0 - /dev/sg1
[3:0:0:0] disk VMware, VMware Virtual S 1.0 /dev/sdb 36000c292d169095592aca7a901fde2f3 /dev/sg2
[3:0:1:0] disk VMware, VMware Virtual S 1.0 /dev/sdc 36000c29f3a6cf035b14ffb8225cf6e7d /dev/sg3
[3:0:2:0] disk VMware, VMware Virtual S 1.0 /dev/sdd 36000c29cbd0930c5f6a9f5bb5ff6be1e /dev/sg4

Create a new udev rules file:
[root@rac1 ~]# vi /etc/udev/rules.d/99-oracle-asmdevices.rules    # on Linux 6 use the following:
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="36000c290ea1b9b9d9e76575b36cc83b5", NAME="asm-disk1", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="36000c29e3fac88c8dd48128dd2f8a3b6", NAME="asm-disk2", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="36000c29acdaeeb157e2b47fd43288429", NAME="asm-disk3", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="36000c29a046db8852156dd1a8bd78ce2", NAME="asm-disk4", OWNER="grid", GROUP="asmadmin", MODE="0660"

[root@rac1 ~]# vi /etc/udev/rules.d/99-oracle-asmdevices.rules    # on Linux 7 use the following:
KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="36000c292d169095592aca7a901fde2f3", RUN+="/bin/sh -c 'mknod /dev/asmdiskb b $major $minor; chown grid:asmadmin /dev/asmdiskb; chmod 0660 /dev/asmdiskb'"
KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="36000c29f3a6cf035b14ffb8225cf6e7d", RUN+="/bin/sh -c 'mknod /dev/asmdiskc b $major $minor; chown grid:asmadmin /dev/asmdiskc; chmod 0660 /dev/asmdiskc'"
KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="36000c29cbd0930c5f6a9f5bb5ff6be1e", RUN+="/bin/sh -c 'mknod /dev/asmdiskd b $major $minor; chown grid:asmadmin /dev/asmdiskd; chmod 0660 /dev/asmdiskd'"

[root@rac2 ~]# udevadm control --reload-rules
[root@rac2 ~]# udevadm trigger --type=devices --action=change
[root@rac1 ~]# ll /dev/asm*
brw-rw---- 1 grid asmadmin 8, 16 Apr 1 16:37 /dev/asmdiskb
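The three RHEL 7 rules differ only in the WWID and the target device name, so they can be generated instead of hand-edited — less chance of a typo in a 33-character WWID. A sketch; `asm_udev_rule` is a made-up helper, and the WWIDs come from the lsscsi output above:

```shell
# Emit one RHEL7-style udev rule for a given WWID and ASM device name.
asm_udev_rule() {
  local wwid=$1 name=$2
  printf 'KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", '
  printf 'PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="%s", ' "$wwid"
  printf 'RUN+="/bin/sh -c '\''mknod /dev/%s b $major $minor; ' "$name"
  printf 'chown grid:asmadmin /dev/%s; chmod 0660 /dev/%s'\''"\n' "$name" "$name"
}

# Example for the first shared disk; repeat for each WWID/name pair and
# redirect the output into /etc/udev/rules.d/99-oracle-asmdevices.rules.
asm_udev_rule 36000c292d169095592aca7a901fde2f3 asmdiskb
```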
-----------------------------------------------end: creating ASM disks with udev --------------------------------------------------
-----------------------------------------------start: creating ASM disks with asmlib --------------------------------------------------
[root@rac1 ~]# yum -y install kmod-oracleasm
[root@rac1 ~]# rpm -ivh oracleasmlib-2.0.12-1.el7.x86_64.rpm    # download first from https://www.oracle.com/linux/downloads/linux-asmlib-rhel7-downloads.html
warning: oracleasmlib-2.0.12-1.el7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:oracleasmlib-2.0.12-1.el7        ################################# [100%]
[root@rac1 ~]# rpm -ivh oracleasm-support-2.1.11-2.el7.x86_64.rpm    # download first from https://www.oracle.com/linux/downloads/linux-asmlib-rhel7-downloads.html
warning: oracleasm-support-2.1.11-2.el7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:oracleasm-support-2.1.11-2.el7   ################################# [100%]
Note: forwarding request to 'systemctl enable oracleasm.service'.
Created symlink from /etc/systemd/system/multi-user.target.wants/oracleasm.service to /usr/lib/systemd/system/oracleasm.service.
[root@rac1 ~]# scp -r oracleasm* root@192.168.100.112:/root/
[root@rac1 ~]# fdisk /dev/sdb    # run on one node only
[root@rac1 ~]# fdisk /dev/sdc    # run on one node only
[root@rac1 ~]# fdisk /dev/sdd    # run on one node only
[root@rac1 ~]# ll /dev/sd*
brw-rw---- 1 root disk 8,  0 Apr  6 09:31 /dev/sda
brw-rw---- 1 root disk 8,  1 Apr  6 09:31 /dev/sda1
brw-rw---- 1 root disk 8,  2 Apr  6 09:31 /dev/sda2
brw-rw---- 1 root disk 8, 16 Apr  6 09:40 /dev/sdb
brw-rw---- 1 root disk 8, 17 Apr  6 09:40 /dev/sdb1
brw-rw---- 1 root disk 8, 32 Apr  6 09:40 /dev/sdc
brw-rw---- 1 root disk 8, 33 Apr  6 09:40 /dev/sdc1
brw-rw---- 1 root disk 8, 48 Apr  6 09:40 /dev/sdd
brw-rw---- 1 root disk 8, 49 Apr  6 09:40 /dev/sdd1
[root@rac1 ~]# oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: grid
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
[root@rac1 ~]# oracleasm init
Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Configuring "oracleasm" to use device physical block size
Mounting ASMlib driver filesystem: /dev/oracleasm
[root@rac1 ~]# oracleasm createdisk DISK01 /dev/sdb1    # run on one node only
Writing disk header: done
Instantiating disk: done
[root@rac1 ~]# oracleasm createdisk DISK02 /dev/sdc1    # run on one node only
Writing disk header: done
Instantiating disk: done
[root@rac1 ~]# oracleasm createdisk DISK03 /dev/sdd1    # run on one node only
Writing disk header: done
Instantiating disk: done
[root@rac1 ~]# oracleasm listdisks
DISK01
DISK02
DISK03
On the other node just run oracleasm scandisks to detect the disks; oracleasm listdisks will then show them there too.
-----------------------------------------------end: creating ASM disks with asmlib --------------------------------------------------
Install grid:
[root@rac1 root]# su - root
[root@rac1 root]# yum -y install unzip
[root@rac1 root]# mv p13390677_112040_Linux-x86-64_3of7.zip /home/grid/
[root@rac1 root]# chmod 777 /home/grid/p13390677_112040_Linux-x86-64_3of7.zip
[root@rac1 root]# su - grid
[grid@rac1 grid]# unzip /home/grid/p13390677_112040_Linux-x86-64_3of7.zip
[grid@rac1 grid]# su - root
[root@rac1 grid]# cd /home/grid/grid/rpm/
[root@rac1 grid]# rpm -ivh /home/grid/grid/rpm/cvuqdisk-1.0.9-1.rpm    # install this on node 2 as well
[root@rac1 rpm]# scp cvuqdisk-1.0.9-1.rpm root@192.168.100.112:/root/
[root@rac1 grid]# su - grid
[grid@rac1 grid]$ /home/grid/grid/runcluvfy.sh stage -pre crsinst -n rac1,rac2 -fixup -verbose
Check: package existence for "pdksh"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac2          missing                   pdksh-5.2.14              failed
  rac1          missing                   pdksh-5.2.14              failed
Result: package existence check for "pdksh" failed; this error can be ignored
1. Create the swap file: add 2 GB of swap
dd if=/dev/zero of=/var/swapfile bs=1M count=2048
2. Format the swap file
mkswap /var/swapfile
3. Enable the swap file
swapon /var/swapfile
4. Add an entry to /etc/fstab
echo '/var/swapfile swap swap defaults 0 0' >> /etc/fstab
5. Verify
free -m
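The five steps above can be verified in one short check; a sketch, assuming the /var/swapfile created above:

```shell
# Confirm the new swap file is active and counted in the totals.
swapon --show                                   # should list /var/swapfile with its size
free -m | awk '/^Swap:/ {print "swap total (MB):", $2}'
```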
On the Windows 10 client, start XLaunch, and remember to tick "No Access Control".
ssh -Y root@192.168.100.111
[root@rac1 ~]# yum -y install xdpyinfo xhost
[root@rac1 ~]# xhost +
access control disabled, clients can connect from any host
[root@rac1 ~]# su - grid
[grid@rac1 grid]$ export DISPLAY=192.168.100.100:0.0
[grid@rac1 grid]$ export LANG=en_US.UTF-8
[grid@rac1 grid]$ cd /home/grid/grid/
[grid@rac1 grid]$ ./runInstaller  # SCAN port is 1521; note that the SCAN Name must match the one set in /etc/hosts
The root scripts below must be run strictly in this order; several failed installs turned out to be caused by running them out of order.
[root@rac1 ~]# /opt/app/oraInventory/orainstRoot.sh
Changing permissions of /opt/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /opt/app/oraInventory to oinstall.
The execution of the script is complete.

[root@rac2 ~]# /opt/app/oraInventory/orainstRoot.sh
Changing permissions of /opt/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /opt/app/oraInventory to oinstall.
The execution of the script is complete.

[root@rac1 ~]# /opt/app/11.2.0.4/grid/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /opt/app/11.2.0.4/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /opt/app/11.2.0.4/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding Clusterware entries to inittab
---------------------- Troubleshooting start (issue 1): the script hangs at this point. Open another terminal and run the following ----------------
[root@rac1 ~]# touch /usr/lib/systemd/system/ohas.service
[root@rac1 ~]# chmod 777 /usr/lib/systemd/system/ohas.service
[root@rac1 ~]# vi /usr/lib/systemd/system/ohas.service
[Unit]
Description=Oracle High Availability Services
After=syslog.target
[Service]
ExecStart=/etc/init.d/init.ohasd run >/dev/null 2>&1
Type=simple
Restart=always
[Install]
WantedBy=multi-user.target

[root@rac1 ~]# systemctl daemon-reload
[root@rac1 ~]# systemctl enable ohas.service
Created symlink from /etc/systemd/system/multi-user.target.wants/ohas.service to /usr/lib/systemd/system/ohas.service.
[root@rac1 ~]# systemctl start ohas.service
[root@rac1 ~]# systemctl status ohas.service
----------------------- Troubleshooting end ------------------------
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded

ASM created and started successfully.

Disk group DATA created successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 4cc88c42a5bf4ff8bf96b698bdec25fe.
Successfully replaced voting disk group with +DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 4cc88c42a5bf4ff8bf96b698bdec25fe (/dev/asmdiskb) [DATA]
Located 1 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'rac1'
CRS-2676: Start of 'ora.DATA.dg' on 'rac1' succeeded
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Reference: http://www.itpub.net/thread-1836169-1-1.html
[root@rac1 ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.100.111 netmask 255.255.255.0 broadcast 192.168.100.255
inet6 fe80::20c:29ff:fef3:2014 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:f3:20:14 txqueuelen 1000 (Ethernet)
RX packets 204451 bytes 47085602 (44.9 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 3045391 bytes 8370683243 (7.7 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

ens33:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.100.211 netmask 255.255.255.0 broadcast 192.168.100.255
ether 00:0c:29:f3:20:14 txqueuelen 1000 (Ethernet)

ens33:2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.100.11 netmask 255.255.255.0 broadcast 192.168.100.255
ether 00:0c:29:f3:20:14 txqueuelen 1000 (Ethernet)

ens34: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.10.10.11 netmask 255.255.255.0 broadcast 10.10.10.255
inet6 fe80::20c:29ff:fef3:201e prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:f3:20:1e txqueuelen 1000 (Ethernet)
RX packets 539 bytes 93019 (90.8 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 568 bytes 94606 (92.3 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

ens34:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 169.254.202.39 netmask 255.255.0.0 broadcast 169.254.255.255
ether 00:0c:29:f3:20:1e txqueuelen 1000 (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 5049 bytes 3690700 (3.5 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 5049 bytes 3690700 (3.5 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

[root@rac2 ~]# /opt/app/11.2.0.4/grid/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /opt/app/11.2.0.4/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /opt/app/11.2.0.4/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
Adding Clusterware entries to inittab
---------------------- Troubleshooting start (same issue 1 as on rac1): the script hangs at this point. Open another terminal and run the following ----------------
[root@rac1 ~]# touch /usr/lib/systemd/system/ohas.service
[root@rac1 ~]# chmod 777 /usr/lib/systemd/system/ohas.service
[root@rac1 ~]# vi /usr/lib/systemd/system/ohas.service
[Unit]
Description=Oracle High Availability Services
After=syslog.target
[Service]
ExecStart=/etc/init.d/init.ohasd run >/dev/null 2>&1
Type=simple
Restart=always
[Install]
WantedBy=multi-user.target

[root@rac1 ~]# systemctl daemon-reload
[root@rac1 ~]# systemctl enable ohas.service
Created symlink from /etc/systemd/system/multi-user.target.wants/ohas.service to /usr/lib/systemd/system/ohas.service.
[root@rac1 ~]# systemctl start ohas.service
[root@rac1 ~]# systemctl status ohas.service
----------------------- Troubleshooting end ------------------------
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Configure Oracle Grid Infrastructure for a Cluster ... succeeded

After the installation, verify the cluster state:
[grid@rac1 grid]$ ./runcluvfy.sh stage -post crsinst -n rac1,rac2
Performing post-checks for cluster services setup
Checking node reachability...
Node reachability check passed from node "rac1"

Checking user equivalence...
User equivalence check passed for user "grid"

Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful

Check: Node connectivity of interface "ens33"
Node connectivity check passed for interface "ens33"
TCP connectivity check passed for subnet "192.168.100.0"

Check: Node connectivity of interface "ens34"
Node connectivity check passed for interface "ens34"
TCP connectivity check passed for subnet "10.10.10.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.100.0".
Subnet mask consistency check passed for subnet "10.10.10.0".
Subnet mask consistency check passed.

Node connectivity check passed

Checking multicast communication...
Checking subnet "192.168.100.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.100.0" for multicast communication with multicast group "230.0.1.0" passed.
Checking subnet "10.10.10.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "10.10.10.0" for multicast communication with multicast group "230.0.1.0" passed.
Check of multicast communication passed.

Time zone consistency check passed

Checking Oracle Cluster Voting Disk configuration...
ASM Running check passed. ASM is running on all specified nodes
Oracle Cluster Voting Disk configuration check passed

Checking Cluster manager integrity...
Checking CSS daemon...
Oracle Cluster Synchronization Services appear to be online.
Cluster manager integrity check passed

UDev attributes check for OCR locations started...
UDev attributes check passed for OCR locations
UDev attributes check for Voting Disk locations started...
UDev attributes check passed for Voting Disk locations

Default user file creation mask check passed
Checking cluster integrity...
Cluster integrity check passed

Checking OCR integrity...
Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations
ASM Running check passed. ASM is running on all specified nodes

Checking OCR config file "/etc/oracle/ocr.loc"...
OCR config file "/etc/oracle/ocr.loc" check successful
Disk group for ocr location "+ocr" available on all the nodes

NOTE:
This check does not verify the integrity of the OCR contents. Execute 'ocrcheck' as a privileged user to verify the contents of OCR.

OCR integrity check passed

Checking CRS integrity...
Clusterware version consistency passed
CRS integrity check passed

Checking node application existence...
Checking existence of VIP node application (required)
VIP node application check passed

Checking existence of NETWORK node application (required)
NETWORK node application check passed

Checking existence of GSD node application (optional)
GSD node application is offline on nodes "rac2,rac1"

Checking existence of ONS node application (optional)
ONS node application check passed

Checking Single Client Access Name (SCAN)...
Checking TCP connectivity to SCAN Listeners...
TCP connectivity to SCAN Listeners exists on all cluster nodes

Checking name resolution setup for "rac-cluster-scan"...
Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf"
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed

ERROR:
PRVG-1101 : SCAN name "rac-cluster-scan" failed to resolve

ERROR:
PRVF-4657 : Name resolution setup check for "rac-cluster-scan" (IP address: 192.168.100.11) failed

ERROR:
PRVF-4664 : Found inconsistent name resolution entries for SCAN name "rac-cluster-scan"

Verification of SCAN VIP and Listener setup failed
Checking OLR integrity...
Checking OLR config file...
OLR config file check successful
Checking OLR file attributes...
OLR file check successful

WARNING:
This check does not verify the integrity of the OLR contents. Execute 'ocrcheck -local' as a privileged user to verify the contents of OLR.

OLR integrity check passed

User "grid" is not part of "root" group. Check passed

Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed

Checking if CTSS Resource is running on all nodes...
CTSS resource check passed

Querying CTSS for time offset on all nodes...
Query of CTSS for time offset passed

Check CTSS state started...
CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...
Check of clock time offsets passed
Oracle Cluster Time Synchronization Services check passed

Checking VIP configuration.
Checking VIP Subnet configuration.
Check for VIP Subnet configuration passed.
Checking VIP reachability
Check for VIP reachability passed.

Post-check for cluster services setup was unsuccessful on all the nodes.
[grid@rac1 grid]$ crsctl status resource -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ARCH.dg
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.DATA.dg
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.LISTENER.lsnr
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.OCR.dg
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.asm
ONLINE ONLINE rac1 Started
ONLINE ONLINE rac2 Started
ora.gsd
OFFLINE OFFLINE rac1
OFFLINE OFFLINE rac2
ora.net1.network
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.ons
ONLINE ONLINE rac1
ONLINE ONLINE rac2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE rac1
ora.cvu
1 ONLINE ONLINE rac1
ora.oc4j
1 ONLINE ONLINE rac1
ora.orcl.db
1 ONLINE ONLINE rac1 Open
2 ONLINE ONLINE rac2 Open
ora.rac1.vip
1 ONLINE ONLINE rac1
ora.rac2.vip
1 ONLINE ONLINE rac2
ora.scan1.vip
      1        ONLINE  ONLINE       rac1

[grid@rac1 grid]$ crsctl status server
NAME=rac1
STATE=ONLINE

NAME=rac2
STATE=ONLINE

[grid@rac1 grid]$ srvctl status asm -a
ASM is running on rac2,rac1
ASM is enabled.

[root@rac1 ~]# crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

[root@rac1 ~]# crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

[root@rac1 ~]# srvctl status nodeapps
VIP rac1-vip is enabled
VIP rac1-vip is running on node: rac1
VIP rac2-vip is enabled
VIP rac2-vip is running on node: rac2
Network is enabled
Network is running on node: rac1
Network is running on node: rac2
GSD is disabled
GSD is not running on node: rac1
GSD is not running on node: rac2
ONS is enabled
ONS daemon is running on node: rac1
ONS daemon is running on node: rac2
[root@rac1 ~]# srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): rac2,rac1

IV. Installing the Oracle database software
Upload the installation packages to the /home/oracle directory
su - oracle
The Oracle RAC software can also be verified before installation:
[oracle@rac1 database]$ ./runcluvfy.sh stage -pre dbinst -n rac1,rac2 -osdba dba
[oracle@rac1 database]$ ./runcluvfy.sh stage -pre dbcfg -n rac1,rac2 -d $ORACLE_HOME
[oracle@rac1 database]$ unzip p13390677_112040_Linux-x86-64_1of7.zip
[oracle@rac1 database]$ unzip p13390677_112040_Linux-x86-64_2of7.zip
[oracle@rac1 database]$ export DISPLAY=192.168.100.100:0.0
[oracle@rac1 database]$ export LANG=en_US.UTF-8
[oracle@rac1 database]$ cd /home/oracle/database
[oracle@rac1 database]$ ./runInstaller
Install log: /opt/app/oraInventory/logs/installActions2021-04-06_10-05-03AM.log
Error in invoking target 'agent nmhs' of makefile '/opt/app/oracle/product/11.2.0.4/db_1/sysman/lib/ins_emagent.mk'. See '/opt/app/oraInventory/logs/installActions2021-04-05_08-44-43AM.log' for details.
[root@rac1 ~]# cd $ORACLE_HOME/sysman/lib
[root@rac1 lib]# cp ins_emagent.mk ins_emagent.mk.bak
[root@rac1 lib]# vi ins_emagent.mk
In vi, switch to command mode and type /NMECTL to jump straight to the line that needs changing.
Append the parameter -lnnz11 to the end of that line (the first character is the letter l, the last two are the digit 1).
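The vi edit can also be scripted with sed. A sketch: it is demonstrated here on a small local copy of the makefile so nothing real is touched; on a live system you would point MKFILE at $ORACLE_HOME/sysman/lib/ins_emagent.mk, and only lines ending in $(MK_EMAGENT_NMECTL) are modified.

```shell
# Append -lnnz11 to the $(MK_EMAGENT_NMECTL) link line (sample makefile for illustration).
MKFILE=./ins_emagent.mk
printf '$(SYSMANBIN)emdctl:\n\t$(MK_EMAGENT_NMECTL)\n' > "$MKFILE"   # miniature sample content
cp "$MKFILE" "$MKFILE.bak"                                           # keep a backup, as the article does
sed -i 's/\$(MK_EMAGENT_NMECTL)[[:space:]]*$/$(MK_EMAGENT_NMECTL) -lnnz11/' "$MKFILE"
grep 'MK_EMAGENT_NMECTL' "$MKFILE"
```

grep should show the line now ending in -lnnz11; if the pattern does not match your makefile, fall back to the manual vi edit.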
[root@rac1 lib]# /opt/app/oracle/product/11.2.0.4/db_1/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /opt/app/oracle/product/11.2.0.4/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.

[root@rac2 ~]# /opt/app/oracle/product/11.2.0.4/db_1/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /opt/app/oracle/product/11.2.0.4/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
V. Creating the ASM disk groups
[root@rac1 ~]# su - grid
[grid@rac1 ~]$ export DISPLAY=192.168.100.100:0.0
[grid@rac1 ~]$ export LANG=en_US.UTF-8
[grid@rac1 ~]$ asmca

VI. Creating the database
[root@rac1 ~]# su - oracle
[oracle@rac1 ~]$ export DISPLAY=192.168.100.100:0.0
[oracle@rac1 ~]$ export LANG=en_US.UTF-8
[oracle@rac1 ~]$ dbca

References:
https://www.oracle.com/cn/technical-resources/articles/hunter-rac11gr2-iscsi.html
https://oracle-base.com/articles/11g/oracle-db-11gr2-rac-installation-on-ol5-using-vmware-server-2
-
Installing Oracle 11g RAC on CentOS 7
2020-05-18 12:08:13
Oracle 11.2.0.4 + ASM + RAC + CentOS 7
This is a step-by-step outline, but it is usable as-is; if you need the full document, see the download link:
https://download.csdn.net/download/weixin_44167712/12424886

1 Preparation
1.1 Software requirements
rpm -q make binutils compat-libcap1 compat-libstdc++ elfutils-libelf elfutils-libelf-devel fontconfig-devel glibc glibc-devel ksh libaio libaio-devel libX11 libXi libXau libXtst libXrender libXrender-devel libgcc libstdc++ libstdc++-devel libxcb net-tools nfs-utils smartmontools
Configure yum and install any missing packages.
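The long rpm -q line is easier to act on as a loop that reports only what is missing; a sketch (the package list here is abbreviated, substitute the full list above):

```shell
# Report required packages that rpm does not know about, one per line.
for p in make binutils glibc ksh libaio libX11 net-tools; do
    rpm -q "$p" >/dev/null 2>&1 || echo "missing: $p"
done
# Then feed the "missing" names to: yum -y install <packages>
```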
1.2 Network requirements
Configure the network adapters on each node.
Each node needs at least two network adapters (NICs): one for the public network and one for the private network.
Since 11.2.0.2, Redundant Interconnect Usage can create 1-4 highly available IPs (HAIP), providing high availability and load balancing for the private interconnect.

1.3 Node IP layout (all nodes)
[root@racdb1 yum.repos.d]# vi /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
## Public
192.168.0.26 racdb1
192.168.0.27 racdb2
#Virtual IP
192.168.0.28 racdb1vip
192.168.0.29 racdb2vip
#Private IP
10.10.0.202 racdb1pri
10.10.0.203 racdb2pri
#Scan Virtual IP
192.168.0.33 rac-scan
1.4 Firewall and SELinux (all nodes)
[root@racdb1 ~]# systemctl stop firewalld.service
[root@racdb1 ~]# systemctl disable firewalld.service
[root@racdb1 ~]# getenforce
[root@racdb1 ~]# vi /etc/selinux/config
SELINUX=disabled
Save and reboot.
1.5 Create groups and users (all nodes)
groupadd -g 1022 asmoper
groupadd -g 1020 asmadmin
groupadd -g 1021 asmdba
groupadd -g 1010 oinstall
groupadd -g 1030 dba
groupadd -g 1031 oper
/usr/sbin/useradd -u 1101 -g oinstall -G dba,oper,asmdba,asmoper oracle
echo oracle | passwd --stdin oracle
/usr/sbin/useradd -u 1100 -g oinstall -G asmadmin,asmdba,asmoper,dba grid
echo grid | passwd --stdin grid
1.6 Create installation directories (all nodes)
mkdir -p /oracle/app/11.2.0/grid
mkdir -p /oracle/app/grid
mkdir -p /oracle/app/oracle
mkdir -p /oracle/software
chown -R grid:oinstall /oracle
chown oracle:oinstall /oracle/app/oracle
chmod -R 775 /oracle
1.7 Configure the installation users' environment variables (all nodes)
oracle user:
racdb1:
export PATH
export ORACLE_BASE=/oracle/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export ORACLE_SID=JYSDB1
export PATH=.:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$PATH
[oracle@racdb1 ~]$ . ./.bash_profile
racdb2:
export PATH
export ORACLE_BASE=/oracle/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export ORACLE_SID=JYSDB2
export PATH=.:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$PATH
grid user:
racdb1:
export PATH
export ORACLE_BASE=/oracle/app/grid
export ORACLE_HOME=/oracle/app/11.2.0/grid
export ORACLE_SID=+ASM1
export PATH=.:$ORACLE_HOME/bin:$PATH
racdb2:
export ORACLE_BASE=/oracle/app/grid
export ORACLE_HOME=/oracle/app/11.2.0/grid
export ORACLE_SID=+ASM2
export PATH=.:$ORACLE_HOME/bin:$PATH
[root@racdb2 ~]# . ./.bash_profile
1.8 Resource limits (all nodes)
vi /etc/security/limits.conf
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
1.9 Configure Linux kernel parameters
vim /etc/sysctl.conf
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 131858432
kernel.shmmax = 220200960000
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
sysctl -p
1.10 Adjust the PAM login module (all nodes)
vim /etc/pam.d/login
session required pam_limits.so
1.11 Edit /etc/profile (all nodes)
vim /etc/profile
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
    umask 022
fi
1.12 Installation media
[root@racdb1 software]# unzip p13390677_112040_Linux-x86-64_1of7.zip
[root@racdb1 software]# unzip p13390677_112040_Linux-x86-64_2of7.zip
[root@racdb1 software]# unzip p13390677_112040_Linux-x86-64_3of7.zip
1.13 Disable the NTP service (all nodes)
# systemctl is-enabled ntpdate.service
# systemctl disable ntpdate.service
mv /etc/ntp.conf /etc/ntp.conf.bak   (I did not have this file, so I left it alone)
rm /var/run/ntpd.pid
1.14 Configure SSH user equivalence
Option 1: let the OUI set it up
Option 2: use the sshUserSetup.sh script
We'll use option 2 here.

1.14.1 Configure and verify grid user equivalence
[root@racdb1 software]# su - grid
Last login: Fri Apr 10 14:58:36 CST 2020 on pts/1
[grid@racdb1 ~]$ cd /oracle/software/grid/sshsetup/
[grid@racdb1 sshsetup]$ ./sshUserSetup.sh -user grid -hosts "racdb1 racdb2" -advanced -noPromptPassphrase
Enter the password and answer yes when prompted.
Copy the setup script to the other node with scp:
[grid@racdb1 sshsetup]$ scp sshUserSetup.sh grid@192.168.0.27:/oracle/software/
Configure grid user equivalence on the other node:
[grid@racdb2 software]$ ./sshUserSetup.sh -user grid -hosts "racdb1 racdb2" -advanced -noPromptPassphrase
Verify grid user equivalence on all nodes:
[grid@racdb1 ~]$ ssh racdb1 date
Fri Apr 10 16:44:53 CST 2020
[grid@racdb1 ~]$ ssh racdb2 date
Fri Apr 10 16:45:04 CST 2020
[grid@racdb1 ~]$ ssh racdb1pri date
The authenticity of host 'racdb1pri (192.168.0.206)' can't be established.
ECDSA key fingerprint is SHA256:KP6p58a1QQ0L+Sfn+bS6CZkVTlOX8/snF50aCfAiLVQ.
ECDSA key fingerprint is MD5:c6:bd:38:27:93:d7:86:b3:b9:c3:ca:74:4c:64:f9:cd.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'racdb1pri,192.168.0.206' (ECDSA) to the list of known hosts.
Fri Apr 10 16:45:16 CST 2020
[grid@racdb1 ~]$ ssh racdb1pri date
Fri Apr 10 16:45:18 CST 2020
[grid@racdb1 ~]$ ssh racdb2pri date
The authenticity of host 'racdb2pri (192.168.0.207)' can't be established.
ECDSA key fingerprint is SHA256:LjXQgtYGfwPFXMPJCZV5i52nhbqMLIJ0vvqQF70bHno.
ECDSA key fingerprint is MD5:1a:ed:e2:05:a1:c0:c9:68:5f:eb:6a:f2:06:d1:31:cd.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'racdb2pri,192.168.0.207' (ECDSA) to the list of known hosts.
Fri Apr 10 16:45:24 CST 2020
[grid@racdb1 ~]$ ssh racdb2pri date
Fri Apr 10 16:45:27 CST 2020
1.14.2 Configure oracle user equivalence on all nodes
[oracle@racdb1 sshsetup]$ ./sshUserSetup.sh -user oracle -hosts "racdb1 racdb2" -advanced -noPromptPassphrase
The rest is the same as for grid above.
1.15 The cvuqdisk package
Check whether cvuqdisk is already installed:
rpm -qa | grep cvuqdisk
If an old version exists, remove it:
rpm -e cvuqdisk
Set the cvuqdisk environment variable:
export CVUQDISK_GRP=oinstall
Install the new cvuqdisk package:
[root@racdb1 grid]# cd /oracle/software/grid/rpm/
[root@racdb1 rpm]# rpm -iv cvuqdisk-1.0.9-1.rpm
Copy the cvuqdisk package to the other node and install it there:
[root@racdb1 rpm]# scp cvuqdisk-1.0.9-1.rpm root@192.168.0.27:/oracle/software/
[root@node2 ~]# export CVUQDISK_GRP=oinstall
[root@node2 ~]# echo $CVUQDISK_GRP
oinstall
[root@node2 software]# rpm -ivh cvuqdisk-1.0.9-1.rpm
1.16 Create udev rules
My shared-disk plan:
+OCRDG   10G  /dev/sdb  dedicated to CRS vote/OCR
+OCRDG   10G  /dev/sdc  dedicated to CRS vote/OCR
+OCRDG   10G  /dev/sdd  dedicated to CRS vote/OCR
+DATADG  10G  /dev/sde  dedicated to data storage
+FRADG   10G  /dev/sdf  dedicated to archived logs
+FRADG   10G  /dev/sdg  dedicated to archived logs
View the disk UUIDs:
[root@racdb1 ~]# for i in b c d e f g; do /usr/lib/udev/scsi_id -g -u -d /dev/sd$i; done
Write the device rules:
vim /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="36000c2922d6a30811e06c214aa02c9c1", RUN+="/bin/sh -c 'mknod /dev/asmdisk1 b $major $minor; chown grid:asmadmin /dev/asmdisk1; chmod 0660 /dev/asmdisk1'"
KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="36000c29b531b958d1be58a86e2c6ac1d", RUN+="/bin/sh -c 'mknod /dev/asmdisk2 b $major $minor; chown grid:asmadmin /dev/asmdisk2; chmod 0660 /dev/asmdisk2'"
KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="36000c29f7415c87563d0575aee5ea661", RUN+="/bin/sh -c 'mknod /dev/asmdisk3 b $major $minor; chown grid:asmadmin /dev/asmdisk3; chmod 0660 /dev/asmdisk3'"
KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="36000c299d02b1dcdcfe7b03b702eb891", RUN+="/bin/sh -c 'mknod /dev/asmdisk4 b $major $minor; chown grid:asmadmin /dev/asmdisk4; chmod 0660 /dev/asmdisk4'"
KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="36000c29dc71a4fa37f819aea20e2c057", RUN+="/bin/sh -c 'mknod /dev/asmdisk5 b $major $minor; chown grid:asmadmin /dev/asmdisk5; chmod 0660 /dev/asmdisk5'"
KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="36000c29728bfacdf3baed93182c6985b", RUN+="/bin/sh -c 'mknod /dev/asmdisk6 b $major $minor; chown grid:asmadmin /dev/asmdisk6; chmod 0660 /dev/asmdisk6'"
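Typing six near-identical rules by hand invites typos, so the file can also be generated. A sketch: SCSI_ID is parameterized only so the loop can be tried with a stub; on a real node leave the default and redirect the output into /etc/udev/rules.d/99-oracle-asmdevices.rules.

```shell
# Emit one udev rule per shared disk, numbering /dev/asmdisk1..N.
SCSI_ID=${SCSI_ID:-/usr/lib/udev/scsi_id}
n=1
for d in b c d e f g; do
    uuid=$("$SCSI_ID" -g -u -d "/dev/sd$d" 2>/dev/null) || continue
    printf 'KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="%s -g -u -d $devnode", RESULT=="%s", RUN+="/bin/sh -c '\''mknod /dev/asmdisk%s b $major $minor; chown grid:asmadmin /dev/asmdisk%s; chmod 0660 /dev/asmdisk%s'\''"\n' \
        "$SCSI_ID" "$uuid" "$n" "$n" "$n"
    n=$((n+1))
done
```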
Copy the rules file to the other node:
[root@racdb1 rules.d]# scp /etc/udev/rules.d/99-oracle-asmdevices.rules root@192.168.0.27:/etc/udev/rules.d/
On all nodes, as root, re-trigger the block-device events:
[root@racdb1 rules.d]# /sbin/udevadm trigger --type=devices --action=change
Check the device mappings (I am not sure whether it is a VM quirk, but node 2 only showed all the disks after a reboot):
[root@racdb2 ~]# ll -ltr /dev/asm*
brw-rw---- 1 grid asmadmin 8, 16 May 12 11:59 /dev/asmdisk2
brw-rw---- 1 grid asmadmin 8, 16 May 12 11:59 /dev/asmdisk3
brw-rw---- 1 grid asmadmin 8, 16 May 12 14:00 /dev/asmdisk1
brw-rw---- 1 grid asmadmin 8, 16 May 12 11:59 /dev/asmdisk4
brw-rw---- 1 grid asmadmin 8, 16 May 12 11:59 /dev/asmdisk5
brw-rw---- 1 grid asmadmin 8, 16 May 12 14:00 /dev/asmdisk6
2 Installing RAC
2.1 Install GRID
[root@racdb1 grid]# su - grid
[grid@racdb1 ~]$ cd /oracle/software/grid/
[grid@racdb1 grid]$ ./runInstaller
I hit two errors here:
One was about the virbr0 interface, which serves no purpose here; deleting it fixed things. The other was about the DNS server, which I had not configured, so I ignored it; configure one if you need it.
Note: be sure to log in directly as the grid user. Switching over from root with su causes errors.
The installer defaults the VIP names to racdb-vip; I only noticed the extra "-" later, so watch out for that. (User equivalence can also be configured here in the GUI; skip it if already done.)
[root@racdb1 rpm]# rpm -qa gcc
[root@racdb1 rpm]# yum install gcc compat-libstdc++ gcc-c++ pdksh
If a package version is too new for Oracle's checks to recognize, you can ignore the warning, or downgrade the package if that is unacceptable; I ignored it.
Run the first root script on all nodes; only after it has finished on every node should you run the second one.
2.1.1 CRS-2101: The OLR was formatted using version 3.
This happens because CentOS 7 uses systemd rather than initd to run and restart processes; a small workaround is needed:
[root@racdb1 ~]# touch /usr/lib/systemd/system/ohas.service
[root@racdb1 ~]# chmod 777 /usr/lib/systemd/system/ohas.service
[root@racdb1 ~]# vi /usr/lib/systemd/system/ohas.service
[Unit]
Description=Oracle High Availability Services
After=syslog.target
[Service]
ExecStart=/etc/init.d/init.ohasd run >/dev/null 2>&1
Type=simple
Restart=always
[Install]
WantedBy=multi-user.target
[root@racdb1 system]# systemctl daemon-reload
[root@racdb1 system]# systemctl enable ohas.service
Created symlink from /etc/systemd/system/multi-user.target.wants/ohas.service to /usr/lib/systemd/system/ohas.service.
[root@racdb1 system]# systemctl start ohas.service
[root@racdb1 system]# systemctl status ohas.service
2.1.2 If you hit a "libcap.so.1 not found" error
The installed version is simply newer; create a compatibility symlink:
[root@racdb2 lib64]# cd /lib64/
[root@racdb2 lib64]# ls -ltr libcap*
-rwxr-xr-x. 1 root root 23968 Nov 20 2015 libcap-ng.so.0.0.0
-rwxr-xr-x. 1 root root 20032 Aug 3 2017 libcap.so.2.22
lrwxrwxrwx. 1 root root 14 Apr 9 01:51 libcap.so.2 -> libcap.so.2.22
lrwxrwxrwx. 1 root root 18 Apr 9 01:51 libcap-ng.so.0 -> libcap-ng.so.0.0.0
[root@racdb2 lib64]# ln -s libcap.so.2.22 libcap.so.1
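The symlink step can be made idempotent so rerunning it is harmless. A sketch that operates on a scratch directory by default (LIBDIR and the stand-in file are assumptions for illustration; on a real node run it with LIBDIR=/lib64 and drop the touch):

```shell
# Create libcap.so.1 -> libcap.so.2.22 only when it does not already exist.
LIBDIR=${LIBDIR:-./lib64-demo}            # demo directory; use /lib64 on a real node
mkdir -p "$LIBDIR"
touch "$LIBDIR/libcap.so.2.22"            # stand-in for the real library (demo only)
if [ -e "$LIBDIR/libcap.so.2.22" ] && [ ! -e "$LIBDIR/libcap.so.1" ]; then
    ln -s libcap.so.2.22 "$LIBDIR/libcap.so.1"
fi
ls -l "$LIBDIR"/libcap.so.1
```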
Continue with the second root script; at roughly the "OLR initialization - successful" step, run systemctl start ohas.service.
[root@racdb2 system]# ll /etc/init.d/
total 60
-rw-r--r--. 1 root root 18281 Aug 24 2018 functions
-rwxr-xr-x  1 root root  8800 May 12 10:24 init.ohasd
-rwxr-xr-x. 1 root root  4569 Aug 24 2018 netconsole
-rwxr-xr-x. 1 root root  7923 Aug 24 2018 network
-rwxr-xr-x  1 root root  6723 May 12 10:24 ohasd
-rw-r--r--. 1 root root  1160 Oct 31 2018 README
2.1.3 INS-20802 Oracle Cluster Verification Utility failed
The install log shows the errors:
PRVG-1101, PRVF-4657, PRVF-4664
Oracle strongly recommends using DNS or GNS for SCAN name resolution, because the hosts file supports only a single IP for the SCAN.
If you resolve the SCAN through the hosts file and ping returns the correct SCAN VIP, you can ignore the errors and continue.
If you use DNS or GNS for SCAN resolution instead, comment out the SCAN entries in the local hosts file on all nodes, then rerun $GRID_HOME/bin/cluvfy comp scan to confirm.
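Whether the name actually resolves on a node can be checked from the shell before deciding to ignore the error; a sketch (rac-cluster-scan is the SCAN name used earlier in this article):

```shell
# Show how the SCAN name resolves on this node (hosts file or DNS, per nsswitch.conf).
SCAN_NAME=${SCAN_NAME:-rac-cluster-scan}
if getent hosts "$SCAN_NAME"; then
    echo "SCAN name resolves; compare the printed IP with the expected SCAN VIP"
else
    echo "SCAN name '$SCAN_NAME' does not resolve; check /etc/hosts or DNS" >&2
fi
```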
Check the cluster installation result:
2.2 Create DATADG / FRADG
[grid@racdb1 ~]$ asmca
Check the result:
[grid@racdb1 ~]$ asmcmd
ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  1048576     10240    10145                0           10145              0             N  DATADG/
MOUNTED  NORMAL  N         512   4096  1048576     20480    20290                0           10145              0             N  FRADG/
MOUNTED  NORMAL  N         512   4096  1048576     30720    29794            10240            9777              0             Y  OCRDG/
2.3 Install the Oracle software
[oracle@racdb1 ~]$ clear
[oracle@racdb1 ~]$ cd /oracle/software/database/
[oracle@racdb1 database]$ ./runInstaller
2.3.1 Error in invoking target 'agent nmhs' of makefile...
At about 56% the installer reports this error.
Fix:
This is a known issue when installing Oracle on version 7 of the Linux family. Keep the installer where it is, open another terminal, and run:
[oracle@racdb1 ~]$ vi /oracle/app/oracle/product/11.2.0/db_1/sysman/lib/ins_emagent.mk
Change $(MK_EMAGENT_NMECTL) to $(MK_EMAGENT_NMECTL) -lnnz11, then click Retry.
2.4 Create the database
[oracle@racdb1 database]$ dbca
-
Oracle 11g RAC on CentOS 7.7
2020-07-07 10:58:53
Oracle 11g RAC deployment on CentOS 7.7
1 Operating system environment
1.1 Linux OS configuration
The IP plan for the two hosts is as follows:
rac1:  IP 192.168.198.180,  priv 10.10.10.81,  vip 192.168.198.182
rac2:  IP 192.168.198.181,  priv 10.10.10.82,  vip 192.168.198.183
scan-ip: 192.168.198.184
Both systems run Linux 7.7.
The Oracle version is 11.2.0.4.0.
1.2 hosts file configuration
[root@rac1 ~]# vi /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
#public
192.168.198.180 rac1
192.168.198.181 rac2
#vip
192.168.198.182 rac1-vip
192.168.198.183 rac2-vip
#priv
10.10.10.81 rac1-priv
10.10.10.82 rac2-priv
#scan
192.168.198.184 rac-cluster-scan
The hosts file contents must be identical on both nodes.
1.3 Configure kernel parameters
[root@rac1 ~]# vi /etc/sysctl.conf
# sysctl settings are defined through files in
# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
#
# Vendors settings live in /usr/lib/sysctl.d/.
# To override a whole file, create a new file with the same name in
# /etc/sysctl.d/ and put new settings there. To override
# only specific settings, add a file with a lexically later
# name in /etc/sysctl.d/ and put new settings there.
#
# For more information, see sysctl.conf(5) and sysctl.d(5).
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 8389934592 # set kernel.shmmax according to your actual memory size
kernel.shmall = 268435456
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
1.4 Disable the firewall
[root@rac1 ~]# systemctl stop firewalld
[root@rac1 ~]# systemctl disable firewalld
[root@rac1 ~]# systemctl status firewalld
1.5 Disable SELinux
[root@rac1 ~]# vi /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
1.6 Adjust security limits
[root@rac1 ~]# vi /etc/security/limits.conf
# /etc/security/limits.conf
#
#This file sets the resource limits for the users logged in via PAM.
#It does not affect resource limits of the system services.
#
#Also note that configuration files in /etc/security/limits.d directory,
#which are read in alphabetical order, override the settings in this
#file in case the domain is the same or more specific.
#That means for example that setting a limit for wildcard domain here
#can be overriden with a wildcard setting in a config file in the
#subdirectory, but a user specific setting here can be overriden only
#with a user specific setting in the subdirectory.
#
#Each line describes a limit for a user in the form:
#
#<domain> <type> <item> <value>
#
#Where:
#<domain> can be:
# - a user name
# - a group name, with @group syntax
# - the wildcard *, for default entry
# - the wildcard %, can be also used with %group syntax,
# for maxlogin limit
#
#<type> can have the two values:
# - "soft" for enforcing the soft limits
# - "hard" for enforcing hard limits
#
#<item> can be one of the following:
# - core - limits the core file size (KB)
# - data - max data size (KB)
# - fsize - maximum filesize (KB)
# - memlock - max locked-in-memory address space (KB)
# - nofile - max number of open file descriptors
# - rss - max resident set size (KB)
# - stack - max stack size (KB)
# - cpu - max CPU time (MIN)
# - nproc - max number of processes
# - as - address space limit (KB)
# - maxlogins - max number of logins for this user
# - maxsyslogins - max number of logins on the system
# - priority - the priority to run user process with
# - locks - max number of file locks the user can hold
# - sigpending - max number of pending signals
# - msgqueue - max memory used by POSIX message queues (bytes)
# - nice - max nice priority allowed to raise to values: [-20, 19]
# - rtprio - max realtime priority
#
#<domain> <type> <item> <value>
#
#* soft core 0
#* hard rss 10000
#@student hard nproc 20
#@faculty soft nproc 20
#@faculty hard nproc 50
#ftp hard nproc 0
#@student - maxlogins 4
# End of file
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
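The ten entries above follow the same five-line pattern per user, so they can be generated instead of typed; a sketch (redirect the output into /etc/security/limits.conf on both nodes):

```shell
# Emit the five RAC limit entries for each installation owner.
for u in oracle grid; do
  printf '%s soft nproc 2047\n'   "$u"
  printf '%s hard nproc 16384\n'  "$u"
  printf '%s soft nofile 1024\n'  "$u"
  printf '%s hard nofile 65536\n' "$u"
  printf '%s soft stack 10240\n'  "$u"
done
```

After the users exist, the limits can be verified on a fresh login, e.g. `su - oracle -c 'ulimit -u -Hn'`.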
1.7 Install required RPM packages
[root@rac1 ~]# yum -y install binutils compat-libstdc++-33 gcc gcc-c++ glibc glibc-common glibc-devel ksh libaio libaio-devel libgcc libstdc++ libstdc++-devel make sysstat openssh-clients compat-libcap1 xorg-x11-utils xorg-x11-xauth elfutils unixODBC unixODBC-devel libXp elfutils-libelf elfutils-libelf-devel smartmontools glibc-headers
2. User configuration
2.1 Add the oracle and grid users
[root@rac1 ~]# groupadd -g 54321 oinstall
[root@rac1 ~]# groupadd -g 54322 dba
[root@rac1 ~]# groupadd -g 54323 oper
[root@rac1 ~]# groupadd -g 54324 backupdba
[root@rac1 ~]# groupadd -g 54325 dgdba
[root@rac1 ~]# groupadd -g 54326 kmdba
[root@rac1 ~]# groupadd -g 54327 asmdba
[root@rac1 ~]# groupadd -g 54328 asmoper
[root@rac1 ~]# groupadd -g 54329 asmadmin
[root@rac1 ~]# groupadd -g 54330 racdba
[root@rac1 ~]# useradd -u 54321 -g oinstall -G dba,asmdba,backupdba,dgdba,kmdba,racdba,oper oracle
[root@rac1 ~]# useradd -u 54322 -g oinstall -G asmadmin,asmdba,asmoper,dba grid
[root@rac1 ~]# passwd oracle
[root@rac1 ~]# passwd grid
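The ten groupadd calls assign GIDs 54321-54330 in order, and RAC requires identical UIDs/GIDs on every node, so scripting them guarantees node 2 matches node 1 exactly. A sketch that prints the commands (pipe the output to sh as root to execute them):

```shell
# Print the groupadd commands with sequential GIDs starting at 54321.
gid=54321
for g in oinstall dba oper backupdba dgdba kmdba asmdba asmoper asmadmin racdba; do
  echo "groupadd -g $gid $g"
  gid=$((gid + 1))
done
```

Afterwards, compare `id oracle` and `id grid` across both nodes to confirm the IDs match.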
2.2 Create directories
[root@rac1 ~]# mkdir -p /oracle/app/11.2.0/grid
[root@rac1 ~]# mkdir -p /oracle/app/grid
[root@rac1 ~]# mkdir -p /oracle/app
[root@rac1 ~]# mkdir -p /oracle/app/oracle/product/11.2.0/dbhome_1
[root@rac1 ~]# chown -R oracle:oinstall /oracle
[root@rac1 ~]# chown -R grid:oinstall /oracle/app/11.2.0/grid
[root@rac1 ~]# chown -R grid:oinstall /oracle/app/grid
[root@rac1 ~]# chown -R oracle:oinstall /oracle/app/oracle
[root@rac1 ~]# chmod 771 /oracle/
[root@rac1 ~]# chmod 771 /oracle/app
2.3 Set the oracle user's environment variables
[root@rac1 ~]# su - oracle
[oracle@rac1 ~]$ vi .bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/.local/bin:$HOME/bin
export PATH
export ORACLE_BASE=/oracle/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1
export ORACLE_SID=zzw1 # the instance names on the two nodes are zzw1 and zzw2 respectively
export LANG=en_US.UTF-8
export NLS_LANG=american_america.ZHS16GBK
export NLS_DATE_FORMAT="yyyy-mm-dd hh24:mi:ss"
export PATH=.:${PATH}:$HOME/bin:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch
export PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
export PATH=${PATH}:$ORACLE_BASE/common/oracle/bin:/home/oracle/run
export ORACLE_TERM=xterm
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
export CLASSPATH=$ORACLE_HOME/JRE
export CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
export CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
export CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export THREADS_FLAG=native
export TEMP=/tmp
export TMPDIR=/tmp
export GI_HOME=/oracle/app/11.2.0/grid
export PATH=${PATH}:$GI_HOME/bin
export ORA_NLS10=$GI_HOME/nls/data
umask 022
export TMOUT=0
2.4 Set the grid user's environment variables
[grid@rac1 ~]$ vi .bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/.local/bin:$HOME/bin
export PATH
export ORACLE_BASE=/oracle/app/grid
export ORACLE_HOME=/oracle/app/11.2.0/grid
export ORACLE_SID=+ASM1 # +ASM1 on node 1, +ASM2 on node 2
export NLS_LANG=american_america.ZHS16GBK
export NLS_DATE_FORMAT="yyyy-mm-dd hh24:mi:ss"
export PATH=.:${PATH}:$HOME/bin:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch
export PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
export PATH=${PATH}:$ORACLE_BASE/common/oracle/bin
export ORACLE_TERM=xterm
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
export CLASSPATH=$ORACLE_HOME/JRE
export CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
export CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
export CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export THREADS_FLAG=native
export TEMP=/tmp
export TMPDIR=/tmp
umask 022
export TMOUT=0
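A quick way to confirm a profile exports what you expect is to source it in a subshell; a sketch, demonstrated on a throwaway stand-in profile (on the nodes, source the real ~/.bash_profile of each user instead):

```shell
# Build a minimal stand-in profile and source it in a subshell so the
# current environment is left untouched.
prof=$(mktemp)
cat > "$prof" <<'EOF'
export ORACLE_BASE=/oracle/app/grid
export ORACLE_HOME=/oracle/app/11.2.0/grid
export ORACLE_SID=+ASM1
EOF

( . "$prof"; echo "$ORACLE_SID @ $ORACLE_HOME" )   # prints: +ASM1 @ /oracle/app/11.2.0/grid
rm -f "$prof"
```

Remember that ORACLE_SID must differ per node (+ASM1/+ASM2 for grid, zzw1/zzw2 for oracle).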
3. Binding disks with udev
3.1 Create four new virtual disks
Shut down both VMs first.
Create three 1 GB disks and one 10 GB disk, in that order.
Attach the disks to SCSI nodes in the order they were created.
When adding the disks to the second VM, choose "use an existing disk",
locate the four disks just created, and attach them to the same SCSI nodes.
3.2 Modify the VM parameters
Locate the VM's .vmx parameter file
and append the following at the end of the file:
diskLib.dataCacheMaxSize = "0"
diskLib.dataCacheMaxReadAheadSize = "0"
diskLib.dataCacheMinReadAheadSize = "0"
diskLib.dataCachePageSize = "4096"
diskLib.maxUnsyncedWrites = "0"
disk.locking = "FALSE"
disk.enableUUID = "TRUE"
Add these lines on both VMs, save the files, and then power the VMs back on.
3.3 Bind the disk groups
[root@rac1 ~]# fdisk -l
Disk /dev/sda: 42.9 GB, 42949672960 bytes, 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000b3588
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 2099199 1048576 83 Linux
/dev/sda2 2099200 83886079 40893440 8e Linux LVM
Disk /dev/sdb: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdc: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdd: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sde: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/centos-root: 33.4 GB, 33386659840 bytes, 65208320 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/centos-swap: 8485 MB, 8485076992 bytes, 16572416 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
First verify that the disks attached successfully on both nodes,
then check that each disk's UUID is identical on both sides:
[root@rac1 ~]# /usr/lib/udev/scsi_id -g -u -d /dev/sdb
36000c299fee8a318f3d12deab7b4991a
[root@rac1 ~]# /usr/lib/udev/scsi_id -g -u -d /dev/sdc
36000c2933c41c168f10bd3fee9abe77c
[root@rac1 ~]# /usr/lib/udev/scsi_id -g -u -d /dev/sdd
36000c29d9d88b533e4f69484ae8a58e9
[root@rac1 ~]# /usr/lib/udev/scsi_id -g -u -d /dev/sde
36000c29e3a08e62f7c157aa0012e84d1
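Each rule line below differs only in device name, UUID, and symlink name, so the lines can be generated instead of hand-edited, which avoids paste errors in the long UUID strings. A sketch; `make_rule` is a hypothetical helper, shown here with the sdb UUID from above:

```shell
# make_rule <device> <uuid> <symlink>: print one udev rule line.
make_rule() {
  printf 'KERNEL=="%s", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/%s", RESULT=="%s", SYMLINK+="%s", OWNER="grid", GROUP="asmadmin", MODE="0660"\n' \
    "$1" "$1" "$2" "$3"
}

# On a live node, feed the UUID straight from scsi_id, e.g.:
#   make_rule sdb "$(/usr/lib/udev/scsi_id -g -u -d /dev/sdb)" asm-crs1 \
#     >> /etc/udev/rules.d/99-oracle-asmdevices.rules
make_rule sdb 36000c299fee8a318f3d12deab7b4991a asm-crs1
```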
Create the binding rules file on both nodes:
[root@rac1 ~]# vi /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="sdb", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/sdb", RESULT=="36000c299fee8a318f3d12deab7b4991a", SYMLINK+="asm-crs1", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sdc", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/sdc", RESULT=="36000c2933c41c168f10bd3fee9abe77c", SYMLINK+="asm-crs2", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sdd", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/sdd", RESULT=="36000c29d9d88b533e4f69484ae8a58e9", SYMLINK+="asm-crs3", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sde", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/sde", RESULT=="36000c29e3a08e62f7c157aa0012e84d1", SYMLINK+="asm-data", OWNER="grid", GROUP="asmadmin", MODE="0660"
After editing the rules file on both nodes, run the following commands in order:
[root@rac1 ~]# systemctl status systemd-udevd.service
[root@rac1 ~]# systemctl enable systemd-udevd.service
[root@rac1 ~]# /usr/sbin/udevadm control --reload-rules
[root@rac1 ~]# /usr/sbin/udevadm trigger --type=devices
[root@rac1 ~]# ls -l /dev/sd*
brw-rw---- 1 root disk 8, 0 Nov 14 11:23 /dev/sda
brw-rw---- 1 root disk 8, 1 Nov 14 11:23 /dev/sda1
brw-rw---- 1 root disk 8, 2 Nov 14 11:23 /dev/sda2
brw-rw---- 1 grid asmadmin 8, 16 Nov 14 11:52 /dev/sdb
brw-rw---- 1 grid asmadmin 8, 32 Nov 14 11:53 /dev/sdc
brw-rw---- 1 grid asmadmin 8, 49 Nov 14 11:56 /dev/sdd
brw-rw---- 1 grid asmadmin 8, 65 Nov 14 11:56 /dev/sde
[root@rac1 ~]# ls -l /dev/asm*
lrwxrwxrwx 1 root root 4 Nov 14 11:57 /dev/asm-crs1 -> sdb
lrwxrwxrwx 1 root root 4 Nov 14 11:56 /dev/asm-crs2 -> sdc
lrwxrwxrwx 1 root root 4 Nov 14 11:56 /dev/asm-crs3 -> sdd
lrwxrwxrwx 1 root root 4 Nov 14 11:56 /dev/asm-data -> sde
4. Install the cluster software
4.1 Upload the grid installation package to the grid user's home directory
[grid@rac1 ~]$ ll
-rwxrwxr-x 1 grid oinstall 1205251894 Feb 27 2015 p13390677_112040_Linux-x86-64_3of7.zip
After uploading, adjust the package's ownership and permissions so the grid user can read it.
4.2 Unpack the installation package
[grid@rac1 ~]$ unzip p13390677_112040_Linux-x86-64_3of7.zip
[grid@rac1 ~]$ ll
drwxr-xr-x 7 grid oinstall 156 Aug 27 2013 grid
drwxr-xr-x 3 grid oinstall 18 Jul 5 21:43 oradiag_grid
-rwxrwxr-x 1 grid oinstall 1205251894 Feb 27 2015 p13390677_112040_Linux-x86-64_3of7.zip
[grid@rac1 ~]$ cd grid/
[grid@rac1 grid]$ ./runInstaller
Run the installer's automatic fix script on both nodes when prompted.
Then, as root, run the two post-install scripts on each node: run the first script (orainstRoot.sh) on node 1 and then node 2, and only after that run the second script (root.sh) on node 1 and then node 2.
While the second script runs you will hit a known 11.2.0.4-on-Linux-7 bug: root.sh hangs because ohasd does not start under systemd. A widely used workaround is to run, in a second terminal window as root while root.sh is waiting:
/bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1
The error shown after installation completes can be ignored.
4.3 Verify that the grid cluster installed successfully
[grid@rac1 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[grid@rac1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS.dg
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.DATA.dg
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.LISTENER.lsnr
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.asm
ONLINE ONLINE rac1 Started
ONLINE ONLINE rac2 Started
ora.gsd
OFFLINE OFFLINE rac1
OFFLINE OFFLINE rac2
ora.net1.network
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.ons
ONLINE ONLINE rac1
ONLINE ONLINE rac2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE rac2
ora.cvu
1 ONLINE ONLINE rac1
ora.oc4j
1 ONLINE ONLINE rac2
ora.rac1.vip
1 ONLINE ONLINE rac1
ora.rac2.vip
1 ONLINE ONLINE rac2
ora.scan1.vip
1 ONLINE ONLINE rac2
ora.zzw.db
1 ONLINE ONLINE rac1 Open
2 ONLINE ONLINE rac2 Open
5. Install the database software
Upload the database archives to the oracle user's home directory and set their permissions:
[oracle@rac1 ~]$ ll
-rwxrwxr-x 1 oracle oinstall 1212620273 Nov 7 2019 p13390677_112040_Linux-x86-64_1of7.zip
-rwxrwxr-x 1 oracle oinstall 113112960 Nov 7 2019 p13390677_112040_Linux-x86-64_2of7.zip
Unzip both archives, then start the installation:
[oracle@rac1 ~]$ cd database/
[oracle@rac1 database]$ ./runInstaller
When prompted, run the root.sh script as root on both nodes.
As the grid user, run asmca to create the data disk group:
[grid@rac1 ~]$ asmca
Then, as the oracle user, run dbca to create the database:
[oracle@rac1 ~]$ dbca