
    Installation tutorial:

    🎃 Configure the environment

    ORACLE_BASE=/oracle/11.2.0/11.2.0
    ORACLE_HOME=/oracle/11.2.0/grid/crs

    ORACLE_SID=+ASM

    LANG=C

    PATH=$ORACLE_HOME/bin:$PATH:$HOME/bin

    export PATH ORACLE_BASE ORACLE_HOME ORACLE_SID LANG

    🎈 As root, run: xhost +

    🎆 Switch to the grid user: su - grid

    🎊 Set the display: export DISPLAY=:0.0

    ✨ Enter the installation directory and run ./runInstaller

    Installation begins:

    Select the language; if you want Chinese, pick it yourself from the list (-- Slovak ---> Chinese).

    Test SSH user equivalence (mutual trust):

    Next.

    Next. If this step fails with error INS-41112:

    check connectivity between the private-interconnect addresses, and make absolutely sure the firewall is disabled!
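
    A quick pre-check along these lines can save a retry (a minimal sketch; rac1-priv/rac2-priv stand for the private-interconnect hostnames in /etc/hosts, and the service/chkconfig commands assume the RHEL 6 environment eventually used here):

    ping -c 2 rac1-priv
    ping -c 2 rac2-priv
    service iptables stop
    chkconfig iptables off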

    Missing packages need to be installed.

    For the earlier kernel-parameter errors, add the following on both nodes:

    kernel.shmall = 2097152

    fs.file-max = 6815744

    fs.aio-max-nr = 1048576

    The later failures were due to missing packages; install them as follows.

    Inside the grid installation media there is an rpm package; install it (both nodes need it).
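
    A sketch of both fixes, run on each node (paths are illustrative; in the 11gR2 grid media the package in question is rpm/cvuqdisk-1.0.9-1.rpm):

    cat >> /etc/sysctl.conf <<EOF
    kernel.shmall = 2097152
    fs.file-max = 6815744
    fs.aio-max-nr = 1048576
    EOF
    sysctl -p
    rpm -ivh /path/to/grid_media/rpm/cvuqdisk-*.rpm   #illustrative path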

    Click to continue.

    Run the scripts exactly as the prompt instructs; they must be run on both nodes!

    OK, a batch of ignorable warnings...

    ------

    Running the scripts must follow a strict order!

    First run the first script on node rac1; once it completes, run the first script on rac2.

    Only after rac2's first script has finished do you run the second script on rac1.

    Finally, run the second script on rac2!
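
    In script form the required sequence looks like this (a sketch; orainstRoot.sh and root.sh are the two scripts the installer prompts for, and the paths are illustrative):

    ssh root@rac1 /u01/app/oraInventory/orainstRoot.sh   #step 1
    ssh root@rac2 /u01/app/oraInventory/orainstRoot.sh   #step 2, only after step 1 finishes
    ssh root@rac1 /u01/app/11.2.0/grid/root.sh           #step 3, only after step 2 finishes
    ssh root@rac2 /u01/app/11.2.0/grid/root.sh           #step 4, last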

    Node rac1:

    Node rac2:

    Node rac1:

    An error occurred:

    Hit a bug.

    Switched the OS... to Red Hat 6.

    Success!

    Run on node2.

    If a problem shows up, click Skip and review it.

    Close when finished.

    https://blog.csdn.net/zx_canway/article/details/72768105

    https://www.cnblogs.com/wuwanyu/p/8275989.html



    Preface

    The first advanced topic is, of course, installing a RAC cluster database in a Linux environment!


    Note: what follows is the body of this article; the walkthrough below is for reference (pay attention to which node(s) each heading tells you to operate on).

    I. Basic environment


    IP address plan


    Additions to the .vmx file

    Because this is a VMware virtual-machine environment, the VM's configuration file needs the following settings.

    #Add to the .vmx file on nodes 1 and 2:
    disk.EnableUUID="TRUE" 
    disk.locking = "FALSE" 
    scsi1.shared = "TRUE" 
    diskLib.dataCacheMaxSize = "0" 
    diskLib.dataCacheMaxReadAheadSize = "0" 
    diskLib.dataCacheMinReadAheadSize = "0"
    diskLib.dataCachePageSize= "4096"
    diskLib.maxUnsyncedWrites = "0" 
    scsi1.present = "TRUE" 
    scsi1.virtualDev = "lsilogic" 
    scsi1.sharedBus = "VIRTUAL"
    

    II. Detailed RAC setup steps

    1. Install the preinstall rpm on nodes 1 and 2

    You can refer to my article:
    https://blog.csdn.net/weixin_41607523/article/details/110797695?spm=1001.2014.3001.5501

    mkdir software
    
    [root@localhost software]# yum install oracle-database-preinstall-19c-1.0-1.el7.x86_64.rpm   #node 1
    [root@localhost software]# yum install oracle-database-preinstall-19c-1.0-1.el7.x86_64.rpm   #node 2
    

    If the rpm cannot be installed, refer to: https://blog.csdn.net/weixin_41607523/article/details/110482175?spm=1001.2014.3001.5501
    (Chapter 1, environment setup) and create the users, install the dependency packages, and change the kernel parameters and resource limits by hand.

    2. Install the required dependency packages on nodes 1 and 2:

    yum install -y binutils
    yum install -y compat-libcap1
    yum install -y compat-libstdc++-33
    yum install -y compat-libstdc++-33.i686
    yum install -y gcc
    yum install -y gcc-c++
    yum install -y glibc
    yum install -y glibc.i686
    yum install -y glibc-devel
    yum install -y glibc-devel.i686
    yum install -y ksh
    yum install -y libgcc
    yum install -y libgcc.i686
    yum install -y libstdc++
    yum install -y libstdc++.i686
    yum install -y libstdc++-devel
    yum install -y libstdc++-devel.i686
    yum install -y libaio
    yum install -y libaio.i686
    yum install -y libaio-devel
    yum install -y libaio-devel.i686
    yum install -y libXext
    yum install -y libXext.i686
    yum install -y libXtst
    yum install -y libXtst.i686
    yum install -y libX11
    yum install -y libX11.i686
    yum install -y libXau
    yum install -y libXau.i686
    yum install -y libxcb
    yum install -y libxcb.i686
    yum install -y libXi
    yum install -y libXi.i686
    yum install -y make
    yum install -y sysstat
    yum install -y unixODBC
    yum install -y unixODBC-devel
    yum install -y readline
    yum install -y libtermcap-devel
    yum install -y bc
    yum install -y compat-libstdc++
    yum install -y elfutils-libelf
    yum install -y elfutils-libelf-devel
    yum install -y fontconfig-devel
    yum install -y libXi
    yum install -y libXtst
    yum install -y libXrender
    yum install -y libXrender-devel
    yum install -y libgcc
    yum install -y librdmacm-devel
    yum install -y libstdc++
    yum install -y libstdc++-devel
    yum install -y net-tools
    yum install -y nfs-utils
    yum install -y python
    yum install -y python-configshell
    yum install -y python-rtslib
    yum install -y python-six
    yum install -y targetcli
    yum install -y smartmontools
    

    Note: RHEL 7 also needs one standalone package installed separately: rpm -ivh compat-libstdc++-33-3.2.3-72.el7.x86_64.rpm

    3. Create groups and group memberships on nodes 1 and 2:

    [root@localhost software]# groupadd -g 54332 asmdba
    [root@localhost software]# groupadd -g 54331 asmadmin
    [root@localhost software]# groupadd -g 54333 asmoper
    [root@localhost software]# useradd -u 54322 -g oinstall -G dba,oper,asmadmin,asmdba,asmoper,racdba grid
    [root@localhost software]# usermod -a -G asmdba oracle
    #Remember to set passwords for the new users
    

    4. Adjust the soft/hard resource limits on nodes 1 and 2:

    [root@localhost software]# cd /etc/security/limits.d/
    [root@localhost limits.d]# pwd
    /etc/security/limits.d
    [root@localhost limits.d]# vi oracle-database-preinstall-19c.conf
    grid     soft   nofile    1024
    grid     hard   nofile    65536
    grid     soft   nproc     16384
    grid     hard   nproc     16384
    grid     soft   stack     10240
    grid     hard   stack     32768
    grid     hard   memlock    134217728
    grid     soft   memlock    134217728
    

    Note: to check that the new limits actually take effect, log in again with su - <user>,
    then inspect each user's limits with ulimit -a.
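
    For example, a quick check (the limits only apply to new login sessions):

    su - grid -c 'ulimit -n'     #soft nofile, expect 1024
    su - grid -c 'ulimit -Hn'    #hard nofile, expect 65536
    su - grid -c 'ulimit -a'     #full listing for the grid user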

    5. Configure IPs and hostnames on nodes 1 and 2:

    [root@localhost limits.d]# vi /etc/hosts
    192.168.221.88 rac1
    192.168.221.99 rac2
    10.10.10.88 rac1-priv
    10.10.10.99 rac2-priv
    192.168.221.123 rac1-vip
    192.168.221.124 rac2-vip
    192.168.221.125 rac-scanip
    [root@localhost limits.d]# hostname rac1
    #or: hostnamectl set-hostname rac1
    [root@localhost limits.d]# hostname
    rac1
    [root@localhost limits.d]# exit
    

    6. Host-only network settings for nodes 1 and 2:

    Control Panel > Network and Internet > Network Connections > VMware Network Adapter VMnet1

    [root@rac1 network-scripts]# systemctl restart network
    

    7. Bind the disks with UDEV on nodes 1 and 2:

    [root@rac1 ~]# /usr/lib/udev/scsi_id -g -u -d /dev/sdb
    36000c29473a235055b42ace68faa51fe
    [root@rac1 ~]# /usr/lib/udev/scsi_id -g -u -d /dev/sdc
    36000c297a674cbf99c29bce35b598bf6
    [root@rac1 ~]# /usr/lib/udev/scsi_id -g -u -d /dev/sdd
    36000c29cc898162b85ed20bd99c3f5fc
    
    [root@rac1 ~]# vi /etc/udev/rules.d/99-oracle-asmdevices.rules
    


    KERNEL=="sdb", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$name", RESULT=="36000c29473a235055b42ace68faa51fe", SYMLINK+="asm-ocrdisk1", OWNER="grid", GROUP="asmadmin", MODE="0660"
    KERNEL=="sdc", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$name", RESULT=="36000c297a674cbf99c29bce35b598bf6", SYMLINK+="asm-ocrdisk2", OWNER="grid", GROUP="asmadmin", MODE="0660"
    KERNEL=="sdd", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$name", RESULT=="36000c29cc898162b85ed20bd99c3f5fc", SYMLINK+="asm-ocrdisk3", OWNER="grid", GROUP="asmadmin", MODE="0660"
    KERNEL=="sde", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$name", RESULT=="36000c290242af74a235cc4aa010055e9", SYMLINK+="asm-ocrdiskdata", OWNER="grid", GROUP="asmadmin", MODE="0660"
    
    [root@rac1 ~]# udevadm control --reload-rules
    [root@rac1 ~]# udevadm trigger
    [root@rac1 ~]# ll /dev/asm*
    

    Check the owner/group and permissions:
    ls -l /dev/sd*
    Copy the rules file to node 2:

    [root@rac2 ~]# scp rac1:/etc/udev/rules.d/99-oracle-asmdevices.rules /etc/udev/rules.d/99-oracle-asmdevices.rules
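    #After copying, activate the rules on node 2 as well (same commands as node 1):
    [root@rac2 ~]# udevadm control --reload-rules
    [root@rac2 ~]# udevadm trigger
    [root@rac2 ~]# ll /dev/asm*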
    

    8. Create the grid and oracle directories on nodes 1 and 2:

    mkdir -p /u01/app/oracle/product/19c/dbhome_1  
    mkdir -p /u01/grid/product/19c/gridhome_1  
    mkdir -p /u01/gridbase
    chown -R oracle.oinstall /u01/
    chown -R grid.oinstall /u01/grid*
    

    9. Set the grid and oracle user environment variables on nodes 1 and 2:

    Set the environment variables for the grid and oracle users on nodes 1 and 2. Only node 1 is recorded here; node 2 is identical except for ORACLE_SID:

    Node 1, oracle:
    export ORACLE_BASE=/u01/app/oracle
    export ORACLE_HOME=$ORACLE_BASE/product/19c/dbhome_1
    export ORACLE_SID=orcl1
    export PATH=.:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$PATH:$HOME/bin
    export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
    umask 022
    
    Node 1, grid:
    export ORACLE_SID=+ASM1
    export ORACLE_BASE=/u01/gridbase
    export ORACLE_HOME=/u01/grid/product/19c/gridhome_1
    export PATH=.:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$PATH:$HOME/bin
    umask 022
    
    Node 2, oracle:
    export ORACLE_BASE=/u01/app/oracle
    export ORACLE_HOME=$ORACLE_BASE/product/19c/dbhome_1
    export ORACLE_SID=orcl2
    export PATH=.:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$PATH:$HOME/bin
    export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
    umask 022
    
    Node 2, grid:
    export ORACLE_SID=+ASM2
    export ORACLE_BASE=/u01/gridbase
    export ORACLE_HOME=/u01/grid/product/19c/gridhome_1
    export PATH=.:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$PATH:$HOME/bin
    umask 022
    [grid@rac1 ~]$ source .bash_profile
    [grid@rac2 ~]$ source .bash_profile
    

    10. Modify the root user's variables on nodes 1 and 2:

    [root@rac1 ~]# vi .bash_profile
    #Add grid's ORACLE_HOME bin directory to root's PATH
    PATH=$PATH:/u01/grid/product/19c/gridhome_1/bin:$HOME/bin
    

    11. Transfer and unzip the installation packages on nodes 1 and 2:

    Node 1, oracle user:
    [oracle@rac1 dbhome_1]$ pwd
    /u01/app/oracle/product/19c/dbhome_1
    [oracle@rac1 dbhome_1]$ ls
    LINUX.X64_193000_db_home.zip
    [oracle@rac1 dbhome_1]$ unzip LINUX.X64_193000_db_home.zip
    Node 1, grid user:
    [grid@rac1 dbhome_1]$ cd /u01/grid/product/19c/gridhome_1/
    [grid@rac1 gridhome_1]$ ls
    LINUX.X64_193000_grid_home.zip
    [grid@rac1 gridhome_1]$ unzip LINUX.X64_193000_grid_home.zip
    

    12. Install the cvuqdisk package on nodes 1 and 2:

    Both nodes need the cvuqdisk-1.0.10-1.rpm package. It is not on the Linux installation media; look for it in the unzipped grid installation files, in the rpm directory under cv.

    [root@rac1 ~]# cd /u01/grid/product/19c/gridhome_1/cv/rpm/
    [root@rac1 rpm]# ls
    cvuqdisk-1.0.10-1.rpm
    [root@rac1 rpm]# yum install cvuqdisk-1.0.10-1.rpm
    #Copy the rpm to node 2:
    [root@rac1 rpm]# scp cvuqdisk-1.0.10-1.rpm 10.10.10.99:/u01/
    [root@rac2 u01]# yum install cvuqdisk-1.0.10-1.rpm
    

    13. Disable the ntpd time-sync service on nodes 1 and 2 (optional):

    [root@rac2 u01]# systemctl disable ntpd.service
    [root@rac2 u01]# systemctl stop ntpd.service
    [root@rac2 u01]# mv /etc/ntp.conf /etc/ntp.conf.orig
    [root@rac2 u01]# systemctl status ntpd
    ● ntpd.service - Network Time Service
       Loaded: loaded (/usr/lib/systemd/system/ntpd.service; disabled; vendor preset: disabled)
       Active: inactive (dead)
    [root@rac2 u01]# timedatectl list-timezones |grep Shanghai
    Asia/Shanghai
    [root@rac2 u01]# timedatectl set-timezone Asia/Shanghai
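    #With ntpd removed, Oracle's Cluster Time Synchronization Service (CTSS)
    #takes over in active mode. Once Grid Infrastructure is running, this can
    #be verified as the grid user (it should report CTSS in active mode):
    crsctl check ctss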
    

    14. Stop the avahi-daemon service on nodes 1 and 2:

    [root@rac2 u01]# systemctl disable avahi-daemon.socket
    Removed symlink /etc/systemd/system/sockets.target.wants/avahi-daemon.socket.
    [root@rac2 u01]# systemctl disable avahi-daemon.service
    Removed symlink /etc/systemd/system/multi-user.target.wants/avahi-daemon.service.
    Removed symlink /etc/systemd/system/dbus-org.freedesktop.Avahi.service.
    
    #If it is still running, find and kill it:
    ps -ef|grep avahi-daemon
    kill -9 <pid>    #the avahi-daemon PID from the output above
    

    15. Disable transparent huge pages on nodes 1 and 2 (Oracle officially recommends turning THP off):

    [root@rac2 u01]# vi /etc/default/grub
    #Append at the end: GRUB_CMDLINE_LINUX="rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet transparent_hugepage=never"
    
    [root@rac2 u01]# grub2-mkconfig -o /boot/grub2/grub.cfg
    Generating grub configuration file ...
    Found linux image: /boot/vmlinuz-3.10.0-957.el7.x86_64
    Found initrd image: /boot/initramfs-3.10.0-957.el7.x86_64.img
    Found linux image: /boot/vmlinuz-0-rescue-c1e3e15a96c847918efb7bf5b02166d3
    Found initrd image: /boot/initramfs-0-rescue-c1e3e15a96c847918efb7bf5b02166d3.img
    Done
    

    To take effect without rebooting:

    [root@rac2 u01]# echo never > /sys/kernel/mm/transparent_hugepage/enabled
    

    Check the status:

    [root@rac2 u01]# cat /sys/kernel/mm/transparent_hugepage/enabled
    always madvise [never]
    [root@rac2 u01]# grep AnonHugePages /proc/meminfo
    AnonHugePages:         0 kB
    

    A return value of 0 means it worked.

    16. Disable the firewall on nodes 1 and 2:

    [root@rac1 ~]# systemctl stop firewalld.service 
    [root@rac2 ~]# systemctl stop firewalld.service
    systemctl stop firewalld.service            #stop firewalld
    systemctl disable firewalld.service        #prevent firewalld from starting at boot
    

    III. GUI software installation

    1. Install Grid on node 1:

    [root@rac1 ~]# xhost +
    access control disabled, clients can connect from any host
    [grid@rac1 gridhome_1]$ DISPLAY=10.10.10.1:0.0
    [grid@rac1 gridhome_1]$ export DISPLAY
    [grid@rac1 gridhome_1]$ ./gridSetup.sh 
    Launching the Oracle Grid Infrastructure installation wizard...
    


    Prerequisite check: wrong owner/group

    Fix: change the owner information in vi /etc/udev/rules.d/99-oracle-asmdevices.rules, then re-check:
    [root@rac1 gridbase]# ls -l /dev/sd*

    Prerequisite check: kernel parameter / limit errors


    [root@localhost software]# cd /etc/security/limits.d/
    [root@localhost limits.d]# pwd
    /etc/security/limits.d
    [root@localhost limits.d]# vi oracle-database-preinstall-19c.conf
    grid     soft   nofile    1024
    grid     hard   nofile    65536
    grid     soft   nproc     16384
    grid     hard   nproc     16384
    grid     soft   stack     10240
    grid     hard   stack     32768
    grid     hard   memlock    134217728
    grid     soft   memlock    134217728
    


    [root@rac1 u01]# cd /u01/grid/product/19c/gridhome_1/bin/
    #Check the cluster status:
    [root@rac1 bin]# ./crsctl stat res -t
    

    Appendix: root scripts run manually on nodes 1 and 2

    Note: run the scripts on rac1 first; after they complete, run them on rac2.

    Script 1:
    [root@rac1 gridhome_1]# cd /u01/gridbase/oraInventory/
    [root@rac1 oraInventory]# ls
    ContentsXML  logs  oraInst.loc  orainstRoot.sh
    [root@rac1 oraInventory]# sh orainstRoot.sh
    


    Script 2:
    [root@rac1 app]# cd /u01/grid/product/19c/gridhome_1
    [root@rac1 gridhome_1]# sh root.sh
    


    2. Create the +DATA disk group on node 1:

    [grid@rac1 ~]$ asmca
    


    3. Install the DB software on node 1:

    [oracle@rac1 ~]$ cd $ORACLE_HOME
    [oracle@rac1 dbhome_1]$ DISPLAY=10.10.10.1:0.0
    [oracle@rac1 dbhome_1]$ export DISPLAY
    [oracle@rac1 dbhome_1]$ ./runInstaller
    


    Node 1:
    [root@rac1 ~]# cd /u01/app/oracle/product/19c/dbhome_1/
    [root@rac1 dbhome_1]# sh root.sh
    


    Node 2:
    [root@rac2 ~]# cd /u01/app/oracle/product/19c/dbhome_1/
    [root@rac2 dbhome_1]# sh root.sh
    


    4. Create the instance with DBCA on node 1:

    [oracle@rac1 dbhome_1]$ dbca

    IV. Common commands

    1. Starting the ASM instance

    Start on nodes 1 and 2 as needed (depending on the situation)
    [root@rac1 bin]# cd /u01/grid/product/19c/gridhome_1/bin
    [root@rac1 bin]# ./srvctl start asm
    

    2. Starting and stopping node cluster services

    Check the cluster status:
    crsctl stat res -t
    
    1. Stop the cluster services on a node (must be done as root):
    [root@rac1 oracle]# cd /u01/grid/product/19c/gridhome_1/bin
    [root@rac1 bin]# ./crsctl stop cluster        ----stop this node's cluster services
    [root@rac1 bin]# ./crsctl stop cluster -all   ---stop services on all nodes
    
    
    You can also specify which node(s) to stop:
    [root@rac1 bin]# crsctl stop cluster -n rac1 rac2
    2. Check this node's cluster state:
    [root@rac1 bin]# ./crsctl check crs
    
    3. Start cluster services on all nodes:
    [root@rac1 bin]# ./crsctl start cluster -n rac1 rac2
    
    [root@rac1 bin]# ./crsctl start cluster -all
    [root@rac2 ~]# ./crsctl check cluster
    

    3. Starting and stopping the database

    Stop the database:
    (run as the oracle user; any one node is enough)
    srvctl stop database -d orcl -o immediate
    
    Start the database:
    srvctl start database -d orcl
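    
    #To confirm the result (a quick check; assumes the database name orcl used above):
    srvctl status database -d orcl
    srvctl config database -d orcl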
    
    Oracle RAC Deployment (1)


    Part 1: General preparation

    1. System environment and Oracle RAC version

    centos6.7

    oracle 11.0.4

    2. Disable the firewall and relax the security policy (SELinux)

    #service iptables stop

    #service ip6tables stop

    #chkconfig iptables off

    #chkconfig ip6tables off

    #setenforce 0

     

    3. Two nodes: node1, node2

     

    Part 2: Deployment steps

    I. Configure the graphical interface (node1)

    In Xshell, configure the node1 session properties.

     

    On the node where GRID will be installed (node1 in this article),

    check whether graphical support is installed (list the installable groups; there should be an "X Window System" entry):

    # yum grouplist

    #yum -y groupinstall "X Window System"


     

    Install a simple graphical test tool:

    #yum -y install xclock

    II. Create users, directories, permissions, and user environment variables (node1, node2)

    #The following can go into a script

    # Create users, directories, permissions

    groupadd -g 1000 oinstall

    groupadd -g 1020 asmadmin

    groupadd -g 1021 asmdba

    groupadd -g 1022 asmoper

    groupadd -g 1031 dba

    groupadd -g 1032 oper

    useradd -u 1100 -g oinstall -G asmadmin,asmdba,asmoper,oper,dba grid

    useradd -u 1101 -g oinstall -G dba,asmdba,oper oracle

    mkdir -p /u01/app/11.2.0/grid

    mkdir -p /u01/app/grid

    chown -R grid:oinstall /u01

    mkdir -p /u01/app/oracle

    chown oracle:oinstall /u01/app/oracle

    chmod -R 775 /u01/

    echo 'oracle' | passwd --stdin grid

    echo 'oracle' | passwd --stdin oracle

     

    # Environment variables for the grid and oracle users

    cat >> /home/grid/.bash_profile << EOF

    # Grid

    ORACLE_BASE=/u01/app/grid

    ORACLE_HOME=/u01/app/11.2.0/grid

    ORACLE_SID=+ASM2   # node-specific: +ASM1 on node1, +ASM2 on node2

    PATH=\$PATH:\$ORACLE_HOME/bin

    export ORACLE_BASE ORACLE_HOME ORACLE_SID PATH

    EOF

     

    chown grid:oinstall /home/grid/.bash_profile

     

    cat >> /home/oracle/.bash_profile << EOF

    # Oracle

    ORACLE_SID=orcl2   # node-specific: orcl1 on node1, orcl2 on node2

    ORACLE_BASE=/u01/app/oracle

    ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1

    GRID_HOME=/u01/app/11.2.0/grid

    ORACLE_HOSTNAME=racl2.gisquest.com

    ORACLE_UNQNAME=orcl

    TNS_ADMIN=\$GRID_HOME/network/admin

    LD_LIBRARY_PATH=\$ORACLE_HOME/lib:/lib:/usr/lib

    CLASSPATH=\$ORACLE_HOME/jre:\$ORACLE_HOME/jlib:\$ORACLE_HOME/rdbms/jlib

    PATH=\$PATH:\$ORACLE_HOME/bin:\$GRID_HOME/bin

    export ORACLE_SID ORACLE_BASE ORACLE_HOME GRID_HOME ORACLE_HOSTNAME ORACLE_UNQNAME TNS_ADMIN LD_LIBRARY_PATH CLASSPATH

    export PATH

    EOF

     

    chown oracle:oinstall /home/oracle/.bash_profile

     

    III. Configure node networking (node1, node2)

    Both nodes must be configured.

     

    IV. DNS spoofing (node1, node2)

    The nslookup binary under /usr/bin on both nodes must be replaced. Run the following:

    # nslookup

    yum install bind-utils -y

    mv /usr/bin/nslookup /usr/bin/nslookup.orig ; cp nslookup  /usr/bin/

     

    [root@demo GRID_INSTALL]# cat nslookup 

    #!/bin/bash

    # nslookup

    HOSTNAME=$1

    if [[ $HOSTNAME = "rac1-vip" ]]

    then

         echo "Server:          223.5.5.5"

         echo "Address:         223.5.5.5#53"

         echo " "

         echo "Non-authoritative answer:"

         echo "Name:    rac1-vip"

         echo "Address: 192.168.41.113"

         echo " "

    elif [[ $HOSTNAME = "rac2-vip" ]]

    then

         echo "Server:          223.5.5.5"

         echo "Address:         223.5.5.5#53"

         echo " "

         echo "Non-authoritative answer:"

         echo "Name:    rac2-vip"

         echo "Address: 192.168.41.114"

         echo " "

    elif [[ $HOSTNAME = "rac1-vip.gisquest.com" ]]

    then

         echo "Server:          223.5.5.5"

         echo "Address:         223.5.5.5#53"

         echo " "

         echo "Non-authoritative answer:"

         echo "Name:    rac1-vip.gisquest.com"

         echo "Address: 192.168.41.113"

         echo " "

    elif [[ $HOSTNAME = "rac2-vip.gisquest.com" ]]

    then

         echo "Server:          223.5.5.5"

         echo "Address:         223.5.5.5#53"

         echo " "

         echo "Non-authoritative answer:"

         echo "Name:    rac2-vip.gisquest.com"

         echo "Address: 192.168.41.114"

         echo " "

    elif [[ $HOSTNAME = "rac1" ]]

    then

         echo "Server:          223.5.5.5"

         echo "Address:         223.5.5.5#53"

         echo " "

         echo "Non-authoritative answer:"

         echo "Name:    rac1"

         echo "Address: 192.168.41.103"

         echo " "

    elif [[ $HOSTNAME = "rac2" ]]

    then

         echo "Server:          223.5.5.5"

         echo "Address:         223.5.5.5#53"

         echo " "

         echo "Non-authoritative answer:"

         echo "Name:    rac2"

         echo "Address: 192.168.41.104"

         echo " "

    elif [[ $HOSTNAME = "rac1.gisquest.com" ]]

    then

         echo "Server:          223.5.5.5"

         echo "Address:         223.5.5.5#53"

         echo " "

         echo "Non-authoritative answer:"

         echo "Name:    rac1.gisquest.com"

         echo "Address: 192.168.41.103"

         echo " "

    elif [[ $HOSTNAME = "rac2.gisquest.com" ]]

    then

         echo "Server:          223.5.5.5"

         echo "Address:         223.5.5.5#53"

         echo " "

         echo "Non-authoritative answer:"

         echo "Name:    rac2.gisquest.com"

         echo "Address: 192.168.41.104"

         echo " "

    elif [[ $HOSTNAME = "cluster-scan.gisquest.com" ]]

    then

         echo "Server:          223.5.5.5"

         echo "Address:         223.5.5.5#53"

         echo " "

         echo "Non-authoritative answer:"

         echo "Name:    cluster-scan.gisquest.com"

         echo "Address: 192.168.41.180"

         echo "Address: 192.168.41.181"

         echo "Address: 192.168.41.182"

         echo " "

    elif [[ $HOSTNAME = "cluster-scan" ]]

    then

         echo "Server:          223.5.5.5"

         echo "Address:         223.5.5.5#53"

         echo " "

         echo "Non-authoritative answer:"

         echo "Name:    cluster-scan"

         echo "Address: 192.168.41.180"

         echo "Address: 192.168.41.181"

         echo "Address: 192.168.41.182"

         echo " "

    else

         /usr/bin/nslookup.orig $HOSTNAME

    fi 
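
    With the replacement in place, each hard-coded name resolves to the addresses above, and any other name falls through to the real binary (nslookup.orig). For example:

    # nslookup cluster-scan

    # nslookup www.oracle.com     #handled by /usr/bin/nslookup.orig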

     

    V. Kernel parameter changes (node1, node2)

    # Kernel

    cat >> /etc/sysctl.conf << EOF

    # For oracle

    fs.aio-max-nr = 1048576

    fs.file-max = 6815744

    kernel.shmall = 2097152

    kernel.shmmax = 4294967295

    kernel.shmmni = 4096

    kernel.sem = 250 32000 100 128

    net.ipv4.ip_local_port_range = 9000 65500

    net.core.rmem_default = 262144

    net.core.rmem_max = 4194304

    net.core.wmem_default = 262144

    net.core.wmem_max = 1048576

    EOF

     

    Apply the changes:

    sysctl -p

     

     

    VI. User resource limits (node1, node2)

    # Limit

    cat >> /etc/security/limits.conf << EOF

    grid             soft    nofile          1024

    grid             hard    nofile          65536

    grid             soft    nproc           2047

    grid             hard    nproc           16384

    grid             soft    stack           10240

    grid             hard    stack           32768

    oracle           soft    nofile          4096

    oracle           hard    nofile          65536

    oracle           soft    nproc           2047

    oracle           hard    nproc           16384

    EOF

     

    VII. Install dependency packages (node1, node2)

    # Yum

    yum install -y binutils compat-libcap1 compat-libstdc++-33 \

    gcc gcc-c++ glibc glibc-devel libgcc libstdc++ \

    libstdc++-devel libaio libaio-devel \

    make sysstat unixODBC unixODBC-devel \

    elfutils-libelf-devel

    yum install xclock ntp unzip smartmontools libcap-devel libcap -y

    yum localinstall pdksh-5.2.14-37.el5_8.1.x86_64.rpm -y

    yum localinstall cvuqdisk-1.0.9-1.rpm -y

    yum localinstall kmod-oracleasm-2.0.8-6.el6_7.x86_64.rpm oracleasmlib-2.0.4-1.el6.x86_64.rpm oracleasm-support-2.1.8-1.el6.x86_64.rpm -y

     

    VIII. NTP service (node1, node2)

    # Ntp

    chkconfig ntpd on

    echo  > /etc/sysconfig/ntpd

    cat >> /etc/sysconfig/ntpd << EOF

    OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"

    SYNC_HWCLOCK=no

    NTPDATE_OPTIONS=""

    EOF

     

    Restart the ntpd service:

    /etc/init.d/ntpd restart

     

    IX. SSH mutual authentication between the nodes (node1, node2)

    This is important: configure passwordless trust for both the grid and oracle users across the two nodes.

    #yum -y install openssh-server openssh-clients

    [oracle@racl1]$ ssh-keygen -t rsa

    [grid@racl1]$ ssh-keygen -t rsa

    [oracle@racl2]$ ssh-keygen -t rsa

    [grid@racl2]$ ssh-keygen -t rsa

     

    Run these on both nodes as both grid and oracle; do not skip a single one, because the first connection asks yes/no and writes the host into ~/.ssh/known_hosts.

    $exec /usr/bin/ssh-agent $SHELL

    $ssh-add

    $ssh rac1 date

    $ssh rac2 date

    $ssh rac1-priv date

    $ssh rac2-priv date

     

    X. ASM shared storage (node1)

    First prepare the iSCSI service and attach the disks:

    # yum install iscsi-initiator-utils

    # chkconfig iscsid on

    # chkconfig iscsi on

    # cat /etc/iscsi/initiatorname.iscsi

    #iscsiadm -m discovery -t sendtargets -p 192.168.20.34:3260

     

    Then turn them into ASM storage:

    #yum localinstall kmod-oracleasm-2.0.8-6.el6_7.x86_64.rpm oracleasmlib-2.0.4-1.el6.x86_64.rpm oracleasm-support-2.1.8-1.el6.x86_64.rpm -y

     

    #Partition the required disks; every disk must be partitioned (node1)

    #fdisk /dev/sda

    # /etc/init.d/oracleasm configure

    # /etc/init.d/oracleasm enable

    #/etc/init.d/oracleasm createdisk ASM1 /dev/sda1

    # /etc/init.d/oracleasm listdisks

    On the remaining nodes:

    # /etc/init.d/oracleasm  enable

    # /etc/init.d/oracleasm  scandisks

    # /etc/init.d/oracleasm  listdisks

    Oracle RAC: Detailed Deployment Process


    1. Install the Linux operating system:

    After installation completes, install the following packages:

    rpm -Uvh binutils-2.*

    rpm -Uvh elfutils-libelf-0.*

    rpm -Uvh glibc-2.*

    rpm -Uvh glibc-common-2.*

    rpm -Uvh libaio-0.*

    rpm -Uvh libgcc-4.*

    rpm -Uvh libstdc++-4.*

    rpm -Uvh make-3.*

    rpm -Uvh kernel-headers-2.6.18-274.el5.x86_64.rpm

    rpm -Uvh glibc-headers-2.*

    rpm -Uvh glibc-devel-2.*

    rpm -Uvh elfutils-libelf-0.137-3.el5.i386.rpm

    rpm -Uvh elfutils-libelf-devel*

    rpm -Uvh elfutils-libelf-devel-0.*

    rpm -Uvh gcc-4.*

    rpm -Uvh libstdc++-devel-4.*

    rpm -Uvh gcc-c++-4.*

    rpm -Uvh unixODBC-2.*

    rpm -Uvh compat-libstdc++-296*

    rpm -Uvh compat-libstdc++-33*

    rpm -Uvh libaio-devel-0.*

    rpm -Uvh libXp-1.*

    rpm -Uvh unixODBC-devel-2.*

    rpm -Uvh sysstat-7.*

    2. Configure NTP and FTP on the Linux systems

    3. Configure Linux networking:

    1) Edit the /etc/hosts file on both nodes:

    127.0.0.1              localhost.localdomain localhost

    #used by ntp service to synchronize time

    66.187.233.4    clock.redhat.com

     

    #local network for RAC

    192.168.0.131   ora10racn1      ora10racn1.ccz.com

    192.168.0.132   ora10racn2      ora10racn2.ccz.com

     

    192.168.0.151   ora10racn1-vip

    192.168.0.152   ora10racn2-vip

     

    192.168.2.131   ora10racn1-str

    192.168.2.132   ora10racn2-str

     

    10.10.0.131   ora10racn1-priv

    10.10.0.132   ora10racn2-priv

     

    192.168.0.110   openfiler

    192.168.2.110   openfiler-str

    2) Make sure the node hostnames do not appear on the loopback line:

    127.0.0.1              localhost.localdomain localhost

    3) Modify the network settings in /etc/sysctl.conf:

    #+---------------------------------------------------------+
    # | Default setting in bytes of the socket "receive" buffer |
    # | which may be set by using the SO_RCVBUF socket option.  |
    #+---------------------------------------------------------+
    net.core.rmem_default=1048576

    #+---------------------------------------------------------+
    # | Maximum setting in bytes of the socket "receive" buffer |
    # | which may be set by using the SO_RCVBUF socket option.  |
    #+---------------------------------------------------------+
    net.core.rmem_max=1048576

    #+---------------------------------------------------------+
    # | Default setting in bytes of the socket "send" buffer    |
    # | which may be set by using the SO_SNDBUF socket option.  |
    #+---------------------------------------------------------+
    net.core.wmem_default=262144

    #+---------------------------------------------------------+
    # | Maximum setting in bytes of the socket "send" buffer    |
    # | which may be set by using the SO_SNDBUF socket option.  |
    #+---------------------------------------------------------+
    net.core.wmem_max=262144

    4) Turn off UDP/ICMP rejection (iptables) on all nodes

    [root@ora10racn1 Server]# /etc/rc.d/init.d/iptables status
    Firewall is stopped.

    If it is running, stop it with:

    [root@ora10racn1 Server]# /etc/rc.d/init.d/iptables stop

    Disable iptables at every runlevel:

    [root@ora10racn1 Server]# chkconfig --list|grep iptables
    iptables        0:off  1:off   2:on    3:on   4:on    5:on    6:off
    [root@ora10racn1 Server]# chkconfig iptables off
    [root@ora10racn1 Server]# chkconfig --list|grep iptables
    iptables        0:off  1:off   2:off   3:off  4:off   5:off   6:off

    4. Create the oracle user and confirm the nobody account exists:

    # groupadd -g 501 oinstall

    # groupadd -g 502 dba

    # groupadd -g 503 oper

    # useradd -m -u 501 -g oinstall -G dba,oper -d /home/oracle -s /bin/bash oracle

     

    # id oracle

    uid=501(oracle) gid=501(oinstall) groups=501(oinstall),502(dba),503(oper)

    # passwd oracle

    Changing password for user oracle.

    New UNIX password: xxxxxxxxxxx

    Retype new UNIX password:xxxxxxxxxxx

    passwd: all authentication tokens updated successfully.

    Edit the oracle account's .bash_profile:

    export JAVA_HOME=/usr/local/java

    # User specific environment and startup programs
    export ORACLE_BASE=/u01/app/oracle
    export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1
    export ORA_CRS_HOME=/u01/app/crs
    export ORACLE_PATH=$ORACLE_BASE/dba_scripts/common/sql:.:$ORACLE_HOME/rdbms/admin
    export CV_JDKHOME=/usr/local/java

    # Each RAC node must have a unique ORACLE_SID. (i.e. racdb1, racdb2,...)
    export ORACLE_SID=racdb1

    export PATH=.:${JAVA_HOME}/bin:${PATH}:$HOME/bin:$ORACLE_HOME/bin
    export PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
    export PATH=${PATH}:$ORACLE_BASE/dba_scripts/common/bin
    export ORACLE_TERM=xterm
    export TNS_ADMIN=$ORACLE_HOME/network/admin
    export ORA_NLS10=$ORACLE_HOME/nls/data
    export NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"
    export LD_LIBRARY_PATH=$ORACLE_HOME/lib
    export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
    export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
    export CLASSPATH=$ORACLE_HOME/JRE
    export CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
    export CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
    export CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
    export THREADS_FLAG=native
    export TEMP=/tmp
    export TMPDIR=/tmp

    Check the nobody account:

    # id nobody
    uid=99(nobody) gid=99(nobody) groups=99(nobody)

    If nobody does not exist, add it:

    # /usr/sbin/useradd nobody

    5. Install Openfiler and create the volumes:

    6. Create the directories on each node:

    # mkdir -p /u01/app/oracle
    # chown -R oracle:oinstall /u01/app/oracle
    # chmod -R 775 /u01/app/oracle

    # mkdir -p /u01/app/crs
    # chown -R oracle:oinstall /u01/app/crs
    # chmod -R 775 /u01/app/crs

    # mkdir -p /u02
    # chown -R oracle:oinstall /u02
    # chmod -R 775 /u02

    7. Recognize the iSCSI volumes on each node

    1) Install the iSCSI initiator on each node:

    [root@ora10racn1 Server]# rpm -Uvh iscsi-initiator-utils-6.2.0.872-10.0.1.el5.x86_64.rpm
    warning: iscsi-initiator-utils-6.2.0.872-10.0.1.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
    Preparing...               ########################################### [100%]
            package iscsi-initiator-utils-6.2.0.872-10.0.1.el5.x86_64 is already installed

    2) Start the iSCSI daemon on each node:

    [root@ora10racn1 Server]# service iscsid start

    Starting iSCSI daemon:

    [  OK  ]

    3) From each node, discover the volumes on the Openfiler server:

    [root@ora10racn1 Server]# iscsiadm -m discovery -t sendtargets -p 192.168.2.110
    192.168.2.110:3260,1 iqn.2006-01.com.openfiler:racdb.asm2
    192.168.0.110:3260,1 iqn.2006-01.com.openfiler:racdb.asm2
    192.168.2.110:3260,1 iqn.2006-01.com.openfiler:racdb.asm1
    192.168.0.110:3260,1 iqn.2006-01.com.openfiler:racdb.asm1
    192.168.2.110:3260,1 iqn.2006-01.com.openfiler:racdb.crs1
    192.168.0.110:3260,1 iqn.2006-01.com.openfiler:racdb.crs1

    4) On both nodes, manually log in to the volumes and configure automatic login at reboot:

    [root@ora10racn1 Server]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.crs1 -p 192.168.2.110 -l
    Logging in to [iface: default, target: iqn.2006-01.com.openfiler:racdb.crs1, portal: 192.168.2.110,3260]
    Login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.crs1, portal: 192.168.2.110,3260] successful.
    [root@ora10racn1 Server]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.asm1 -p 192.168.2.110 -l
    Logging in to [iface: default, target: iqn.2006-01.com.openfiler:racdb.asm1, portal: 192.168.2.110,3260]
    Login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.asm1, portal: 192.168.2.110,3260] successful.
    [root@ora10racn1 Server]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.asm2 -p 192.168.2.110 -l
    Logging in to [iface: default, target: iqn.2006-01.com.openfiler:racdb.asm2, portal: 192.168.2.110,3260]
    Login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.asm2, portal: 192.168.2.110,3260] successful.

    [root@ora10racn1 Server]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.crs1 -p 192.168.2.110 --op update -n node.startup -v automatic
    [root@ora10racn1 Server]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.asm1 -p 192.168.2.110 --op update -n node.startup -v automatic
    [root@ora10racn1 Server]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.asm2 -p 192.168.2.110 --op update -n node.startup -v automatic

    5) On each node, create persistent naming rules so that device names do not depend on the order in which sessions connect:

    (cd /dev/disk/by-path; ls -l *openfiler* | awk '{FS=" "; print $9 " " $10 " " $11}')
    ip-192.168.2.110:3260-iscsi-iqn.2006-01.com.openfiler:racdb.asm1-lun-0 -> ../../sdc
    ip-192.168.2.110:3260-iscsi-iqn.2006-01.com.openfiler:racdb.asm2-lun-0 -> ../../sdd
    ip-192.168.2.110:3260-iscsi-iqn.2006-01.com.openfiler:racdb.crs1-lun-0 -> ../../sdb

    Create the udev rule:

    [root@ora10racn1 rules.d]# vi /etc/udev/rules.d/55-openiscsi.rules

     

    KERNEL=="sd*", BUS=="scsi",PROGRAM="/etc/udev/scripts/iscsidev.sh%b",SYMLINK+="iscsi/%c/part%n"

    Create the script the rule calls:

    [root@ora10racn2 Server]# mkdir -p /etc/udev/scripts

    [root@ora10racn2 scripts]# vi /etc/udev/scripts/iscsidev.sh

    #!/bin/sh

     

    # FILE: /etc/udev/scripts/iscsidev.sh

     

    BUS=${1}

    HOST=${BUS%%:*}

     

    [ -e /sys/class/iscsi_host ] || exit 1

     

    file="/sys/class/iscsi_host/host${HOST}/device/session*/iscsi_session*/targetname"

     

    target_name=$(cat ${file})

     

    # This is not an open-scsi drive

    if [ -z "${target_name}" ]; then

       exit 1

    fi

     

    # Check if QNAP drive

    check_qnap_target_name=${target_name%%:*}

    if [ $check_qnap_target_name = "iqn.2004-04.com.qnap" ]; then

    target_name=`echo "${target_name%.*}"`

    fi

     

    echo "${target_name##*.}"

    ~

    "iscsidev.sh" [New] 25L, 507C written

    [root@ora10racn2 scripts]# chmod 755 /etc/udev/scripts/iscsidev.sh

    Restart the iSCSI service:

    [root@ora10racn2 scripts]# service iscsi stop
    Logging out of session [sid: 1, target: iqn.2006-01.com.openfiler:racdb.crs1, portal: 192.168.2.110,3260]
    Logging out of session [sid: 2, target: iqn.2006-01.com.openfiler:racdb.asm1, portal: 192.168.2.110,3260]
    Logging out of session [sid: 3, target: iqn.2006-01.com.openfiler:racdb.asm2, portal: 192.168.2.110,3260]
    Logout of [sid: 1, target: iqn.2006-01.com.openfiler:racdb.crs1, portal: 192.168.2.110,3260] successful.
    Logout of [sid: 2, target: iqn.2006-01.com.openfiler:racdb.asm1, portal: 192.168.2.110,3260] successful.
    Logout of [sid: 3, target: iqn.2006-01.com.openfiler:racdb.asm2, portal: 192.168.2.110,3260] successful.
    Stopping iSCSI daemon:
    [root@ora10racn2 scripts]# service iscsi start
    iscsid is stopped
    [  OK  ] iSCSI daemon: [  OK  ]
    [  OK  ]
    Setting up iSCSI targets: Logging in to [iface: default, target: iqn.2006-01.com.openfiler:racdb.asm1, portal: 192.168.2.110,3260]
    Logging in to [iface: default, target: iqn.2006-01.com.openfiler:racdb.asm1, portal: 192.168.0.110,3260]
    Logging in to [iface: default, target: iqn.2006-01.com.openfiler:racdb.asm2, portal: 192.168.2.110,3260]
    Logging in to [iface: default, target: iqn.2006-01.com.openfiler:racdb.asm2, portal: 192.168.0.110,3260]
    Logging in to [iface: default, target: iqn.2006-01.com.openfiler:racdb.crs1, portal: 192.168.2.110,3260]
    Logging in to [iface: default, target: iqn.2006-01.com.openfiler:racdb.crs1, portal: 192.168.0.110,3260]
    Login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.asm1, portal: 192.168.2.110,3260] successful.
    iscsiadm: Could not login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.asm1, portal: 192.168.0.110,3260].
    iscsiadm: initiator reported error (19 - encountered non-retryable iSCSI login failure)
    Login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.asm2, portal: 192.168.2.110,3260] successful.
    iscsiadm: Could not login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.asm2, portal: 192.168.0.110,3260].
    iscsiadm: initiator reported error (19 - encountered non-retryable iSCSI login failure)
    Login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.crs1, portal: 192.168.2.110,3260] successful.
    iscsiadm: Could not login to [iface: default, target: iqn.2006-01.com.openfiler:racdb.crs1, portal: 192.168.0.110,3260].
    iscsiadm: initiator reported error (19 - encountered non-retryable iSCSI login failure)
    iscsiadm: Could not log into all portals
    [  OK  ]

    [root@ora10racn2 scripts]# ls -l /dev/iscsi/*

    /dev/iscsi/asm1:

    total 0

    lrwxrwxrwx 1 root root 9 May 16 15:16 part -> ../../sdc

     

    /dev/iscsi/asm2:

    total 0

    lrwxrwxrwx 1 root root 9 May 16 15:16 part -> ../../sdb

     

    /dev/iscsi/crs1:

    total 0

    lrwxrwxrwx 1 root root 9 May 16 15:16 part -> ../../sdd

    6) Partition the iSCSI volumes on one node (note: run this on a single node only):

    [root@ora10racn1 rules.d]# fdisk /dev/iscsi/crs1/part

    Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel

    Building a new DOS disklabel. Changes will remain in memory only,

    until you decide to write them. After that, of course, the previous

    content won't be recoverable.

     

    Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

     

    Command (m for help): n

    Command action

       e   extended

       p   primary partition (1-4)

    p

    Partition number (1-4): 1

    First cylinder (1-1009, default 1): 1

    Last cylinder or +size or +sizeM or +sizeK (1-1009, default 1009): 1009

     

    Command (m for help): p

     

    Disk /dev/iscsi/crs1/part: 2147 MB, 2147483648 bytes

    67 heads, 62 sectors/track, 1009 cylinders

    Units = cylinders of 4154 * 512 = 2126848 bytes

     

                   Device Boot      Start         End      Blocks  Id  System

    /dev/iscsi/crs1/part1              1        1009     2095662  83  Linux

     

    Command (m for help): w

    The partition table has been altered!

     

    Calling ioctl() to re-read partition table.

    Syncing disks.

    [root@ora10racn1 rules.d]# fdisk /dev/iscsi/asm1/part

    Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel

    Building a new DOS disklabel. Changes will remain in memory only,

    until you decide to write them. After that, of course, the previous

    content won't be recoverable.

     

     

    The number of cylinders for this disk is set to 8192.

    There is nothing wrong with that, but this is larger than 1024,

    and could in certain setups cause problems with:

    1) software that runs at boot time (e.g., old versions of LILO)

    2) booting and partitioning software from other OSs

       (e.g., DOS FDISK, OS/2 FDISK)

    Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

     

    Command (m for help): n

    Command action

       e   extended

       p   primary partition (1-4)

    p

    Partition number (1-4): 1

    First cylinder (1-8192, default 1): 1

    Last cylinder or +size or +sizeM or +sizeK (1-8192, default 8192): 8192

     

    Command (m for help): p

     

    Disk /dev/iscsi/asm1/part: 8589 MB, 8589934592 bytes

    64 heads, 32 sectors/track, 8192 cylinders

    Units = cylinders of 2048 * 512 = 1048576 bytes

     

                   Device Boot      Start         End      Blocks  Id  System

    /dev/iscsi/asm1/part1              1        8192     8388592  83  Linux

     

    Command (m for help): w

    The partition table has been altered!

     

    Calling ioctl() to re-read partition table.

     

    WARNING: Re-reading the partition table failed with error 16: Device or resource busy.

    The kernel still uses the old table.

    The new table will be used at the next reboot.

    Syncing disks.

    [root@ora10racn1 rules.d]# fdisk /dev/iscsi/asm2/part

    Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel

    Building a new DOS disklabel. Changes will remain in memory only,

    until you decide to write them. After that, of course, the previous

    content won't be recoverable.

     

     

    The number of cylinders for this disk is set to 8192.

    There is nothing wrong with that, but this is larger than 1024,

    and could in certain setups cause problems with:

    1) software that runs at boot time (e.g., old versions of LILO)

    2) booting and partitioning software from other OSs

       (e.g., DOS FDISK, OS/2 FDISK)

    Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

     

    Command (m for help): n

    Command action

       e   extended

       p   primary partition (1-4)

    p   

    Partition number (1-4): 1

    First cylinder (1-8192, default 1): 1

    Last cylinder or +size or +sizeM or +sizeK (1-8192, default 8192): 8192

     

    Command (m for help): p

     

    Disk /dev/iscsi/asm2/part: 8589 MB, 8589934592 bytes

    64 heads, 32 sectors/track, 8192 cylinders

    Units = cylinders of 2048 * 512 = 1048576 bytes

     

                   Device Boot      Start         End      Blocks  Id  System

    /dev/iscsi/asm2/part1              1        8192     8388592  83  Linux

     

    Command (m for help): w

    The partition table has been altered!

     

    Calling ioctl() to re-read partition table.

    Syncing disks.

    7) Verify the partitions on each node:

    [root@ora10racn1 rules.d]# partprobe

    [root@ora10racn1 rules.d]# fdisk -l

     

    Disk /dev/sda: 53.6 GB, 53687091200 bytes

    255 heads, 63 sectors/track, 6527 cylinders

    Units = cylinders of 16065 * 512 = 8225280 bytes

     

       Device Boot      Start         End      Blocks  Id  System

    /dev/sda1   *           1          13      104391  83  Linux

    /dev/sda2              14        6527   52323705   8e  Linux LVM

     

    Disk /dev/dm-0: 49.3 GB, 49358569472 bytes

    255 heads, 63 sectors/track, 6000 cylinders

    Units = cylinders of 16065 * 512 = 8225280 bytes

     

    Disk /dev/dm-0 doesn't contain a valid partition table

     

    Disk /dev/dm-1: 4194 MB, 4194304000 bytes

    255 heads, 63 sectors/track, 509 cylinders

    Units = cylinders of 16065 * 512 = 8225280 bytes

     

    Disk /dev/dm-1 doesn't contain a valid partition table

     

    Disk /dev/sdb: 8589 MB, 8589934592 bytes

    64 heads, 32 sectors/track, 8192 cylinders

    Units = cylinders of 2048 * 512 = 1048576 bytes

     

       Device Boot      Start         End      Blocks  Id  System

    /dev/sdb1               1        8192    8388592   83  Linux

     

    Disk /dev/sdc: 8589 MB, 8589934592 bytes

    64 heads, 32 sectors/track, 8192 cylinders

    Units = cylinders of 2048 * 512 = 1048576 bytes

     

       Device Boot      Start         End      Blocks  Id  System

    /dev/sdc1               1        8192    8388592   83  Linux

     

    Disk /dev/sdd: 2147 MB, 2147483648 bytes

    67 heads, 62 sectors/track, 1009 cylinders

    Units = cylinders of 4154 * 512 = 2126848 bytes

     

       Device Boot      Start         End      Blocks  Id  System

    /dev/sdd1               1        1009    2095662   83  Linux

    [root@ora10racn1 rules.d]# (cd /dev/disk/by-path; ls -l *openfiler* | awk '{FS=" "; print $9 " " $10 " " $11}')

    ip-192.168.2.110:3260-iscsi-iqn.2006-01.com.openfiler:racdb.asm1-lun-0 -> ../../sdc
    ip-192.168.2.110:3260-iscsi-iqn.2006-01.com.openfiler:racdb.asm1-lun-0-part1 -> ../../sdc1
    ip-192.168.2.110:3260-iscsi-iqn.2006-01.com.openfiler:racdb.asm2-lun-0 -> ../../sdb
    ip-192.168.2.110:3260-iscsi-iqn.2006-01.com.openfiler:racdb.asm2-lun-0-part1 -> ../../sdb1
    ip-192.168.2.110:3260-iscsi-iqn.2006-01.com.openfiler:racdb.crs1-lun-0 -> ../../sdd
    ip-192.168.2.110:3260-iscsi-iqn.2006-01.com.openfiler:racdb.crs1-lun-0-part1 -> ../../sdd1

    8. Configure kernel parameters on each node:

    [root@ora10racn1 ~]# sysctl -p

    net.ipv4.ip_forward = 0

    net.ipv4.conf.default.rp_filter = 1

    net.ipv4.conf.default.accept_source_route = 0

    kernel.sysrq = 0

    kernel.core_uses_pid = 1

    net.ipv4.tcp_syncookies = 1

    kernel.msgmnb = 65536

    kernel.msgmax = 65536

    kernel.shmmax = 4294967295

    kernel.shmall = 268435456

    net.core.rmem_default = 1048576

    net.core.rmem_max = 1048576

    net.core.wmem_default = 262144

    net.core.wmem_max = 262144

    kernel.shmmni = 4096

    kernel.sem = 250 32000 100 128

    fs.file-max = 65536

    net.ipv4.ip_local_port_range = 1024 65000

     

    [root@ora10racn1 ~]# cat >> /etc/security/limits.conf <<EOF

    oracle soft nproc 2047

    oracle hard nproc 16384

    oracle soft nofile 1024

    oracle hard nofile 65536

    EOF

     

    [root@ora10racn1 ~]# cat >> /etc/pam.d/login <<EOF

    session required /lib/security/pam_limits.so

    EOF

    Append the following to the end of /etc/profile:

    if [ \$USER = "oracle" ]; then

        if [ \$SHELL = "/bin/ksh" ]; then

            ulimit -p 16384

            ulimit -n 65536

        else

            ulimit -u 16384 -n 65536

        fi

        umask 022

    fi

    9. Configure the hangcheck-timer on each node:

    1) Check whether the module is installed:

    [root@ora10racn1 ~]# find /lib/modules -name "hangcheck-timer.ko"

    /lib/modules/2.6.18-274.el5/kernel/drivers/char/hangcheck-timer.ko

    /lib/modules/2.6.32-200.13.1.el5uek/kernel/drivers/char/hangcheck-timer.ko

    2) Configure hangcheck-timer's two parameters:

    [root@ora10racn1 ~]# echo "options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180" >> /etc/modprobe.conf

    3) Manually load the hangcheck kernel module:

    [root@ora10racn1 ~]# echo "/sbin/modprobe hangcheck-timer" >> /etc/rc.local
    [root@ora10racn1 ~]# modprobe hangcheck-timer
    [root@ora10racn1 ~]# grep Hangcheck /var/log/messages | tail -2

    May 16 17:10:22 ora10racn1 kernel: Hangcheck: starting hangcheck timer 0.9.0 (tick is 60 seconds, margin is 180 seconds).
    May 16 17:10:22 ora10racn1 kernel: Hangcheck: Using get_cycles().

    10. Configure the trust relationship between the nodes:

    On node 1:

    [root@ora10racn1 scripts]# su - oracle
    [oracle@ora10racn1 ~]$ mkdir -p ~/.ssh
    [oracle@ora10racn1 ~]$ chmod 700 ~/.ssh
    [oracle@ora10racn1 ~]$ /usr/bin/ssh-keygen -t rsa

    Generating public/private rsa key pair.

    Enter file in which to save the key (/home/oracle/.ssh/id_rsa):
    Enter passphrase (empty for no passphrase):
    Enter same passphrase again:                                    /* no passphrase needed; just press Enter

    Your identification has been saved in /home/oracle/.ssh/id_rsa.
    Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.

    The key fingerprint is:

    47:d4:27:ab:64:82:24:97:d9:85:59:f0:87:49:7d:c7 oracle@ora10racn1.ccz.com

    [oracle@ora10racn1 ~]$ touch ~/.ssh/authorized_keys

    [oracle@ora10racn1 ~]$ cd .ssh

    [oracle@ora10racn1 .ssh]$ ls -l *.pub

    -rw-r--r-- 1 oracle oinstall 407 May 16 20:29 id_rsa.pub

     

    [oracle@ora10racn1 .ssh]$ ssh ora10racn1.ccz.com cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    The authenticity of host 'ora10racn1.ccz.com (192.168.0.131)' can't be established.
    RSA key fingerprint is f9:5e:99:da:b1:00:9e:6b:a4:9e:a0:cd:ae:e3:4d:ca.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'ora10racn1.ccz.com,192.168.0.131' (RSA) to the list of known hosts.
    oracle@ora10racn1.ccz.com's password: /* enter the existing password here

    [oracle@ora10racn1 .ssh]$ ssh ora10racn2.ccz.com cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    The authenticity of host 'ora10racn2.ccz.com (192.168.0.132)' can't be established.
    RSA key fingerprint is f9:5e:99:da:b1:00:9e:6b:a4:9e:a0:cd:ae:e3:4d:ca.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'ora10racn2.ccz.com,192.168.0.132' (RSA) to the list of known hosts.

    [oracle@ora10racn1 .ssh]$ scp ~/.ssh/authorized_keys ora10racn2:.ssh/authorized_keys
    The authenticity of host 'ora10racn2 (192.168.0.132)' can't be established.
    RSA key fingerprint is f9:5e:99:da:b1:00:9e:6b:a4:9e:a0:cd:ae:e3:4d:ca.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'ora10racn2' (RSA) to the list of known hosts.
    authorized_keys                                                100% 1221     1.2KB/s   00:00

    [oracle@ora10racn1 .ssh]$ chmod 600 ~/.ssh/authorized_keys

    [oracle@ora10racn1 .ssh]$ ssh ora10racn1.ccz.com date

    Wed May 16 20:38:33 CST 2012

    [oracle@ora10racn1 .ssh]$ ssh ora10racn2.ccz.com date

    Wed May 16 20:38:38 CST 2012

    [oracle@ora10racn1 .ssh]$

     

    [oracle@ora10racn1 .ssh]$ exec /usr/bin/ssh-agent $SHELL

    [oracle@ora10racn1 .ssh]$ /usr/bin/ssh-add

    Identity added: /home/oracle/.ssh/id_rsa (/home/oracle/.ssh/id_rsa)

    [oracle@ora10racn1 .ssh]$ ssh ora10racn1.ccz.com "date;hostname"

    Wed May 16 20:52:16 CST 2012

    ora10racn1.ccz.com

    [oracle@ora10racn1 .ssh]$ ssh ora10racn2.ccz.com "date;hostname"

    Wed May 16 20:52:21 CST 2012

    ora10racn2.ccz.com

    On node 2:

    [root@ora10racn2 scripts]# su - oracle
    [oracle@ora10racn2 ~]$ mkdir -p ~/.ssh
    [oracle@ora10racn2 ~]$ chmod 700 ~/.ssh
    [oracle@ora10racn2 ~]$ /usr/bin/ssh-keygen -t rsa
    Generating public/private rsa key pair.
    Enter file in which to save the key (/home/oracle/.ssh/id_rsa):
    Enter passphrase (empty for no passphrase):
    Enter same passphrase again:
    Your identification has been saved in /home/oracle/.ssh/id_rsa.
    Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
    The key fingerprint is:
    73:ec:af:49:ad:f1:3d:af:96:3a:d5:1f:5c:a7:d0:ea oracle@ora10racn2.ccz.com

     

    [oracle@ora10racn2 .ssh]$ chmod 600 ~/.ssh/authorized_keys
    [oracle@ora10racn2 .ssh]$ ssh ora10racn1.ccz.com hostname
    ora10racn1.ccz.com
    [oracle@ora10racn2 .ssh]$ ssh ora10racn1.ccz.com date
    Wed May 16 20:37:53 CST 2012
    [oracle@ora10racn2 .ssh]$ ssh ora10racn2.ccz.com date
    The authenticity of host 'ora10racn2.ccz.com (192.168.0.132)' can't be established.
    RSA key fingerprint is f9:5e:99:da:b1:00:9e:6b:a4:9e:a0:cd:ae:e3:4d:ca.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'ora10racn2.ccz.com,192.168.0.132' (RSA) to the list of known hosts.
    Wed May 16 20:38:00 CST 2012
    [oracle@ora10racn2 .ssh]$ ssh ora10racn2.ccz.com hostname
    ora10racn2.ccz.com
    [oracle@ora10racn2 .ssh]$ ssh ora10racn2.ccz.com date
    Wed May 16 20:38:11 CST 2012

     

    [oracle@ora10racn2 .ssh]$ exec /usr/bin/ssh-agent $SHELL
    [oracle@ora10racn2 .ssh]$ /usr/bin/ssh-add
    Identity added: /home/oracle/.ssh/id_rsa (/home/oracle/.ssh/id_rsa)
    [oracle@ora10racn2 .ssh]$ ssh ora10racn1.ccz.com "date;hostname"
    Wed May 16 20:52:36 CST 2012
    ora10racn1.ccz.com
    [oracle@ora10racn2 .ssh]$ ssh ora10racn2.ccz.com "date;hostname"
    Wed May 16 20:52:41 CST 2012
    ora10racn2.ccz.com

    11. Add the following to the oracle account's .bash_profile on each node to disable stty:

    if [ -t 0 ]; then

        stty intr ^C

    fi

    12. Disable SELinux via the GUI on each node

    #/usr/bin/system-config-securitylevel &

    13. Install and configure OCFS2 on each node:

    1) Install OCFS2

    [root@ora10racn2 Server]# pwd
    /media/OL5.7 x86_64 dvd 20110728/Server
    [root@ora10racn2 Server]# ls -l ocfs2*

    -rw-r--r-- 1 root root  328649 Jul 26  2011 ocfs2-2.6.18-274.el5-1.4.8-2.el5.x86_64.rpm
    -rw-r--r-- 1 root root  333910 Jul 26  2011 ocfs2-2.6.18-274.el5debug-1.4.8-2.el5.x86_64.rpm
    -rw-r--r-- 1 root root  328371 Jul 26  2011 ocfs2-2.6.18-274.el5xen-1.4.8-2.el5.x86_64.rpm
    -rw-r--r-- 1 root root  457984 Sep 17  2010 ocfs2console-1.6.3-2.el5.x86_64.rpm
    -rw-r--r-- 1 root root 1825960 Sep 17  2010 ocfs2-tools-1.6.3-2.el5.x86_64.rpm
    -rw-r--r-- 1 root root  180899 Sep 17  2010 ocfs2-tools-devel-1.6.3-2.el5.x86_64.rpm

    [root@ora10racn2 Server]# rpm -Uvh ocfs2-2.6.18-274.el5-1.4.8-2.el5.x86_64.rpm \
    > ocfs2console-1.6.3-2.el5.x86_64.rpm \
    > ocfs2-tools-1.6.3-2.el5.x86_64.rpm \
    > ocfs2-2.6.18-274.el5xen-1.4.8-2.el5.x86_64.rpm

    warning: ocfs2-2.6.18-274.el5-1.4.8-2.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
    Preparing...               ########################################### [100%]
            package ocfs2-tools-1.6.3-2.el5.x86_64 is already installed
            package ocfs2console-1.6.3-2.el5.x86_64 is already installed

    [root@ora10racn2 Server]#

    2) Disable SELinux

    [root@ora10racn2 Server]# /usr/bin/system-config-securitylevel &

       

    3) Configure OCFS2 on both nodes

    [root@ora10racn1 Server]# ocfs2console &

     

    Go to Cluster --> Configure Nodes --> Add. On every node, add entries for both nodes. Note that although the IP used here is the heartbeat (private) IP, the name must be the host name (i.e., the value returned by the hostname command).

     

    The resulting configuration is written to /etc/ocfs2/cluster.conf:

    node:

            ip_port = 7777

            ip_address = 10.10.0.131

            number = 0

            name = ora10racn1.ccz.com

            cluster = ocfs2

     

    node:

           ip_port = 7777

            ip_address = 10.10.0.132

            number = 1

            name = ora10racn2.ccz.com

            cluster = ocfs2

     

    cluster:

            node_count = 2

            name = ocfs2

    Note: if ocfs2console reports the following error while adding a node:

    o2cb_ctl: Unable to access cluster service while creating node
    Could not add node node1

    则可以先将/etc/ocfs2/cluster.conf改名,然后使用ocfs2console重新加:

The o2cb service can now be checked:

    [oracle@ora10racn1 ~]$ /etc/init.d/o2cb status
    Driver for "configfs": Loaded
    Filesystem "configfs": Mounted
    Stack glue driver: Loaded
    Stack plugin "o2cb": Loaded
    Driver for "ocfs2_dlmfs": Loaded
    Filesystem "ocfs2_dlmfs": Mounted
    Checking O2CB cluster ocfs2: Online
    Heartbeat dead threshold = 31
      Network idle timeout: 30000
      Network keepalive delay: 2000
      Network reconnect delay: 2000
    Checking O2CB heartbeat: Active

4)      Change the o2cb configuration (only needs to be run on one node), raising the heartbeat dead threshold from the default 31 to 61:

    [root@ora10racn1 ~]# /etc/init.d/o2cb offline ocfs2
    Stopping O2CB cluster ocfs2: OK
    [root@ora10racn1 ~]# /etc/init.d/o2cb unload
    Unmounting ocfs2_dlmfs filesystem: OK
    Unloading module "ocfs2_dlmfs": OK
    Unloading module "ocfs2_stack_o2cb": OK
    Unmounting configfs filesystem: OK
    Unloading module "configfs": OK
    [root@ora10racn1 ~]# /etc/init.d/o2cb configure
    Configuring the O2CB driver.

    This will configure the on-boot properties of the O2CB driver.
    The following questions will determine whether the driver is loaded on
    boot.  The current values will be shown in brackets ('[]').  Hitting
    <ENTER> without typing an answer will keep that current value.  Ctrl-C
    will abort.

    Load O2CB driver on boot (y/n) [y]:
    Cluster stack backing O2CB [o2cb]:
    Cluster to start on boot (Enter "none" to clear) [ocfs2]:
    Specify heartbeat dead threshold (>=7) [31]: 61
    Specify network idle timeout in ms (>=5000) [30000]:
    Specify network keepalive delay in ms (>=1000) [2000]:
    Specify network reconnect delay in ms (>=2000) [2000]:
    Writing O2CB configuration: OK
    Loading filesystem "configfs": OK
    Mounting configfs filesystem at /sys/kernel/config: OK
    Loading stack plugin "o2cb": OK
    Loading filesystem "ocfs2_dlmfs": OK
    Mounting ocfs2_dlmfs filesystem at /dlm: OK
    Setting cluster stack "o2cb": OK
    Starting O2CB cluster ocfs2: OK

Check the result of the change:

    [oracle@ora10racn1 ~]$ /etc/init.d/o2cb status
    Driver for "configfs": Loaded
    Filesystem "configfs": Mounted
    Stack glue driver: Loaded
    Stack plugin "o2cb": Loaded
    Driver for "ocfs2_dlmfs": Loaded
    Filesystem "ocfs2_dlmfs": Mounted
    Checking O2CB cluster ocfs2: Online
    Heartbeat dead threshold = 61
      Network idle timeout: 30000
      Network keepalive delay: 2000
      Network reconnect delay: 2000
    Checking O2CB heartbeat: Active
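
The answers given to o2cb configure are persisted in /etc/sysconfig/o2cb, so the new threshold can also be confirmed non-interactively on each node (a sketch, assuming the stock location of that file):

    # config file written by the o2cb init script
    grep O2CB_HEARTBEAT_THRESHOLD /etc/sysconfig/o2cb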

5)      Format the OCFS2 file system (must be run on one node only); -b sets the block size, -C the cluster size, -N the number of node slots, and -L the volume label:

    [root@ora10racn1 ~]# mkfs.ocfs2 -b 4k -C 32k -N 4 -L oracrsfiles /dev/iscsi/crs1/part1
    mkfs.ocfs2 1.6.3
    Cluster stack: classic o2cb
    Overwriting existing ocfs2 partition.
    Proceed (y/N): y
    Label: oracrsfiles
    Features: sparse backup-super unwritten inline-data strict-journal-super
    Block size: 4096 (12 bits)
    Cluster size: 32768 (15 bits)
    Volume size: 2145943552 (65489 clusters) (523912 blocks)
    Cluster groups: 3 (tail covers 977 clusters, rest cover 32256 clusters)
    Extent allocator size: 4194304 (1 groups)
    Journal size: 67108864
    Node slots: 4
    Creating bitmaps: done
    Initializing superblock: done
    Writing system files: done
    Writing superblock: done
    Writing backup superblock: 1 block(s)
    Formatting Journals: done
    Growing extent allocator: done
    Formatting slot map: done
    Formatting quota files: done
    Writing lost+found: done
    mkfs.ocfs2 successful
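
As a sanity check that each node can see the freshly labelled volume, ocfs2-tools ships a small detection utility; run it on both nodes:

    # quick-detect mode: lists device, label and UUID of visible OCFS2 volumes
    mounted.ocfs2 -d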

6)      Mount the OCFS2 file system (must be run on both nodes):

    [root@ora10racn1 ~]# mount -t ocfs2 -o datavolume,nointr -L "oracrsfiles" /u02
    [root@ora10racn1 ~]# mount
    /dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
    proc on /proc type proc (rw)
    sysfs on /sys type sysfs (rw)
    devpts on /dev/pts type devpts (rw,gid=5,mode=620)
    /dev/sda1 on /boot type ext3 (rw)
    tmpfs on /dev/shm type tmpfs (rw)
    none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
    sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
    192.168.2.110:/mnt/nfs4backup/nfs4backup/nfs4backup on /mnt/share type nfs (rw,hard,nointr,tcp,noac,nfsvers=3,timeo=600,rsize=32768,wsize=32768,addr=192.168.2.110)
    configfs on /sys/kernel/config type configfs (rw)
    ocfs2_dlmfs on /dlm type ocfs2_dlmfs (rw)
    /dev/sdd1 on /u02 type ocfs2 (rw,_netdev,datavolume,nointr,heartbeat=local)

7)      On both nodes edit /etc/fstab so the OCFS2 file system is mounted automatically at boot:

    [root@ora10racn1 ~]# vi /etc/fstab

    /dev/VolGroup00/LogVol00 /                      ext3    defaults        1 1
    LABEL=/boot             /boot                   ext3    defaults        1 2
    tmpfs                   /dev/shm                tmpfs   defaults        0 0
    devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
    sysfs                   /sys                    sysfs   defaults        0 0
    proc                    /proc                   proc    defaults        0 0
    /dev/VolGroup00/LogVol01 swap                   swap    defaults        0 0
    /home/swap              swap                    swap    defaults        0 0
    192.168.2.110:/mnt/nfs4backup/nfs4backup/nfs4backup     /mnt/share      nfs     rw,hard,nointr,tcp,noac,vers=3,timeo=600,rsize=32768,wsize=32768       0 0
    LABEL=oracrsfiles       /u02                    ocfs2   _netdev,datavolume,nointr     0 0
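
The new entry can be exercised without a reboot; a sketch (run on each node, and it assumes /u02 is not busy so the manual mount can be released first):

    umount /u02
    mount /u02          # re-mounts using the device and options from /etc/fstab
    mount | grep /u02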

8)      On both nodes check the run-level settings of o2cb:

    [root@ora10racn1 ~]# chkconfig --list o2cb
    o2cb            0:off   1:off   2:on    3:on    4:on    5:on    6:off

9)      On one node check and correct the owner and permissions of the OCFS2 file system:

    [root@ora10racn1 /]# ls -ld /u02
    drwxr-xr-x 3 root root 3896 May 16 21:54 /u02
    [root@ora10racn1 /]# chown oracle:oinstall /u02
    [root@ora10racn1 /]# chmod 775 /u02
    [root@ora10racn1 /]# ls -ld /u02
    drwxrwxr-x 3 oracle oinstall 3896 May 16 21:54 /u02

10)     Create the Oracle Clusterware directories (only needs to be run on one node):

    [root@ora10racn1 ~]# mkdir -p /u02/oradata/racdb
    [root@ora10racn1 ~]# chown -R oracle:oinstall /u02/oradata
    [root@ora10racn1 ~]# chmod -R 775 /u02/oradata
    [root@ora10racn1 ~]# ls -l /u02/oradata
    total 0
    drwxrwxr-x 2 oracle oinstall 3896 May 16 22:01 racdb

14.     Install ASMLib on both nodes

1)      Install the packages. The first two are on the OL 5.7 media and can be installed directly; the third (oracleasmlib) must be downloaded from the Oracle web site:

    [root@ora10racn1 Server]# rpm -Uvh oracleasm-2.6.18-274.el5-2.0.5-1.el5.x86_64.rpm \
    oracleasm-support-2.1.7-1.el5.x86_64.rpm
    warning: oracleasm-2.6.18-274.el5-2.0.5-1.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
    Preparing...                ########################################### [100%]
       1:oracleasm-support      ########################################### [ 50%]
       2:oracleasm-2.6.18-274.el########################################### [100%]

    [root@ora10racn1 Server]# cd /home/oracle
    [root@ora10racn1 oracle]# ls
    database  Desktop  oracleasmlib-2.0.4-1.el5.x86_64.rpm
    [root@ora10racn1 oracle]# rpm -Uvh oracleasmlib-2.0.4-1.el5.x86_64.rpm
    warning: oracleasmlib-2.0.4-1.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
    Preparing...                ########################################### [100%]
       1:oracleasmlib           ########################################### [100%]

2)      Configure ASMLib on both nodes:

    [root@ora10racn1 oracle]# /etc/init.d/oracleasm configure
    Configuring the Oracle ASM library driver.

    This will configure the on-boot properties of the Oracle ASM library
    driver.  The following questions will determine whether the driver is
    loaded on boot and what permissions it will have.  The current values
    will be shown in brackets ('[]').  Hitting <ENTER> without typing an
    answer will keep that current value.  Ctrl-C will abort.

    Default user to own the driver interface []: oracle
    Default group to own the driver interface []: oinstall
    Start Oracle ASM library driver on boot (y/n) [n]: y
    Scan for Oracle ASM disks on boot (y/n) [y]: y
    Writing Oracle ASM library driver configuration: done
    Initializing the Oracle ASMLib driver: [  OK  ]
    Scanning the system for Oracle ASMLib disks: [  OK  ]

3)      Create the ASM disks (create on one node only; the remaining nodes just need to scan):

Node 1:

    [root@ora10racn1 oracle]# /etc/init.d/oracleasm listdisks
    [root@ora10racn1 oracle]# /etc/init.d/oracleasm createdisk VOL1 /dev/iscsi/asm1/part1
    Marking disk "VOL1" as an ASM disk: [  OK  ]
    [root@ora10racn1 oracle]# /etc/init.d/oracleasm createdisk VOL2 /dev/iscsi/asm2/part1
    Marking disk "VOL2" as an ASM disk: [  OK  ]
    [root@ora10racn1 oracle]# /etc/init.d/oracleasm listdisks
    VOL1
    VOL2

Node 2:

    [root@ora10racn2 oracle]# /etc/init.d/oracleasm listdisks
    [root@ora10racn2 oracle]# /etc/init.d/oracleasm scandisks
    Scanning the system for Oracle ASMLib disks: [  OK  ]
    [root@ora10racn2 oracle]# /etc/init.d/oracleasm listdisks
    VOL1
    VOL2
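
To double-check which block device each label is stamped on, the init script's querydisk subcommand can be run from either node; a small sketch:

    # VOL1 and VOL2 are the labels created above
    for d in VOL1 VOL2; do
        /etc/init.d/oracleasm querydisk $d
    done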

15.     Pre-checks before installing the cluster software:

1)      Install cvuqdisk on both nodes (only needed on RHEL/OL):

    [oracle@ora10racn1 ~]$ su -
    Password:
    [root@ora10racn1 ~]# rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep cvuqdisk
    [root@ora10racn1 ~]# cd /home/oracle/clusterware/rpm
    [root@ora10racn1 rpm]# ls
    cvuqdisk-1.0.1-1.rpm
    [root@ora10racn1 rpm]# rpm -Uvh cvuqdisk-1.0.1-1.rpm
    Preparing...                ########################################### [100%]
       1:cvuqdisk               ########################################### [100%]

    [root@ora10racn1 rpm]# scp ./cvuqdisk-1.0.1-1.rpm ora10racn2.ccz.com:/home/oracle
    The authenticity of host 'ora10racn2.ccz.com (192.168.0.132)' can't be established.
    RSA key fingerprint is f9:5e:99:da:b1:00:9e:6b:a4:9e:a0:cd:ae:e3:4d:ca.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'ora10racn2.ccz.com,192.168.0.132' (RSA) to the list of known hosts.
    root@ora10racn2.ccz.com's password:
    cvuqdisk-1.0.1-1.rpm

    [root@ora10racn2 oracle]# rpm -Uvh cvuqdisk-1.0.1-1.rpm
    Preparing...                ########################################### [100%]
       1:cvuqdisk               ########################################### [100%]

2)      Pre-installation check (run under the oracle account, on one node only):

    [oracle@ora10racn1 cluvfy]$ cd /home/oracle/clusterware/cluvfy
    [oracle@ora10racn1 cluvfy]$ mkdir -p jdk14
    [oracle@ora10racn1 cluvfy]$ unzip jrepack.zip -d jdk14
    [oracle@ora10racn1 cluvfy]$ export CV_HOME=/home/oracle/clusterware/cluvfy
    [oracle@ora10racn1 cluvfy]$ export CV_JDKHOME=/home/oracle/clusterware/cluvfy/jdk14
    [oracle@ora10racn1 cluvfy]$ ./runcluvfy.sh stage -pre crsinst -n ora10racn1,ora10racn2 -verbose

The check reports:

    Check: User equivalence for user "oracle"
      Node Name                             Comment
      ------------------------------------  ------------------------
      ora10racn2                            passed
      ora10racn1                            failed
    Result: User equivalence check failed for user "oracle".

    WARNING:
    User equivalence is not set for nodes:
            ora10racn1
    Verification will proceed with nodes:
            ora10racn2

This happens because, even after user equivalence is configured, the very first ssh connection to a host still asks for host-key confirmation, and cluvfy treats that prompt as a failure. So first run the test manually on each node against every node (including itself) and accept the host keys, so that subsequent connections need no confirmation:

    [oracle@ora10racn1 cluvfy]$ ssh ora10racn1 date
    The authenticity of host 'ora10racn1 (192.168.0.131)' can't be established.
    RSA key fingerprint is f9:5e:99:da:b1:00:9e:6b:a4:9e:a0:cd:ae:e3:4d:ca.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'ora10racn1' (RSA) to the list of known hosts.
    Thu May 17 08:45:40 CST 2012
    [oracle@ora10racn1 cluvfy]$ ssh ora10racn2 date
    Thu May 17 08:45:55 CST 2012
    [oracle@ora10racn1 cluvfy]$ ssh ora10racn1 date
    Thu May 17 08:50:10 CST 2012

Running the check again produces this error:

    ERROR:
    Could not find a suitable set of interfaces for VIPs.

    Result: Node connectivity check failed.

According to Oracle note 338924.1 this is a bug and can be ignored.

3)      Check the hardware and operating system with CVU:

    [oracle@ora10racn1 cluvfy]$ ./runcluvfy.sh stage -post hwos -n ora10racn1,ora10racn2 -verbose

    Performing post-checks for hardware and operating system setup
    Checking node reachability...
    ......
    ERROR:
    Could not find a suitable set of interfaces for VIPs.
    Result: Node connectivity check failed.

    Checking shared storage accessibility...
    WARNING:
    Unable to determine the sharedness of /dev/sda on nodes:
            ora10racn2,ora10racn1
      Disk                                  Sharing Nodes (2 in count)
      ------------------------------------  ------------------------
      /dev/sdb                              ora10racn2 ora10racn1
      Disk                                  Sharing Nodes (2 in count)
      ------------------------------------  ------------------------
      /dev/sdc                              ora10racn2 ora10racn1
      Disk                                  Sharing Nodes (2 in count)
      ------------------------------------  ------------------------
      /dev/sdd                              ora10racn2 ora10racn1

    Shared storage check was successful on nodes "ora10racn2,ora10racn1".
    Post-check for hardware and operating system setup was unsuccessful on all the nodes.

Besides the VIP error (ignored, as above), the result contains one warning: CVU calls the Linux smartctl utility, which cannot return a serial number for iSCSI devices. This does not affect the RAC installation or operation and can be ignored.

16.     Install the clusterware (only needs to be run under the oracle account on one node):

Note that the CRS home path must be changed here so that it does not coincide with the Oracle home:

Add nodes as appropriate:

Adjust the network interface settings as needed; eth1 is the NIC connected to the storage, so here it is set to "Do Not Use":

Set the storage paths for the OCR and the voting disk:

Note: before running root.sh, fix srvctl and vipca as described below or an error is raised; after root.sh, vipca must still be invoked as root from a graphical session to configure the VIPs, otherwise the OUI will still report an error:

     

Run the scripts in order as root on both nodes. When the last script, root.sh, runs on the last node, it fails with:

    CSS is active on all nodes.
    Waiting for the Oracle CRSD and EVMD to start
    Oracle CRS stack installed and running under init(1M)
    Running vipca(silent) for configuring nodeapps
    /u01/app/crs/jdk/jre//bin/java: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory

This turns out to be an Oracle bug; for the resolution see:

    http://hi.baidu.com/heroofhero/blog/item/76747032361fc84dac4b5f09.html

Fix: add an "unset LD_ASSUME_KERNEL" line at the positions shown below (the numbers are the line numbers inside the shipped scripts):

/u01/app/crs/bin/vipca:

    152 export LD_LIBRARY_PATH
    153 ;;
    154 esac
    155
    156 unset LD_ASSUME_KERNEL
    157
    158 ARGUMENTS=""
    159 NUMBER_OF_ARGUMENTS=$#
    160 if [ $NUMBER_OF_ARGUMENTS -gt 0 ]; then
    161 ARGUMENTS=$*
    162 fi

/u01/app/crs/bin/srvctl:

    166 #Remove this workaround when the bug 3937317 is fixed
    167 LD_ASSUME_KERNEL=2.4.19
    168 export LD_ASSUME_KERNEL
    169 unset LD_ASSUME_KERNEL
    170
    171 # Run ops control utility

After the fix, go back to the OUI window and click OK to continue.
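
If you would rather script the srvctl edit than patch the file by hand, something along these lines works. This is only a sketch: it assumes the CRS home is /u01/app/crs and that GNU sed is available, and vipca remains easier to edit manually because its unset must land after the platform case block:

    # back up first, then append "unset LD_ASSUME_KERNEL" after the export line
    cp /u01/app/crs/bin/srvctl /u01/app/crs/bin/srvctl.bak
    sed -i '/^export LD_ASSUME_KERNEL/a unset LD_ASSUME_KERNEL' /u01/app/crs/bin/srvctl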

17.     Install the Oracle software; note that this installs the software only, without creating a database:

18.     On both nodes fix $ORACLE_HOME/bin/srvctl

Just as $CRS_HOME/bin/srvctl was corrected after the clusterware installation, $ORACLE_HOME/bin/srvctl needs the same correction:

    LD_ASSUME_KERNEL=2.4.19
    export LD_ASSUME_KERNEL
    unset LD_ASSUME_KERNEL

19.     Create the listener and configure the naming method (run on one node only)

20.     Create the cluster database:

1)      Pre-creation check:

    [oracle@ora10racn1 cluvfy]$ env | grep CV
    CV_JDKHOME=/home/oracle/clusterware/cluvfy/jdk14
    CV_HOME=/home/oracle/clusterware/cluvfy
    [oracle@ora10racn1 cluvfy]$ pwd
    /home/oracle/clusterware/cluvfy
    [oracle@ora10racn1 cluvfy]$ ./runcluvfy.sh stage -pre dbcfg -n ora10racn1,ora10racn2 -d ${ORACLE_HOME} -verbose

Apart from the VIP issue seen earlier (which can be ignored), the pre-check reports no problems.

2)      Create the database with dbca:

During creation, note that the default location of the ASM spfile is not on shared storage, so change it to a shared-storage location, as shown:

Creating the ASM instance raises: ORA-27125: unable to create shared memory segment

The fix is to add the dba group's gid to /proc/sys/vm/hugetlb_shm_group:

    [root@rac2 ~]# id oracle
    uid=500(oracle) gid=501(oinstall) groups=501(oinstall),502(dba),503(asmadmin),504(oper)
    [root@rac2 ~]# more /proc/sys/vm/hugetlb_shm_group
    0

As root, run the following to register the dba group with the kernel:

    [root@rac2 ~]# echo 502 > /proc/sys/vm/hugetlb_shm_group

For details see: http://blog.csdn.net/tianlesoftware/article/details/7309046

3)      When creating the disk group, remember to change the disk discovery path:

Using the default ORCL:VOL* string will hang the machine.

4)      Creation succeeds:

     

21.     Create the TAF service:

    [oracle@ora10racn1 ~]$ dbca &

Note: do not include the domain in the new TAF service name, or an error is raised.

    SQL> show parameter service

    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    service_names                        string      racdb.ccz.com, racdb_taf

    [oracle@ora10racn2 disks]$ crs_stat -t
    Name           Type           Target    State     Host
    ------------------------------------------------------------
    ora....SM1.asm application    ONLINE    ONLINE    ora10racn1
    ora....N1.lsnr application    ONLINE    ONLINE    ora10racn1
    ora....cn1.gsd application    ONLINE    ONLINE    ora10racn1
    ora....cn1.ons application    ONLINE    ONLINE    ora10racn1
    ora....cn1.vip application    ONLINE    ONLINE    ora10racn1
    ora....SM2.asm application    ONLINE    ONLINE    ora10racn2
    ora....N2.lsnr application    ONLINE    ONLINE    ora10racn2
    ora....cn2.gsd application    ONLINE    ONLINE    ora10racn2
    ora....cn2.ons application    ONLINE    ONLINE    ora10racn2
    ora....cn2.vip application    ONLINE    ONLINE    ora10racn2
    ora.racdb.db   application    ONLINE    ONLINE    ora10racn1
    ora....b1.inst application    ONLINE    ONLINE    ora10racn1
    ora....b2.inst application    ONLINE    ONLINE    ora10racn2
    ora...._taf.cs application    ONLINE    ONLINE    ora10racn1
    ora....db1.srv application    ONLINE    ONLINE    ora10racn1
    ora....db2.srv application    ONLINE    ONLINE    ora10racn2

Enable the TAF service to start automatically at system startup:

    [oracle@ora10racn2 bash]$ srvctl enable service -d racdb -s racdb_taf

Check the status of the TAF service:

    [oracle@ora10racn2 bash]$ srvctl config service -d racdb -s racdb_taf -a
    racdb_taf PREF: racdb1 racdb2 AVAIL:  TAF: basic
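
Clients reach the TAF service through a tnsnames.ora alias along these lines. This is a sketch only: the VIP host names below are assumptions following this document's naming (substitute the VIPs configured by vipca), and RETRIES/DELAY are illustrative values:

    # the two HOST entries below are hypothetical VIP names
    RACDB_TAF =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = ora10racn1-vip.ccz.com)(PORT = 1521))
        (ADDRESS = (PROTOCOL = TCP)(HOST = ora10racn2-vip.ccz.com)(PORT = 1521))
        (LOAD_BALANCE = yes)
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = racdb_taf)
          (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 180)(DELAY = 5))
        )
      )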

22.     Recompile the database objects:

    $ sqlplus / as sysdba
    SQL> @?/rdbms/admin/utlrp.sql

23.     Enable archive log mode for the database:

1)      On node 1 turn off the RAC setting:

    SYS@racdb1 SQL> show parameter cluster_database
    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    cluster_database                     boolean     TRUE
    cluster_database_instances           integer     2
    SYS@racdb1 SQL> alter system set cluster_database=false scope=spfile sid='racdb1';

2)      Shut down all instances of the RAC database:

    [oracle@ora10racn1 admin]$ srvctl stop database -d racdb
    [oracle@ora10racn1 admin]$ crs_stat -t
    Name           Type           Target    State     Host
    ------------------------------------------------------------
    ora....SM1.asm application    ONLINE    ONLINE    ora10racn1
    ora....N1.lsnr application    ONLINE    ONLINE    ora10racn1
    ora....cn1.gsd application    ONLINE    ONLINE    ora10racn1
    ora....cn1.ons application    ONLINE    ONLINE    ora10racn1
    ora....cn1.vip application    ONLINE    ONLINE    ora10racn1
    ora....SM2.asm application    ONLINE    ONLINE    ora10racn2
    ora....N2.lsnr application    ONLINE    ONLINE    ora10racn2
    ora....cn2.gsd application    ONLINE    ONLINE    ora10racn2
    ora....cn2.ons application    ONLINE    ONLINE    ora10racn2
    ora....cn2.vip application    ONLINE    ONLINE    ora10racn2
    ora.racdb.db   application    OFFLINE   OFFLINE
    ora....b1.inst application    OFFLINE   OFFLINE
    ora....b2.inst application    OFFLINE   OFFLINE
    ora...._taf.cs application    OFFLINE   OFFLINE
    ora....db1.srv application    ONLINE    OFFLINE
    ora....db2.srv application    ONLINE    OFFLINE

3)      On node 1 start the instance to the mount state:

    SYS@racdb1 SQL> startup mount;
    ORACLE instance started.

    Total System Global Area 1862270976 bytes
    Fixed Size                  2021600 bytes
    Variable Size             469763872 bytes
    Database Buffers         1375731712 bytes
    Redo Buffers               14753792 bytes
    Database mounted.
    SYS@racdb1 SQL> archive log list;
    Database log mode              No Archive Mode
    Automatic archival             Disabled
    Archive destination            USE_DB_RECOVERY_FILE_DEST
    Oldest online log sequence     33
    Current log sequence           34

4)      On node 1 enable archive mode:

    SYS@racdb1 SQL> alter database archivelog;

    Database altered.

5)      On node 1 re-enable the cluster setting and shut the instance down:

    SYS@racdb1 SQL> alter system set cluster_database=true scope=spfile sid='racdb1';
    System altered.

    SYS@racdb1 SQL> shutdown immediate;
    ORA-01109: database not open
    Database dismounted.
    ORACLE instance shut down.

6)      Start all instances and verify:

    [oracle@ora10racn1 admin]$ srvctl start database -d racdb
    [oracle@ora10racn1 admin]$ sqlplus /nolog
    SQL*Plus: Release 10.2.0.1.0 - Production on Fri May 18 17:44:53 2012
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.

    SQL> conn / as sysdba
    Connected.
    SYS@racdb1 SQL> archive log list;
    Database log mode              Archive Mode
    Automatic archival             Enabled
    Archive destination            USE_DB_RECOVERY_FILE_DEST
    Oldest online log sequence     33
    Next log sequence to archive   34
    Current log sequence           34
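
The same thing can be confirmed across both instances in one go from either node; a sketch (gv$instance lists every running instance, v$database carries the cluster-wide log mode):

    # run as the oracle user on either node
    sqlplus -s / as sysdba <<'EOF'
    select inst_id, instance_name, status from gv$instance;
    select log_mode from v$database;
    EOF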

24.     Set up unified password files:

1)      Move the ASM and RDBMS password files of one node onto the shared file system:

    [oracle@ora10racn2 dbs]$ cd $ORACLE_HOME/dbs
    [oracle@ora10racn2 dbs]$ ls
    ab_+ASM2.dat  hc_+ASM2.dat  hc_racdb2.dat  init+ASM2.ora  initdw.ora  init.ora  initracdb2.ora  orapw+ASM2  orapwracdb2
    [oracle@ora10racn2 dbs]$ mv ./orapw+ASM2 /u02/oradata/racdb/dbs/orapw+ASM
    [oracle@ora10racn2 dbs]$ mv ./orapwracdb2 /u02/oradata/racdb/dbs/orapwracdb

2)      On each node create soft links to the shared password files:

    [oracle@ora10racn2 dbs]$ pwd
    /u01/app/oracle/product/10.2.0/db_1/dbs
    [oracle@ora10racn2 dbs]$ ln -s /u02/oradata/racdb/dbs/orapw+ASM /u01/app/oracle/product/10.2.0/db_1/dbs/orapw+ASM2
    [oracle@ora10racn2 dbs]$ ln -s /u02/oradata/racdb/dbs/orapwracdb /u01/app/oracle/product/10.2.0/db_1/dbs/orapwracdb2
    [oracle@ora10racn2 dbs]$ ls -l
    total 48
    -rw-rw---- 1 oracle oinstall  1571 May 18 15:00 ab_+ASM2.dat
    -rw-rw---- 1 oracle oinstall  1552 May 18 15:00 hc_+ASM2.dat
    -rw-rw---- 1 oracle oinstall  1552 May 18 17:44 hc_racdb2.dat
    -rw-r----- 1 oracle oinstall    47 May 18 14:59 init+ASM2.ora
    -rw-r----- 1 oracle oinstall 12920 May  3  2001 initdw.ora
    -rw-r----- 1 oracle oinstall  8385 Sep 11  1998 init.ora
    -rw-r----- 1 oracle oinstall    39 May 18 15:33 initracdb2.ora
    lrwxrwxrwx 1 oracle oinstall    32 May 19 08:22 orapw+ASM2 -> /u02/oradata/racdb/dbs/orapw+ASM
    lrwxrwxrwx 1 oracle oinstall    33 May 19 08:22 orapwracdb2 -> /u02/oradata/racdb/dbs/orapwracdb

    [oracle@ora10racn1 dbs]$ rm -r orapw+ASM1
    [oracle@ora10racn1 dbs]$ rm -r orapwracdb1
    [oracle@ora10racn1 dbs]$ ln -s /u02/oradata/racdb/dbs/orapw+ASM /u01/app/oracle/product/10.2.0/db_1/dbs/orapw+ASM1
    [oracle@ora10racn1 dbs]$ ln -s /u02/oradata/racdb/dbs/orapwracdb /u01/app/oracle/product/10.2.0/db_1/dbs/orapwracdb1
    [oracle@ora10racn1 dbs]$ ls -l
    total 48
    -rw-rw---- 1 oracle oinstall   796 May 18 14:59 ab_+ASM1.dat
    -rw-rw---- 1 oracle oinstall  1552 May 18 14:59 hc_+ASM1.dat
    -rw-rw---- 1 oracle oinstall  1552 May 18 17:43 hc_racdb1.dat
    -rw-r----- 1 oracle oinstall    47 May 18 14:59 init+ASM1.ora
    -rw-r----- 1 oracle oinstall 12920 May  3  2001 initdw.ora
    -rw-r----- 1 oracle oinstall  8385 Sep 11  1998 init.ora
    -rw-r----- 1 oracle oinstall    39 May 18 15:33 initracdb1.ora
    lrwxrwxrwx 1 oracle oinstall    32 May 19 08:27 orapw+ASM1 -> /u02/oradata/racdb/dbs/orapw+ASM
    lrwxrwxrwx 1 oracle oinstall    33 May 19 08:27 orapwracdb1 -> /u02/oradata/racdb/dbs/orapwracdb
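
Password-file authentication only takes effect when remote_login_passwordfile is set (EXCLUSIVE is the 10g default), so it is worth a quick check on both instances; a sketch:

    SYS@racdb1 SQL> show parameter remote_login_passwordfile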

25.     Resize the data files:

    SYS@racdb2 SQL> select name,bytes/(1024*1024) from v$datafile;
    NAME                                              BYTES/(1024*1024)
    ------------------------------------------------- -----------------
    +ORADAT/racdb/datafile/system.259.783615899                     480
    +ORADAT/racdb/datafile/undotbs1.260.783615911                   330
    +ORADAT/racdb/datafile/sysaux.261.783615915                     270
    +ORADAT/racdb/datafile/undotbs2.263.783615923                   200
    +ORADAT/racdb/datafile/users.264.783615929                        5

    SYS@racdb2 SQL> alter database datafile '+ORADAT/racdb/datafile/users.264.783615929' resize 1024M;
    Database altered.

    SYS@racdb2 SQL> alter tablespace users add datafile '+ORADAT' size 1024M autoextend on;
    Tablespace altered.

    SYS@racdb2 SQL> create tablespace indx datafile '+ORADAT' size 1024M
      2  autoextend on next 50M maxsize unlimited
      3  extent management local autoallocate segment space management auto;

    SYS@racdb2 SQL> alter database datafile '+ORADAT/racdb/datafile/system.259.783615899' resize 800M;
    Database altered.

    SYS@racdb2 SQL> alter database datafile '+ORADAT/racdb/datafile/sysaux.261.783615915' resize 500M;
    Database altered.

    SYS@racdb2 SQL> alter database datafile '+ORADAT/racdb/datafile/undotbs1.260.783615911' resize 1024M;
    Database altered.

    SYS@racdb2 SQL> alter database datafile '+ORADAT/racdb/datafile/undotbs2.263.783615923' resize 1024M;
    Database altered.

     

    SYS@racdb2 SQL> select name,bytes/(1024*1024) from v$datafile;
    NAME                                              BYTES/(1024*1024)
    ------------------------------------------------- -----------------
    +ORADAT/racdb/datafile/system.259.783615899                     800
    +ORADAT/racdb/datafile/undotbs1.260.783615911                  1024
    +ORADAT/racdb/datafile/sysaux.261.783615915                     500
    +ORADAT/racdb/datafile/undotbs2.263.783615923                  1024
    +ORADAT/racdb/datafile/users.264.783615929                     1024
    +ORADAT/racdb/datafile/users.268.783679533                     1024
    +ORADAT/racdb/datafile/indx.269.783679683                      1024

    SYS@racdb2 SQL> select name,bytes/(1024*1024) from v$tempfile;
    NAME                                              BYTES/(1024*1024)
    ------------------------------------------------- -----------------
    +ORADAT/racdb/tempfile/temp.262.783615919                        28

    SYS@racdb2 SQL> alter database tempfile '+ORADAT/racdb/tempfile/temp.262.783615919' resize 1024M;
    Database altered.
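
All of these files live in the +ORADAT disk group, so after the resizes it is worth confirming the group still has free space; a sketch:

    SYS@racdb2 SQL> select name, total_mb, free_mb from v$asm_diskgroup;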

     
