  • Deploying RAC + Data Guard on Linux

    2018-07-13 17:16:01
    This implementation plan documents the deployment of Oracle Data Guard, so that implementers can plan and build a highly available, highly reliable database cluster system that fits the current business. It consists of two parts: the Oracle DG environment topology and the Oracle single-instance database planning.
  • Deploying RAC reports an ORA error

    2014-06-27 15:10:14

    [root@his2 soft]# /app/grid/product/11.2.0/grid/root.sh
    Running Oracle 11g root script...

    The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME= /app/grid/product/11.2.0/grid

    Enter the full pathname of the local bin directory: [/usr/local/bin]:
    Copying dbhome to /usr/local/bin ...
    Copying oraenv to /usr/local/bin ...
    Copying coraenv to /usr/local/bin ...


    Creating /etc/oratab file...
    Entries will be added to the /etc/oratab file as needed by
    Database Configuration Assistant when a database is created
    Finished running generic part of root script.
    Now product-specific root actions will be performed.
    Using configuration parameter file: /app/grid/product/11.2.0/grid/crs/install/crsconfig_params
    Creating trace directory
    LOCAL ADD MODE
    Creating OCR keys for user 'root', privgrp 'root'..
    Operation successful.
    OLR initialization - successful
    Adding daemon to inittab
    ACFS-9200: Supported
    ACFS-9300: ADVM/ACFS distribution files found.
    ACFS-9307: Installing requested ADVM/ACFS software.
    ACFS-9308: Loading installed ADVM/ACFS drivers.
    ACFS-9321: Creating udev for ADVM/ACFS.
    ACFS-9323: Creating module dependencies - this may take some time.
    ACFS-9327: Verifying ADVM/ACFS devices.
    ACFS-9309: ADVM/ACFS installation correctness verified.
    CRS-2672: Attempting to start 'ora.mdnsd' on 'his2'
    CRS-2676: Start of 'ora.mdnsd' on 'his2' succeeded
    CRS-2672: Attempting to start 'ora.gpnpd' on 'his2'
    CRS-2676: Start of 'ora.gpnpd' on 'his2' succeeded
    CRS-2672: Attempting to start 'ora.cssdmonitor' on 'his2'
    CRS-2672: Attempting to start 'ora.gipcd' on 'his2'
    CRS-2676: Start of 'ora.cssdmonitor' on 'his2' succeeded
    CRS-2676: Start of 'ora.gipcd' on 'his2' succeeded
    CRS-2672: Attempting to start 'ora.cssd' on 'his2'
    CRS-2672: Attempting to start 'ora.diskmon' on 'his2'
    CRS-2676: Start of 'ora.diskmon' on 'his2' succeeded
    CRS-2676: Start of 'ora.cssd' on 'his2' succeeded

    Disk Group ORACRS creation failed with the following message:
    ORA-15018: diskgroup cannot be created
    ORA-15031: disk specification '/dev/oracleasm/disks/CRSVOL' matches no disks
    ORA-15025: could not open disk "/dev/oracleasm/disks/CRSVOL"
    ORA-15056: additional error message


    Configuration of ASM ... failed
    see asmca logs at /app/grid/grid_base/cfgtoollogs/asmca for details
    Did not succssfully configure and start ASM at /app/grid/product/11.2.0/grid/crs/install/crsconfig_lib.pm line 6464.
    /app/grid/product/11.2.0/grid/perl/bin/perl -I/app/grid/product/11.2.0/grid/perl/lib -I/app/grid/product/11.2.0/grid/crs/install /app/grid/product/11.2.0/grid/crs/install/rootcrs.pl execution failed
    [root@his2 soft]#
    Last login: Thu Apr 17 16:00:56 2014 from 10.0.0.254
    [root@his2 ~]# ll -l /dev/oracleasm/disks/CRSVOL
    brw------- 1 root root 8, 1 Apr 17 15:16 /dev/oracleasm/disks/CRSVOL
    [root@his2 ~]# cd /dev/oracleasm/disks/
    [root@his2 disks]# ls
    CRSVOL DATVOL FRAVOL
    [root@his2 disks]# ll
    total 0
    brw------- 1 root root 8, 1 Apr 17 15:16 CRSVOL    <-- the ownership here is the problem
    brw------- 1 root root 8, 17 Apr 17 15:16 DATVOL
    brw------- 1 root root 8, 33 Apr 17 15:16 FRAVOL
    [root@his2 disks]# cd ./usr/sbin/oracleasm configure
    -bash: cd: ./usr/sbin/oracleasm: No such file or directory
    [root@his2 disks]# /usr/sbin/oracleasm configure
    ORACLEASM_ENABLED=true
    ORACLEASM_UID=grid
    ORACLEASM_GID=onstall
    ORACLEASM_SCANBOOT=true
    ORACLEASM_SCANORDER=""
    ORACLEASM_SCANEXCLUDE=""
    [root@his2 disks]# /etc/init.d/oracleasm configure
    Configuring the Oracle ASM library driver.

    This will configure the on-boot properties of the Oracle ASM library
    driver. The following questions will determine whether the driver is
    loaded on boot and what permissions it will have. The current values
    will be shown in brackets ('[]'). Hitting <ENTER> without typing an
    answer will keep that current value. Ctrl-C will abort.

    Default user to own the driver interface [grid]: grid
    Default group to own the driver interface [onstall]: oinstall
    Start Oracle ASM library driver on boot (y/n) [y]: y
    Scan for Oracle ASM disks on boot (y/n) [y]: y
    Writing Oracle ASM library driver configuration: done
    Initializing the Oracle ASMLib driver: [ OK ]
    Scanning the system for Oracle ASMLib disks: [ OK ]
    [root@his2 disks]# oracleasm scandisks
    Reloading disk partitions: done
    Cleaning any stale ASM disks...
    Scanning system for ASM disks...
    [root@his2 disks]# oracleasm listdisks;
    CRSVOL
    DATVOL
    FRAVOL
    [root@his2 disks]# cd /dev/oracleasm/disks/
    [root@his2 disks]# ll
    total 0
    brw-rw---- 1 grid oinstall 8, 1 Apr 17 15:16 CRSVOL
    brw-rw---- 1 grid oinstall 8, 17 Apr 17 15:16 DATVOL
    brw-rw---- 1 grid oinstall 8, 33 Apr 17 15:16 FRAVOL
    [root@his2 disks]# /app/grid/product/11.2.0/grid/root.sh
    Running Oracle 11g root script...

    The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME= /app/grid/product/11.2.0/grid

    Enter the full pathname of the local bin directory: [/usr/local/bin]:
    The contents of "dbhome" have not changed. No need to overwrite.
    The contents of "oraenv" have not changed. No need to overwrite.
    The contents of "coraenv" have not changed. No need to overwrite.

    Entries will be added to the /etc/oratab file as needed by
    Database Configuration Assistant when a database is created
    Finished running generic part of root script.
    Now product-specific root actions will be performed.
    Using configuration parameter file: /app/grid/product/11.2.0/grid/crs/install/crsconfig_params
    CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node his1, number 1, and is terminating
    An active cluster was found during exclusive startup, restarting to join the cluster
    Configure Oracle Grid Infrastructure for a Cluster ... succeeded
    [root@his2 disks]# crsctl stat res -t
    -bash: crsctl: command not found
    [root@his2 disks]# su - grid
    [grid@his2 ~]$ crsctl check crs
    CRS-4638: Oracle High Availability Services is online
    CRS-4537: Cluster Ready Services is online
    CRS-4529: Cluster Synchronization Services is online
    CRS-4533: Event Manager is online
    [grid@his2 ~]$ crsctl stat res -t
    --------------------------------------------------------------------------------
    NAME           TARGET  STATE        SERVER                   STATE_DETAILS
    --------------------------------------------------------------------------------
    Local Resources
    --------------------------------------------------------------------------------
    ora.ORACRS.dg
                   ONLINE  ONLINE       his1
                   ONLINE  ONLINE       his2
    ora.asm
                   ONLINE  ONLINE       his1                     Started
                   ONLINE  ONLINE       his2
    ora.gsd
                   OFFLINE OFFLINE      his1
                   OFFLINE OFFLINE      his2
    ora.net1.network
                   ONLINE  ONLINE       his1
                   ONLINE  ONLINE       his2
    ora.ons
                   ONLINE  ONLINE       his1
                   ONLINE  ONLINE       his2
    ora.registry.acfs
                   ONLINE  ONLINE       his1
                   ONLINE  ONLINE       his2
    --------------------------------------------------------------------------------
    Cluster Resources
    --------------------------------------------------------------------------------
    ora.LISTENER_SCAN1.lsnr
          1        ONLINE  ONLINE       his1
    ora.cvu
          1        ONLINE  ONLINE       his1
    ora.his1.vip
          1        ONLINE  ONLINE       his1
    ora.his2.vip
          1        ONLINE  ONLINE       his2
    ora.oc4j
          1        ONLINE  ONLINE       his1
    ora.scan1.vip
          1        ONLINE  ONLINE       his1
    [grid@his2 ~]$ crsctk check crs
    -bash: crsctk: command not found
    [grid@his2 ~]$ crsctl check crs
    CRS-4638: Oracle High Availability Services is online
    CRS-4537: Cluster Ready Services is online
    CRS-4529: Cluster Synchronization Services is online
    CRS-4533: Event Manager is online
    [grid@his2 ~]$ ocrcheck
    Status of Oracle Cluster Registry is as follows :
    Version : 3
    Total space (kbytes) : 262120
    Used space (kbytes) : 2368
    Available space (kbytes) : 259752
    ID : 1459065767
    Device/File Name : ORACRS
    Device/File integrity check succeeded


    Device/File not configured

    Device/File not configured

    Device/File not configured

    Device/File not configured

    Cluster registry integrity check succeeded

    Logical corruption check bypassed due to non-privileged user

    [grid@his2 ~]$ crsctl query css votedisk
    ##  STATE    File Universal Id                File Name                     Disk group
    --  -----    -----------------                ---------                     ----------
     1. ONLINE   11b2373f9bc94f17bfd8c7e39a386573 (/dev/oracleasm/disks/CRSVOL) [ORACRS]
    Located 1 voting disk(s).
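
    In short, root.sh failed because a single typo in the ASMLib configuration (ORACLEASM_GID=onstall instead of oinstall) left the /dev/oracleasm/disks/* devices owned by root:root, so ASM could not open them. A minimal non-interactive sketch of the same fix, assuming the stock ASMLib layout in which oracleasm configure persists its answers to /etc/sysconfig/oracleasm:

    # Correct the mistyped group, then reload the driver and rescan so the
    # disk devices are recreated with grid:oinstall ownership and mode 0660.
    sed -i 's/^ORACLEASM_GID=.*/ORACLEASM_GID=oinstall/' /etc/sysconfig/oracleasm
    /etc/init.d/oracleasm restart
    /usr/sbin/oracleasm scandisks
    ls -l /dev/oracleasm/disks/    # expect: brw-rw---- grid oinstall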

  • Building an 11g RAC on VMware Workstation

    Reference: http://www.cnblogs.com/lhrbest/p/6337496.html

    Building an 11g RAC on VMware Workstation

    Planning

    (screenshot: planning table)

    In 11g, the OCR and voting disk go into one disk group; 1 GB is plenty (12c requires 6 GB or more).

    The number of disks the OCR needs depends on the redundancy type:
    external: one disk
    normal: three disks
    high: five disks

    Part 1: Prepare and configure the OS environment

    Creating shared disks in VMware

    Create the disks (in a Windows CMD prompt):

    cd /d "D:\VMware\VMware Workstation"
    vmware-vdiskmanager.exe -c -s 2g -a lsilogic -t 2 "E:\VM\sharedisk\ocr_vote1.vmdk"
    vmware-vdiskmanager.exe -c -s 2g -a lsilogic -t 2 "E:\VM\sharedisk\ocr_vote2.vmdk"
    vmware-vdiskmanager.exe -c -s 2g -a lsilogic -t 2 "E:\VM\sharedisk\ocr_vote3.vmdk"
    vmware-vdiskmanager.exe -c -s 20g -a lsilogic -t 2 "E:\VM\sharedisk\data1.vmdk"
    vmware-vdiskmanager.exe -c -s 20g -a lsilogic -t 2 "E:\VM\sharedisk\data2.vmdk"
    vmware-vdiskmanager.exe -c -s 10g -a lsilogic -t 2 "E:\VM\sharedisk\fra1.vmdk"
    vmware-vdiskmanager.exe -c -s 10g -a lsilogic -t 2 "E:\VM\sharedisk\fra2.vmdk"

    Modify the .vmx configuration file of both virtual machines, adding the following (with the VMs powered off):

    scsi1.present = "TRUE"
    scsi1.virtualDev = "lsilogic"
    scsi1.sharedBus = "virtual"
    scsi2.present = "TRUE"
    scsi2.virtualDev = "lsilogic"
    scsi2.sharedBus = "virtual"
    scsi3.present = "TRUE"
    scsi3.virtualDev = "lsilogic"
    scsi3.sharedBus = "virtual"
    
    scsi1:1.present = "TRUE"
    scsi1:1.mode = "independent-persistent"
    scsi1:1.filename = "E:\VM\sharedisk\ocr_vote1.vmdk"
    scsi1:1.deviceType = "plainDisk"
    
    scsi1:2.present = "TRUE"
    scsi1:2.mode = "independent-persistent"
    scsi1:2.filename = "E:\VM\sharedisk\ocr_vote2.vmdk"
    scsi1:2.deviceType = "plainDisk" 
    
    scsi1:3.present = "TRUE"
    scsi1:3.mode = "independent-persistent"
    scsi1:3.filename = "E:\VM\sharedisk\ocr_vote3.vmdk"
    scsi1:3.deviceType = "plainDisk"
    
    scsi2:1.present = "TRUE"
    scsi2:1.mode = "independent-persistent"
    scsi2:1.filename = "E:\VM\sharedisk\data1.vmdk"
    scsi2:1.deviceType = "plainDisk"
    
    scsi2:2.present = "TRUE"
    scsi2:2.mode = "independent-persistent"
    scsi2:2.filename = "E:\VM\sharedisk\data2.vmdk"
    scsi2:2.deviceType = "plainDisk"
    
    scsi3:1.present = "TRUE"
    scsi3:1.mode = "independent-persistent"
    scsi3:1.filename = "E:\VM\sharedisk\fra1.vmdk"
    scsi3:1.deviceType = "plainDisk"
    
    scsi3:2.present = "TRUE"
    scsi3:2.mode = "independent-persistent"
    scsi3:2.filename = "E:\VM\sharedisk\fra2.vmdk"
    scsi3:2.deviceType = "plainDisk"
    
    disk.locking = "false"
    diskLib.dataCacheMaxSize = "0"
    diskLib.dataCacheMaxReadAheadSize = "0"
    diskLib.DataCacheMinReadAheadSize = "0"
    diskLib.dataCachePageSize = "4096"
    diskLib.maxUnsyncedWrites = "0"

    Power on the virtual machines and check:

    [root@breath01 ~]# fdisk -l | grep /dev/s
    Disk /dev/sda: 53.7 GB, 53687091200 bytes
    /dev/sda1   *           1          64      512000   83  Linux
    /dev/sda2              64        6528    51915776   8e  Linux LVM
    Disk /dev/sdb: 2147 MB, 2147483648 bytes
    Disk /dev/sdc: 2147 MB, 2147483648 bytes
    Disk /dev/sdd: 2147 MB, 2147483648 bytes
    Disk /dev/sde: 21.5 GB, 21474836480 bytes
    Disk /dev/sdf: 21.5 GB, 21474836480 bytes
    Disk /dev/sdg: 10.7 GB, 10737418240 bytes
    Disk /dev/sdh: 10.7 GB, 10737418240 bytes

    Configure udev binding by scsi_id (run on both nodes, and check that the UUIDs match across nodes; a mismatch indicates a problem, as the sketch below verifies):
    [root@breath01 ~]# which scsi_id
    /sbin/scsi_id
    [root@breath01 ~]# echo "options=--whitelisted --replace-whitespace" > /etc/scsi_id.config
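
    Before generating the rules, it is worth confirming that each shared disk reports the same scsi_id on both nodes; a minimal sketch, assuming root ssh from breath01 to breath02 works:

    for i in b c d e f g h; do
      l=$(/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i)
      r=$(ssh breath02 /sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i)
      # The same UUID on both nodes means the vmdk really is shared.
      [ "$l" = "$r" ] && echo "sd$i OK ($l)" || echo "sd$i MISMATCH: $l vs $r"
    done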
    Create and run the binding script:

    [root@breath01 ~]# vi udev_oracle_asmdisk.sh
    #!/bin/bash
    # Back up any existing rules file, then emit one udev rule per shared disk,
    # keyed on its scsi_id, naming it /dev/asm-disk* with grid:asmadmin 0660.
    mv /etc/udev/rules.d/99-oracle-asmdevices.rules /etc/udev/rules.d/99-oracle-asmdevices.rules_bk
    for i in b c d e f g h ; do
    echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\", RESULT==\"`/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\""      >> /etc/udev/rules.d/99-oracle-asmdevices.rules
    done
    start_udev

    [root@breath01 ~]# chmod +x udev_oracle_asmdisk.sh
    [root@breath01 ~]# ./udev_oracle_asmdisk.sh

    [root@breath01 ~]# cat /etc/udev/rules.d/99-oracle-asmdevices.rules 
    KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29b20f39cafb83828a6f5d81377", NAME="asm-diskb", OWNER="grid", GROUP="asmadmin", MODE="0660"
    KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c299735ec6b2bd545c129b45a779", NAME="asm-diskc", OWNER="grid", GROUP="asmadmin", MODE="0660"
    KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c294eaa90d0713ff024b02084359", NAME="asm-diskd", OWNER="grid", GROUP="asmadmin", MODE="0660"
    KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29eb019bef08cb14b1b28b3c76d", NAME="asm-diske", OWNER="grid", GROUP="asmadmin", MODE="0660"
    KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29ac7a226b89bc873c295f38882", NAME="asm-diskf", OWNER="grid", GROUP="asmadmin", MODE="0660"
    KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29de259c634c2c35a0678254cc7", NAME="asm-diskg", OWNER="grid", GROUP="asmadmin", MODE="0660"
    KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c297047df5519efedffdf8165382", NAME="asm-diskh", OWNER="grid", GROUP="asmadmin", MODE="0660"
    
    [root@breath02 ~]# ll /dev/asm-disk*
    brw-rw----. 1 root root 8,  16 May 23 01:13 /dev/asm-diskb
    brw-rw----. 1 root root 8,  32 May 23 01:13 /dev/asm-diskc
    brw-rw----. 1 root root 8,  48 May 23 01:13 /dev/asm-diskd
    brw-rw----. 1 root root 8,  64 May 23 01:13 /dev/asm-diske
    brw-rw----. 1 root root 8,  80 May 23 01:13 /dev/asm-diskf
    brw-rw----. 1 root root 8,  96 May 23 01:13 /dev/asm-diskg
    brw-rw----. 1 root root 8, 112 May 23 01:13 /dev/asm-diskh

    Note: the owner and group are still root because the grid user and the asmadmin group have not been created yet; after they are created later, simply restart udev.

    1. Configure the network IPs on node 1 and node 2

    [root@breath01 ~]# ifconfig
    eth0      Link encap:Ethernet  HWaddr 00:0C:29:E2:9B:5C  
              inet addr:10.10.10.101  Bcast:10.10.10.255  Mask:255.255.255.0
              inet6 addr: fe80::20c:29ff:fee2:9b5c/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:765 errors:0 dropped:0 overruns:0 frame:0
              TX packets:599 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000 
              RX bytes:91063 (88.9 KiB)  TX bytes:114308 (111.6 KiB)
    
    eth1      Link encap:Ethernet  HWaddr 00:0C:29:E2:9B:66  
              inet addr:192.168.1.101  Bcast:192.168.1.255  Mask:255.255.255.0
              inet6 addr: fe80::20c:29ff:fee2:9b66/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:17 errors:0 dropped:0 overruns:0 frame:0
              TX packets:11 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000 
              RX bytes:1963 (1.9 KiB)  TX bytes:746 (746.0 b)
    
    eth2      Link encap:Ethernet  HWaddr 00:0C:29:E2:9B:70  
              inet addr:192.168.1.102  Bcast:192.168.1.255  Mask:255.255.255.0
              inet6 addr: fe80::20c:29ff:fee2:9b70/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:14 errors:0 dropped:0 overruns:0 frame:0
              TX packets:11 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000 
              RX bytes:1783 (1.7 KiB)  TX bytes:746 (746.0 b)
    
    [root@breath02 ~]# ifconfig 
    eth0      Link encap:Ethernet  HWaddr 00:0C:29:E4:4F:64  
              inet addr:10.10.10.102  Bcast:10.10.10.255  Mask:255.255.255.0
              inet6 addr: fe80::20c:29ff:fee4:4f64/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:55 errors:0 dropped:0 overruns:0 frame:0
              TX packets:68 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000 
              RX bytes:7986 (7.7 KiB)  TX bytes:10132 (9.8 KiB)
    
    eth1      Link encap:Ethernet  HWaddr 00:0C:29:E4:4F:78  
              inet addr:192.168.1.103  Bcast:192.168.1.255  Mask:255.255.255.0
              inet6 addr: fe80::20c:29ff:fee4:4f78/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:4 errors:0 dropped:0 overruns:0 frame:0
              TX packets:11 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000 
              RX bytes:240 (240.0 b)  TX bytes:726 (726.0 b)
    
    eth2      Link encap:Ethernet  HWaddr 00:0C:29:E4:4F:6E  
              inet addr:192.168.1.104  Bcast:192.168.1.255  Mask:255.255.255.0
              inet6 addr: fe80::20c:29ff:fee4:4f6e/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:1 errors:0 dropped:0 overruns:0 frame:0
              TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000 
              RX bytes:60 (60.0 b)  TX bytes:676 (676.0 b)

    2. Configure the hosts file

    The hosts file is identical on both nodes:
    [root@breath01 ~]# cat /etc/hosts

    # Do not remove the following line, or various programs
    # that require network functionality will fail.
    127.0.0.1               localbreath.localdomain localbreath
    ::1             localbreath6.localdomain6 localbreath6
    
    ##Public Network
    10.10.10.101         breath01
    10.10.10.102         breath02
    
    ##Public Virtual IP (VIP) 
    10.10.10.111            breath01-vip
    10.10.10.112            breath02-vip
    
    ##Private IP
    192.168.1.101          breath01-priv
    192.168.1.103          breath02-priv
    
    ##SCAN-ip
    10.10.10.100            breath-scan

    3. Configure the firewall and SELinux

    Run on both nodes:

    [root@breath01 ~]# service iptables stop
    iptables: Setting chains to policy ACCEPT: filter [ OK ]
    iptables: Flushing firewall rules: [ OK ]
    iptables: Unloading modules: [ OK ]
    [root@breath01 ~]# chkconfig iptables off
    [root@breath01 ~]# chkconfig --list | grep iptables
    iptables 0:off 1:off 2:off 3:off 4:off 5:off 6:off
    [root@breath01 ~]# getenforce
    Enforcing
    [root@breath01 ~]# vi /etc/selinux/config
    SELINUX=disabled
    [root@breath01 ~]# setenforce 0
    [root@breath01 ~]# getenforce
    Permissive

    4. Create users and groups

    The configuration must be identical on all nodes, UIDs and GIDs included (a cross-node check is sketched after the commands below).
    Create the groups:
    groupadd -g 1000 oinstall
    groupadd -g 1200 asmadmin
    groupadd -g 1201 asmdba
    groupadd -g 1202 asmoper
    groupadd -g 1300 dba
    groupadd -g 1301 oper
    Create the users:
    useradd -m -u 1100 -g oinstall -G asmadmin,asmdba,asmoper,dba -d /home/grid -s /bin/bash grid
    useradd -m -u 1101 -g oinstall -G dba,oper,asmdba -d /home/oracle -s /bin/bash oracle
    Set the passwords and verify the users:
    passwd oracle
    passwd grid
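
    A quick cross-node check that the UIDs and GIDs really match, assuming root ssh from breath01 to breath02:

    for u in grid oracle; do
      echo "breath01: $(id $u)"               # local UID/GID/group list
      echo "breath02: $(ssh breath02 id $u)"  # must match the line above exactly
    done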

    Because the grid user and the asmadmin group did not exist when udev bound the disks, reboot the machine here (or re-trigger udev, as sketched below), then check:
    [root@breath01 ~]# ll /dev/asm-disk*
    brw-rw---- 1 grid asmadmin 8,  16 May 23 01:53 /dev/asm-diskb
    brw-rw---- 1 grid asmadmin 8,  32 May 23 01:53 /dev/asm-diskc
    brw-rw---- 1 grid asmadmin 8,  48 May 23 01:53 /dev/asm-diskd
    brw-rw---- 1 grid asmadmin 8,  64 May 23 01:53 /dev/asm-diske
    brw-rw---- 1 grid asmadmin 8,  80 May 23 01:53 /dev/asm-diskf
    brw-rw---- 1 grid asmadmin 8,  96 May 23 01:53 /dev/asm-diskg
    brw-rw---- 1 grid asmadmin 8, 112 May 23 01:53 /dev/asm-diskh
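
    A sketch of the udev re-trigger alternative (RHEL/CentOS 6 syntax):

    start_udev
    # or, more selectively, reload the rules and replay block-device events:
    udevadm control --reload-rules
    udevadm trigger --subsystem-match=block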

    5. Check swap

    [root@breath01 ~]# grep SwapTotal /proc/meminfo
    SwapTotal: 3080184 kB

    6. Verify the nobody user exists on all nodes

    [root@breath01 ~]# id nobody
    uid=99(nobody) gid=99(nobody) groups=99(nobody)

    7. Time synchronization

    RAC can synchronize time via NTP or CTSS.
    CTSS is used here, so the NTP service (on by default) must be disabled.
    Configure both node 1 and node 2 as follows:
    [root@breath02 ~]# service ntpd stop
    [root@breath02 ~]# chkconfig ntpd off
    [root@breath02 ~]# chkconfig --list ntpd
    [root@breath02 ~]# mv /etc/ntp.conf /etc/ntp.conf.old

    8. Create the directory structure (all nodes)

    [root@breath01 ~]# mkdir -pv /u01/app/11.2.0/grid
    [root@breath01 ~]# mkdir -pv /u01/app/grid
    [root@breath01 ~]# mkdir -pv /u01/app/oracle/product/11.2.0/dbhome_1

    Permissions:
    [root@breath01 ~]# chown -R oracle.oinstall /u01/
    [root@breath01 ~]# chown -R grid.oinstall /u01/app/grid/
    [root@breath01 ~]# chown -R grid.oinstall /u01/app/11.2.0
    [root@breath01 ~]# chmod -R 775 /u01/
    [root@breath01 ~]# ll /u01/app/
    total 12
    drwxrwxr-x 3 grid oinstall 4096 Oct 26 15:03 11.2.0
    drwxrwxr-x 2 grid oinstall 4096 Oct 26 15:03 grid
    drwxrwxr-x 3 oracle oinstall 4096 Oct 26 15:03 oracle

    9. Modify /etc/security/limits.conf

    [root@breath01 ~]# echo "grid soft nproc 2047
    grid hard nproc 16384
    grid soft nofile 1024
    grid hard nofile 65536
    oracle soft nproc 2047
    oracle hard nproc 16384
    oracle soft nofile 1024
    oracle hard nofile 65536" >> /etc/security/limits.conf

    10. Modify /etc/pam.d/login (all nodes)

    [root@breath01 ~]# echo "session required pam_selinux.so" >> /etc/pam.d/login

    11. Modify shell limits (all nodes)

    Change the default shell startup file so the ulimit settings apply to all Oracle installation owners, adding the following to /etc/profile:
    [root@breath01 ~]# vi /etc/profile

    #oracle export and  ulimit setting
    export ORACLE_HOME=/u01/app/11.2.0/grid
    export PATH=$PATH:$ORACLE_HOME/bin
    
    if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
            if [ $SHELL = "/bin/ksh" ]; then
                    ulimit -p 16384
                    ulimit -n 65536
            else
                    ulimit -u 16384 -n 65536
            fi      
            umask 022
    fi

    12. Modify /etc/sysctl.conf

    The values in the official manual are minimums; if your system already uses larger values, do not lower them.
    [root@breath01 ~]# vi /etc/sysctl.conf

    #oracle setting
    kernel.shmmax = 4294967295              
    kernel.shmall = 2097152                            
    kernel.shmmni = 4096
    kernel.sem = 250 32000 100 128
    fs.file-max = 6815744
    net.ipv4.ip_local_port_range = 9000 65500
    net.core.rmem_default=262144
    net.core.rmem_max=4194304
    net.core.wmem_default=262144
    net.core.wmem_max=1048576
    fs.aio-max-nr=1048576

    [root@breath01 ~]# sysctl -p

    13. Configure user environment variables (all nodes)

    Note: change the ORACLE_SID value per node (+ASM1/+ASM2 for grid, brac1/brac2 for oracle).
    grid user:
    vi /home/grid/.bash_profile

    umask 022
    export ORACLE_BASE=/u01/app/grid
    export ORACLE_HOME=/u01/app/11.2.0/grid
    export PATH=$ORACLE_HOME/bin:$PATH
    export LD_LIBRARY_PATH=$ORACLE_HOME/lib
    export ORACLE_SID=+ASM1
    export NLS_DATE_FORMAT="YYYY:MM:DD HH24:MI:SS"
    alias sqlplus='rlwrap sqlplus'
    alias asmcmd='rlwrap asmcmd'

    oracle user:
    vi /home/oracle/.bash_profile and add:

    umask 022
    
    export ORACLE_SID=brac1
    export ORACLE_BASE=/u01/app/oracle
    export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
    export PATH=$ORACLE_HOME/bin:$PATH
    export TNS_ADMIN=$ORACLE_HOME/network/admin
    export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
    export NLS_DATE_FORMAT="YYYY:MM:DD HH24:MI:SS"
    export NLS_LANG="AMERICAN_CHINA.ZHS16GBK"
    export TMP=/tmp
    export EDITOR=vi
    
    alias sqlplus='rlwrap sqlplus'
    alias rman='rlwrap rman'
    alias asmcmd='rlwrap asmcmd'

    14. Configure yum and install the required packages (all nodes)

    Configure yum.
    Method 1: a local yum repository.
    After mounting the installation DVD:
    [root@breath01 ~]#mount /dev/cdrom /mnt/
    [root@breath01 ~]#mv /etc/yum.repos.d/ /etc/yum.repos.d.bak
    [root@breath01 ~]#mkdir -pv /etc/yum.repos.d
    [root@breath01 ~]#vi /etc/yum.repos.d/local.repo
    [centos6.5]
    name=yum
    baseurl=file:///mnt
    enabled=1
    gpgcheck=0
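
    A quick sanity check that the new repo is usable, as a sketch:

    yum clean all   # drop metadata cached from the previous repo configuration
    yum repolist    # the centos6.5 repo should report a non-zero package count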

    Method 2: a public yum mirror (requires Internet access), e.g. the Aliyun CentOS repo:
    wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-6.repo

    Install the required packages, following the official documentation for your OS: https://docs.oracle.com/cd/E11882_01/install.112/e47689/pre_install.htm#BABCFJFG

    [root@breath01 ~]# yum -y install binutils-* compat-lib* gcc* glibc-2* glibc-devel* ksh libgcc-* libstdc* libaio-* make-* sysstat-*

    15. Upload the installation packages and install rlwrap

    The upload itself is omitted here.
    Install rlwrap (all nodes):
    [root@breath01 ~]# yum install -y readline-dev*
    [root@breath01 ~]# tar -zvxf rlwrap-0.42.tar.gz
    [root@breath01 ~]# cd rlwrap-0.42
    [root@breath01 rlwrap-0.42]# ./configure
    [root@breath01 rlwrap-0.42]# make && make install
    Unzip the installation packages on node 1:
    [root@breath01 ~]# unzip p13390677_112040_Linux-x86-64_1of7.zip -d /tmp/ && unzip p13390677_112040_Linux-x86-64_2of7.zip -d /tmp/
    [root@breath01 ~]# unzip p13390677_112040_Linux-x86-64_3of7.zip -d /tmp/ && unzip p13390677_112040_Linux-x86-64_4of7.zip -d /tmp/

    16. Configure SSH user equivalence (optional)

    Note: this step can be skipped; the graphical GI installer can configure it later with one click.
    Use the sshUserSetup.sh tool shipped with the GI installation package; one command per user (as root):
    [root@breath01 ~]# /tmp/grid/sshsetup/sshUserSetup.sh -user grid -hosts "breath01 breath02" -advanced -exverify -confirm
    [root@breath01 ~]# /tmp/grid/sshsetup/sshUserSetup.sh -user oracle -hosts "breath01 breath02" -advanced -exverify -confirm
    Answer yes at the prompts, enter the password, press Enter, and it is done.
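
    If equivalence is in place, every ssh below prints the date without prompting for a password; a minimal check run as root:

    for u in grid oracle; do
      echo "== $u =="
      su - $u -c 'for h in breath01 breath02; do echo -n "$h: "; ssh $h date; done'
    done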

    Part 2: Installing Grid Infrastructure

    1. Validate the environment with the check script

    Switch to the grid user and run the script from the GI installation directory:
    su - grid
    [grid@breath01 ~]$ cd /tmp/grid/

    [grid@breath01 grid]$ ./runcluvfy.sh stage -pre crsinst -n breath01,breath02 -fixup -verbose

    2. Handle the failed checks

    The checks that currently fail are listed below (some need fixing, others can be left alone):

    Check: Package existence for "elfutils-libelf-devel" 
      Node Name     Available                 Required                  Status    
      ------------  ------------------------  ------------------------  ----------
      breath02      missing                   elfutils-libelf-devel-0.97  failed    
      breath01      missing                   elfutils-libelf-devel-0.97  failed    
    Result: Package existence check failed for "elfutils-libelf-devel"
    Check: Package existence for "pdksh" 
      Node Name     Available                 Required                  Status    
      ------------  ------------------------  ------------------------  ----------
      breath02      missing                   pdksh-5.2.14              failed    
      breath01      missing                   pdksh-5.2.14              failed    
    Result: Package existence check failed for "pdksh"

    The pdksh package can be found online; a Baidu netdisk share:
    https://pan.baidu.com/s/1JRo4umLCXlvrkWG26716Tg

    Install the two missing packages:
    yum install -y elfutils-libelf-devel
    rpm -ivh pdksh-5.2.14-37.el5_8.1.x86_64.rpm --nodeps

    3. Run the installer

    [grid@breath01 grid]$ ./runInstaller
    ..... selected installation screenshots
    <1> On the screen below you can click Setup to configure SSH equivalence, replacing the earlier manual setup
    (screenshot)
    <2> On the screen below, eth1 and eth2 together provide HAIP over dual NICs
    (screenshot)
    <3> Configure the ASM disk group that stores the OCR
    (screenshots)
    <4> Set a unified password
    (screenshot)
    <5> Click through the following screens
    (screenshots)

    <6> Check failures
    (screenshot)

    Fix the first one, the cvuqdisk package:
    Enter the rpm directory under the grid installation media:
    [root@breath01 ~]# cd /tmp/grid/rpm/
    [root@breath01 rpm]# ls
    cvuqdisk-1.0.9-1.rpm
    [root@breath01 rpm]# rpm -ivh cvuqdisk-1.0.9-1.rpm
    Preparing...              ########################################### [100%]
    Using default group oinstall to install package
    1:cvuqdisk ########################################### [100%]
    [root@breath01 rpm]# scp cvuqdisk-1.0.9-1.rpm breath02:/root/
    [root@breath02 ~]# rpm -ivh cvuqdisk-1.0.9-1.rpm

    The second, "device checks for ASM": once the shared ASM disk setup is confirmed fine, it can be ignored.

    <7> Finally run the scripts (as root)
    Note: keep the execution order the prompt describes; finish on the local node before running on the other nodes.
    The orainstRoot.sh script:
    [root@breath01 ~]# /u01/app/oraInventory/orainstRoot.sh
    [root@breath02 ~]# /u01/app/oraInventory/orainstRoot.sh
    The root.sh script:
    [root@breath01 ~]# /u01/app/11.2.0/grid/root.sh

    ....
    ....
    CRS-4266: Voting file(s) successfully replaced
    ##  STATE    File Universal Id                File Name Disk group
    --  -----    -----------------                --------- ---------
     1. ONLINE   2eb1efe181f74f2cbf1b6bb920af5614 (/dev/asm-diskb) [OCR]
     2. ONLINE   80cdee29d8bb4f8cbf954070fc3c73ff (/dev/asm-diskc) [OCR]
     3. ONLINE   1a6a1296bed14f1cbf75fa831960f7f2 (/dev/asm-diskd) [OCR]
    Located 3 voting disk(s).
    CRS-2672: Attempting to start 'ora.asm' on 'breath01'
    CRS-2676: Start of 'ora.asm' on 'breath01' succeeded
    CRS-2672: Attempting to start 'ora.OCR.dg' on 'breath01'
    CRS-2676: Start of 'ora.OCR.dg' on 'breath01' succeeded
    Configure Oracle Grid Infrastructure for a Cluster ... succeeded

    [root@breath02 ~]# /u01/app/11.2.0/grid/root.sh

    ...
    ...
    An active cluster was found during exclusive startup, restarting to join the cluster
    Configure Oracle Grid Infrastructure for a Cluster ... succeeded

    Finally click "OK" and let the installation continue to the end.
    If it reports errors at the end, check the log:

    INFO: ERROR: 
    INFO: PRVG-1101 : SCAN name "breath-scan" failed to resolve
    INFO: ERROR: 
    INFO: PRVF-4657 : Name resolution setup check for "breath-scan" (IP address: 10.10.10.100) failed
    INFO: ERROR: 
    INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "breath-scan"
    INFO: Verification of SCAN VIP and Listener setup failed

    The SCAN-related errors can be ignored; the installation actually succeeded, as the checks below show.
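
    A quick way to confirm the SCAN itself is healthy despite PRVF-4664 (expected here, since the hosts file carries only a single SCAN address), sketched below:

    getent hosts breath-scan                      # should print 10.10.10.100
    su - grid -c 'srvctl status scan'             # SCAN VIP scan1 should be running
    su - grid -c 'srvctl status scan_listener'    # LISTENER_SCAN1 should be running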

    4. Verify after the installation:

    [grid@breath01 ~]$ crsctl check crs

    CRS-4638: Oracle High Availability Services is online
    CRS-4537: Cluster Ready Services is online
    CRS-4529: Cluster Synchronization Services is online
    CRS-4533: Event Manager is online

    [grid@breath01 ~]# crsctl stat res -t

    --------------------------------------------------------------------------------
    NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
    --------------------------------------------------------------------------------
    Local Resources
    --------------------------------------------------------------------------------
    ora.LISTENER.lsnr
                   ONLINE  ONLINE       breath01                                     
                   ONLINE  ONLINE       breath02                                     
    ora.OCR.dg
                   ONLINE  ONLINE       breath01                                     
                   ONLINE  ONLINE       breath02                                     
    ora.asm
                   ONLINE  ONLINE       breath01                 Started             
                   ONLINE  ONLINE       breath02                 Started             
    ora.gsd
                   OFFLINE OFFLINE      breath01                                     
                   OFFLINE OFFLINE      breath02                                     
    ora.net1.network
                   ONLINE  ONLINE       breath01                                     
                   ONLINE  ONLINE       breath02                                     
    ora.ons
                   ONLINE  ONLINE       breath01                                     
                   ONLINE  ONLINE       breath02                                     
    --------------------------------------------------------------------------------
    Cluster Resources
    --------------------------------------------------------------------------------
    ora.LISTENER_SCAN1.lsnr
          1        ONLINE  ONLINE       breath01                                     
    ora.breath01.vip
          1        ONLINE  ONLINE       breath01                                     
    ora.breath02.vip
          1        ONLINE  ONLINE       breath02                                     
    ora.cvu
          1        ONLINE  ONLINE       breath01                                     
    ora.oc4j
          1        ONLINE  ONLINE       breath01                                     
    ora.scan1.vip
          1        ONLINE  ONLINE       breath01             

    5. Create disk groups with asmca

    The GI cluster environment is now complete; create the DATA and FRA disk groups the database needs:
    [grid@breath01 ~]$ asmca
    (screenshots)

    Finally click "Exit" to quit.

    Part 3: Installing the database

    1. As the oracle user, enter the database installation directory and install

    [grid@breath01 grid]$ su - oracle
    [oracle@breath01 ~]$ cd /tmp/database/
    [oracle@breath01 database]$ ./runInstaller

    Install the software.
    The only difference from a single-instance installation is the screen below; everything else is the same and is not repeated here.
    (screenshot)

    Continue until the database software installation completes.

    2. Create the database with DBCA

    Run dbca as the oracle user on node 1.
    <1> Make the SID and the global name identical.
    Note the 11g naming rules:
    a single-instance SID is at most 12 characters;
    a RAC SID is at most 8 characters.
    <2> Configure memory and the character set.
    Recommendation: Oracle memory = SGA + PGA = 70% to 80% of physical RAM (a quick way to compute this is sketched below).
    Disable the Automatic Memory Management feature, to ease building a Data Guard setup later.
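
    A rough sketch of that sizing arithmetic on the node itself:

    # Print 75% of physical RAM in MB as a starting point for SGA + PGA.
    awk '/MemTotal/ {printf "SGA+PGA target: %d MB\n", $2 * 0.75 / 1024}' /proc/meminfo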
    (screenshots)

    3. Verify

    [oracle@breath01 ~]$ crsctl stat res -t

    --------------------------------------------------------------------------------
    NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
    --------------------------------------------------------------------------------
    Local Resources
    --------------------------------------------------------------------------------
    ora.DATA.dg
                   ONLINE  ONLINE       breath01                                     
                   ONLINE  ONLINE       breath02                                     
    ora.FRA.dg
                   ONLINE  ONLINE       breath01                                     
                   ONLINE  ONLINE       breath02                                     
    ora.LISTENER.lsnr
                   ONLINE  ONLINE       breath01                                     
                   ONLINE  ONLINE       breath02                                     
    ora.OCR.dg
                   ONLINE  ONLINE       breath01                                     
                   ONLINE  ONLINE       breath02                                     
    ora.asm
                   ONLINE  ONLINE       breath01                 Started             
                   ONLINE  ONLINE       breath02                 Started             
    ora.gsd
                   OFFLINE OFFLINE      breath01                                     
                   OFFLINE OFFLINE      breath02                                     
    ora.net1.network
                   ONLINE  ONLINE       breath01                                     
                   ONLINE  ONLINE       breath02                                     
    ora.ons
                   ONLINE  ONLINE       breath01                                     
                   ONLINE  ONLINE       breath02                                     
    --------------------------------------------------------------------------------
    Cluster Resources
    --------------------------------------------------------------------------------
    ora.LISTENER_SCAN1.lsnr
          1        ONLINE  ONLINE       breath01                                     
    ora.brac.db
          1        ONLINE  ONLINE       breath01                 Open                
          2        ONLINE  ONLINE       breath02                 Open                
    ora.breath01.vip
          1        ONLINE  ONLINE       breath01                                     
    ora.breath02.vip
          1        ONLINE  ONLINE       breath02                                     
    ora.cvu
          1        ONLINE  ONLINE       breath01                                     
    ora.oc4j
          1        ONLINE  ONLINE       breath01                                     
    ora.scan1.vip
          1        ONLINE  ONLINE       breath01                                        

    [oracle@breath01 ~]$ srvctl status database -d brac
    Instance brac1 is running on node breath01
    Instance brac2 is running on node breath02

    [oracle@breath01 ~]$ srvctl status nodeapps
    VIP breath01-vip is enabled
    VIP breath01-vip is running on node: breath01
    VIP breath02-vip is enabled
    VIP breath02-vip is running on node: breath02
    Network is enabled
    Network is running on node: breath01
    Network is running on node: breath02
    GSD is disabled
    GSD is not running on node: breath01
    GSD is not running on node: breath02
    ONS is enabled
    ONS daemon is running on node: breath01
    ONS daemon is running on node: breath02

    [oracle@breath01 ~]$ srvctl config database -d brac -a
    Database unique name: brac
    Database name: brac
    Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
    Oracle user: oracle
    Spfile: +DATA/brac/spfilebrac.ora
    Domain:
    Start options: open
    Stop options: immediate
    Database role: PRIMARY
    Management policy: AUTOMATIC
    Server pools: brac
    Database instances: brac1,brac2
    Disk Groups: DATA,FRA
    Mount point paths:
    Services:
    Type: RAC
    Database is enabled
    Database is administrator managed

    [oracle@breath01 ~]$ srvctl status database -d brac -v
    Instance brac1 is running on node breath01. Instance status: Open.
    Instance brac2 is running on node breath02. Instance status: Open.

    [oracle@breath01 ~]$ srvctl status asm -a
    ASM is running on breath02,breath01
    ASM is enabled.

    [oracle@breath01 ~]$ srvctl status listener -v
    Listener LISTENER is enabled
    Listener LISTENER is running on node(s): breath02,breath01

    [oracle@breath01 ~]$ srvctl config nodeapps -a -g -s -l
    Warning:-l option has been deprecated and will be ignored.
    Network exists: 1/10.10.10.0/255.255.255.0/eth0, type static
    VIP exists: /breath01-vip/10.10.10.111/10.10.10.0/255.255.255.0/eth0, hosting node breath01
    VIP exists: /breath02-vip/10.10.10.112/10.10.10.0/255.255.255.0/eth0, hosting node breath02
    GSD exists
    ONS exists: Local port 6100, remote port 6200, EM port 2016
    Name: LISTENER
    Network: 1, Owner: grid
    Home:
    /u01/app/11.2.0/grid on node(s) breath01,breath02
    End points: TCP:1521

    [oracle@breath01 ~]$ cluvfy comp clocksync -verbose

    Verifying Clock Synchronization across the cluster nodes 
    
    Checking if Clusterware is installed on all nodes...
    Check of Clusterware install passed
    
    Checking if CTSS Resource is running on all nodes...
    Check: CTSS Resource running on all nodes
      Node Name                             Status                  
      ------------------------------------  ------------------------
      breath01                              passed                  
    Result: CTSS resource check passed
    
    
    Querying CTSS for time offset on all nodes...
    Result: Query of CTSS for time offset passed
    
    Check CTSS state started...
    Check: CTSS state
      Node Name                             State                   
      ------------------------------------  ------------------------
      breath01                              Active                  
    CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...
    Reference Time Offset Limit: 1000.0 msecs
    Check: Reference Time Offset
      Node Name     Time Offset               Status                  
      ------------  ------------------------  ------------------------
      breath01      0.0                       passed                  
    
    Time offset is within the specified limits on the following set of nodes: 
    "[breath01]" 
    Result: Check of clock time offsets passed
    
    
    Oracle Cluster Time Synchronization Services check passed
    
    Verification of Clock Synchronization across the cluster nodes was successful. 

    [oracle@breath01 ~]$ sqlplus / as sysdba

    SYS@brac1>select INSTANCE_NAME,HOST_NAME,VERSION,STARTUP_TIME,STATUS,ACTIVE_STATE,INSTANCE_ROLE,DATABASE_STATUS from gv$INSTANCE;
    
    INSTANCE_N HOST_NAME       VERSION    STARTUP_TIME        STATUS          ACTIVE_STATE    INSTANCE_ROLE        DATABASE_STATUS
    ---------- --------------- ---------- ------------------- --------------- --------------- -------------------- ---------------
    brac2      breath02        11.2.0.4.0 2018-05-25 01:13:16 OPEN            NORMAL          PRIMARY_INSTANCE     ACTIVE
    brac1      breath01        11.2.0.4.0 2018-05-25 01:13:12 OPEN            NORMAL          PRIMARY_INSTANCE     ACTIVE
  • Deploying RAC: 2.7.1 Installing Grid Infrastructure

    1,000+ reads    2013-11-27 12:00:38

    Grid Infrastructure is a new installation package in 11gR2, installed under a dedicated grid user. It is a prerequisite for Clusterware, ASM, ACFS, ASM dynamic volumes, and related features. Grid Infrastructure bundles the Clusterware cluster software and the ASM storage software; ASM administration is likewise separated out, with three new OS groups, ASMADMIN, ASMDBA and ASMOPER, to refine ASM management. In 11g, ASM can store every file type, including the OCR and the voting disk. This book also uses ASM to manage storage for the OCR, voting disk, data files and the fast recovery area.
    Grid Infrastructure and the Database software can be installed under one user or under separate users; to separate administration duties, Oracle recommends different users, installing Grid Infrastructure as grid and the Database as oracle.
    If a single oracle user installs all the software, set the ORACLE_HOME and ORACLE_SID environment variables before installing. Grid Infrastructure and the Database software must be installed in different directories.

     

    The steps to install Grid Infrastructure are as follows:
    Step 1. Log in to the server as root through VNC and start the installer:
    # xhost +
    # su - grid
    $ xclock    (a graphical clock appearing confirms the X environment works under grid; press Ctrl+C to stop it)
    $ ./runInstaller
    Step 2. Select "Install and Configure Grid Infrastructure for a Cluster".
    Step 3. When the cursor moves over an option, a question mark appears at its far left; clicking it shows a detailed description of the option, and clicking "more" gives further help, as shown in Figure 2-25. The other installation options are discussed in Chapter 7.
    Step 4. Select "Advanced Installation".
    Note: the Typical Installation offers a quick way to install Grid Infrastructure, simplifying the process with Oracle defaults and fewer customization options. Advanced Installation meets more flexible configuration needs, offering more customization options for more complex environments.

    Step 5. Click "Next".

    Figure 2-25   Help information

    Step 6. Configure the Grid Infrastructure information.
    (1) GNS resolution configuration
    As shown in Figure 2-26, fill in the correct SCAN name from the configuration in section 2.5.3, select "Configure GNS", and fill in the correct "GNS Sub Domain" and "GNS VIP Address".


    Figure 2-26   GNS resolution configuration
    Table 2-11 explains the parameters above.

    Table 2-11   GNS resolution configuration parameters
    Item              Value                                   Meaning
    Cluster Name      rhel-cluster                            Cluster name
    SCAN Name         rhel-cluster-scan.grid.example.com      SCAN name, the interface that offers connection services; may also be written as rhel-cluster-scan
    SCAN Port         1521                                    SCAN listener port
    GNS Sub Domain    grid.example.com                        Name-server subdomain; every name GNS resolves must fall under this domain
    GNS VIP Address   192.168.4.200                           Address that accepts client GNS resolution requests
    (2) DNS resolution configuration
    As shown in Figure 2-27, fill in the correct SCAN name following the DNS resolution configuration in section 2.5.3. Deselect "Configure GNS", and the SCAN name will be resolved via DNS.

    Figure 2-27   DNS resolution configuration
    Note: starting with 11g, every installation step is checked immediately; a misconfiguration at any step raises an error or warning right away.
    Step 7. Click "Add" to add "rhel2". With GNS configured, Virtual IP Name shows AUTO; otherwise it is the VIP address set in the hosts file.
    Step 8. Click "SSH Connectivity", enter the grid password in OS Password, and click "Setup" to create grid user equivalence between the nodes.
    Step 9. Confirm the network interface configuration is correct and click "Next".
    Note: starting with 11.2.0.2, Grid Infrastructure supports HAIP. NICs can be selected directly in the GUI, so two or more networks serve as the interconnect and the failure of one network does not fail the heartbeat; details are discussed in Chapter 3. Before 11.2.0.2, traditional OS bonding can be used instead; details are discussed in Chapter 15.
    Step 10. Choose ASM as the storage for the OCR and voting disk.
    Step 11. Configure the ASM disk group that stores the OCR and voting disk, as shown in Figure 2-28.
    Note: for a disk group holding the OCR and voting disk, the External, Normal and High redundancy levels require 1, 3 and 5 failgroups respectively, i.e. at least 1, 3 and 5 ASM disks.
    For a disk group not holding the OCR and voting disk, the External, Normal and High redundancy levels require at least 1, 2 and 3 failgroups, i.e. at least 1, 2 and 3 ASM disks.
    If these rules are not followed, the following error is raised:
    [INS-30510] Insufficient number of ASM disks selected.

    Figure 2-28   OCR and voting disk storage configuration
    Step 12. Set the passwords, as shown in Figure 2-29.

    Figure 2-29   Specifying the ASM passwords
    Further reading: minimum password requirements
    - A password cannot exceed 30 characters.
    - A password cannot contain the symbols ! @ % ^ & * () + = \ | ` ~ [ { ] } ; : ' " , <> ?.
    - A password cannot equal the username.
    - A password cannot be empty.
    - The SYS password cannot be change_on_install.
    - The ASMSNMP password cannot be asmsnmp.
    - With a unified password, it cannot be change_on_install or asmsnmp.
    Oracle passwords should contain at least one lowercase letter, one uppercase letter and one digit, and be at least 8 characters long.
    11gR2 ASM introduces a new user, ASMSNMP. The default SYS user holds the SYSASM privilege to administer the ASM instance, but Oracle recommends a lower-privileged user with SYSDBA, ASMSNMP, for monitoring the ASM instance. Figure 2-29 gives SYS and ASMSNMP the same password; they can also be set separately.
    Step 13. Do not configure IPMI; IPMI is discussed in Chapter 3.
    Step 14. Confirm the correct OS groups: OSDBA (asmdba), OSOPER (asmoper), OSASM (asmadmin).
    Step 15. Confirm the installation path.
    Step 16. Confirm the Inventory directory.


    Step 17. Pre-installation checks begin. Figure 2-30 shows the check results page; here OUI calls the CVU verification tool.

    Figure 2-30   Check results page
    If the checks do not all pass, each check item is listed with its status and whether it is fixable. Fixable=Yes means Oracle can repair it itself through a generated script: click "Fix & Check Again" at the top, and the prompt box shown in Figure 2-31 appears; clicking "OK" reruns the self-check and generates a runfixup.sh repair script under /tmp/CVU_11.2.0.2.0_grid/, and running that script automatically fixes every issue marked Fixable=Yes. The fixup feature is new in CVU and generates the repair script directly under /tmp.
    As Figure 2-31 shows, the problem is that the cvuqdisk package is not installed; running the Oracle-generated runfixup.sh script installs it automatically.
    [root@rhel1 CVU_11.2.0.2.0_grid]# ./runfixup.sh
    /usr/bin/id
    Response file being used is :./fixup.response
    Enable file being used is :./fixup.enable
    Log file location: ./orarun.log
    Installing Package /tmp/CVU_11.2.0.2.0_grid//cvuqdisk-1.0.9-1.rpm
    Preparing...              ########################################### [100%]
       1:cvuqdisk             ########################################### [100%]


    Figure 2-31   Fixup script prompt box
    Step 18. If the checks find no errors, the installer jumps straight to the Summary page.
    Step 19. Start the installation.
    Step 20. Run orainstRoot.sh and root.sh as root on every RAC node. root.sh must not run on all nodes at once; only after root.sh succeeds on the first node may it run on the other nodes in parallel. orainstRoot.sh sets the inventory directory permissions; root.sh invokes a series of other scripts, mainly covering Clusterware and system configuration.
    Step 21. When the scripts finish, click "OK" and the installer continues the remaining configuration; if nothing fails, it jumps straight to the Finish page, otherwise check the detailed logs under the Inventory directory.
    During the Grid Infrastructure installation, if DNS resolves the SCAN name, the OUI may report errors at the final CVU check, with log entries like the following:
    INFO: Checking name resolution setup for "rhel-cluster.grid.example.com."...
    INFO: ERROR:
    INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "rhel-cluster.grid.example.com."
    INFO: ERROR:
    INFO: PRVF-4657 : Name resolution setup check for "rhel-cluster.grid.example.com." (IP address: 10.168.4.149) failed
    INFO: ERROR:
    INFO: PRVF-4657 : Name resolution setup check for "rhel-cluster.grid.example.com." (IP address: 10.168.4.150) failed
    INFO: ERROR:
    INFO: PRVF-4657 : Name resolution setup check for "rhel-cluster.grid.example.com." (IP address: 10.168.4.151) failed
    INFO: ERROR:
    INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "rhel-cluster.grid.example.com."
    INFO: Verification of SCAN VIP and Listener setup failed
    If the following command resolves the three addresses behind the domain name correctly, the error can be ignored:
    nslookup rhel-cluster.grid.example.com
    If the SCAN name is resolved through the hosts file, the resolution method should be changed to DNS or GNS.
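
    A DNS-resolved SCAN should return all three addresses, and successive queries typically rotate their order (round-robin); a minimal check:

    for i in 1 2 3; do
      # Each pass should list the three SCAN addresses 10.168.4.149-151.
      nslookup rhel-cluster.grid.example.com | awk '/^Address/ && !/#/ {print $2}'
      echo ---
    done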


  • The point to note here: do not keep the system hostname on the loopback address line. The process is recorded here for reference. 1. Reproducing the problem 1) Notes from RAC node 1 (1) Entries recorded in the system hosts file [root@node1 ~]# cat /etc/hosts #...
  • RAC cluster deployment

    2014-09-16 14:58:50
    RAC cluster deployment
  • RAC deployment plan

    2018-12-19 17:14:15
    A detailed RAC deployment plan, including the system architecture, installation steps, architecture diagrams, and more.
  • The document describes how to deploy a RAC environment so the database performs better. It covers four areas: tablespace types, object creation, sequences, and application performance tuning, closing with a brief summary: for better performance, use locally managed tablespaces with automatic...
