
    Oracle 19c RAC on Red Hat Enterprise Linux 8.2 on VMware Workstation 16.0

    1: Edit the RAC1 and RAC2 VMware (.vmx) configuration files

    Add the following lines to both the RAC1 and RAC2 .vmx files:
    diskLib.dataCacheMaxSize="0"
    diskLib.dataCacheMaxReadAheadSize="0"
    diskLib.dataCacheMinReadAheadSize="0"
    diskLib.dataCachePageSize="4096"
    diskLib.maxAUnsyncedWrites="0"
    disk.EnableUUID="TRUE"
    scsi1:0.deviceType="disk"
    scsi1:1.deviceType="disk"
    scsi1:2.deviceType="disk"
    scsi1:3.deviceType="disk"
    scsi1:4.deviceType="disk"
    disk.locking="false"
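    These diskLib/disk settings disable VMware's host-side disk caching and locking so both VMs can open the same shared disks. A small sketch (the .vmx path in the example is a placeholder) of applying them idempotently from the host:

```shell
# Sketch: append the shared-disk settings to a powered-off VM's .vmx file.
# The function skips lines that are already present, so it is safe to re-run.
add_vmx_settings() {
  local vmx="$1"
  while IFS= read -r line; do
    grep -qxF "$line" "$vmx" || printf '%s\n' "$line" >> "$vmx"
  done <<'EOF'
diskLib.dataCacheMaxSize="0"
diskLib.dataCacheMaxReadAheadSize="0"
diskLib.dataCacheMinReadAheadSize="0"
diskLib.dataCachePageSize="4096"
diskLib.maxAUnsyncedWrites="0"
disk.EnableUUID="TRUE"
disk.locking="false"
EOF
}
# Example (path is a placeholder): add_vmx_settings ~/vmware/rac1/rac1.vmx
```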
    

    2: Install RAC1 and RAC2 with the same OS version (Red Hat Enterprise Linux 8.2)

    3: Download the Oracle installation packages

    LINUX.X64_193000_grid_home.zip and LINUX.X64_193000_db_home.zip

    4: Configure the shared disks on rac1 and rac2

    4.1 Partition each of the five shared disks with fdisk
    --RAC1 (IP address 192.168.0.28)
    
    fdisk -l
    fdisk /dev/sdb
    
    fdisk /dev/sdc
    
    fdisk /dev/sdd
    
    fdisk /dev/sde
    
    fdisk /dev/sdf
    
    --RAC2 (IP address 192.168.0.30): repeat on the second node, saving the partition tables
    fdisk /dev/sdb
    
    fdisk /dev/sdc
    fdisk /dev/sdd
    fdisk /dev/sde
    
    fdisk /dev/sdf
    
    4.2 Configure udev rules to bind each disk partition to a persistent device name
    for i in b c d e f
    do
    echo "KERNEL==\"sd?1\", SUBSYSTEM==\"block\",PROGRAM==\"/usr/lib/udev/scsi_id -g -u -d /dev/\$parent\", RESULT==\"`/usr/lib/udev/scsi_id -g -u -d /dev/sd$i`\", SYMLINK+=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\""
    done
    KERNEL=="sd?1", SUBSYSTEM=="block",PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="36000c297eb92bdc8e6e60d60cab4b412", SYMLINK+="asm-diskb", OWNER="grid", GROUP="asmadmin", MODE="0660"
    KERNEL=="sd?1", SUBSYSTEM=="block",PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="36000c29ebaaf07ed09b0c9911f5b8094", SYMLINK+="asm-diskc", OWNER="grid", GROUP="asmadmin", MODE="0660"
    KERNEL=="sd?1", SUBSYSTEM=="block",PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="36000c29455bd6cb5318318507cafd8cf", SYMLINK+="asm-diskd", OWNER="grid", GROUP="asmadmin", MODE="0660"
    KERNEL=="sd?1", SUBSYSTEM=="block",PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="36000c29721ff5fd4fe64098b594eddf5", SYMLINK+="asm-diske", OWNER="grid", GROUP="asmadmin", MODE="0660"
    KERNEL=="sd?1", SUBSYSTEM=="block",PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="36000c2999eb5810b9c3c7f5291b49f60", SYMLINK+="asm-diskf", OWNER="grid", GROUP="asmadmin", MODE="0660"
    --vi /etc/udev/rules.d/99-oracle-asmdevices.rules
    
    KERNEL=="sd?1", SUBSYSTEM=="block",PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="36000c297eb92bdc8e6e60d60cab4b412", SYMLINK+="asm-diskb", OWNER="grid", GROUP="asmadmin", MODE="0660"
    KERNEL=="sd?1", SUBSYSTEM=="block",PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="36000c29ebaaf07ed09b0c9911f5b8094", SYMLINK+="asm-diskc", OWNER="grid", GROUP="asmadmin", MODE="0660"
    KERNEL=="sd?1", SUBSYSTEM=="block",PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="36000c29455bd6cb5318318507cafd8cf", SYMLINK+="asm-diskd", OWNER="grid", GROUP="asmadmin", MODE="0660"
    KERNEL=="sd?1", SUBSYSTEM=="block",PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="36000c29721ff5fd4fe64098b594eddf5", SYMLINK+="asm-diske", OWNER="grid", GROUP="asmadmin", MODE="0660"
    KERNEL=="sd?1", SUBSYSTEM=="block",PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="36000c2999eb5810b9c3c7f5291b49f60", SYMLINK+="asm-diskf", OWNER="grid", GROUP="asmadmin", MODE="0660"
    
    --reload the udev configuration:
    udevadm control --reload
    udevadm trigger
    --RAC2: copy the same rules file over, then re-read the partition tables
    partprobe /dev/sdb
    partprobe /dev/sdc
    partprobe /dev/sdd
    partprobe /dev/sde
    partprobe /dev/sdf
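    After the trigger, each partition should be reachable through its /dev/asm-disk* symlink on both nodes. A hedged verification sketch (the directory argument exists only so the loop can be exercised against a scratch directory instead of a live node):

```shell
# Sketch: confirm udev created every expected ASM symlink.
# dev_dir defaults to /dev; it is a parameter purely for testability.
check_asm_links() {
  local dev_dir="${1:-/dev}" missing=0 suffix
  for suffix in b c d e f; do
    if [ ! -L "$dev_dir/asm-disk$suffix" ]; then
      echo "MISSING: $dev_dir/asm-disk$suffix"
      missing=$((missing + 1))
    fi
  done
  return "$missing"
}
```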
    

    5: Synchronize system time on every node (chronyd is disabled here so that Oracle's Cluster Time Synchronization Service manages cluster time)

    systemctl list-unit-files|grep chronyd
    systemctl status chronyd
    
    systemctl disable chronyd
    systemctl stop chronyd
    mv /etc/chrony.conf /etc/chrony.conf_bak
    

    6: Disable SELinux on every node

    vi /etc/selinux/config
    
     SELINUX=disabled
    
    

    7: Disable firewalld on every node

    systemctl list-unit-files|grep firewalld
    systemctl status firewalld
    
    systemctl disable firewalld
    systemctl stop firewalld
    

    8: Install the required RPM packages with yum

    Important: check the supported Red Hat Enterprise Linux 8 distributions first:
    #https://docs.oracle.com/en/database/oracle/oracle-database/19/cwlin/supported-red-hat-enterprise-linux-8-distributions-for-x86-64.html#GUID-B1487167-84F8-4F8D-AC31-A4E8F592374B
    Supported Red Hat Enterprise Linux 8 Distributions for x86-64

    Use the following information to check supported Red Hat Enterprise Linux 8 distributions:

    Table 4-3 x86-64 Red Hat Enterprise Linux 8 Minimum Operating System Requirements

    Item: SSH Requirement
    Ensure that OpenSSH is installed on your servers. OpenSSH is the required SSH software.

    Item: Red Hat Enterprise Linux 8
    Minimum supported version: Red Hat Enterprise Linux 8 with kernel 4.18.0-80.el8.x86_64 or later.

    Item: Packages for Red Hat Enterprise Linux 8
    Install the latest released versions of the following packages: bc, binutils, elfutils-libelf, elfutils-libelf-devel, fontconfig-devel, glibc, glibc-devel, ksh, libaio, libaio-devel, libXrender, libX11, libXau, libXi, libXtst, libgcc, libnsl, librdmacm, libstdc++, libstdc++-devel, libxcb, libibverbs, make, smartmontools, sysstat.
    Note: If you intend to use 32-bit client applications to access 64-bit servers, then you must also install the latest 32-bit versions of the packages listed in this table.

    Item: Optional Packages for Red Hat Enterprise Linux 8
    Based on your requirements, install the latest released versions of the following packages: ipmiutil (for Intelligent Platform Management Interface), libnsl2 (for Oracle Database Client only), libnsl2-devel (for Oracle Database Client only), net-tools (for Oracle RAC and Oracle Clusterware), nfs-utils (for Oracle ACFS).

    Item: Patches and Known Issues
    For a list of the latest Oracle Database Release Update (RU) and Release Update Revision (RUR) patches for Oracle Linux 8 and Red Hat Enterprise Linux 8, visit My Oracle Support. For a list of known issues and open bugs for Oracle Linux 8 and Red Hat Enterprise Linux 8, read the Oracle Database Release Notes.

    *Parent topic:* Operating System Requirements for x86-64 Linux Platforms

    8.1 Configure yum
    vi /etc/yum.repos.d/system.repo
    [BaseOS]
    name=BaseOS
    baseurl=file:///run/media/root/RHEL-8-2-0-BaseOS-x86_64/BaseOS
    enabled=1
    gpgcheck=0
    [AppStream]
    name=AppStream
    baseurl=file:///run/media/root/RHEL-8-2-0-BaseOS-x86_64/AppStream
    enabled=1
    gpgcheck=0
    
    8.2 Install the packages required by Oracle 19c RAC
    yum install bc binutils elfutils-libelf elfutils-libelf-devel fontconfig-devel glibc glibc-devel ksh libaio libaio-devel libXrender libX11 libXau libXi libXtst libgcc libnsl librdmacm libstdc++ libstdc++-devel libxcb libibverbs make smartmontools sysstat ipmiutil net-tools nfs-utils libnsl2 libnsl2-devel
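    A quick way to confirm nothing was missed is to query each package afterwards. The helper below is a sketch; its first argument defaults to `rpm -q` and is parameterized only so the loop can be tested without touching a real RPM database:

```shell
# Sketch: report any required package that is not installed.
check_rpms() {
  local query="${1:-rpm -q}" missing="" pkg
  for pkg in bc binutils elfutils-libelf elfutils-libelf-devel fontconfig-devel \
             glibc glibc-devel ksh libaio libaio-devel libXrender libX11 libXau \
             libXi libXtst libgcc libnsl librdmacm libstdc++ libstdc++-devel \
             libxcb libibverbs make smartmontools sysstat; do
    $query "$pkg" >/dev/null 2>&1 || missing="$missing $pkg"
  done
  if [ -z "$missing" ]; then
    echo "all required packages installed"
  else
    echo "missing:$missing"
  fi
}
```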
    
    
    

    9: Create the oracle and grid users and their groups

    groupadd -g 54321 oinstall  
    groupadd -g 54322 dba  
    groupadd -g 54323 oper  
    groupadd -g 54324 backupdba  
    groupadd -g 54325 dgdba  
    groupadd -g 54326 kmdba  
    groupadd -g 54327 asmdba  
    groupadd -g 54328 asmoper  
    groupadd -g 54329 asmadmin  
    groupadd -g 54330 racdba  
     
    useradd -u 54321 -g oinstall -G dba,asmdba,backupdba,dgdba,kmdba,racdba,oper oracle  
    useradd -u 54322 -g oinstall -G asmadmin,asmdba,asmoper,dba grid 
    passwd oracle
    passwd grid
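    Before continuing, it is worth confirming each user landed in the right groups. This sketch parses `id`-style output; the helper name is ours, not an Oracle tool:

```shell
# Sketch: check that a user's "id" output contains every required group.
has_groups() {
  local id_output="$1" grp
  shift
  for grp in "$@"; do
    case "$id_output" in
      *"($grp)"*) ;;                        # group name present in id output
      *) echo "missing group: $grp"; return 1 ;;
    esac
  done
  echo "ok"
}
# On a real node:
#   has_groups "$(id oracle)" oinstall dba asmdba backupdba dgdba kmdba racdba oper
#   has_groups "$(id grid)"   oinstall asmadmin asmdba asmoper dba
```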
    

    10: Create the directory structure

    mkdir -p /u01/app/grid_base
    mkdir -p /u01/app/grid
    mkdir -p /u01/app/oracle
    mkdir -p /u01/app/oracle/product/19.0.0/db_1
    chown -R grid:oinstall /u01
    chown -R oracle:oinstall /u01/app/oracle
    chmod -R 775 /u01/
    

    11: Configure /etc/security/limits.conf

    vi /etc/security/limits.conf
    #ORACLE SETTING
    grid  soft  nproc 16384
    grid  hard  nproc 16384
    grid  soft  nofile 1024
    grid  hard  nofile 65536
    grid  soft  stack 10240
    grid  hard  stack 32768
    oracle soft nproc 16384
    oracle hard nproc 16384
    oracle soft nofile 1024
    oracle hard nofile 65536
    oracle soft stack  10240
    oracle hard stack  32768
    oracle hard memlock 3145728
    oracle soft memlock 3145728
    

    12: Configure /etc/sysctl.conf

    vi /etc/sysctl.conf
    #ORACLE SETTING
    fs.aio-max-nr = 1048576
    fs.file-max = 6815744
    kernel.shmmax = 15461882265
    kernel.shmall = 3774873
    kernel.shmmni = 4096
    kernel.sem = 250 32000 100 128
    net.ipv4.ip_local_port_range = 9000 65500
    net.core.rmem_default = 262144
    net.core.rmem_max = 4194304
    net.core.wmem_default = 262144
    net.core.wmem_max = 1048576
    
     sysctl -p
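    The shmmax/shmall pair above follows a common sizing rule of thumb (an assumption on our part, not something the original spells out): kernel.shmmax at roughly half of physical RAM in bytes, and kernel.shmall at shmmax divided by the 4096-byte page size. The listed 15461882265 / 3774873 values fit that ratio. A sketch of the derivation:

```shell
# Sketch of the assumed sizing rule: shmmax ~ RAM/2 in bytes,
# shmall = shmmax / page size (4096 bytes).
calc_shm() {
  local mem_kb="$1" page_size=4096
  local shmmax=$(( mem_kb * 1024 / 2 ))
  local shmall=$(( shmmax / page_size ))
  echo "kernel.shmmax = $shmmax"
  echo "kernel.shmall = $shmall"
}
# On a live node:
#   calc_shm "$(awk '/MemTotal/{print $2}' /proc/meminfo)"
```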
    

    13: Stop the avahi-daemon service

    [root@rac1 ~]# systemctl disable avahi-daemon.socket
    [root@rac1 ~]# systemctl disable avahi-daemon.service
    ps -ef | grep avahi-daemon
    kill -9 <avahi-daemon PID>
    

    14: Configure /etc/sysconfig/network

    # Created by anaconda
    NOZEROCONF=yes
    

    15: Configure /etc/hosts

    192.168.0.28 rac1
    192.168.0.30 rac2
    192.168.0.29 rac1-vip
    192.168.0.31 rac2-vip
    4.4.4.3    rac1-priv
    4.4.4.4    rac2-priv
    192.168.0.32 rac-scan
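    Typos in /etc/hosts tend to surface later as confusing installer failures, so a quick sanity check helps. This sketch verifies each cluster name appears exactly once; the file path is a parameter purely so the check is testable (on a node you would pass /etc/hosts):

```shell
# Sketch: verify every cluster hostname appears exactly once in a hosts file.
check_hosts() {
  local hosts_file="$1" rc=0 h n
  for h in rac1 rac2 rac1-vip rac2-vip rac1-priv rac2-priv rac-scan; do
    # match the hostname only at end of line so "rac1" does not match "rac1-vip"
    n=$(grep -c "[[:space:]]${h}\$" "$hosts_file" || true)
    if [ "$n" -ne 1 ]; then
      echo "$h: found $n entries (expected 1)"
      rc=1
    fi
  done
  if [ "$rc" -eq 0 ]; then echo "hosts file ok"; fi
  return "$rc"
}
```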
    

    16: Disable transparent huge pages

    #1 Back up and edit the grub file
    cp /etc/default/grub /etc/default/grub.bak
    vi /etc/default/grub
    #2 Append transparent_hugepage=never to the GRUB_CMDLINE_LINUX line
    GRUB_CMDLINE_LINUX="rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet transparent_hugepage=never"
    #3 Regenerate the grub configuration
    grub2-mkconfig -o /boot/grub2/grub.cfg
    #4 Take effect without a reboot
    [root@rac1 ~]# echo never > /sys/kernel/mm/transparent_hugepage/enabled

    #5 Verify that transparent huge pages are disabled
    [root@rac1 ~]# cat /sys/kernel/mm/transparent_hugepage/enabled
    always madvise [never]

    [root@rac2 ~]# grep AnonHugePages /proc/meminfo
    AnonHugePages:     0 kB    # a value of 0 kB means THP is disabled
    # a reboot is still needed for the grub change to persist
    

    17: Configure user environments

    17.1 Configure the grid user environment on the first node
    # grid user on node 1
    [root@rac1 ~]# su - grid
    
    [grid@rac1:/home/grid]$vi ~/.bash_profile
    export TMP=/tmp
    export LANG=en_US
    export TMPDIR=$TMP
    export ORACLE_SID=+ASM1 
    export ORACLE_TERM=xterm 
    export ORACLE_BASE=/u01/app/grid_base
    export ORACLE_HOME=/u01/app/grid;  
    export NLS_DATE_FORMAT="yyyy-mm-dd HH24:MI:SS"
    export PATH=.:$PATH:$HOME/bin:$ORACLE_HOME/bin; 
    export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
    export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
    export THREADS_FLAG=native
    
    
    17.2 Configure the grid user environment on the second node
    # grid user on node 2
    [root@rac2 ~]# su - grid
    [grid@rac2:/home/grid]$vi ~/.bash_profile
    export TMP=/tmp
    export LANG=en_US
    export TMPDIR=$TMP
    export ORACLE_SID=+ASM2 
    export ORACLE_TERM=xterm 
    export ORACLE_BASE=/u01/app/grid_base
    export ORACLE_HOME=/u01/app/grid;  
    export NLS_DATE_FORMAT="yyyy-mm-dd HH24:MI:SS"
    export PATH=.:$PATH:$HOME/bin:$ORACLE_HOME/bin; 
    export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
    export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
    export THREADS_FLAG=native
    
    17.3 Configure the oracle user environment on the first node
    # oracle user on node 1
    # User specific environment and startup programs
    export TMP=/tmp
    export LANG=en_US
    export TMPDIR=$TMP
    export ORACLE_BASE=/u01/app/oracle
    export ORACLE_HOME=$ORACLE_BASE/product/19.0.0/db_1
    export ORACLE_SID=xztd1
    export ORACLE_TERM=xterm
    export NLS_DATE_FORMAT="yyyy-mm-dd HH24:MI:SS"
    export PATH=.:$PATH:$HOME/bin:$ORACLE_HOME/bin
    export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
    export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
    
    17.4 Configure the oracle user environment on the second node
    # oracle user on node 2
    export TMP=/tmp
    export LANG=en_US
    export TMPDIR=$TMP
    export ORACLE_BASE=/u01/app/oracle
    export ORACLE_HOME=$ORACLE_BASE/product/19.0.0/db_1
    export ORACLE_SID=xztd2
    export ORACLE_TERM=xterm
    export NLS_DATE_FORMAT="yyyy-mm-dd HH24:MI:SS"
    export PATH=.:$PATH:$HOME/bin:$ORACLE_HOME/bin
    export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
    export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
    

    18: Configure passwordless SSH between the nodes

    su - grid
    cd /u01/app/grid/oui/prov/resources/scripts
     ./sshUserSetup.sh -user grid -hosts "rac1 rac2 rac1-priv rac2-priv" -advanced -noPromptPassphrase
    
    su - oracle
    ./sshUserSetup.sh -user oracle -hosts "rac1 rac2 rac1-priv rac2-priv" -advanced -noPromptPassphrase
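    sshUserSetup.sh should leave every host reachable without a password. A hedged connectivity-check sketch; the SSH command is a parameter only so the loop can be exercised with a stub, and on a real node the default is used:

```shell
# Sketch: confirm passwordless SSH to every cluster host.
check_ssh() {
  local ssh_cmd="${1:-ssh -o BatchMode=yes -o ConnectTimeout=5}" rc=0 host
  for host in rac1 rac2 rac1-priv rac2-priv; do
    if $ssh_cmd "$host" date >/dev/null 2>&1; then
      echo "$host: ok"
    else
      echo "$host: FAILED"
      rc=1
    fi
  done
  return "$rc"
}
```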
    
    Important: INS-06006 GI RunInstaller Fails If OpenSSH Is Upgraded to 8.x (Doc ID 2555697.1)
    APPLIES TO:

    Oracle Database - Enterprise Edition - Version 19.3.0.0.0 and later
    Information in this document applies to any platform.

    SYMPTOMS

    When attempting to configure 19c grid infrastructure by running <gridSetup.sh>, the following error occurs in SSH connectivity step:

    [INS-06006] Passwordless SSH connectivity not set up between the following node(s): []

    The error cannot be ignored, so the CRS installation fails.

    However, SSH setup reports success, the "ssh <node> date" command works fine on all nodes, and the CVU user equivalence check also passes.

    Run gridSetup.sh in debug mode:

    $ gridSetup.sh -debug | tee /tmp/gridsetup.log
    

    In the debug trace “/tmp/gridsetup.log”, it reports <protocol error: filename does not match request> when calling <scp> command:

    [Worker 0] [ 2019-05-31 14:40:49.921 CST ] [UnixSystem.remoteCopyFile:848]  UnixSystem: /usr/local/bin/scp -p <racnode2>:'/tmp/GridSetupActions2019-05-31_02-39-46PM/CVU_19.0.0.0.0_grid/scratch/getFileInfo12906.out' /tmp/GridSetupActions2019-05-31_02-39-46PM/<racnode2>.getFileInfo12906.out
    [Thread-440] [ 2019-05-31 14:40:49.921 CST ] [StreamReader.run:62]  In StreamReader.run
    [Worker 0] [ 2019-05-31 14:40:49.921 CST ] [RuntimeExec.runCommand:294]  runCommand: Waiting for the process
    [Thread-439] [ 2019-05-31 14:40:49.921 CST ] [StreamReader.run:62]  In StreamReader.run
    [Thread-440] [ 2019-05-31 14:40:50.109 CST ] [StreamReader.run:66]  ERROR>protocol error: filename does not match request
    [Worker 0] [ 2019-05-31 14:40:50.109 CST ] [RuntimeExec.runCommand:296]  runCommand: process returns 1
    [Worker 0] [ 2019-05-31 14:40:50.109 CST ] [RuntimeExec.runCommand:323]  RunTimeExec: error>
    [Worker 0] [ 2019-05-31 14:40:50.109 CST ] [RuntimeExec.runCommand:326]  protocol error: filename does not match request
    
    CHANGES

    OpenSSH is upgraded to 8.x.

    CAUSE

    OpenSSH is upgraded to 8.x. Please note OpenSSH’s behavior might be different on any other platforms/OS, for example on AIX, OpenSSH 7.5 has this problem, and on SLES Linux 12 SP4, OpenSSH_7.2p2 has this problem.

    # ssh -V
    OpenSSH_8.0p1, OpenSSL 1.0.2r 26 Feb 2019
    
    #The below command might also give the above error on OpenSSH 8.0.
    
    # scp -p <racnode2>:"'/tmp/test.txt'" /tmp/test.txt
    

    protocol error: filename does not match request

    And the error can be avoided by adding “-T” option in the command:

    # scp -T -p <racnode2>:"'/tmp/test.txt'" /tmp/test.txt
    test.txt 100% 2 0.1KB/s 00:00
    

    To mitigate the risk of CVE-2019-6111, OpenSSH 8.0 adds client-side checking that the filenames sent from the server match the command-line request; if client and server wildcard expansion differ, the client may refuse files from the server. For this reason, OpenSSH 8.0 provides a new "-T" flag for scp that disables these client-side checks. For details, see https://www.openssh.com/txt/release-8.0

    SOLUTION
    Workaround (if your unix admin allows it):
    Before installation, as the root user (adjust the path if your "scp" is not in the location below):
    # Rename the original scp.
    mv /usr/bin/scp /usr/bin/scp.orig
    
    # Create a new file /usr/bin/scp.
    vi /usr/bin/scp
    
    # Add the line below to the newly created /usr/bin/scp.
    /usr/bin/scp.orig -T $*
    
    # Change the file permission.
    chmod 555 /usr/bin/scp
    
    After installation:
    
    mv /usr/bin/scp.orig /usr/bin/scp
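    The wrapper trick from the note can be rehearsed safely in a scratch directory before touching /usr/bin. In this sketch a stub plays the role of the renamed scp.orig and simply echoes its arguments, showing that -T is injected ahead of the caller's arguments:

```shell
# Sketch: rehearse the scp wrapper in a scratch directory.
wdir=$(mktemp -d)
# Stub standing in for the renamed scp.orig: just echo the arguments.
printf '%s\n' '#!/bin/sh' 'echo "args: $*"' > "$wdir/scp.orig"
chmod 755 "$wdir/scp.orig"
# Build the wrapper exactly as the note describes (plus a shebang).
printf '%s\n' '#!/bin/sh' "$wdir/scp.orig -T \$*" > "$wdir/scp"
chmod 555 "$wdir/scp"
"$wdir/scp" -p node2:/tmp/f /tmp/f   # prints: args: -T -p node2:/tmp/f /tmp/f
```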
    
    REFERENCES

    NOTE:30159782.8 - Bug 30159782 - Remote Copy Fails if using openssh-7.2, 7.4, etc.
    NOTE:30189609.8 - Bug 30189609 - CVU FAILS TO DETECT THE PASSWORDLESS SSH AS WELL AS TO SETUP PASSWORDLESS SSH CONNECTIVITY

    19: Install the cvuqdisk package

    export CVUQDISK_GRP=oinstall
    rpm -ivh cvuqdisk-1.0.10-1.rpm
    

    20: Install the Grid Infrastructure software

    xhost +
    su - grid
    export DISPLAY=192.168.0.118:0.0
    export CV_ASSUME_DISTID=RHEL8.2
    cd $ORACLE_HOME
    ./gridSetup.sh
    
    [root@rac1 ~]# /u01/app/oraInventory/orainstRoot.sh
    Changing permissions of /u01/app/oraInventory.
    Adding read,write permissions for group.
    Removing read,write,execute permissions for world.
    
    Changing groupname of /u01/app/oraInventory to oinstall.
    The execution of the script is complete.
    [root@rac1 ~]# /u01/app/grid/root.sh
    Performing root user operation.
    
    The following environment variables are set as:
      ORACLE_OWNER= grid
      ORACLE_HOME=  /u01/app/grid
    
    Enter the full pathname of the local bin directory: [/usr/local/bin]: 
      Copying dbhome to /usr/local/bin ...
      Copying oraenv to /usr/local/bin ...
      Copying coraenv to /usr/local/bin ...
    
    
    Creating /etc/oratab file...
    Entries will be added to the /etc/oratab file as needed by
    Database Configuration Assistant when a database is created
    Finished running generic part of root script.
    Now product-specific root actions will be performed.
    Relinking oracle with rac_on option
    Using configuration parameter file: /u01/app/grid/crs/install/crsconfig_params
    The log of current session can be found at:
     /u01/app/grid_base/crsdata/rac1/crsconfig/rootcrs_rac1_2020-11-12_10-08-27PM.log
    2020/11/12 22:08:45 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
    2020/11/12 22:08:45 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
    2020/11/12 22:08:45 CLSRSC-363: User ignored prerequisites during installation
    2020/11/12 22:08:45 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
    2020/11/12 22:08:48 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
    2020/11/12 22:08:50 CLSRSC-594: Executing installation step 5 of 19: 'SetupOSD'.
    2020/11/12 22:08:50 CLSRSC-594: Executing installation step 6 of 19: 'CheckCRSConfig'.
    2020/11/12 22:08:51 CLSRSC-594: Executing installation step 7 of 19: 'SetupLocalGPNP'.
    2020/11/12 22:09:15 CLSRSC-594: Executing installation step 8 of 19: 'CreateRootCert'.
    2020/11/12 22:09:21 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
    2020/11/12 22:09:22 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
    2020/11/12 22:09:41 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
    2020/11/12 22:09:41 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
    2020/11/12 22:09:49 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
    2020/11/12 22:09:49 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
    2020/11/12 22:10:20 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
    2020/11/12 22:10:28 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
    2020/11/12 22:10:36 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
    2020/11/12 22:10:44 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.
    ASM has been created and started successfully.
    
    [DBT-30001] Disk groups created successfully. Check /u01/app/grid_base/cfgtoollogs/asmca/asmca-201112PM101119.log for details.
    
    2020/11/12 22:12:16 CLSRSC-482: Running command: '/u01/app/grid/bin/ocrconfig -upgrade grid oinstall'
    CRS-4256: Updating the profile
    Successful addition of voting disk cb9b0468c1f14fcfbf28b6b43c5a2f58.
    Successful addition of voting disk 6c119514ffb64f77bf57db0e7b0a4149.
    Successful addition of voting disk e45cb46b5e0d4fd5bf9b8868236c549e.
    Successfully replaced voting disk group with +DATA.
    CRS-4256: Updating the profile
    CRS-4266: Voting file(s) successfully replaced
    ##  STATE    File Universal Id                File Name Disk group
    --  -----    -----------------                --------- ---------
     1. ONLINE   cb9b0468c1f14fcfbf28b6b43c5a2f58 (/dev/asm-diskb) [OCR]
     2. ONLINE   6c119514ffb64f77bf57db0e7b0a4149 (/dev/asm-diskc) [OCR]
     3. ONLINE   e45cb46b5e0d4fd5bf9b8868236c549e (/dev/asm-diskd) [OCR]
    Located 3 voting disk(s).
    2020/11/12 22:14:09 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
    2020/11/12 22:15:22 CLSRSC-343: Successfully started Oracle Clusterware stack
    2020/11/12 22:15:22 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'
    2020/11/12 22:17:16 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
    2020/11/12 22:17:55 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
    
    [root@rac2 ~]# /u01/app/oraInventory/orainstRoot.sh
    Changing permissions of /u01/app/oraInventory.
    Adding read,write permissions for group.
    Removing read,write,execute permissions for world.
    
    Changing groupname of /u01/app/oraInventory to oinstall.
    The execution of the script is complete.
    [root@rac2 ~]#  /u01/app/grid/root.sh
    Performing root user operation.
    
    The following environment variables are set as:
      ORACLE_OWNER= grid
      ORACLE_HOME=  /u01/app/grid
    
    Enter the full pathname of the local bin directory: [/usr/local/bin]: 
      Copying dbhome to /usr/local/bin ...
      Copying oraenv to /usr/local/bin ...
      Copying coraenv to /usr/local/bin ...
    
    
    Creating /etc/oratab file...
    Entries will be added to the /etc/oratab file as needed by
    Database Configuration Assistant when a database is created
    Finished running generic part of root script.
    Now product-specific root actions will be performed.
    Relinking oracle with rac_on option
    Using configuration parameter file: /u01/app/grid/crs/install/crsconfig_params
    The log of current session can be found at:
     /u01/app/grid_base/crsdata/rac2/crsconfig/rootcrs_rac2_2020-11-12_10-20-20PM.log
    2020/11/12 22:20:30 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
    2020/11/12 22:20:31 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
    2020/11/12 22:20:31 CLSRSC-363: User ignored prerequisites during installation
    2020/11/12 22:20:31 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
    2020/11/12 22:20:32 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
    2020/11/12 22:20:32 CLSRSC-594: Executing installation step 5 of 19: 'SetupOSD'.
    2020/11/12 22:20:33 CLSRSC-594: Executing installation step 6 of 19: 'CheckCRSConfig'.
    2020/11/12 22:20:35 CLSRSC-594: Executing installation step 7 of 19: 'SetupLocalGPNP'.
    2020/11/12 22:20:37 CLSRSC-594: Executing installation step 8 of 19: 'CreateRootCert'.
    2020/11/12 22:20:37 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
    2020/11/12 22:20:47 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
    2020/11/12 22:20:47 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
    2020/11/12 22:20:49 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
    2020/11/12 22:20:49 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
    2020/11/12 22:21:12 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
    2020/11/12 22:21:15 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
    2020/11/12 22:21:17 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
    2020/11/12 22:21:19 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
    2020/11/12 22:24:06 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.
    2020/11/12 22:24:37 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
    2020/11/12 22:25:23 CLSRSC-343: Successfully started Oracle Clusterware stack
    2020/11/12 22:25:23 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
    2020/11/12 22:25:41 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
    2020/11/12 22:25:48 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
    [root@rac2 ~]# 
    


    21: Use asmca to manage ASM disk groups

    (asmca screenshots omitted)

    23: Install the Oracle Database software

    export DISPLAY=192.168.0.118:0.0
    export CV_ASSUME_DISTID=RHEL8.2
    ./runInstaller
    


    [root@rac1 ~]# /u01/app/oracle/product/19.0.0/db_1/root.sh
    Performing root user operation.
    
    The following environment variables are set as:
      ORACLE_OWNER= oracle
      ORACLE_HOME=  /u01/app/oracle/product/19.0.0/db_1
    
    Enter the full pathname of the local bin directory: [/usr/local/bin]: 
    The contents of "dbhome" have not changed. No need to overwrite.
    The contents of "oraenv" have not changed. No need to overwrite.
    The contents of "coraenv" have not changed. No need to overwrite.
    
    Entries will be added to the /etc/oratab file as needed by
    Database Configuration Assistant when a database is created
    Finished running generic part of root script.
    Now product-specific root actions will be performed.
    
    [root@rac2 ~]# /u01/app/oraInventory/orainstRoot.sh
    Changing permissions of /u01/app/oraInventory.
    Adding read,write permissions for group.
    Removing read,write,execute permissions for world.
    
    Changing groupname of /u01/app/oraInventory to oinstall.
    The execution of the script is complete.
    [root@rac2 ~]# /u01/app/oracle/product/19.0.0/db_1/root.sh
    Performing root user operation.
    
    The following environment variables are set as:
      ORACLE_OWNER= oracle
      ORACLE_HOME=  /u01/app/oracle/product/19.0.0/db_1
    
    Enter the full pathname of the local bin directory: [/usr/local/bin]: 
    The contents of "dbhome" have not changed. No need to overwrite.
    The contents of "oraenv" have not changed. No need to overwrite.
    The contents of "coraenv" have not changed. No need to overwrite.
    
    Entries will be added to the /etc/oratab file as needed by
    Database Configuration Assistant when a database is created
    Finished running generic part of root script.
    Now product-specific root actions will be performed.
    
    Check the OPatch version on all RAC nodes
    su - oracle
    [oracle@oadb1 OPatch]$ ./opatch version
    OPatch Version: 12.2.0.1.21

    OPatch succeeded.

    su - grid
    [grid@oadb1:/u01/app/19.0.0/grid/OPatch]$ ./opatch version
    OPatch Version: 12.2.0.1.21

    OPatch succeeded.
    Check for patch conflicts on all RAC nodes
    For the Grid Infrastructure home, as the home owner:

    % $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /oraru/31720429/31750108/31771877
    % $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /oraru/31720429/31750108/31772784
    % $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /oraru/31720429/31750108/31773437
    % $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /oraru/31720429/31750108/
    % $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /oraru/31720429/31750108/31780966
    For the Database home, as the home owner:
    % $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /oraru/31720429/31750108/31771877
    % $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /oraru/31720429/31750108/31772784

    Check on all RAC nodes that the filesystems have enough free space to apply the patches

    vi /tmp/patch_list_gihome.txt

    /oraru/31720429/31750108/31771877
    /oraru/31720429/31750108/31772784
    /oraru/31720429/31750108/31773437
    /oraru/31720429/31750108/
    /oraru/31720429/31750108/31780966

    $ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace -phBaseFile /tmp/patch_list_gihome.txt

    vi /tmp/patch_list_dbhome.txt
    /oraru/31720429/31750108/31771877
    /oraru/31720429/31750108/31772784

    $ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace -phBaseFile /tmp/patch_list_dbhome.txt

    Apply the GI+DB RU on the RAC nodes in rolling fashion
    Patching stops the cluster services on the node being worked on, but the RU can be applied in rolling fashion: patch one node while the other nodes continue to provide service; once the patched node starts up and serves again, move on to the next node.

    export ORACLE_HOME=/u01/app/19.0.0/grid
    export PATH=$PATH:$ORACLE_HOME/OPatch
    opatchauto apply /oraru/31720429/31750108
    The first node takes about 20 minutes. Once it finishes without errors and the cluster services and database start normally, run opatchauto on the second node; it takes about 30 minutes and should likewise finish without errors, with the cluster services and database starting normally.

    Run datapatch -verbose on one of the RAC nodes

    [oracle@oadb1 ~]$ cd $ORACLE_HOME/OPatch
    [oracle@oadb1 OPatch]$ ./datapatch -verbose

    Run utlrp.sql on one of the RAC nodes

    [oracle@oadb1 ~]$ cd $ORACLE_HOME/rdbms/admin
    [oracle@oadb1 admin]$ sqlplus / as sysdba
    SQL> @utlrp.sql

    Post-installation checks for the GI+DB RU

    [grid@oadb1:/u01/app/19.0.0/grid/OPatch]$opatch lspatches
    31780966;TOMCAT RELEASE UPDATE 19.0.0.0.0 (31780966)
    31773437;ACFS RELEASE UPDATE 19.9.0.0.0 (31773437)
    31772784;OCW RELEASE UPDATE 19.9.0.0.0 (31772784)
    31771877;Database Release Update : 19.9.0.0.201020 (31771877)

    [oracle@oadb1 OPatch]$ ./opatch lspatches
    31772784;OCW RELEASE UPDATE 19.9.0.0.0 (31772784)
    31771877;Database Release Update : 19.9.0.0.201020 (31771877)

    OPatch succeeded.
    SQL> select status,description from dba_registry_sqlpatch;

    STATUS DESCRIPTION

    SUCCESS Database Release Update : 19.3.0.0.190416 (29517242)
    SUCCESS Database Release Update : 19.6.0.0.200114 (30557433)
    SUCCESS Database Release Update : 19.9.0.0.201020 (31771877)

    Apply the OJVM (JavaVM) RU

    cd /oraru/31720429/31668882
    $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -ph ./
    Shut down each RAC instance in turn and apply the patch

    [oracle@oadb1 OPatch]$ sqlplus / as sysdba
    SQL> shutdown immediate
    cd /oraru/31720429/31668882
    $ORACLE_HOME/OPatch/opatch apply

    Before shutting down the last instance, set cluster_database=false

    [oracle@oadb1 OPatch]$ sqlplus / as sysdba
    SQL> alter system set cluster_database=false scope=spfile;
    SQL> shutdown immediate
    cd /oraru/31720429/31668882
    $ORACLE_HOME/OPatch/opatch apply
    After opatch apply completes without errors on all RAC nodes, start the database in UPGRADE mode on one node

    [oracle@oadb1 OPatch]$ sqlplus / as sysdba
    SQL> STARTUP UPGRADE
    SQL> alter pluggable database all open upgrade;

    Run datapatch -verbose on one RAC node

    ./datapatch -verbose
    After datapatch -verbose completes without errors, set cluster_database=true, shut down this instance, and start the database with srvctl

    sqlplus / as sysdba
    SQL> alter system set cluster_database=true scope=spfile;
    SQL> shutdown immediate
    srvctl start database -d jcoadb
    Post-patch verification
    SQL> select status,description from dba_registry_sqlpatch;

    STATUS DESCRIPTION

    SUCCESS Database Release Update : 19.3.0.0.190416 (29517242)
    SUCCESS Database Release Update : 19.6.0.0.200114 (30557433)
    SUCCESS Database Release Update : 19.9.0.0.201020 (31771877)
    SUCCESS OJVM RELEASE UPDATE: 19.9.0.0.201020 (31668882)

  • When installing Oracle RAC, configure the management (public) IP and the private IP first; the virtual IP does not need to be configured manually.
