VMware CentOS 7.4 + UDEV + ASM + Oracle: Configuring a RAC Environment

Contents

1. Preparation

1.1 Server preparation

1.2 IP address preparation

1.3 Users and groups

1.4 SSH trust (user equivalence) preparation

1.5 Directory preparation (crmtest1 and crmtest2)

1.6 RPM package preparation

1.7 Shared disk preparation

1.8 Kernel and system parameter changes

1.9 DNS server setup (optional)

2. Installing RAC

2.1 Installing Grid Infrastructure

2.2 Creating the DATA/FRA disk groups with asmca

2.3 Installing the database software


1. Preparation

1.1 Server preparation

This RAC test environment is built from two CentOS virtual machines running in VMware Workstation. The VM details are as follows:

 

IP               Hostname  Domain suffix  Users        Install dir
192.168.150.128  crmtest1  tp-link.net    oracle/grid  /u1/db
192.168.150.129  crmtest2  tp-link.net    oracle/grid  /u1/db

1.2 IP address preparation

The RAC IP plan is as follows. Note that when building or cloning an Oracle database, always configure /etc/hosts first; otherwise the database may hang in the "connected" state.

##public IP
192.168.150.128 crmtest1.tp-link.net    crmtest1
192.168.150.129 crmtest2.tp-link.net    crmtest2

##Virtual IP
192.168.150.130 crmtest1-vip.tp-link.net        crmtest1-vip
192.168.150.131 crmtest2-vip.tp-link.net        crmtest2-vip

##Private IP
192.168.37.128  crmtest1-priv.tp-link.net       crmtest1-priv
192.168.37.129  crmtest2-priv.tp-link.net       crmtest2-priv

##Scan IP
192.168.150.135 crmtest-scan.tp-link.net    crmtest-scan
192.168.150.136 crmtest-scan.tp-link.net    crmtest-scan
192.168.150.137 crmtest-scan.tp-link.net    crmtest-scan
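
Keeping /etc/hosts identical on both nodes is easier with a small idempotent helper. This is a hypothetical sketch (the `add_host` function is not part of the original setup); `HOSTS_FILE` defaults to a local preview file so the result can be reviewed before pointing it at /etc/hosts as root:

```shell
#!/bin/sh
# Append a host entry only if the short name is not already present.
# HOSTS_FILE defaults to a local preview file; set HOSTS_FILE=/etc/hosts
# (and run as root) to apply for real.
HOSTS_FILE=${HOSTS_FILE:-./hosts.rac.preview}
touch "$HOSTS_FILE"

add_host() {  # usage: add_host <ip> <shortname>
    grep -q "[[:space:]]$2\$" "$HOSTS_FILE" || \
        printf '%s %s.tp-link.net %s\n' "$1" "$2" >> "$HOSTS_FILE"
}

add_host 192.168.150.128 crmtest1
add_host 192.168.150.129 crmtest2
add_host 192.168.150.130 crmtest1-vip
add_host 192.168.150.131 crmtest2-vip
add_host 192.168.37.128  crmtest1-priv
add_host 192.168.37.129  crmtest2-priv
# The three SCAN addresses share one name, so they are added as a group.
if ! grep -q "[[:space:]]crmtest-scan\$" "$HOSTS_FILE"; then
    for ip in 192.168.150.135 192.168.150.136 192.168.150.137; do
        printf '%s crmtest-scan.tp-link.net crmtest-scan\n' "$ip" >> "$HOSTS_FILE"
    done
fi
```

Re-running the script is harmless because names already present are skipped.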

To implement this plan, a second network adapter is added to each CentOS VM.

NIC 1:

NIC 2:

Then change the dynamically assigned addresses to static IPs:

[root@crmtest1 network-scripts]# pwd
/etc/sysconfig/network-scripts
[root@crmtest1 network-scripts]# vim ifcfg-ens33
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO="static"
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
IPADDR=192.168.150.128
NAME=ens33
UUID=4747bd04-7e51-40b7-9d24-04969f2c5196
DEVICE=ens33
ONBOOT=yes
PREFIX=24
GATEWAY=192.168.150.2

[root@crmtest1 network-scripts]# vim ifcfg-Wired_connection_1
HWADDR=00:0C:29:88:E0:AB
MACADDR=00:0C:29:88:E0:AB
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
IPADDR=192.168.37.128
PREFIX=24
GATEWAY=192.168.37.2
DEFROUTE=yes
PEERDNS=no
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME="Wired connection 1"
UUID=21cde08c-d753-3558-b7d4-6aa9f2604e45
ONBOOT=yes
AUTOCONNECT_PRIORITY=-999

Repeat the same steps on crmtest2 to set its static IPs; the VIP addresses do not need to be configured manually.
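
Since crmtest2 needs the same files with different addresses, the ifcfg contents can be rendered from a tiny template. This is a hypothetical sketch (the `render_ifcfg` helper and the device name ens34 for the second NIC are assumptions; the original uses a connection named "Wired connection 1"). It writes local preview files, and the interface's own NAME/UUID/HWADDR lines from the real files above should be restored before installing:

```shell
#!/bin/sh
# Render a minimal static-IP ifcfg file (sketch). Copy the result into
# /etc/sysconfig/network-scripts/, restore the interface's NAME/UUID/HWADDR
# lines, then restart the network service.
render_ifcfg() {  # usage: render_ifcfg <device> <ip> <gateway> <outfile>
    cat > "$4" <<EOF
TYPE=Ethernet
BOOTPROTO=static
DEVICE=$1
ONBOOT=yes
IPADDR=$2
PREFIX=24
GATEWAY=$3
DEFROUTE=yes
EOF
}

# crmtest1's public NIC; use 192.168.150.129 for crmtest2.
render_ifcfg ens33 192.168.150.128 192.168.150.2 ./ifcfg-ens33.preview
# crmtest1's private NIC; use 192.168.37.129 for crmtest2.
render_ifcfg ens34 192.168.37.128 192.168.37.2 ./ifcfg-ens34.preview
```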

1.3 Users and groups

[root@crmtest1 ~]# groupadd -g 1000 oinstall
[root@crmtest1 ~]# groupadd -g 1200 asmadmin
[root@crmtest1 ~]# groupadd -g 1201 asmdba
[root@crmtest1 ~]# groupadd -g 1202 asmoper
[root@crmtest1 ~]# groupadd -g 1300 dba
[root@crmtest1 ~]# groupadd -g 1301 oper
[root@crmtest1 ~]# useradd -m -u 1100 -g oinstall -G asmadmin,asmdba,asmoper,dba -s /bin/bash -c "Grid Infrastructure Owner" grid
[root@crmtest1 ~]# useradd -m -u 1101 -g oinstall -G dba,oper,asmdba,asmadmin -s /bin/bash -c "Oracle Software Owner" oracle

[root@crmtest2 ~]# groupadd -g 1000 oinstall
[root@crmtest2 ~]# groupadd -g 1200 asmadmin
[root@crmtest2 ~]# groupadd -g 1201 asmdba
[root@crmtest2 ~]# groupadd -g 1202 asmoper
[root@crmtest2 ~]# groupadd -g 1300 dba
[root@crmtest2 ~]# groupadd -g 1301 oper
[root@crmtest2 ~]# useradd -m -u 1100 -g oinstall -G asmadmin,asmdba,asmoper,dba -s /bin/bash -c "Grid Infrastructure Owner" grid
[root@crmtest2 ~]# useradd -m -u 1101 -g oinstall -G dba,oper,asmdba,asmadmin -s /bin/bash -c "Oracle Software Owner" oracle
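
Because both nodes must use identical UIDs and GIDs, it can help to drive the commands from a single table. A dry-run sketch (the script name and table format are mine): it only prints the commands, which can then be reviewed and piped to `sh` as root on each node:

```shell
#!/bin/sh
# Emit the group/user creation commands from one table so both nodes stay
# identical (dry run: review ./provision-users.sh, then run it as root on
# each node). GIDs/UIDs match the values used above.
OUT=./provision-users.sh
{
    echo '#!/bin/sh'
    while read -r gid name; do
        echo "groupadd -g $gid $name"
    done <<'EOF'
1000 oinstall
1200 asmadmin
1201 asmdba
1202 asmoper
1300 dba
1301 oper
EOF
    echo 'useradd -m -u 1100 -g oinstall -G asmadmin,asmdba,asmoper,dba -s /bin/bash -c "Grid Infrastructure Owner" grid'
    echo 'useradd -m -u 1101 -g oinstall -G dba,oper,asmdba,asmadmin -s /bin/bash -c "Oracle Software Owner" oracle'
} > "$OUT"
cat "$OUT"
```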

Software component    OS user  Primary group  Secondary groups             Oracle base / Oracle home
Grid Infrastructure   grid     oinstall       asmadmin, asmdba, asmoper    /u1/db/grid
                                                                           /u1/db/11.2.0/grid
Oracle RAC            oracle   oinstall       dba, oper, asmdba, asmadmin  /u1/db/oracle
                                                                           /u1/db/oracle/product/11.2.0/db_1

1.4 SSH trust (user equivalence) preparation

Establish SSH trust for the oracle user.

On crmtest1:
[oracle@crmtest1 ~]$ mkdir ~/.ssh
[oracle@crmtest1 ~]$ chmod 700 ~/.ssh
[oracle@crmtest1 ~]$ ssh-keygen -t rsa
[oracle@crmtest1 ~]$ ssh-keygen -t dsa
On crmtest2:
[oracle@crmtest2 ~]$ mkdir ~/.ssh
[oracle@crmtest2 ~]$ chmod 700 ~/.ssh
[oracle@crmtest2 ~]$ ssh-keygen -t rsa
[oracle@crmtest2 ~]$ ssh-keygen -t dsa

Back on crmtest1, merge the keys and distribute them:
[oracle@crmtest1 ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[oracle@crmtest1 ~]$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[oracle@crmtest1 ~]$ ssh crmtest2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[oracle@crmtest1 ~]$ ssh crmtest2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[oracle@crmtest1 ~]$ scp ~/.ssh/authorized_keys  oracle@crmtest2:~/.ssh/authorized_keys

Establish SSH trust for the grid user.

On crmtest1:
[grid@crmtest1 ~]$ mkdir ~/.ssh
[grid@crmtest1 ~]$ chmod 700 ~/.ssh
[grid@crmtest1 ~]$ ssh-keygen -t rsa
[grid@crmtest1 ~]$ ssh-keygen -t dsa
On crmtest2:
[grid@crmtest2 ~]$ mkdir ~/.ssh
[grid@crmtest2 ~]$ chmod 700 ~/.ssh
[grid@crmtest2 ~]$ ssh-keygen -t rsa
[grid@crmtest2 ~]$ ssh-keygen -t dsa

Back on crmtest1, merge the keys and distribute them:
[grid@crmtest1 ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[grid@crmtest1 ~]$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[grid@crmtest1 ~]$ ssh crmtest2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[grid@crmtest1 ~]$ ssh crmtest2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[grid@crmtest1 ~]$ scp ~/.ssh/authorized_keys  grid@crmtest2:~/.ssh/authorized_keys

1.5 Directory preparation (crmtest1 and crmtest2)

a) Create the inventory directory
[root@crmtest1 ~]# mkdir -p /u1/db/oraInventory
[root@crmtest1 ~]# chown -R grid:oinstall /u1/db/oraInventory/
[root@crmtest1 ~]# chmod -R 775 /u1/db/oraInventory/

b) Create the grid base and grid home directories
[root@crmtest1 ~]# mkdir -p /u1/db/grid
[root@crmtest1 ~]# chown -R grid:oinstall /u1/db/grid/
[root@crmtest1 ~]# chmod -R 775 /u1/db/grid/

[root@crmtest1 ~]# mkdir -p /u1/db/11.2.0/grid
[root@crmtest1 ~]# chown -R grid:oinstall /u1/db/11.2.0/
[root@crmtest1 ~]# chmod -R 775 /u1/db/11.2.0/

c) Create the oracle base and oracle home directories
[root@crmtest1 ~]# mkdir -p /u1/db/oracle/cfgtoollogs
[root@crmtest1 ~]# chown -R oracle:oinstall /u1/db/oracle/
[root@crmtest1 ~]# chmod -R 775 /u1/db/oracle/

[root@crmtest1 ~]# mkdir -p /u1/db/oracle/product/11.2.0/db_1
[root@crmtest1 ~]# chown -R oracle:oinstall /u1/db/oracle/product/11.2.0/db_1/
[root@crmtest1 ~]# chmod -R 775 /u1/db/oracle/product/11.2.0/db_1/
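
The mkdir/chown/chmod triples above all follow one pattern, so a helper reduces typos when repeating them on crmtest2. A sketch (the `make_dir` function and the `ROOT_PREFIX` rehearsal knob are mine; set ROOT_PREFIX to empty and run as root to create the real layout):

```shell
#!/bin/sh
# Create a directory tree with the given owner and mode 775 (sketch).
# ROOT_PREFIX lets the layout be rehearsed under a scratch directory by an
# unprivileged user; chown failures are reported but not fatal there.
ROOT_PREFIX=${ROOT_PREFIX:-./stage}

make_dir() {  # usage: make_dir <owner:group> <path>
    mkdir -p "$ROOT_PREFIX$2"
    chown -R "$1" "$ROOT_PREFIX$2" 2>/dev/null || \
        echo "note: chown $1 $ROOT_PREFIX$2 skipped (run as root)"
    chmod -R 775 "$ROOT_PREFIX$2"
}

make_dir grid:oinstall   /u1/db/oraInventory
make_dir grid:oinstall   /u1/db/grid
make_dir grid:oinstall   /u1/db/11.2.0/grid
make_dir oracle:oinstall /u1/db/oracle/cfgtoollogs
make_dir oracle:oinstall /u1/db/oracle/product/11.2.0/db_1
```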

1.6 RPM package preparation

## Recommended: put these commands in a yum.sh script and run that.
yum install -y compat-glibc
yum install -y compat-glibc-headers
yum install -y compat-libstdc++-296
yum install -y compat-libstdc++-296.i686
yum install -y compat-libstdc++-33.i686
yum install -y compat-libstdc++-33
yum install -y gcc
yum install -y gcc-c++
yum install -y gdbm
yum install -y gdbm.i686
yum install -y glibc
yum install -y glibc.i686
yum install -y glibc-common
yum install -y glibc-devel
yum install -y glibc-devel.i686
yum install -y libaio
yum install -y libaio.i686
yum install -y libaio-devel
yum install -y libaio-devel.i686
yum install -y libgcc
yum install -y libgcc.i686
yum install -y libgomp
yum install -y libgomp.i686
yum install -y libstdc++
yum install -y libstdc++.i686
yum install -y libstdc++-devel
yum install -y libstdc++-devel.i686
yum install -y libXp
yum install -y libXp.i686
yum install -y libXp-devel
yum install -y libXp-devel.i686
yum install -y libXtst
yum install -y libXtst.i686
yum install -y libXt
yum install -y libXt.i686
yum install -y libXt-devel
yum install -y libXt-devel.i686
yum install -y make
yum install -y sysstat
yum install -y elfutils-libelf-devel
yum install -y elfutils-libelf-devel.i686
yum install -y unixODBC
yum install -y unixODBC.i686
yum install -y unixODBC-devel
yum install -y unixODBC-devel.i686
yum install -y kernel-headers
yum install -y lrzsz
yum install -y libXrender
yum install -y compat-libcap1
yum install -y binutils
yum install -y ksh

During the yum installation, some dependency versions shipped with CentOS 7.4 may be too new, and a few packages need to be downgraded:

[root@crmtest1 ~]# yum list --showduplicates glibc
[root@crmtest1 ~]# yum downgrade glibc glibc-common glibc-devel glibc-headers

[root@crmtest1 ~]# yum downgrade gcc cpp libgomp
[root@crmtest1 ~]# yum downgrade libgcc
[root@crmtest1 ~]# yum downgrade libstdc++
[root@crmtest1 software]# rpm -ivh openmotif21-2.1.30-11.EL5.i386.rpm 
[root@crmtest1 software]# rpm -ivh xorg-x11-libs-compat-6.8.2-1.EL.33.0.1.i386.rpm 

1.7 Shared disk preparation

Configure the RAC shared disks in VMware: on the first CentOS VM, add newly created disks; on the second VM, add the same VMDK files created by the first VM. Set the disks to persistent (independent) mode and attach them to the SCSI 1 bus.

When adding the disks on node 2, choose "Use an existing virtual disk".

Repeat these steps to add hard disk 2 (SCSI 1:1), hard disk 3 (SCSI 1:2), and hard disk 4 (SCSI 1:3).

Confirm the disk location information, then add the following to the VMX file of each of the two VMs:

# shared disk configuration
disk.EnableUUID="TRUE" 
disk.locking="FALSE" 
diskLib.dataCacheMaxSize="0" 
diskLib.dataCacheMaxReadAheadSize="0" 
diskLib.dataCacheMinReadAheadSize="0" 
diskLib.dataCachePageSize="4096" 
diskLib.maxUnsyncedWrites="0"
scsi1.sharedBus="VIRTUAL"
scsi1.virtualDev = "lsilogic"
scsi1:1.present = "TRUE"
scsi1:1.fileName = "E:\Centos-ASM\ASM1.vmdk"
scsi1:1.mode = "independent-persistent"
scsi1:1.deviceType = "disk"
scsi1:2.present = "TRUE"
scsi1:2.fileName = "E:\Centos-ASM\ASM2.vmdk"
scsi1:2.mode = "independent-persistent"
scsi1:2.deviceType = "disk"
scsi1:3.present = "TRUE"
scsi1:3.fileName = "E:\Centos-ASM\ASM3.vmdk"
scsi1:3.mode = "independent-persistent"
scsi1:3.deviceType = "disk"
scsi1:1.redo = ""
scsi1:3.redo = ""
scsi1:2.redo = ""

The shared disks are laid out as follows:

+CRSDG    one 10 GB disk    /dev/sdb    CRS voting disk / OCR
+DATADG   one 20 GB disk    /dev/sdc    data storage
+FRADG    one 20 GB disk    /dev/sdd    archived redo logs

[root@crmtest1 ~]# fdisk -l | grep /dev
Disk /dev/sda: 42.9 GB, 42949672960 bytes, 83886080 sectors
/dev/sda1   *        2048      616447      307200   83  Linux
/dev/sda2          616448     8744959     4064256   82  Linux swap / Solaris
/dev/sda3         8744960    83886079    37570560   83  Linux
Disk /dev/sdd: 21.5 GB, 21474836480 bytes, 41943040 sectors
Disk /dev/sdb: 10.7 GB, 10737418240 bytes, 20971520 sectors
Disk /dev/sdc: 21.5 GB, 21474836480 bytes, 41943040 sectors

Create the udev rules

Find each disk's UUID:

[root@crmtest1 ~]# for i in b c d
>do
>/usr/lib/udev/scsi_id -g -u -d /dev/sd$i
>done
36000c2947a6468b10cc6cd64589e2028
36000c291f4d76644081f3089bc7701d3
36000c29e1859fa556a72c06fc8f48d2d

Edit the udev rules file:

vim /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="36000c2947a6468b10cc6cd64589e2028", RUN+="/bin/sh -c 'mknod /dev/asmdisk1 b $major $minor; chown grid:asmadmin /dev/asmdisk1; chmod 0660 /dev/asmdisk1'"
KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="36000c291f4d76644081f3089bc7701d3", RUN+="/bin/sh -c 'mknod /dev/asmdisk2 b $major $minor; chown grid:asmadmin /dev/asmdisk2; chmod 0660 /dev/asmdisk2'"
KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="36000c29e1859fa556a72c06fc8f48d2d", RUN+="/bin/sh -c 'mknod /dev/asmdisk3 b $major $minor; chown grid:asmadmin /dev/asmdisk3; chmod 0660 /dev/asmdisk3'"
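
Since the three rules differ only in the UUID and the disk number, they can be generated from the scsi_id output instead of edited by hand. A sketch (the generator script is mine; it writes to a local preview file, to be copied to /etc/udev/rules.d/99-oracle-asmdevices.rules after review):

```shell
#!/bin/sh
# Generate the ASM udev rules from the UUID list captured above (sketch).
# RULES defaults to a local preview file.
RULES=${RULES:-./99-oracle-asmdevices.rules.preview}
: > "$RULES"
i=1
for uuid in 36000c2947a6468b10cc6cd64589e2028 \
            36000c291f4d76644081f3089bc7701d3 \
            36000c29e1859fa556a72c06fc8f48d2d; do
    printf 'KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="%s", RUN+="/bin/sh -c '\''mknod /dev/asmdisk%d b $major $minor; chown grid:asmadmin /dev/asmdisk%d; chmod 0660 /dev/asmdisk%d'\''"\n' \
        "$uuid" "$i" "$i" "$i" >> "$RULES"
    i=$((i+1))
done
```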

Trigger udev:

[root@crmtest1 rules.d]# /sbin/udevadm trigger --type=devices --action=change

Verify the ASM devices:

[root@crmtest1 rules.d]# ll -ltr /dev/asm*
brw-rw----. 1 grid asmadmin 8, 16 Oct 30 11:03 /dev/asmdisk1
brw-rw----. 1 grid asmadmin 8, 48 Oct 30 11:03 /dev/asmdisk3
brw-rw----. 1 grid asmadmin 8, 32 Oct 30 11:03 /dev/asmdisk2

1.8 Kernel and system parameter changes

# Add the following to /etc/sysctl.conf:
kernel.shmall = 33057936
kernel.shmmax = 67702652928
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 6815744
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576

# Add the following to /etc/security/limits.conf:
*  hard  nofile  65536
*  soft  nofile  4096
*  hard  nproc   16384
*  soft  nproc   4096

Apply the sysctl changes (the limits.conf settings take effect at the next login):

[root@crmtest1 ~]# /sbin/sysctl -p
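
The kernel.shmmax/kernel.shmall values above are sized for this particular VM. A common rule of thumb (conventional guidance, not an Oracle mandate; check the installation guide for your release) is shmmax at about half of physical RAM and shmall = shmmax / page size. A sketch that derives candidate values:

```shell
#!/bin/sh
# Derive suggested kernel.shmmax/kernel.shmall from physical RAM (sketch;
# "half of RAM" is conventional guidance, not an Oracle requirement).
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
page_size=$(getconf PAGE_SIZE)
shmmax=$((mem_kb * 1024 / 2))   # bytes
shmall=$((shmmax / page_size))  # pages
printf 'kernel.shmmax = %s\nkernel.shmall = %s\n' "$shmmax" "$shmall"
```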

1.9 DNS server setup (optional)

Because the RAC installer performs various DNS lookups, I set up a DNS server on crmtest1.

[root@crmtest1 ~]# yum -y install bind*

Open the main BIND configuration file:

[root@crmtest1 ~]# vim /etc/named.conf
options {
        # listen-on port 53 { 127.0.0.1; };  # with this line removed, named listens on UDP 53 on all interfaces by default (recommended here)
        listen-on-v6 port 53 { ::1; };
        directory       "/var/named";
        dump-file       "/var/named/data/cache_dump.db";
        statistics-file "/var/named/data/named_stats.txt";
        memstatistics-file "/var/named/data/named_mem_stats.txt";
        recursing-file  "/var/named/data/named.recursing";
        secroots-file   "/var/named/data/named.secroots";
        # allow-query     { localhost; };   # with this line removed, named answers queries from any client by default (recommended here)
        recursion yes;    # enable recursive queries
        dnssec-enable yes;
        dnssec-validation yes;
        /* Path to ISC DLV key */
        bindkeys-file "/etc/named.iscdlv.key";
        managed-keys-directory "/var/named/dynamic";
        pid-file "/run/named/named.pid";
        session-keyfile "/run/named/session.key";
};
logging {
        channel default_debug {
                file "data/named.run";
                severity dynamic;
        };
};
zone "." IN {
        type hint;
        file "named.ca";
};
include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";

Add a forward zone for the desired domain to /etc/named.rfc1912.zones:

……
zone "tp-link.net" IN {
        type master;
        file "tp-link.net.zone";
        allow-update { none; };
};

Create and edit the tp-link.net.zone file:

[root@crmtest1 /]# cd /var/named/
[root@crmtest1 named]# cp named.localhost tp-link.net.zone
[root@crmtest1 named]# vim tp-link.net.zone
$TTL 1D
@       IN SOA  tp-link.net. rname.invalid. (
                                        0       ; serial
                                        1D      ; refresh
                                        1H      ; retry
                                        1W      ; expire
                                        3H )    ; minimum
        NS      @
        A       127.0.0.1
        AAAA    ::1
crmtest1        IN      A       192.168.150.128
crmtest2        IN      A       192.168.150.129
crmtest1-vip    IN      A       192.168.150.130
crmtest2-vip    IN      A       192.168.150.131
crmtest1-priv   IN      A       192.168.37.128
crmtest2-priv   IN      A       192.168.37.129
crmtest-scan    IN      A       192.168.150.135
crmtest-scan    IN      A       192.168.150.136
crmtest-scan    IN      A       192.168.150.137

Once all the configuration files are written, the following commands check every DNS-related file and report any syntax errors:

[root@crmtest1 /]# named-checkconf /etc/named.conf
[root@crmtest1 /]# named-checkzone tp-link.net /var/named/tp-link.net.zone

# Start the DNS server
[root@crmtest1 /]# systemctl start named.service

# Enable it at boot
[root@crmtest1 /]# systemctl enable named.service

2. Installing RAC

Download the grid package p10404530_112030_Linux-x86-64_3of7.zip, unzip it, and run the pre-installation checks.

[grid@crmtest1 ~]$ unzip p10404530_112030_Linux-x86-64_3of7.zip
[grid@crmtest1 ~]$ ll grid/
total 56
drwxr-xr-x.  9 grid oinstall   178 Sep 22  2011 doc
drwxr-xr-x.  4 grid oinstall  4096 Sep 22  2011 install
-rwxr-xr-x.  1 grid oinstall 28122 Sep 22  2011 readme.html
drwxr-xr-x.  2 grid oinstall    30 Sep 22  2011 response
drwxr-xr-x.  2 grid oinstall    34 Sep 22  2011 rpm
-rwxr-xr-x.  1 grid oinstall  4878 Sep 22  2011 runcluvfy.sh
-rwxr-xr-x.  1 grid oinstall  3227 Sep 22  2011 runInstaller
drwxr-xr-x.  2 grid oinstall    29 Sep 22  2011 sshsetup
drwxr-xr-x. 14 grid oinstall  4096 Sep 22  2011 stage
-rwxr-xr-x.  1 grid oinstall  4326 Sep  2  2011 welcome.html
[grid@crmtest1 ~]$ cd grid/
[grid@crmtest1 grid]$  ./runcluvfy.sh stage -pre crsinst -n crmtest1,crmtest2 -verbose

Performing pre-checks for cluster services setup 

Checking node reachability...

Check: Node reachability from node "crmtest1"
  Destination Node                      Reachable?              
  ------------------------------------  ------------------------
  crmtest1                              yes                     
  crmtest2                              yes                     
Result: Node reachability check passed from node "crmtest1"


Checking user equivalence...

Check: User equivalence for user "grid"
  Node Name                             Status                  
  ------------------------------------  ------------------------
  crmtest2                              passed                  
  crmtest1                              failed                  
Result: PRVF-4007 : User equivalence check failed for user "grid"

WARNING: 
User equivalence is not set for nodes:
	crmtest1
Verification will proceed with nodes:
	crmtest2

Checking node connectivity...

Checking hosts config file...
  Node Name                             Status                  
  ------------------------------------  ------------------------
  crmtest2                              passed                  

Verification of the hosts config file successful

……

Check: TCP connectivity of subnet "192.168.122.0"
  Source                          Destination                     Connected?      
  ------------------------------  ------------------------------  ----------------
  crmtest1:192.168.122.1          crmtest2:192.168.122.1          failed          

ERROR: 
PRVF-7617 : Node connectivity between "crmtest1 : 192.168.122.1" and "crmtest2 : 192.168.122.1" failed
Result: TCP connectivity check failed for subnet "192.168.122.0"

……
Check: Package existence for "pdksh" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  crmtest2      missing                   pdksh-5.2.14              failed    
Result: Package existence check failed for "pdksh"

……

All nodes have one search entry defined in file "/etc/resolv.conf"
Checking DNS response time for an unreachable node
  Node Name                             Status                  
  ------------------------------------  ------------------------
  crmtest2                              failed                  
  crmtest1                              failed                  
PRVF-5637 : DNS response time could not be checked on following nodes: crmtest2,crmtest1

File "/etc/resolv.conf" is not consistent across nodes

Resolving the pre-installation check failures

Result: PRVF-4007 : User equivalence check failed for user "grid"

Verify the public, VIP, and private IPs as the grid user on both crmtest1 and crmtest2:
[grid@crmtest1 ~]$ ssh crmtest1 date
[grid@crmtest1 ~]$ ssh crmtest1-vip date
[grid@crmtest1 ~]$ ssh crmtest1-priv date
[grid@crmtest1 ~]$ ssh crmtest2 date
[grid@crmtest1 ~]$ ssh crmtest2-vip date
[grid@crmtest1 ~]$ ssh crmtest2-priv date

[grid@crmtest2 ~]$ ssh crmtest1 date
[grid@crmtest2 ~]$ ssh crmtest1-vip date
[grid@crmtest2 ~]$ ssh crmtest1-priv date
[grid@crmtest2 ~]$ ssh crmtest2 date
[grid@crmtest2 ~]$ ssh crmtest2-vip date
[grid@crmtest2 ~]$ ssh crmtest2-priv date


PRVF-7617 : Node connectivity between "crmtest1 : 192.168.122.1" and "crmtest2 : 192.168.122.1" failed
This is resolved by removing the libvirt virtual bridge (virbr0) from both CentOS VMs:
[root@crmtest1 ~]# ifconfig virbr0 down
[root@crmtest1 ~]# brctl delbr virbr0
[root@crmtest1 ~]# systemctl disable libvirtd.service

[root@crmtest2 ~]# ifconfig virbr0 down
[root@crmtest2 ~]# brctl delbr virbr0
[root@crmtest2 ~]# systemctl disable libvirtd.service


Result: Package existence check failed for "pdksh"
This can be ignored: pdksh is an old, deprecated package; ksh is used instead.


PRVF-5637 : DNS response time could not be checked on following nodes: crmtest2,crmtest1

As noted in section 1.9, a DNS server was set up on crmtest1.
Edit /etc/resolv.conf on both crmtest1 and crmtest2:
search tp-link.net
nameserver 192.168.150.128

Then wrap /usr/bin/nslookup on both crmtest1 and crmtest2 so that it always exits 0 (cluvfy treats a nonzero exit as a lookup failure):

# mv /usr/bin/nslookup /usr/bin/nslookup.orig
# echo '#!/bin/bash
/usr/bin/nslookup.orig $*
exit 0' > /usr/bin/nslookup
# chmod a+x /usr/bin/nslookup

Re-running the checks, everything now passes except pdksh.

2.1 Installing Grid Infrastructure

Start the grid installer:

[grid@crmtest1 grid]$ ./runInstaller -jreLoc /etc/alternatives/jre_1.8.0

Test SSH connectivity between crmtest1 and crmtest2.

The test succeeds.

Click "Next".

The installer then kept hanging; the log showed the error:

SEVERE: [FATAL] [INS-40912] Virtual host name: crmtest1-vip is assigned to another system on the network.
   CAUSE: One or more virtual host names appeared to be assigned to another system on the network.
   ACTION: Ensure that the virtual host names assigned to each of the nodes in the cluster are not currently in use, and the IP addresses are registered to the domain name you want to use as the virtual host name.

It turns out the VIP addresses must not be bound manually; Grid Infrastructure configures them itself. So the manual VIP bindings were removed on crmtest1 and crmtest2:

[root@crmtest1 ~]# ifdown ens33:1

[root@crmtest2 ~]# ifdown ens33:1

The installation then proceeded.

Password: Oracle123

Install the cvuqdisk-1.0.9-1.rpm package on both nodes, then rerun the checks.

Click "Ignore All".

At 76%, the installer asks you to run two scripts as root on crmtest1 and crmtest2:

/u1/db/oraInventory/orainstRoot.sh

/u1/db/11.2.0/grid/root.sh

orainstRoot.sh ran successfully with no problems.

Running the root.sh script hit the following problem:

[root@crmtest1 grid]# ./root.sh 
Performing root user operation for Oracle 11g 

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u1/db/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u1/db/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
  root wallet
  root wallet cert
  root cert export
  peer wallet
  profile reader wallet
  pa wallet
  peer wallet keys
  pa wallet keys
  peer cert request
  pa cert request
  peer cert
  pa cert
  peer root cert TP
  profile reader root cert TP
  pa root cert TP
  peer pa cert TP
  pa peer cert TP
  profile reader pa cert TP
  profile reader peer cert TP
  peer user cert
  pa user cert
Adding Clusterware entries to inittab
ohasd failed to start
Failed to start the Clusterware. Last 20 lines of the alert log follow: 
2019-10-31 11:19:36.572
[client(33002)]CRS-2101:The OLR was formatted using version 3.

Cause:

On CentOS 7, ohasd must be set up as a systemd service before root.sh is run; CentOS 7 no longer starts init.ohasd via an inittab entry.

Create the service file as root:
[root@crmtest1 ~]# touch /usr/lib/systemd/system/ohas.service
[root@crmtest1 ~]# chmod 644 /usr/lib/systemd/system/ohas.service

Add the following to the newly created ohas.service file:
[root@crmtest1 ~]# cat /usr/lib/systemd/system/ohas.service
[Unit]
Description=Oracle High Availability Services
After=syslog.target

[Service]
ExecStart=/etc/init.d/init.ohasd run >/dev/null 2>&1
Type=simple
Restart=always

[Install]
WantedBy=multi-user.target

Run the following commands as root:
[root@crmtest1 ~]# systemctl daemon-reload
[root@crmtest1 ~]# systemctl enable ohas.service
Created symlink from /etc/systemd/system/multi-user.target.wants/ohas.service to /usr/lib/systemd/system/ohas.service.
[root@crmtest1 ~]# systemctl start ohas.service

Check the service status:
[root@crmtest1 ~]# systemctl status ohas.service
● ohas.service - Oracle High Availability Services
   Loaded: loaded (/usr/lib/systemd/system/ohas.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2019-10-31 11:30:42 CST; 4s ago
 Main PID: 35265 (init.ohasd)
   CGroup: /system.slice/ohas.service
           └─35265 /bin/sh /etc/init.d/init.ohasd run >/dev/null 2>&1 Type=si...

Oct 31 11:30:42 crmtest1 systemd[1]: Started Oracle High Availability Services.
Hint: Some lines were ellipsized, use -l to show in full.

Because root.sh had been run on crmtest1 and crmtest2 at the same time, the partial configuration was first removed on both nodes:

[root@crmtest1 grid]# cd /u1/db/11.2.0/grid/crs/install/
[root@crmtest1 install]# /u1/db/11.2.0/grid/perl/bin/perl rootcrs.pl -deconfig -force -verbose
[root@crmtest1 install]# /u1/db/11.2.0/grid/perl/bin/perl roothas.pl -deconfig -force -verbose

[root@crmtest2 grid]# cd /u1/db/11.2.0/grid/crs/install/
[root@crmtest2 install]# /u1/db/11.2.0/grid/perl/bin/perl rootcrs.pl -deconfig -force -verbose
[root@crmtest2 install]# /u1/db/11.2.0/grid/perl/bin/perl roothas.pl -deconfig -force -verbose

Then run root.sh again on crmtest1 and crmtest2; run it on the second node only after crmtest1 has completed successfully.

[root@crmtest1 grid]# ./root.sh 
Performing root user operation for Oracle 11g 

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u1/db/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u1/db/11.2.0/grid/crs/install/crsconfig_params
User ignored Prerequisites during installation
CRS-2672: Attempting to start 'ora.mdnsd' on 'crmtest1'
CRS-2676: Start of 'ora.mdnsd' on 'crmtest1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'crmtest1'
CRS-2676: Start of 'ora.gpnpd' on 'crmtest1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'crmtest1'
CRS-2672: Attempting to start 'ora.gipcd' on 'crmtest1'
CRS-2676: Start of 'ora.cssdmonitor' on 'crmtest1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'crmtest1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'crmtest1'
CRS-2672: Attempting to start 'ora.diskmon' on 'crmtest1'
CRS-2676: Start of 'ora.diskmon' on 'crmtest1' succeeded
CRS-2676: Start of 'ora.cssd' on 'crmtest1' succeeded

ASM created and started successfully.

Disk Group CRSDG created successfully.

clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 9e32eb96f5ad4f65bf0142d05b027b4c.
Successfully replaced voting disk group with +CRSDG.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   9e32eb96f5ad4f65bf0142d05b027b4c (/dev/asmdisk1) [CRSDG]
Located 1 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'crmtest1'
CRS-2676: Start of 'ora.asm' on 'crmtest1' succeeded
CRS-2672: Attempting to start 'ora.CRSDG.dg' on 'crmtest1'
CRS-2676: Start of 'ora.CRSDG.dg' on 'crmtest1' succeeded
Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Run root.sh on the crmtest2 node:

[root@crmtest2 grid]# ./root.sh 
Performing root user operation for Oracle 11g 

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u1/db/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u1/db/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
Adding Clusterware entries to inittab
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node crmtest1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Checking the cause of the remaining verification errors:

INFO: PRVF-5494 : The NTP Daemon or Service was not alive on all nodes
INFO: PRVF-5415 : Check to see if NTP daemon or service is running failed
INFO: Clock synchronization check using Network Time Protocol(NTP) failed
INFO: PRVF-9652 : Cluster Time Synchronization Services check failed

Stop the NTP service on crmtest1 and crmtest2 (Oracle's Cluster Time Synchronization Service then takes over):

[root@crmtest1 ~]# systemctl stop ntpd
[root@crmtest1 ~]# chkconfig ntpd off
[root@crmtest1 ~]# mv /etc/ntp.conf /etc/ntp.conf.original

[root@crmtest2 ~]# systemctl stop ntpd
[root@crmtest2 ~]# chkconfig ntpd off
[root@crmtest2 ~]# mv /etc/ntp.conf /etc/ntp.conf.original

At this point, the RAC software installation is complete.

2.2 Creating the DATA/FRA disk groups with asmca

Set the following environment variables for the grid user on both database nodes:

export ORACLE_BASE=/u1/db/grid
export ORACLE_HOME=/u1/db/11.2.0/grid
export ORACLE_SID=+ASM1   ## +ASM2 on node 2
export PATH=$ORACLE_HOME/bin:$PATH:$ORACLE_HOME/OPatch

Check the RAC cluster status; three SCAN IPs were configured:

[grid@crmtest1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRSDG.dg
               ONLINE  ONLINE       crmtest1                                     
               ONLINE  ONLINE       crmtest2                                     
ora.DATADG.dg
               ONLINE  ONLINE       crmtest1                                     
               ONLINE  ONLINE       crmtest2                                     
ora.FRADG.dg
               ONLINE  ONLINE       crmtest1                                     
               ONLINE  ONLINE       crmtest2                                     
ora.LISTENER.lsnr
               ONLINE  ONLINE       crmtest1                                     
               ONLINE  ONLINE       crmtest2                                     
ora.asm
               ONLINE  ONLINE       crmtest1                 Started             
               ONLINE  ONLINE       crmtest2                 Started             
ora.gsd
               OFFLINE OFFLINE      crmtest1                                     
               OFFLINE OFFLINE      crmtest2                                     
ora.net1.network
               ONLINE  ONLINE       crmtest1                                     
               ONLINE  ONLINE       crmtest2                                     
ora.ons
               ONLINE  ONLINE       crmtest1                                     
               ONLINE  ONLINE       crmtest2                                     
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       crmtest2                                     
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       crmtest1                                     
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       crmtest1                                     
ora.crmtest1.vip
      1        ONLINE  ONLINE       crmtest1                                     
ora.crmtest2.vip
      1        ONLINE  ONLINE       crmtest2                                     
ora.cvu
      1        ONLINE  ONLINE       crmtest1                                     
ora.oc4j
      1        ONLINE  ONLINE       crmtest1                                     
ora.scan1.vip
      1        ONLINE  ONLINE       crmtest2                                     
ora.scan2.vip
      1        ONLINE  ONLINE       crmtest1                                     
ora.scan3.vip
      1        ONLINE  ONLINE       crmtest1                                     

Create the DATA and FRA disk groups:

[grid@crmtest1 ~]$ source .bash_profile
[grid@crmtest1 ~]$ asmca

This results in three disk groups: CRSDG, DATADG, and FRADG.

Verify from the command line:

[grid@crmtest1 ~]$ asmcmd
ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  4194304     10240     9808                0            9808              0             Y  CRSDG/
MOUNTED  EXTERN  N         512   4096  1048576     20480    20385                0           20385              0             N  DATADG/
MOUNTED  EXTERN  N         512   4096  1048576     20480    20385                0           20385              0             N  FRADG/
ASMCMD> quit

2.3 Installing the database software

As the oracle user on crmtest1:

[oracle@crmtest1 ~]$ unzip p10404530_112030_Linux-x86-64_1of7.zip
[oracle@crmtest1 ~]$ unzip p10404530_112030_Linux-x86-64_2of7.zip
[oracle@crmtest1 ~]$ cd database
[oracle@crmtest1 ~]$ ./runInstaller -jreLoc /etc/alternatives/jre_1.8.0


Password: Oracle123

Password: Oracle123

Run the root.sh script on crmtest1:

[root@crmtest1 db_1]# ./root.sh 
Performing root user operation for Oracle 11g 

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u1/db/oracle/product/11.2.0/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.

Run the root.sh script on crmtest2:

[root@crmtest2 db_1]# ./root.sh 
Performing root user operation for Oracle 11g 

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u1/db/oracle/product/11.2.0/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.

Set the following environment variables for the oracle user on both database nodes:

export ORACLE_BASE=/u1/db/oracle
export ORACLE_HOME=/u1/db/oracle/product/11.2.0/db_1
export ORACLE_SID=CRMTEST1  # CRMTEST2 on node 2
export PATH=$ORACLE_HOME/perl/bin:$PATH:$ORACLE_HOME/bin

The installation is now complete.

Check the database status:

[oracle@crmtest1 ~]$ sqlplus / as sysdba
SQL> select instance_name,status from v$instance;

INSTANCE_NAME	 STATUS
---------------- ------------
CRMTEST1	 OPEN

[oracle@crmtest2 ~]$ sqlplus / as sysdba
SQL> select instance_name,status from v$instance;

INSTANCE_NAME	 STATUS
---------------- ------------
CRMTEST2	 OPEN

Querying the cluster status as the grid user shows that everything is normal.
