What are the OCR and the votedisk?
As a cluster, Oracle Clusterware needs shared storage to hold the configuration information for the whole cluster, and the OCR (Oracle Cluster Registry) is where that configuration lives. The OCR does not need much space: under 10g, Oracle suggests 256 MB is already sufficient. The OCR must be stored on a cluster file system or on raw devices; for performance reasons, I recommend placing the OCR on raw devices, which perform well and are not complicated to manage (there are usually only a few OCR and votedisk devices). Because the OCR holds the cluster configuration, this information can only be maintained from one node, called the master node; the other nodes keep a read-only copy of the OCR in memory. All OCR updates are performed by the master node, which then notifies the other nodes.
The votedisk stores information about each node in the cluster and is used for heartbeat monitoring.
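Both can be inspected at any time. A minimal sketch, run as root from $CRS_HOME/bin (the paths match the environment used throughout this article):

./ocrcheck                      # OCR devices, size, and integrity
cat /etc/oracle/ocr.loc         # where this node believes the OCR lives
./crsctl query css votedisk     # voting disks used by CSS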
Does OCR and votedisk maintenance require the cluster to be offline?
In most cases, OCR maintenance needs to be performed online, because every node has an ocr.loc file, and operating online ensures that the ocr.loc file on each node is updated in time (a quick way to compare them across nodes is shown below). However, some operations, such as repair and rebuild, must be performed offline (described in detail later).
Votedisk maintenance usually needs to be done offline.
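To see what online maintenance keeps in sync, compare the registry pointer file across nodes. A minimal sketch, assuming the two-node node1/node2 setup used in this article:

cat /etc/oracle/ocr.loc             # local registry pointer
ssh node2 cat /etc/oracle/ocr.loc   # should list the same devices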
Which commands are used to maintain the OCR?
The commonly used OCR maintenance commands are:
ocrcheck
[root@node1 bin]# ./ocrcheck -h
Name:
ocrcheck - Displays health of Oracle Cluster Registry.
Synopsis:
ocrcheck
Description:
prompt> ocrcheck
Displays current usage, location and health of the cluster registry
Notes:
A log file will be created in
$ORACLE_HOME/log/<hostname>/client/ocrcheck_<pid>.log. Please ensure
you have file creation privileges in the above directory before
running this tool.
ocrdump (the dump output can be used to inspect the contents of the OCR, but it cannot be used for recovery)
[root@node1 bin]# ./ocrdump -h
Name:
ocrdump - Dump contents of Oracle Cluster Registry to a file.
Synopsis:
ocrdump [<filename>|-stdout] [-backupfile <backupfilename>] [-keyname <keyname>] [-xml] [-noheader]
Description:
Default filename is OCRDUMPFILE. Examples are:
prompt> ocrdump
writes cluster registry contents to OCRDUMPFILE in the current directory
prompt> ocrdump MYFILE
writes cluster registry contents to MYFILE in the current directory
prompt> ocrdump -stdout -keyname SYSTEM
writes the subtree of SYSTEM in the cluster registry to stdout
prompt> ocrdump -stdout -xml
writes cluster registry contents to stdout in xml format
Notes:
The header information will be retrieved based on best effort basis.
A log file will be created in
$ORACLE_HOME/log/<hostname>/client/ocrdump_<pid>.log. Make sure
you have file creation privileges in the above directory before
running this tool.
ocrconfig
[root@node1 bin]# ./ocrconfig -h
Name:
ocrconfig - Configuration tool for Oracle Cluster Registry.
Synopsis:
ocrconfig [option]
option:
-export <filename> [-s online]
- Export cluster register contents to a file
-import <filename> - Import cluster registry contents from a file
-upgrade [<user> [<group>]]
- Upgrade cluster registry from previous version
-downgrade [-version <version string>]
- Downgrade cluster registry to the specified version
-backuploc <dirname> - Configure periodic backup location
-showbackup - Show backup information
-restore <filename> - Restore from physical backup
-replace ocr|ocrmirror [<filename>] - Add/replace/remove a OCR device/file
-overwrite - Overwrite OCR configuration on disk
-repair ocr|ocrmirror <filename> - Repair local OCR configuration
-help - Print out this help information
Note:
A log file will be created in
$ORACLE_HOME/log/<hostname>/client/ocrconfig_<pid>.log. Please ensure
you have file creation privileges in the above directory before
running this tool.
Table D-1 The ocrconfig Command Options

| Option | Purpose |
|---|---|
| -backuploc | To change an OCR backup file location. For this entry, use a full path that is accessible by all of the nodes. |
| -downgrade | To downgrade an OCR to an earlier version. |
| -export | To export the contents of an OCR into a target file. |
| -help | To display help for the ocrconfig commands. |
| -import | To import the OCR contents from a previously exported OCR file. |
| -overwrite | To update an OCR configuration that is recorded on the OCR with the current OCR configuration information that is found on the node from which you are running this command. |
| -repair | To update an OCR configuration on the node from which you are running this command with the new configuration information specified by this command. |
| -replace | To add, replace, or remove an OCR location. |
| -restore | To restore an OCR from an automatically created OCR backup file. |
| -showbackup | To display the location, timestamp, and the originating node name of the backup files that Oracle created in the past 4 hours, 8 hours, 12 hours, and in the last day and week. You do not have to be the root user to run this option. |
| -upgrade | To upgrade an OCR to a later version. |
How do you recover a damaged OCR?
After the OCR is damaged, there are usually two ways to fix it: recovery and rebuild. For recovery, we can restore from a file previously exported with -export, or from one of the backups automatically maintained on the master node.
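Before choosing a method, it helps to see what there is to restore from. A minimal sketch, run as root from $CRS_HOME/bin (the export file name is illustrative):

./ocrconfig -showbackup               # automatic backups taken by the master node
./ocrconfig -export /backup/ocr.exp   # manual logical backup, loaded back with -import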
[root@node1 bin]# ./crs_stat -t -v
Name Type R/RA F/FT Target State Host
----------------------------------------------------------------------
httpd_vip application 0/1 0/0 ONLINE ONLINE node2
httpd_web application 0/1 0/4 ONLINE ONLINE node2
ora....SM1.asm application 0/5 0/0 ONLINE ONLINE node1
ora....E1.lsnr application 0/5 0/0 ONLINE ONLINE node1
ora.node1.gsd application 0/5 0/0 ONLINE ONLINE node1
ora.node1.ons application 0/3 0/0 ONLINE ONLINE node1
ora.node1.vip application 0/0 0/0 ONLINE ONLINE node1
ora....SM2.asm application 0/5 0/0 ONLINE ONLINE node2
ora....E2.lsnr application 0/5 0/0 ONLINE ONLINE node2
ora.node2.gsd application 0/5 0/0 ONLINE ONLINE node2
ora.node2.ons application 0/3 0/0 ONLINE ONLINE node2
ora.node2.vip application 0/0 0/0 ONLINE ONLINE node2
ora.racdb.db application 0/0 0/1 ONLINE ONLINE node2
ora....b1.inst application 0/5 0/0 ONLINE ONLINE node1
ora....b2.inst application 0/5 0/0 ONLINE ONLINE node2
[root@node1 bin]# ./ocrconfig -export a.ocr (it is best to shut down CRS when exporting)
[root@node1 bin]# ./crs_stop httpd_web
Attempting to stop `httpd_web` on member `node2`
Stop of `httpd_web` on member `node2` succeeded.
[root@node1 bin]# ./crs_stop httpd_vip
Attempting to stop `httpd_vip` on member `node2`
Stop of `httpd_vip` on member `node2` succeeded.
[root@node1 bin]# ./crs_stat -t -v
Name Type R/RA F/FT Target State Host
----------------------------------------------------------------------
httpd_vip application 0/1 0/0 OFFLINE OFFLINE
httpd_web application 0/1 0/4 OFFLINE OFFLINE
ora....SM1.asm application 0/5 0/0 ONLINE ONLINE node1
ora....E1.lsnr application 0/5 0/0 ONLINE ONLINE node1
ora.node1.gsd application 0/5 0/0 ONLINE ONLINE node1
ora.node1.ons application 0/3 0/0 ONLINE ONLINE node1
ora.node1.vip application 0/0 0/0 ONLINE ONLINE node1
ora....SM2.asm application 0/5 0/0 ONLINE ONLINE node2
ora....E2.lsnr application 0/5 0/0 ONLINE ONLINE node2
ora.node2.gsd application 0/5 0/0 ONLINE ONLINE node2
ora.node2.ons application 0/3 0/0 ONLINE ONLINE node2
ora.node2.vip application 0/0 0/0 ONLINE ONLINE node2
ora.racdb.db application 0/0 0/1 ONLINE ONLINE node2
ora....b1.inst application 0/5 0/0 ONLINE ONLINE node1
ora....b2.inst application 0/5 0/0 ONLINE ONLINE node2
[root@node1 bin]# ./crs_unregister httpd_web
[root@node1 bin]# ./crs_unregister httpd_vip
[root@node1 bin]# ./crs_stat -t -v
Name Type R/RA F/FT Target State Host
----------------------------------------------------------------------
ora....SM1.asm application 0/5 0/0 ONLINE ONLINE node1
ora....E1.lsnr application 0/5 0/0 ONLINE ONLINE node1
ora.node1.gsd application 0/5 0/0 ONLINE ONLINE node1
ora.node1.ons application 0/3 0/0 ONLINE ONLINE node1
ora.node1.vip application 0/0 0/0 ONLINE ONLINE node1
ora....SM2.asm application 0/5 0/0 ONLINE ONLINE node2
ora....E2.lsnr application 0/5 0/0 ONLINE ONLINE node2
ora.node2.gsd application 0/5 0/0 ONLINE ONLINE node2
ora.node2.ons application 0/3 0/0 ONLINE ONLINE node2
ora.node2.vip application 0/0 0/0 ONLINE ONLINE node2
ora.racdb.db application 0/0 0/1 ONLINE ONLINE node2
ora....b1.inst application 0/5 0/0 ONLINE ONLINE node1
ora....b2.inst application 0/5 0/0 ONLINE ONLINE node2
[root@node1 bin]# ssh node2 /u01/app/crs_home/bin/crs_stat -t -v
root@node2's password:
Name Type R/RA F/FT Target State Host
----------------------------------------------------------------------
ora....SM1.asm application 0/5 0/0 ONLINE ONLINE node1
ora....E1.lsnr application 0/5 0/0 ONLINE ONLINE node1
ora.node1.gsd application 0/5 0/0 ONLINE ONLINE node1
ora.node1.ons application 0/3 0/0 ONLINE ONLINE node1
ora.node1.vip application 0/0 0/0 ONLINE ONLINE node1
ora....SM2.asm application 0/5 0/0 ONLINE ONLINE node2
ora....E2.lsnr application 0/5 0/0 ONLINE ONLINE node2
ora.node2.gsd application 0/5 0/0 ONLINE ONLINE node2
ora.node2.ons application 0/3 0/0 ONLINE ONLINE node2
ora.node2.vip application 0/0 0/0 ONLINE ONLINE node2
ora.racdb.db application 0/0 0/1 ONLINE ONLINE node2
ora....b1.inst application 0/5 0/0 ONLINE ONLINE node1
ora....b2.inst application 0/5 0/0 ONLINE ONLINE node2
[root@node1 bin]# ./corconfig -import a.ocr
-bash: ./corconfig: No such file or directory
[root@node1 bin]# ./ocrconfig -import a.ocr
PROT-19: Cannot proceed while clusterware is running. Shutdown clusterware first (the clusterware must be shut down before importing the OCR)
[root@node1 bin]# ./crsctl stop crs
Stopping resources. This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
[root@node1 bin]# ssh node2 /u01/app/crs_home/bin/crsctl stop crs
root@node2's password:
Stopping resources. This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
[root@node1 bin]# ./ocrconfig -import a.ocr
[root@node1 bin]# ./crsctl start crs
Attempting to start CRS stack
The CRS stack will be started shortly
[root@node1 bin]# ssh node2 /u01/app/crs_home/bin/crsctl start crs
root@node2's password:
Attempting to start CRS stack
The CRS stack will be started shortly
[root@node1 bin]# ./crs_stat -t -v
Name Type R/RA F/FT Target State Host
----------------------------------------------------------------------
httpd_vip application 0/1 0/0 ONLINE ONLINE node1
httpd_web application 1/1 0/4 ONLINE ONLINE node1
ora....SM1.asm application 0/5 0/0 ONLINE OFFLINE
ora....E1.lsnr application 0/5 0/0 ONLINE ONLINE node1
ora.node1.gsd application 0/5 0/0 ONLINE ONLINE node1
ora.node1.ons application 0/3 0/0 ONLINE ONLINE node1
ora.node1.vip application 0/0 0/0 ONLINE ONLINE node1
ora....SM2.asm application 0/5 0/0 ONLINE ONLINE node2
ora....E2.lsnr application 0/5 0/0 ONLINE ONLINE node2
ora.node2.gsd application 0/5 0/0 ONLINE ONLINE node2
ora.node2.ons application 0/3 0/0 ONLINE ONLINE node2
ora.node2.vip application 0/0 0/0 ONLINE ONLINE node2
ora.racdb.db application 0/0 0/1 ONLINE OFFLINE
ora....b1.inst application 0/5 0/0 ONLINE OFFLINE
ora....b2.inst application 0/5 0/0 ONLINE OFFLINE
The following shows recovering the OCR with -restore:
[root@node1 crs]# ll -h /u01/app/crs_home/bin/a.ocr
-rw-r--r-- 1 root root 93K Aug 1 16:06 /u01/app/crs_home/bin/a.ocr
[root@node1 crs]# ll -h
total 25M
-rw-r--r-- 1 root root 4.4M Jul 31 12:36 35521234
-rw-r--r-- 1 oracle root 3.5M Jul 22 14:04 backup00.ocr
-rw-r--r-- 1 oracle root 3.5M Jul 10 14:04 backup01.ocr
-rw-r--r-- 1 oracle root 3.5M Jul 9 14:00 backup02.ocr
-rw-r--r-- 1 oracle root 3.5M Jul 22 14:04 day.ocr
-rw-r--r-- 1 oracle root 85K Jul 24 15:27 ocr.exp
-rw-r--r-- 1 oracle root 3.5M Jul 10 14:04 week_.ocr
-rw-r--r-- 1 oracle root 3.5M Jul 3 14:15 week.ocr
[root@node1 crs]# ocrconfig -restore /u01/app/crs_home/bin/a.ocr
PROT-22: Storage too small
[root@node1 crs]# ocrconfig -restore /u01/app/crs_home/cdata/crs/backup00.ocr
PROT-19: Cannot proceed while clusterware is running. Shutdown clusterware first
Note: a file produced by -export can only be loaded back with -import, and for -restore the whole cluster must likewise be offline.
The detailed steps are not demonstrated here, but a sketch of the sequence follows.
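A sketch of the -restore sequence under those constraints, run as root from $CRS_HOME/bin (the backup path matches the listing above; node2's CRS home is as elsewhere in this article):

./crsctl stop crs                                               # stop clusterware on node1
ssh node2 /u01/app/crs_home/bin/crsctl stop crs                 # ...and on node2
./ocrconfig -restore /u01/app/crs_home/cdata/crs/backup00.ocr   # physical restore
./crsctl start crs                                              # restart node1
ssh node2 /u01/app/crs_home/bin/crsctl start crs                # restart node2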
How do you add or remove an OCR disk in 10g?
In 10g, if external redundancy was selected when the clusterware was installed, only one disk can be chosen for the OCR, but a new OCR disk can still be added from the command line. To replace an existing OCR disk, two OCR copies must already exist: only when both the primary OCR and the mirror OCR are present can either of them be replaced.
Example:
[root@node1 bin]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 1125736
Used space (kbytes) : 3852
Available space (kbytes) : 1121884
ID : 849560479
Device/File Name : /dev/raw/raw1
Device/File integrity check succeeded
Device/File not configured
Cluster registry integrity check succeeded
[root@node1 bin]# ocrconfig -replace ocr /dev/raw/raw3
PROT-16: Internal Error (the replace fails here because only the primary OCR exists; a mirror must be added first)
[root@node1 bin]# ocrconfig -replace ocrmirror /dev/raw/raw4
[root@node1 bin]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 1125736
Used space (kbytes) : 3852
Available space (kbytes) : 1121884
ID : 849560479
Device/File Name : /dev/raw/raw1
Device/File integrity check succeeded
Device/File Name : /dev/raw/raw4
Device/File integrity check succeeded
Cluster registry integrity check succeeded
[root@node1 bin]# ocrconfig -replace ocr /dev/raw/raw3
[root@node1 bin]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 1125736
Used space (kbytes) : 3852
Available space (kbytes) : 1121884
ID : 849560479
Device/File Name : /dev/raw/raw3
Device/File integrity check succeeded
Device/File Name : /dev/raw/raw4
Device/File integrity check succeeded
Cluster registry integrity check succeeded
[root@node1 bin]# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
[root@node1 bin]# ssh node2 /u01/app/crs_home/bin/ocrcheck
Warning: Permanently added the RSA host key for IP address '192.168.100.32' to the list of known hosts.
root@node2's password:
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 1125736
Used space (kbytes) : 3852
Available space (kbytes) : 1121884
ID : 849560479
Device/File Name : /dev/raw/raw3
Device/File integrity check succeeded
Device/File Name : /dev/raw/raw4
Device/File integrity check succeeded
Cluster registry integrity check succeeded
[root@node1 bin]# ssh node2 cat /etc/oracle/ocr.loc
root@node2's password:
#Device/file /dev/raw/raw1 getting replaced by device /dev/raw/raw3
ocrconfig_loc=/dev/raw/raw3
ocrmirrorconfig_loc=/dev/raw/raw4
local_only=false
It is best to add or remove OCR devices while all nodes are online; otherwise the node information gets out of sync.
For example, if the OCR delete operation is performed on node1 while node2 is shut down, the result is as follows:
[root@node1 bin]# ssh node2 /u01/app/crs_home/bin/crsctl stop crs
root@node2's password:
Permission denied, please try again.
root@node2's password:
Stopping resources. This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
[root@node1 bin]# ssh node2 /u01/app/crs_home/bin/crsctl check crs
root@node2's password:
Failure 1 contacting CSS daemon
Cannot communicate with CRS
Cannot communicate with EVM
[root@node1 bin]# ./ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 1125736
Used space (kbytes) : 3852
Available space (kbytes) : 1121884
ID : 849560479
Device/File Name : /dev/raw/raw3
Device/File integrity check succeeded
Device/File Name : /dev/raw/raw4
Device/File integrity check succeeded
Cluster registry integrity check succeeded
[root@node1 bin]# ./ocrconfig -replace ocrmirror
[root@node1 bin]# ./ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 1125736
Used space (kbytes) : 3852
Available space (kbytes) : 1121884
ID : 849560479
Device/File Name : /dev/raw/raw3
Device/File integrity check succeeded
Device/File not configured
Cluster registry integrity check succeeded
[root@node1 bin]# cat /etc/oracle/ocr.loc
#Device/file /dev/raw/raw4 being deleted
ocrconfig_loc=/dev/raw/raw3
local_only=false
[root@node1 bin]# ssh node2 cat /etc/oracle/ocr.loc
root@node2's password:
Permission denied, please try again.
root@node2's password:
#Device/file /dev/raw/raw1 getting replaced by device /dev/raw/raw3
ocrconfig_loc=/dev/raw/raw3
ocrmirrorconfig_loc=/dev/raw/raw4
local_only=false
[root@node1 bin]# ssh node2 /u01/app/crs_home/bin/crsctl start crs
root@node2's password:
Attempting to start CRS stack
The CRS stack will be started shortly
[root@node1 bin]# ssh node2 /u01/app/crs_home/bin/crsctl check crs
root@node2's password:
Failure 1 contacting CSS daemon
Cannot communicate with CRS
Cannot communicate with EVM
[root@node1 bin]# ssh node2 /u01/app/crs_home/bin/ocrconfig -repair ocrmirror
root@node2's password:
[root@node1 bin]# ssh node2 cat /etc/oracle/ocr.loc
root@node2's password:
#Device/file /dev/raw/raw4 being deleted
ocrconfig_loc=/dev/raw/raw3
local_only=false
[root@node1 bin]# ssh node2 /u01/app/crs_home/bin/crsctl start crs
root@node2's password:
Attempting to start CRS stack
The CRS stack will be started shortly
[root@node1 bin]# ssh node2 /u01/app/crs_home/bin/crsctl check crs
root@node2's password:
CSS appears healthy
CRS appears healthy
EVM appears healthy
Conclusion: when adding, removing, or replacing OCR devices, all nodes should be online at the same time; when exporting, importing, or restoring the OCR, the clusterware on all nodes must be shut down.
How do you add or remove a votedisk?
Votedisks are added and removed with the crsctl command:
[root@node1 bin]# ./crsctl
Usage: crsctl check crs - checks the viability of the CRS stack
crsctl check cssd - checks the viability of CSS
crsctl check crsd - checks the viability of CRS
crsctl check evmd - checks the viability of EVM
crsctl set css <parameter> <value> - sets a parameter override
crsctl get css <parameter> - gets the value of a CSS parameter
crsctl unset css <parameter> - sets CSS parameter to its default
crsctl query css votedisk - lists the voting disks used by CSS
crsctl add css votedisk <path> - adds a new voting disk
crsctl delete css votedisk <path> - removes a voting disk
crsctl enable crs - enables startup for all CRS daemons
crsctl disable crs - disables startup for all CRS daemons
crsctl start crs - starts all CRS daemons.
crsctl stop crs - stops all CRS daemons. Stops CRS resources in case of cluster.
crsctl start resources - starts CRS resources.
crsctl stop resources - stops CRS resources.
crsctl debug statedump evm - dumps state info for evm objects
crsctl debug statedump crs - dumps state info for crs objects
crsctl debug statedump css - dumps state info for css objects
crsctl debug log css [module:level]{,module:level} ...
- Turns on debugging for CSS
crsctl debug log crs [module:level]{,module:level} ...
- Turns on debugging for CRS
crsctl debug log evm [module:level]{,module:level} ...
- Turns on debugging for EVM
crsctl debug log res <resname:level> turns on debugging for resources
crsctl query crs softwareversion [<nodename>] - lists the version of CRS software installed
crsctl query crs activeversion - lists the CRS software operating version
crsctl lsmodules css - lists the CSS modules that can be used for debugging
crsctl lsmodules crs - lists the CRS modules that can be used for debugging
crsctl lsmodules evm - lists the EVM modules that can be used for debugging
If necessary any of these commands can be run with additional tracing by
adding a "trace" argument at the very front.
Example: crsctl trace check css
First, try an online addition:
[root@node1 bin]# ./crsctl add css votedisk /dev/raw/raw4
Cluster is not in a ready state for online disk addition
[root@node1 bin]# ./crsctl add css votedisk /dev/raw/raw4 -force
Now formatting voting disk: /dev/raw/raw4
successful addition of votedisk /dev/raw/raw4.
[root@node1 bin]# ./crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
[root@node1 bin]# ./crsctl query css votedisk
0. 0 /dev/raw/raw2
1. 0 /dev/raw/raw4
located 2 votedisk(s).
[root@node1 bin]# ssh node2 /u01/app/crs_home/bin/crsctl query css votedisk
root@node2's password:
0. 0 /dev/raw/raw2
1. 0 /dev/raw/raw4
located 2 votedisk(s).
[root@node1 bin]# ./crsctl stop crs
Stopping resources. This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
[root@node1 bin]# ssh node2 /u01/app/crs_home/bin/crsctl stop crs
root@node2's password:
Stopping resources. This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
[root@node1 bin]# ./crsctl start crs
Attempting to start CRS stack
The CRS stack will be started shortly
[root@node1 bin]# ssh node2 /u01/app/crs_home/bin/crsctl start crs
root@node2's password:
Attempting to start CRS stack
The CRS stack will be started shortly
[root@node1 bin]# ./crs_stat -t -v
Name Type R/RA F/FT Target State Host
----------------------------------------------------------------------
ora....SM1.asm application 0/5 0/0 ONLINE ONLINE node1
ora....E1.lsnr application 0/5 0/0 ONLINE ONLINE node1
ora.node1.gsd application 0/5 0/0 ONLINE ONLINE node1
ora.node1.ons application 0/3 0/0 ONLINE ONLINE node1
ora.node1.vip application 0/0 0/0 ONLINE ONLINE node1
ora....SM2.asm application 0/5 0/0 ONLINE ONLINE node2
ora....E2.lsnr application 0/5 0/0 ONLINE ONLINE node2
ora.node2.gsd application 0/5 0/0 ONLINE ONLINE node2
ora.node2.ons application 0/3 0/0 ONLINE ONLINE node2
ora.node2.vip application 0/0 0/0 ONLINE ONLINE node2
ora.racdb.db application 0/1 0/1 ONLINE ONLINE node1
ora....b1.inst application 0/5 0/0 ONLINE OFFLINE
ora....b2.inst application 0/5 0/0 ONLINE ONLINE node2
The delete operation is not shown as a live session, but a sketch follows below.
So an online votedisk addition can succeed, and in 10g the -force option is required; however, some sources suggest it is best to stop all applications before adding or removing votedisks.
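A sketch of the corresponding delete, mirroring the addition above and run as root from $CRS_HOME/bin (as with the addition, 10g may insist on -force):

./crsctl delete css votedisk /dev/raw/raw4 -force   # remove the disk added earlier
./crsctl query css votedisk                         # verify that only raw2 remains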
How do you recreate the OCR and votedisk (very useful when both are damaged and there is no backup)?
First, stop the clusterware on all nodes:
[root@node1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 1125736
Used space (kbytes) : 3852
Available space (kbytes) : 1121884
ID : 849560479
Device/File Name : /dev/raw/raw1
Device/File integrity check succeeded
Device/File not configured
Cluster registry integrity check succeeded
[root@node1 ~]# crsctl query css votedisk
0. 0 /dev/raw/raw2
1. 0 /dev/raw/raw4
located 2 votedisk(s).
[root@node1 ~]# crsctl check crs
Failure 1 contacting CSS daemon
Cannot communicate with CRS
Cannot communicate with EVM
[root@node1 ~]# ssh node2 /u01/app/crs_home/bin/crsctl check crs
Failure 1 contacting CSS daemon
Cannot communicate with CRS
Cannot communicate with EVM
Run the rootdelete.sh script on every node:
[root@node1 ~]# cd $CRS_HOME/install
[root@node1 install]# ./rootdelete.sh
Shutting down Oracle Cluster Ready Services (CRS):
Stopping resources. This could take several minutes.
Error while stopping resources. Possible cause: CRSD is down.
Shutdown has begun. The daemons should exit soon.
Checking to see if Oracle CRS stack is down...
Oracle CRS stack is not running.
Oracle CRS stack is down now.
Removing script for Oracle Cluster Ready services
Updating ocr file for downgrade
Cleaning up SCR settings in '/etc/oracle/scls_scr'
Cleaning up Network socket directories
[root@node1 install]# ssh node2 /u01/app/crs_home/install/rootdelete.sh
Shutting down Oracle Cluster Ready Services (CRS):
Stopping resources. This could take several minutes.
Error while stopping resources. Possible cause: CRSD is down.
Shutdown has begun. The daemons should exit soon.
Checking to see if Oracle CRS stack is down...
Oracle CRS stack is not running.
Oracle CRS stack is down now.
Removing script for Oracle Cluster Ready services
Updating ocr file for downgrade
Cleaning up SCR settings in '/etc/oracle/scls_scr'
Cleaning up Network socket directories
Run the rootdeinstall.sh script on any one node (it only needs to run on a single node):
[root@node1 install]# ./rootdeinstall.sh
Removing contents from OCR device
2560+0 records in
2560+0 records out
10485760 bytes (10 MB) copied, 0.486033 seconds, 21.6 MB/s
Run the root.sh script on all nodes:
[root@node1 crs_home]# pwd
/u01/app/crs_home
[root@node1 crs_home]# ./root.sh
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
No value set for the CRS parameter CRS_OCR_LOCATIONS. Using Values in paramfile.crs
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: node1 node1-priv node1
node 2: node2 node2-priv node2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw2
Format of 1 voting devices complete.
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
node1
CSS is inactive on these nodes.
node2
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.
[root@node1 crs_home]# ssh node2 /u01/app/crs_home/root.sh
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
No value set for the CRS parameter CRS_OCR_LOCATIONS. Using Values in paramfile.crs
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: node1 node1-priv node1
node 2: node2 node2-priv node2
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
node1
node2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
Invalid interface "255.255.255.0/eth0" entered in an input argument.
[root@node1 crs_home]# oifcfg iflist
eth0 192.168.100.0
eth1 100.100.100.0
[root@node1 crs_home]# crs_stat -t -v
CRS-0202: No resources are registered.
[root@node1 crs_home]# crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
[root@node1 crs_home]# ssh node2 /u01/app/crs_home/bin/crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
The output above shows that CRS is running normally, but resources such as the VIPs, ONS, and GSD were not configured successfully (probably because I had changed the IP addresses earlier). After manually invoking the vipca GUI:
[root@node1 ~]# crs_stat -t -v
Name Type R/RA F/FT Target State Host
----------------------------------------------------------------------
ora.node1.gsd application 0/5 0/0 ONLINE ONLINE node1
ora.node1.ons application 0/3 0/0 ONLINE ONLINE node1
ora.node1.vip application 0/0 0/0 ONLINE ONLINE node1
ora.node2.gsd application 0/5 0/0 ONLINE ONLINE node2
ora.node2.ons application 0/3 0/0 ONLINE ONLINE node2
ora.node2.vip application 0/0 0/0 ONLINE ONLINE node2
Add the remaining resources, such as ASM, the database instances, and the listeners, back into the RAC.
Invoke netca to add the listeners and use srvctl to add the other services (a sketch is given below); with that, the rebuild of the OCR and votedisk is complete.
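A minimal sketch of the srvctl registrations (the database, instance, and ASM instance names match this article's environment; treating $ORACLE_HOME as the database home is an assumption):

srvctl add asm -n node1 -i +ASM1 -o $ORACLE_HOME
srvctl add asm -n node2 -i +ASM2 -o $ORACLE_HOME
srvctl add database -d racdb -o $ORACLE_HOME
srvctl add instance -d racdb -i racdb1 -n node1
srvctl add instance -d racdb -i racdb2 -n node2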
One thing to note: after the OCR and votedisk are rebuilt, they are reset to the initial values from when the cluster was created, so any changes made afterwards have to be re-applied by hand, as the verification below confirms:
[root@node1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 1125736
Used space (kbytes) : 3760
Available space (kbytes) : 1121976
ID : 1334010282
Device/File Name : /dev/raw/raw1
Device/File integrity check succeeded
Device/File not configured
Cluster registry integrity check succeeded
[root@node1 ~]# crsctl query css votedisk
0. 0 /dev/raw/raw2
located 1 votedisk(s).
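Since the rebuild resets everything (note the fresh OCR ID and the single votedisk above), earlier customizations must be redone. For example, a non-default automatic backup location would have to be configured again; a sketch, with an illustrative directory:

./ocrconfig -backuploc /u01/app/crs_home/cdata/crs   # re-point the periodic OCR backups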