Oracle 11g R2 RAC Silent Installation

An Oracle Database 11g R2 RAC silent installation consists of the following steps:

Prepare the environment;
Silently install Grid Infrastructure;
Create the ASM disk groups;
Silently install the Oracle RDBMS software;
Create the database with DBCA in silent mode;
Verify and finish up.
1. Prepare the Environment

Environment preparation includes:

Install and configure the operating system on one DNS server, one SAN server, and the two database node servers;
Configure the DNS service (SCAN name resolution);
Carve out the LUNs;
Install the required RPM packages;
Meet the system configuration requirements for a RAC installation;
Create the grid and oracle users and grant them privileges;
Create the required directories and set their ownership;
Set the environment variables;
Mount the iSCSI disks, then partition and format them;
Install the ASMLib driver and create the ASM disks;
Disable the NTP service;
Upload the installation media and unpack it.

The database nodes and the DNS server all run Oracle Enterprise Linux 5.5 64-bit, the SAN server runs Openfiler 2.3 64-bit, and the database software version is 11.2.0.3. For the environment preparation itself see 《Oracle Enterprise Linux 5.5(64位)部署安裝Oracle 11g R2 RAC(11.2.0.1)教程》; I skip it here.
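
For reference, a minimal sketch of the user/group step from the list above (run as root on both nodes; the UID/GID values are only examples, adjust them to your own standard):

# groupadd -g 1000 oinstall
# groupadd -g 1020 asmadmin
# groupadd -g 1021 asmdba
# groupadd -g 1022 asmoper
# groupadd -g 1031 dba
# groupadd -g 1032 oper
# useradd -u 1100 -g oinstall -G asmadmin,asmdba,asmoper grid
# useradd -u 1101 -g oinstall -G dba,oper,asmdba oracle
# passwd grid
# passwd oracle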

2. Silent Installation of Grid Infrastructure

I copied the response-file templates that Oracle ships and modified them; the templates live in the response directory under the root of the unpacked GI and DATABASE media. GI has a single template: grid_install.rsp. DATABASE has three: db_install.rsp, netca.rsp and dbca.rsp.

I will not explain every item in the template here; I recommend opening the default template and reading through it carefully.

[grid@rac1 response]$ pwd
/install/grid/response
[grid@rac1 response]$ vi grid_install.rsp
[grid@rac1 response]$ cat grid_install.rsp | grep -v ^# | grep -v ^$ > /tmp/gi.rsp
[grid@rac1 response]$ cat /tmp/gi.rsp

oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v11_2_0
ORACLE_HOSTNAME=rac1
INVENTORY_LOCATION=/u01/app/oraInventory
SELECTED_LANGUAGES=en
oracle.install.option=CRS_CONFIG
ORACLE_BASE=/u01/app/grid
ORACLE_HOME=/u01/app/11.2.0/grid
oracle.install.asm.OSDBA=asmdba
oracle.install.asm.OSOPER=asmoper
oracle.install.asm.OSASM=asmadmin
oracle.install.crs.config.gpnp.scanName=scan.luocs.com
oracle.install.crs.config.gpnp.scanPort=1521
oracle.install.crs.config.clusterName=rac-cluster
oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.gpnp.gnsSubDomain=
oracle.install.crs.config.gpnp.gnsVIPAddress=
oracle.install.crs.config.autoConfigureClusterNodeVIP=false
oracle.install.crs.config.clusterNodes=rac1:rac1-vip,rac2:rac2-vip
oracle.install.crs.config.networkInterfaceList=eth0:192.168.53.0:1,eth1:10.0.3.0:2
oracle.install.crs.config.storageOption=ASM_STORAGE
oracle.install.crs.config.sharedFileSystemStorage.diskDriveMapping=
oracle.install.crs.config.sharedFileSystemStorage.votingDiskLocations=
oracle.install.crs.config.sharedFileSystemStorage.votingDiskRedundancy=
oracle.install.crs.config.sharedFileSystemStorage.ocrLocations=
oracle.install.crs.config.sharedFileSystemStorage.ocrRedundancy=
oracle.install.crs.config.useIPMI=false
oracle.install.crs.config.ipmi.bmcUsername=
oracle.install.crs.config.ipmi.bmcPassword=
oracle.install.asm.SYSASMPassword=Oracle_12345
oracle.install.asm.diskGroup.name=CRS
oracle.install.asm.diskGroup.redundancy=EXTERNAL
oracle.install.asm.diskGroup.AUSize=4
oracle.install.asm.diskGroup.disks=/dev/oracleasm/disks/CRS1
oracle.install.asm.diskGroup.diskDiscoveryString=/dev/oracleasm/disks/*
oracle.install.asm.monitorPassword=Oracle_12345
oracle.install.crs.upgrade.clusterNodes=
oracle.install.asm.upgradeASM=false
oracle.installer.autoupdates.option=SKIP_UPDATES
oracle.installer.autoupdates.downloadUpdatesLoc=
AUTOUPDATES_MYORACLESUPPORT_USERNAME=
AUTOUPDATES_MYORACLESUPPORT_PASSWORD=
PROXY_HOST=
PROXY_PORT=0
PROXY_USER=
PROXY_PWD=
PROXY_REALM=
 

Note: in an 11g GUI installation we can no longer use raw devices; starting with 11g Oracle no longer recommends raw devices and prefers ASM. In a silent installation you could still use raw devices by configuring this template, but since I have always been a big fan and supporter of ASM, I will not experiment with raw devices here. Pay attention to the three parameters oracle.install.asm.OSDBA=, oracle.install.asm.OSOPER= and oracle.install.asm.OSASM=: if you set all three to oinstall, the installer raises warnings, explained in detail below.

oracle.install.asm.OSDBA=   -- ASM database administrator (OSDBA) group
oracle.install.asm.OSOPER=  -- ASM instance administrator operator (OSOPER) group
oracle.install.asm.OSASM=   -- ASM instance administrator (OSASM) group

The choice of these three groups can easily cause trouble. I initially set all of them to oinstall; runInstaller then raised warnings, and the same choice caused a few problems later on as well. The warnings look like this:

[WARNING] [INS-41809] Possible invalid choice for OSDBA Group.
   CAUSE: The group name you selected as the OSDBA for ASM group is commonly used for Oracle Database administrator privileges.
   ACTION: Oracle recommends that you designate asmdba as the OSDBA for ASM group, and that the group should not be the same group as an Oracle Database OSDBA group.
[WARNING] [INS-41812] OSOPER and OSASM are the same OS group.
   CAUSE: The chosen values for OSOPER group and the chosen value for OSASM group are the same.
   ACTION: Select an OS group that is unique for ASM administrators. The OSASM group should not be the same as the OS groups that grant privileges for Oracle ASM access, or for database administration.

After consulting Maclean's 《Oracle安裝與操作系統用戶組》, I changed the settings as follows and the problem went away:

oracle.install.asm.OSDBA=asmdba
oracle.install.asm.OSOPER=asmoper
oracle.install.asm.OSASM=asmadmin
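
A quick way to confirm that the grid user really belongs to these groups is id; the output should look roughly like the line below (the numeric IDs are just the example values from the sketch earlier):

$ id grid
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1020(asmadmin),1021(asmdba),1022(asmoper)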

First check whether the environment is ready:

[grid@rac1 grid]$ ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose

Performing pre-checks for cluster services setup 

Checking node reachability...

Check: Node reachability from node "rac1"
  Destination Node                      Reachable?
  ------------------------------------  ------------------------
  rac2                                  yes
  rac1                                  yes
Result: Node reachability check passed from node "rac1"

Checking user equivalence...

Check: User equivalence for user "grid"
  Node Name                             Status
  ------------------------------------  ------------------------
  rac2                                  failed
  rac1                                  failed
Result: PRVF-4007 : User equivalence check failed for user "grid"

ERROR:User equivalence unavailable on all the specified nodes
Verification cannot proceed

Pre-check for cluster services setup was unsuccessful on all the nodes.
 

– I hit a problem here. Searching the web for this error turns up the following suggested fix:

# mkdir -p /usr/local/bin
# ln -s -f /usr/bin/ssh /usr/local/bin/ssh
# ln -s -f /usr/bin/scp /usr/local/bin/scp
$ exec /usr/bin/ssh-agent $SHELL
$ /usr/bin/ssh-add

None of the above helped, though; the real root cause is simply that SSH user equivalence had not been configured. From 11g the OUI can set up SSH user equivalence automatically during a GUI installation, but that apparently is not available in silent mode, so I set it up manually. Anyone who has worked with 10g RAC will be familiar with the manual procedure, so I only list the steps:

Node 1:
# su - grid
$ mkdir ~/.ssh
$ chmod 700 ~/.ssh
$ ssh-keygen -t rsa
enter
enter
enter
$ ssh-keygen -t dsa
enter
enter
enter

Node 2:
# su - grid
$ mkdir ~/.ssh
$ chmod 700 ~/.ssh
$ ssh-keygen -t rsa
enter
enter
enter
$ ssh-keygen -t dsa
enter
enter
enter

Node 1:
$ cat ~/.ssh/id_rsa.pub >>~/.ssh/authorized_keys
$ cat ~/.ssh/id_dsa.pub >>~/.ssh/authorized_keys
$ ssh rac2 cat ~/.ssh/id_rsa.pub >>~/.ssh/authorized_keys
yes
(enter the password for rac2)
$ ssh rac2 cat ~/.ssh/id_dsa.pub >>~/.ssh/authorized_keys
(enter the password for rac2)

$ scp ~/.ssh/authorized_keys rac2:~/.ssh/authorized_keys

Test manually. Node 1:
$ ssh rac1 date
$ ssh rac2 date
$ ssh rac1.luocs.com date
$ ssh rac2.luocs.com date

Node 2:
$ ssh rac1 date
$ ssh rac2 date
$ ssh rac1.luocs.com date
$ ssh rac2.luocs.com date
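
As an aside, the unpacked GI media also ships a helper script for SSH setup under sshsetup/; a sketch of how it could be invoked instead of the manual steps above (flags as I remember them, verify against the script's own usage text):

[grid@rac1 ~]$ /install/grid/sshsetup/sshUserSetup.sh -user grid -hosts "rac1 rac2" -advanced -noPromptPassphrase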
 

Re-run the CVU check:

[grid@rac1 grid]$ ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose
...... (many PASSED lines omitted)
Check: Membership of user "grid" in group "dba"
  Node Name     User Exists   Group Exists  User in Group  Status
  ------------  ------------  ------------  -------------  ----------------
  rac2          yes           yes           no             failed
  rac1          yes           yes           no             failed
Result: Membership check for user "grid" in group "dba" failed
...... (many PASSED lines omitted)
Pre-check for cluster services setup was unsuccessful on all the nodes.

– One failed check caused the final result to be "unsuccessful". As I have said before, this pre-install CVU check is only a reference; the result does not have to be SUCCESSFUL. As long as you can confirm that the failed items do not affect your installation plan, you can safely ignore the unsuccessful result.
– I am confident that grid not being in the dba group does not affect installing or using RAC, so I ignore it.
– If you want to get rid of this failure, run # gpasswd -a grid dba as root on every node.

Run runInstaller:

[grid@rac1 grid]$ ./runInstaller -ignorePrereq -silent -force -responseFile /tmp/gi.rsp
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 25347 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 3098 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2012-11-01_03-31-18AM. Please wait ...
[grid@rac1 grid]$ You can find the log of this install session at:
 /u01/app/oraInventory/logs/installActions2012-11-01_03-31-18AM.log
The installation of Oracle Grid Infrastructure was successful.
Please check '/u01/app/oraInventory/logs/silentInstall2012-11-01_03-31-18AM.log' for more details.
As a root user, execute the following script(s):
        1. /u01/app/oraInventory/orainstRoot.sh
        2. /u01/app/11.2.0/grid/root.sh

Execute /u01/app/oraInventory/orainstRoot.sh on the following nodes:
[rac1, rac2]
Execute /u01/app/11.2.0/grid/root.sh on the following nodes:
[rac1, rac2]
As install user, execute the following script to complete the configuration.
        1. /u01/app/11.2.0/grid/cfgtoollogs/configToolAllCommands

        Note:
        1. This script must be run on the same system from where installer was run.
        2. This script needs a small password properties file for configuration assistants that require passwords (refer to install guide documentation).

Successfully Setup Software.

Run the scripts as instructed.

Node 1:
[root@rac1 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.

Node 2:
[root@rac2 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.

Node 1:
[root@rac1 ~]# /u01/app/11.2.0/grid/root.sh
Check /u01/app/11.2.0/grid/install/root_rac1.luocs.com_2012-11-01_03-41-30.log for the output of root script

View this log output:

[root@rac1 dbs]# cat /u01/app/11.2.0/grid/install/root_rac1.luocs.com_2012-11-01_03-41-30.log
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=/u01/app/11.2.0/grid

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
  root wallet
  root wallet cert
  root cert export
  peer wallet
  profile reader wallet
  pa wallet
  peer wallet keys
  pa wallet keys
  peer cert request
  pa cert request
  peer cert
  pa cert
  peer root cert TP
  profile reader root cert TP
  pa root cert TP
  peer pa cert TP
  pa peer cert TP
  profile reader pa cert TP
  profile reader peer cert TP
  peer user cert
  pa user cert
AddingClusterware entries to inittab
CRS-2672:Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676:Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672:Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676:Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672:Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2672:Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676:Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2676:Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2672:Attempting to start 'ora.cssd' on 'rac1'
CRS-2672:Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676:Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676:Start of 'ora.cssd' on 'rac1' succeeded

ASM created and started successfully.

Disk Group CRS created successfully.

clscfg:-install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256:Updating the profile
Successful addition of voting disk 773f8f28fd984f00bf70c5b7b56228ae.
Successfully replaced voting disk group with +CRS.
CRS-4256:Updating the profile
CRS-4266:Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name                    Disk group
--  -----    -----------------                ---------                    ----------
 1. ONLINE   773f8f28fd984f00bf70c5b7b56228ae (/dev/oracleasm/disks/CRS1) [CRS]
Located 1 voting disk(s).
CRS-2672:Attempting to start 'ora.asm' on 'rac1'
CRS-2676:Start of 'ora.asm' on 'rac1' succeeded
CRS-2672:Attempting to start 'ora.CRS.dg' on 'rac1'
CRS-2676:Start of 'ora.CRS.dg' on 'rac1' succeeded
CRS-2672:Attempting to start 'ora.registry.acfs' on 'rac1'
CRS-2676:Start of 'ora.registry.acfs' on 'rac1' succeeded
Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Node 2:
[root@rac2 ~]# /u01/app/11.2.0/grid/root.sh
Check /u01/app/11.2.0/grid/install/root_rac2.luocs.com_2012-11-01_03-52-03.log for the output of root script

View the log output:

[root@rac2 ~]# cat /u01/app/11.2.0/grid/install/root_rac2.luocs.com_2012-11-01_03-52-03.log
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=/u01/app/11.2.0/grid

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
AddingClusterware entries to inittab
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster

Configure Oracle Grid Infrastructure for a Cluster ... succeeded
 

The document I followed says that a file named "cfgrsp.properties" is generated under the ORACLE_HOME directory, but I never saw it in my environment, so I created the cfgrsp.properties file by hand:

[grid@rac1 ~]$ cd $ORACLE_HOME/cfgtoollogs
[grid@rac1 cfgtoollogs]$ touch cfgrsp.properties

Then fill in the passwords:

[grid@rac1 cfgtoollogs]$ cat cfgrsp.properties
oracle.assistants.asm|S_ASMPASSWORD=Oracle_12345
oracle.assistants.asm|S_ASMMONITORPASSWORD=Oracle_12345

[root@rac1 ~]# whoami
root
[root@rac1 ~]# chmod 600 /u01/app/11.2.0/grid/cfgtoollogs/cfgrsp.properties

Run the configuration tool:

[grid@rac1 cfgtoollogs]$ whoami
grid
[grid@rac1 cfgtoollogs]$ pwd
/u01/app/11.2.0/grid/cfgtoollogs
[grid@rac1 cfgtoollogs]$ 
[grid@rac1 cfgtoollogs]$ ./configToolAllCommands RESPONSE_FILE=./cfgrsp.properties
Setting the invPtrLoc to /u01/app/11.2.0/grid/oraInst.loc

perform - mode is starting for action: configure

perform - mode finished for action: configure

You can see the log file:/u01/app/11.2.0/grid/cfgtoollogs/oui/configActions2012-10-31_06-38-03-PM.log

View the log:

[root@rac1 ~]# cat /u01/app/11.2.0/grid/cfgtoollogs/oui/configActions2012-10-31_06-38-03-PM.log
###################################################
The action configuration is performing
------------------------------------------------------
The plug-in Update Inventory is running


/u01/app/11.2.0/grid/oui/bin/runInstaller -nowait -noconsole -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=true "CLUSTER_NODES={rac1,rac2}" ORACLE_HOME=/u01/app/11.2.0/grid
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 3098 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory

The plug-in Update Inventory has successfully been performed
------------------------------------------------------
------------------------------------------------------
The plug-in Oracle Net Configuration Assistant is running


Parsing command line arguments:
    Parameter "orahome" = /u01/app/11.2.0/grid
    Parameter "orahnam" = Ora11g_gridinfrahome1
    Parameter "instype" = typical
    Parameter "inscomp" = client, oraclenet, javavm, server
    Parameter "insprtcl" = tcp
    Parameter "cfg" = local
    Parameter "authadp" = NO_VALUE
    Parameter "responsefile" = /u01/app/11.2.0/grid/network/install/netca_typ.rsp
    Parameter "silent" = true
    Parameter "silent" = true
Done parsing command line arguments.
Oracle Net Services Configuration:
Profile configuration complete.
Profile configuration complete.
Listener "LISTENER" already exists.
Oracle Net Services configuration successful. The exit code is 0
The plug-in Oracle Net Configuration Assistant has successfully been performed
------------------------------------------------------
------------------------------------------------------
The plug-in Automatic Storage Management Configuration Assistant is running


The plug-in Automatic Storage Management Configuration Assistant has successfully been performed
------------------------------------------------------
------------------------------------------------------
The plug-in Oracle Cluster Verification Utility is running


Performing post-checks for cluster services setup

Checking node reachability...
Node reachability check passed from node "rac1"

Checking user equivalence...
User equivalence check passed for user "grid"

Checking node connectivity...

Checking hosts config file...
Verification of the hosts config file successful

Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"
TCP connectivity check passed for subnet "192.168.53.0"

Check: Node connectivity for interface "eth1"
Node connectivity passed for interface "eth1"
TCP connectivity check passed for subnet "10.0.3.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.53.0".
Subnet mask consistency check passed for subnet "10.0.3.0".
Subnet mask consistency check passed.

Node connectivity check passed

Checking multicast communication...

Checking subnet "192.168.53.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.53.0" for multicast communication with multicast group "230.0.1.0" passed.

Checking subnet "10.0.3.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "10.0.3.0" for multicast communication with multicast group "230.0.1.0" passed.

Check of multicast communication passed.
Time zone consistency check passed

Checking Oracle Cluster Voting Disk configuration...

ASM Running check passed. ASM is running on all specified nodes

Oracle Cluster Voting Disk configuration check passed

Checking Cluster manager integrity...

Checking CSS daemon...
Oracle Cluster Synchronization Services appear to be online.

Cluster manager integrity check passed


UDev attributes check for OCR locations started...
UDev attributes check passed for OCR locations


UDev attributes check for Voting Disk locations started...
UDev attributes check passed for Voting Disk locations

Default user file creation mask check passed

Checking cluster integrity...

Cluster integrity check passed


Checking OCR integrity...

Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations


ASM Running check passed. ASM is running on all specified nodes

Checking OCR config file "/etc/oracle/ocr.loc"...

OCR config file "/etc/oracle/ocr.loc" check successful


Disk group for ocr location "+CRS" available on all the nodes


NOTE:
This check does not verify the integrity of the OCR contents. Execute 'ocrcheck' as a privileged user to verify the contents of OCR.

OCR integrity check passed

Checking CRS integrity...

Clusterware version consistency passed

CRS integrity check passed

Checking node application existence...

Checking existence of VIP node application (required)
VIP node application check passed

Checking existence of NETWORK node application (required)
NETWORK node application check passed

Checking existence of GSD node application (optional)
GSD node application is offline on nodes "rac2,rac1"

Checking existence of ONS node application (optional)
ONS node application check passed


Checking Single Client Access Name (SCAN)...

Checking TCP connectivity to SCAN Listeners...
TCP connectivity to SCAN Listeners exists on all cluster nodes

Checking name resolution setup for "scan.luocs.com"...

Verification of SCAN VIP and Listener setup passed

Checking OLR integrity...

Checking OLR config file...

OLR config file check successful


Checking OLR file attributes...

OLR file check successful


WARNING:
This check does not verify the integrity of the OLR contents. Execute 'ocrcheck -local' as a privileged user to verify the contents of OLR.

OLR integrity check passed

OCR detected on ASM. Running ACFS Integrity checks...

Starting check to see if ASM is running on all cluster nodes...

ASM Running check passed. ASM is running on all specified nodes

Starting Disk Groups check to see if at least one Disk Group configured...
Disk Group Check passed. At least one Disk Group configured

Task ACFS Integrity check passed

User"grid"isnot part of "root"group.Check passed

Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed

Checking if CTSS Resource is running on all nodes...
CTSS resource check passed


Querying CTSS for time offset on all nodes...
Query of CTSS for time offset passed

Check CTSS state started...
CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...
Check of clock time offsets passed


Oracle Cluster Time Synchronization Services check passed

Checking VIP configuration.
Checking VIP Subnet configuration.
Check for VIP Subnet configuration passed.
Checking VIP reachability
Check for VIP reachability passed.

Post-check for cluster services setup was successful.

The plug-in Oracle Cluster Verification Utility has successfully been performed
------------------------------------------------------
The action configuration has successfully completed

###################################################
 

Additional note: the first time I did this I ran ./configToolAllCommands without cfgrsp.properties, and as a result the ASM password file was not created.

[root@rac1 ~]# cd /u01/app/11.2.0/grid/dbs/
[root@rac1 dbs]# ls
ab_+ASM1.dat  hc_+ASM1.dat  init.ora

So I created one by hand.

Node 1:
[root@rac1 ~]# su - grid
[grid@rac1 ~]$ cd $ORACLE_HOME/dbs
[grid@rac1 dbs]$ orapwd file='orapw+ASM' entries=5 password=Oracle_12345

Node 2:
[root@rac2 ~]# su - grid
[grid@rac2 ~]$ cd $ORACLE_HOME/dbs
[grid@rac2 dbs]$ orapwd file='orapw+ASM' entries=5 password=Oracle_12345

Even so I still ran into problems later, so be sure to run this step with the cfgrsp.properties file in place.

A quick check.

Look at the state of the CRS resources:

[grid@rac1 grid]$ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.CRS.dg     ora....up.type ONLINE    ONLINE    rac1        
ora....ER.lsnr ora....er.type ONLINE    ONLINE    rac1        
ora....N1.lsnr ora....er.type ONLINE    ONLINE    rac2        
ora....N2.lsnr ora....er.type ONLINE    ONLINE    rac1        
ora....N3.lsnr ora....er.type ONLINE    ONLINE    rac1        
ora.asm        ora.asm.type   ONLINE    ONLINE    rac1        
ora.cvu        ora.cvu.type   ONLINE    ONLINE    rac1        
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE               
ora....network ora....rk.type ONLINE    ONLINE    rac1        
ora.oc4j       ora.oc4j.type  ONLINE    ONLINE    rac1        
ora.ons        ora.ons.type   ONLINE    ONLINE    rac1        
ora....SM1.asm application    ONLINE    ONLINE    rac1        
ora....C1.lsnr application    ONLINE    ONLINE    rac1        
ora.rac1.gsd   application    OFFLINE   OFFLINE               
ora.rac1.ons   application    ONLINE    ONLINE    rac1        
ora.rac1.vip   ora....t1.type ONLINE    ONLINE    rac1        
ora....SM2.asm application    ONLINE    ONLINE    rac2        
ora....C2.lsnr application    ONLINE    ONLINE    rac2        
ora.rac2.gsd   application    OFFLINE   OFFLINE               
ora.rac2.ons   application    ONLINE    ONLINE    rac2        
ora.rac2.vip   ora....t1.type ONLINE    ONLINE    rac2        
ora....ry.acfs ora....fs.type ONLINE    ONLINE    rac1        
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    rac2        
ora.scan2.vip  ora....ip.type ONLINE    ONLINE    rac1        
ora.scan3.vip  ora....ip.type ONLINE    ONLINE    rac1  

Run a few more commands to confirm that the cluster services are up:

[grid@rac1 grid]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS.dg
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.asm
               ONLINE  ONLINE       rac1                     Started             
               ONLINE  ONLINE       rac2                     Started             
ora.gsd
               OFFLINE OFFLINE      rac1                                         
               OFFLINE OFFLINE      rac2                                         
ora.net1.network
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.ons
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.registry.acfs
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac2                                         
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       rac1                                         
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       rac1                                         
ora.cvu
      1        ONLINE  ONLINE       rac1                                         
ora.oc4j
      1        ONLINE  ONLINE       rac1                                         
ora.rac1.vip
      1        ONLINE  ONLINE       rac1                                         
ora.rac2.vip
      1        ONLINE  ONLINE       rac2                                         
ora.scan1.vip
      1        ONLINE  ONLINE       rac2                                         
ora.scan2.vip
      1        ONLINE  ONLINE       rac1                                         
ora.scan3.vip
      1        ONLINE  ONLINE       rac1                      

[grid@rac1 grid]$ crsctl stat res -t -init
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       rac1                     Started
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       rac1                                         
ora.crf
      1        ONLINE  ONLINE       rac1                                         
ora.crsd
      1        ONLINE  ONLINE       rac1                                         
ora.cssd
      1        ONLINE  ONLINE       rac1                                         
ora.cssdmonitor
      1        ONLINE  ONLINE       rac1                                         
ora.ctssd
      1        ONLINE  ONLINE       rac1                     ACTIVE:0            
ora.diskmon
      1        OFFLINE OFFLINE                                                   
ora.drivers.acfs
      1        ONLINE  ONLINE       rac1                                         
ora.evmd
      1        ONLINE  ONLINE       rac1                                         
ora.gipcd
      1        ONLINE  ONLINE       rac1                                         
ora.gpnpd
      1        ONLINE  ONLINE       rac1                                         
ora.mdnsd
      1        ONLINE  ONLINE       rac1              

[grid@rac1 grid]$ crsctl check cluster -all
**************************************************************
rac1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
rac2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
 

OK, everything went very smoothly; the GI installation is now complete!
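
A few additional quick checks can be run at this point as well (a small sketch; the CVU output above already suggests ocrcheck for inspecting the OCR contents):

[grid@rac1 ~]$ olsnodes -n -s
[grid@rac1 ~]$ crsctl query css votedisk
[root@rac1 ~]# /u01/app/11.2.0/grid/bin/ocrcheck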

3. Create the ASM Disk Groups

So far we have only created the OCRVOTE ASM disk group. The remaining ASM disk groups can be created very easily with the ASMCA GUI, and ASMCA also offers a silent mode; see MOS ID 1068788.1.

[root@rac1 ~]# oracleasm listdisks
ARCH
CRS1
CRS2
DATA
MYDATA

[grid@rac1 ~]$ asmca -silent -configureASM -sysAsmPassword Oracle_12345 -asmsnmpPassword Oracle_12345 -diskString '/dev/oracleasm/disks/*' -diskGroupName MYDATA -disk '/dev/oracleasm/disks/MYDATA' -redundancy EXTERNAL
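
According to MOS ID 1068788.1, asmca in silent mode can also create additional disk groups once ASM is configured. This is only a rough sketch using the ARCH candidate disk listed above; verify the exact flag names against the note before using it:

[grid@rac1 ~]$ asmca -silent -createDiskGroup -diskGroupName ARCH -disk '/dev/oracleasm/disks/ARCH' -redundancy EXTERNAL -sysAsmPassword Oracle_12345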
 

Of course we can also create the other ASM disk groups with plain SQL commands; here is what I did:

SQL>set pagesize 9999
SQL>set line 130
SQL> col NAME for a20
SQL>select name,type,state,total_mb,free_mb from v$asm_diskgroup;

NAME                 TYPE         STATE                    TOTAL_MB    FREE_MB
--------------------------------------------------------------------------
OCRVOTE              EXTERN       MOUNTED                          1012        580
-- Only the OCRVOTE disk group exists at the moment.

SQL> col FAILGROUP for a30
SQL> col PATH for a40
SQL>select name, failgroup, path, disk_number from v$asm_disk;

NAME                 FAILGROUP                      PATH                                     DISK_NUMBER
-----------------------------------------------------------------------------------------------------
                                                    /dev/oracleasm/disks/MYDATA                        0
                                                    /dev/oracleasm/disks/ARCH                          2
OCRVOTE_0000         OCRVOTE_0000                   /dev/oracleasm/disks/MYCRS                         0
-- The /dev/oracleasm/disks/MYDATA and /dev/oracleasm/disks/ARCH disks are also listed here as candidates.

SQL> create diskgroup MYDATA external redundancy disk '/dev/oracleasm/disks/MYDATA';

Diskgroup created.

SQL>select name,type,state,total_mb,free_mb from v$asm_diskgroup;

NAME                 TYPE         STATE                    TOTAL_MB    FREE_MB
--------------------------------------------------------------------------
OCRVOTE              EXTERN       MOUNTED                          1012        580
MYDATA               EXTERN       MOUNTED                          5962       5912
-- OK, MYDATA is mounted here, but that does not mean it is mounted on the other node as well.

SQL>select name,type,state,total_mb,free_mb from gv$asm_diskgroup;

NAME                 TYPE         STATE                    TOTAL_MB    FREE_MB
--------------------------------------------------------------------------
OCRVOTE              EXTERN       MOUNTED                          1012        580
MYDATA               EXTERN       MOUNTED                          5962       5912
OCRVOTE              EXTERN       MOUNTED                          1012        580
MYDATA                            DISMOUNTED                          0          0
-- On the second node the MYDATA disk group is DISMOUNTED.

Node 2:
[root@rac2 ~]# su - grid
[grid@rac2 ~]$ sqlplus / as sysasm

SQL> alter diskgroup mydata mount;

Diskgroup altered.

Node 1:
SQL>select name,type,state,total_mb,free_mb from gv$asm_diskgroup;

NAME                 TYPE         STATE                    TOTAL_MB    FREE_MB
--------------------------------------------------------------------------
OCRVOTE              EXTERN       MOUNTED                          1012        580
MYDATA               EXTERN       MOUNTED                          5962       5869
OCRVOTE              EXTERN       MOUNTED                          1012        580
MYDATA               EXTERN       MOUNTED                          5962       5869
-- OK, the disk group is now mounted on both nodes.
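
Instead of mounting the disk group manually on the second node, you can also drive it through the clusterware disk group resource that gets registered for it (a sketch; the ora.MYDATA.dg resource shows up in crs_stat later on):

[grid@rac1 ~]$ srvctl status diskgroup -g MYDATA
[grid@rac1 ~]$ srvctl start diskgroup -g MYDATA -n rac2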
 

4. Silent Installation of the RDBMS Software

First, set up SSH user equivalence; the steps are omitted here.

Run the CVU check as before:

[root@rac1 ~]# su - grid
[grid@rac1 ~]$ cd /install/grid/
[grid@rac1 grid]$ ./runcluvfy.sh stage -pre dbinst -n rac1,rac2 -r 11gR2 -verbose
...... (many PASSED lines omitted)

Check: Membership of user "grid" in group "dba"
  Node Name     User Exists   Group Exists  User in Group  Status
  ------------  ------------  ------------  -------------  ----------------
  rac2          yes           yes           no             failed
  rac1          yes           yes           no             failed
Result: Membership check for user "grid" in group "dba" failed
...... (many PASSED lines omitted)
– The same issue as before; I ignore it.

This time I use db_install.rsp from the database directory.

[oracle@rac1 response]$ vi db_install.rsp

After editing:
[oracle@rac1 response]$ cat db_install.rsp | grep -v ^# | grep -v ^$ > /tmp/db_install.rsp
[oracle@rac1 database]$ cat /tmp/db_install.rsp
oracle.install.responseFileVersion=/oracle/install/rspfmt_dbinstall_response_schema_v11_2_0
oracle.install.option=INSTALL_DB_AND_CONFIG
ORACLE_HOSTNAME=rac1
UNIX_GROUP_NAME=oinstall
INVENTORY_LOCATION=/u01/app/oraInventory
SELECTED_LANGUAGES=en
ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
ORACLE_BASE=/u01/app/oracle
oracle.install.db.InstallEdition=
oracle.install.db.EEOptionsSelection=false
oracle.install.db.optionalComponents=oracle.rdbms.partitioning:11.2.0.3.0,oracle.oraolap:11.2.0.3.0,oracle.rdbms.dm:11.2.0.3.0,oracle.rdbms.dv:11.2.0.3.0,oracle.rdbms.lbac:11.2.0.3.0,oracle.rdbms.rat:11.2.0.3.0
oracle.install.db.DBA_GROUP=dba
oracle.install.db.OPER_GROUP=oper
oracle.install.db.CLUSTER_NODES=rac1,rac2
oracle.install.db.isRACOneInstall=
oracle.install.db.racOneServiceName=
oracle.install.db.config.starterdb.type=GENERAL_PURPOSE
oracle.install.db.config.starterdb.globalDBName=www.luocs.com
oracle.install.db.config.starterdb.SID=luocs
oracle.install.db.config.starterdb.characterSet=AL32UTF8
oracle.install.db.config.starterdb.memoryOption=true
oracle.install.db.config.starterdb.memoryLimit=700
oracle.install.db.config.starterdb.installExampleSchemas=true
oracle.install.db.config.starterdb.enableSecuritySettings=false
oracle.install.db.config.starterdb.password.ALL=Oracle12345
oracle.install.db.config.starterdb.password.SYS=
oracle.install.db.config.starterdb.password.SYSTEM=
oracle.install.db.config.starterdb.password.SYSMAN=
oracle.install.db.config.starterdb.password.DBSNMP=
oracle.install.db.config.starterdb.control=DB_CONTROL
oracle.install.db.config.starterdb.gridcontrol.gridControlServiceURL=
oracle.install.db.config.starterdb.automatedBackup.enable=false
oracle.install.db.config.starterdb.automatedBackup.osuid=
oracle.install.db.config.starterdb.automatedBackup.ospwd=
oracle.install.db.config.starterdb.storageType=ASM_STORAGE
oracle.install.db.config.starterdb.fileSystemStorage.dataLocation=
oracle.install.db.config.starterdb.fileSystemStorage.recoveryLocation=
oracle.install.db.config.asm.diskGroup=MYDATA
oracle.install.db.config.asm.ASMSNMPPassword=Oracle12345
MYORACLESUPPORT_USERNAME=
MYORACLESUPPORT_PASSWORD=
SECURITY_UPDATES_VIA_MYORACLESUPPORT=false
DECLINE_SECURITY_UPDATES=true
PROXY_HOST=
PROXY_PORT=
PROXY_USER=
PROXY_PWD=
PROXY_REALM=
COLLECTOR_SUPPORTHUB_URL=
oracle.installer.autoupdates.option=SKIP_UPDATES
oracle.installer.autoupdates.downloadUpdatesLoc=
AUTOUPDATES_MYORACLESUPPORT_USERNAME=

AUTOUPDATES_MYORACLESUPPORT_PASSWORD=
 

Note: here we only install the RDBMS software (software only), but I suggest filling in the other parameters anyway to avoid warnings.

Also pay attention to ORACLE_HOME: with a GUI installation it is enough to create the ORACLE_BASE directory and the Oracle home is created automatically, but in silent mode it is not. Let's check first:

Node 1:
[root@rac1 ~]# su - oracle
[oracle@rac1 ~]$ echo $ORACLE_HOME
/u01/app/oracle/product/11.2.0/dbhome_1
[oracle@rac1 ~]$ ls $ORACLE_HOME
ls: /u01/app/oracle/product/11.2.0/dbhome_1: No such file or directory
-- The directory does not exist yet, so create it:
[oracle@rac1 ~]$ mkdir $ORACLE_BASE/product/11.2.0/dbhome_1 -p

Node 2:
[root@rac2 ~]# su - oracle
[oracle@rac2 ~]$ mkdir $ORACLE_BASE/product/11.2.0/dbhome_1 -p
 

Run runInstaller:

[oracle@rac1 database]$ ./runInstaller -ignorePrereq -silent -force -responseFile /tmp/db_install.rsp
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 21986 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 3098 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2012-11-01_05-01-59AM. Please wait ...
[oracle@rac1 database]$ You can find the log of this install session at:
 /u01/app/oraInventory/logs/installActions2012-11-01_05-01-59AM.log
The installation of Oracle Database 11g was successful.
Please check '/u01/app/oraInventory/logs/silentInstall2012-11-01_05-01-59AM.log' for more details.
As a root user, execute the following script(s):
        1. /u01/app/oracle/product/11.2.0/dbhome_1/root.sh

Execute /u01/app/oracle/product/11.2.0/dbhome_1/root.sh on the following nodes:
[rac1, rac2]

Successfully Setup Software.

Run the script on both nodes.

Node 1:
[root@rac1 ~]# /u01/app/oracle/product/11.2.0/dbhome_1/root.sh
Check /u01/app/oracle/product/11.2.0/dbhome_1/install/root_rac1.luocs.com_2012-11-01_05-15-34.log for the output of root script

The output log:

[root@rac1 ~]# cat /u01/app/oracle/product/11.2.0/dbhome_1/install/root_rac1.luocs.com_2012-11-01_05-15-34.log
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
Entries will be added to the /etc/oratab file as needed by Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.

Node 2:
[root@rac2 ~]# /u01/app/oracle/product/11.2.0/dbhome_1/root.sh
Check /u01/app/oracle/product/11.2.0/dbhome_1/install/root_rac2.luocs.com_2012-11-01_05-16-06.log for the output of root script

The output log:

[root@rac2 ~]# cat /u01/app/oracle/product/11.2.0/dbhome_1/install/root_rac2.luocs.com_2012-11-01_05-16-06.log
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
Entries will be added to the /etc/oratab file as needed by Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
 

OK, the Oracle RDBMS software is now installed!

5. Create the Database Silently with DBCA

Following the official online documentation, run the following command:

[oracle@rac1 ~]$ $ORACLE_HOME/bin/dbca -silent -createDatabase -templateName General_Purpose.dbc -gdbName www.luocs.com -sid luocs -sysPassword oracle_12345 -systemPassword oracle_12345 -storageType ASM -diskGroupName MYDATA -datafileJarLocation $ORACLE_HOME/assistants/dbca/templates -nodeinfo rac1,rac2 -characterset AL32UTF8 -obfuscatedPasswords false -sampleSchema false -asmSysPassword Oracle_12345
Copying database files
1% complete
3% complete
9% complete
15% complete
21% complete
27% complete
30% complete
Creating and starting Oracle instance
32% complete
36% complete
40% complete
44% complete
45% complete
48% complete
50% complete
Creating cluster database views
52% complete
70% complete
Completing Database Creation
73% complete
76% complete
85% complete
94% complete
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/www/www.log" for further details.

View the log:

[oracle@rac1 ~]$ cat /u01/app/oracle/cfgtoollogs/dbca/www/www.log
Copying database files
DBCA_PROGRESS :1%
DBCA_PROGRESS :3%
DBCA_PROGRESS :9%
DBCA_PROGRESS :15%
DBCA_PROGRESS :21%
DBCA_PROGRESS :27%
DBCA_PROGRESS : 30%
Creating and starting Oracle instance
DBCA_PROGRESS :32%
DBCA_PROGRESS :36%
DBCA_PROGRESS :40%
DBCA_PROGRESS :44%
DBCA_PROGRESS :45%
DBCA_PROGRESS :48%
DBCA_PROGRESS : 50%
Creating cluster database views
DBCA_PROGRESS :52%
DBCA_PROGRESS : 70%
Completing Database Creation
DBCA_PROGRESS :73%
DBCA_PROGRESS :76%
DBCA_PROGRESS :85%
DBCA_PROGRESS :94%
DBCA_PROGRESS : 100%
Database creation complete. For details check the logfiles at:
 /u01/app/oracle/cfgtoollogs/dbca/www
Database Information:
Global Database Name: www.luocs.com
System Identifier(SID) Prefix : luocs
 

OK, the database has been created!
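
As a quick sanity check, the new database can also be inspected through srvctl (the database name registered with the clusterware is www, matching the ora.www.db resource shown in the next section):

[oracle@rac1 ~]$ srvctl config database -d www
[oracle@rac1 ~]$ srvctl status database -d www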

6. Checks and Follow-up Work

[root@rac1 ~]# su - grid
[grid@rac1 ~]$ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.CRS.dg     ora....up.type ONLINE    ONLINE    rac1        
ora....ER.lsnr ora....er.type ONLINE    ONLINE    rac1        
ora....N1.lsnr ora....er.type ONLINE    ONLINE    rac2        
ora....N2.lsnr ora....er.type ONLINE    ONLINE    rac1        
ora....N3.lsnr ora....er.type ONLINE    ONLINE    rac1        
ora.MYDATA.dg  ora....up.type ONLINE    ONLINE    rac1        
ora.asm        ora.asm.type   ONLINE    ONLINE    rac1        
ora.cvu        ora.cvu.type   ONLINE    ONLINE    rac1        
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE               
ora....network ora....rk.type ONLINE    ONLINE    rac1        
ora.oc4j       ora.oc4j.type  ONLINE    ONLINE    rac1        
ora.ons        ora.ons.type   ONLINE    ONLINE    rac1        
ora....SM1.asm application    ONLINE    ONLINE    rac1        
ora....C1.lsnr application    ONLINE    ONLINE    rac1        
ora.rac1.gsd   application    OFFLINE   OFFLINE               
ora.rac1.ons   application    ONLINE    ONLINE    rac1        
ora.rac1.vip   ora....t1.type ONLINE    ONLINE    rac1        
ora....SM2.asm application    ONLINE    ONLINE    rac2        
ora....C2.lsnr application    ONLINE    ONLINE    rac2        
ora.rac2.gsd   application    OFFLINE   OFFLINE               
ora.rac2.ons   application    ONLINE    ONLINE    rac2        
ora.rac2.vip   ora....t1.type ONLINE    ONLINE    rac2        
ora....ry.acfs ora....fs.type ONLINE    ONLINE    rac1        
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    rac2        
ora.scan2.vip  ora....ip.type ONLINE    ONLINE    rac1        
ora.scan3.vip  ora....ip.type ONLINE    ONLINE    rac1        
ora.www.db     ora....se.type ONLINE    ONLINE    rac1    


[oracle@rac1 ~]$ sqlplus /as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Thu Nov 1 05:34:36 2012

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> select * from global_name;

GLOBAL_NAME
--------------------------------------------------------------------------------
WWW.LUOCS.COM

SQL>select instance_name, status from gv$instance;

INSTANCE_NAME                    STATUS
--------------------------------------------------------
luocs1                           OPEN
luocs2                           OPEN

SQL> archive log list
Database log mode              No Archive Mode
Automatic archival             Disabled
Archive destination            /u01/app/oracle/product/11.2.0/dbhome_1/dbs/arch
Oldest online log sequence     3
Current log sequence           4
 

That is all for the basic checks. Afterwards there is still some follow-up work, such as enabling archiving, enabling the flash recovery area, adding and resizing redo log groups and members, configuring the clients and so on; I do not record those steps here.
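
For example, enabling archiving on this RAC database might look roughly like the sketch below (the recovery area size and the choice of the MYDATA disk group are only placeholders for your own values; note that the database must be mounted, not open, when the log mode is switched):

SQL> alter system set db_recovery_file_dest_size=4G scope=both sid='*';
SQL> alter system set db_recovery_file_dest='+MYDATA' scope=both sid='*';

[oracle@rac1 ~]$ srvctl stop database -d www
[oracle@rac1 ~]$ srvctl start instance -d www -i luocs1 -o mount

SQL> alter database archivelog;
SQL> alter database flashback on;

[oracle@rac1 ~]$ srvctl stop instance -d www -i luocs1
[oracle@rac1 ~]$ srvctl start database -d www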

Finally, before putting this into production, be sure to walk through the whole procedure once in a test environment!

References:

Silent Installation Experiences with Oracle Database 11g R2 Real Application Clusters on Linux on System
How to use ASMCA in silent mode to configure ASM for a stand-alone server [ID 1068788.1]
Using DBCA Noninteractive (Silent) Configuration for Oracle RAC
Oracle安裝與操作系統用戶組


