[HACMP] What you must know before installing RAC under HACMP
I once ran into the following problem.

Environment: AIX 5.3 + HACMP 5.3 + Oracle 10g + an EMC array.

1. After stopping HACMP, Oracle shut down normally, but `lsvg -o` showed that the concurrent volume group on one node had not been varied off. Varying the VG off by hand failed; trying to vary it back on gave:

   "0516-034 varyonvg: Failed to open VG special file. Probable cause is the VG was forced offline. Execute the varyoffvg and varyonvg commands to bring the VG online."

   After two attempts, the varyon/varyoff operations would complete.

2. After starting HACMP, only one node's instance came up; the PVs on the other node were not CONCURRENT ACTIVE.

3. Running varyonvg succeeded, but `lsvg -p datavg` showed two LUNs in the PVMISSING state, consistently across multiple attempts. Since the disks are on an array and the other node could read those LUNs normally, the two LUNs were not physically damaged.

4. Exporting and re-importing the VG and re-synchronizing HA did not help.

The fix: it suddenly occurred to me that in a RAC environment the reserve_lock (reserve_policy) attribute of the PVs has to be changed... could that be it? Sure enough, `lsattr -El hdiskpowerX` showed reserve_lock=yes. So I shut down HACMP immediately and ran the following on both nodes:

chdev -l hdiskpowerX -a reserve_lock=no

Then I started HACMP again:

netstat -in          -- the service IPs were up
lsvg -o              -- the VGs were CONCURRENT ACTIVE
ps -ef | grep oracle -- plenty of Oracle processes RUNNING

I have hit this kind of thing two or three times now, always because whoever installed Oracle never read the official Oracle documentation carefully and just worked from some step-by-step guide downloaded off the net. Harmful stuff. I have audited quite a few RAC installations and found that nearly half of them never changed the PV attribute as the release notes require. With luck they run for two or three years without incident; with bad luck, the moment HA is stopped and restarted you hit exactly the problems above.

The point is that in a HACMP + RAC environment, the PV attribute reserve_lock (reserve_policy) must be disabled so that multiple nodes can access the disks concurrently. Oracle's release notes say so explicitly; unfortunately many engineers never notice it. A tragedy.

======================================================
To enable simultaneous access to a disk device from multiple nodes, you must set the appropriate Object Data Manager (ODM) attribute listed in the following table to the value shown, depending on the disk type:

Disk Type                                        Attribute        Value
SSA, FAStT, or non-MPIO-capable disks            reserve_lock     no
ESS, EMC, HDS, CLARiiON, or MPIO-capable disks   reserve_policy   no_reserve

To determine whether the attribute has the correct value, enter a command similar to the following on all cluster nodes for each disk device that you want to use:

# /usr/sbin/lsattr -E -l hdiskn

If the required attribute is not set to the correct value on any node, then enter a command similar to one of the following on that node:

■ SSA and FAStT devices
# /usr/sbin/chdev -l hdiskn -a reserve_lock=no

■ ESS, EMC, HDS, CLARiiON, and MPIO-capable devices
# /usr/sbin/chdev -l hdiskn -a reserve_policy=no_reserve
======================================================
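The manual check-and-fix procedure above can be scripted as a pre-start sanity check. The sketch below is a minimal, hypothetical example: it assumes an AIX node with EMC PowerPath devices named hdiskpowerN, and it factors the decision logic into a small function that only parses a line of `lsattr` output, so the logic itself can be exercised on any POSIX system. The device-name pattern and the loop are assumptions for illustration, not a definitive implementation.

```shell
#!/bin/sh
# Hypothetical pre-start check for a HACMP + RAC cluster node.
#
# check_policy parses one line of `lsattr -E -l <disk> -a reserve_policy`
# output (e.g. "reserve_policy single_path Reserve Policy True") and prints
# "ok" when the value is no_reserve, otherwise "fix".
check_policy() {
    echo "$1" | awk '{ print ($2 == "no_reserve" ? "ok" : "fix") }'
}

# On a real AIX node you would loop over the multipath devices and repair
# any disk whose policy is wrong (stop HACMP first, and run on BOTH nodes):
#
# for d in /dev/hdiskpower*; do
#     disk=`basename "$d"`
#     line=`lsattr -E -l "$disk" -a reserve_policy`
#     if [ "`check_policy "$line"`" = "fix" ]; then
#         /usr/sbin/chdev -l "$disk" -a reserve_policy=no_reserve
#     fi
# done
```

Running a check like this before `smitty clstart` would have caught the misconfiguration described above long before the PVMISSING symptoms appeared.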