Oracle 11gR2 RAC Install – cleaning up a failed install on Linux

This article describes how to clean up a failed Grid Infrastructure installation. It specifically focuses on what to do if the "root.sh" script fails during this process and you want to rewind and start again.

  • Grid Infrastructure

  • ASM Disks

Grid Infrastructure

On all cluster nodes except the last, run the following command as the "root" user.

# perl $GRID_HOME/crs/install/rootcrs.pl -verbose -deconfig -force
[root@tbird1 install]# ./rootcrs.pl -deconfig -verbose -force
2012-10-28 17:04:38: Parsing the host name
2012-10-28 17:04:38: Checking for super user privileges
2012-10-28 17:04:38: User has super user privileges
Using configuration parameter file: ./crsconfig_params
VIP exists.:tbird1

<output removed to aid clarity>

CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'tbird1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node

On the last cluster node, run the following command as the "root" user.

# perl $GRID_HOME/crs/install/rootcrs.pl -verbose -deconfig -force -lastnode

This final command will blank the OCR configuration and voting disk.
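Before rerunning anything, it is worth confirming the stack really is down. A quick check, assuming the Grid home binaries are still in place:

# $GRID_HOME/bin/crsctl check crs

With the stack deconfigured, this should fail to contact Oracle High Availability Services (typically with a CRS-4639 error) rather than reporting the services as online.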

You should be in a position to rerun the "root.sh" script now, but if you are using ASM, you will need to prepare your ASM disks before doing so (see the "ASM Disks" section below).
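As a minimal sketch, assuming the Grid home at /u01/app/11.2.0/grid shown in the inventory later in this article, the rerun is just the script from the Grid home, executed as the "root" user on each node in turn, starting with the first node:

# /u01/app/11.2.0/grid/root.sh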


If that doesn't work, we need to resort to more aggressive methods.

The following steps will wipe out the Oracle Grid Infrastructure install completely, allowing you to start over with the install media.

First, make sure any CRS software is shut down. If it is not, use the crsctl command to stop all the clusterware software:

[root@tbird2 oraInventory]# . oraenv
ORACLE_SID = [root] ? +ASM1     

The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is /u01/app/oracle

[root@tbird2 oraInventory]# crsctl stop has
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'tbird2'
CRS-2673: Attempting to stop 'ora.crsd' on 'tbird2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'tbird2'
CRS-2673: Attempting to stop 'ora.tbird2.vip' on 'tbird2'

<output removed to aid clarity>

CRS-2677: Stop of 'ora.gipcd' on 'tbird2' succeeded
CRS-2677: Stop of 'ora.diskmon' on 'tbird2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'tbird2' has completed
CRS-4133: Oracle High Availability Services has been stopped.

Make sure that nothing is running as Oracle:

[root@tbird2 oraInventory]# ps -ef | grep oracle
root     19214  4529  0 16:51 pts/1    00:00:00 grep oracle
Now we can remove the Oracle install as follows.

Disable the OHASD daemon from starting on reboot – do this on all nodes:
[root@tbird2 etc]# cat /etc/inittab
# Run xdm in runlevel 5
x:5:respawn:/etc/X11/prefdm -nodaemon
h1:35:respawn:/etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null

Remove the last line, which spawns the ohasd daemon, and save the file.
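If you would rather not edit the file by hand, this one-liner (assuming the entry matches the standard "init.ohasd" line shown above) removes it:

[root@tbird2 etc]# sed -i '/init.ohasd/d' /etc/inittab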
Now locate the Oracle Inventory and the location of the current Oracle installs. I am assuming in this case you want to remove everything.
The Oracle inventory location is stored in the oraInst.loc file:
[root@tbird2 etc]# cat /etc/oraInst.loc
inventory_loc=/u01/app/oraInventory
inst_group=oinstall
 
Navigate to the Oracle Inventory, listed here at /u01/app/oraInventory, and inspect the contents of the ContentsXML/inventory.xml file – do this on all nodes:

[root@tbird2 oraInventory]# cat /u01/app/oraInventory/ContentsXML/inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2009, Oracle. All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>11.2.0.1.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1" CRS="true">
   <NODE_LIST>
      <NODE NAME="tbird1"/>
      <NODE NAME="tbird2"/>
   </NODE_LIST>
</HOME>
</HOME_LIST>
</INVENTORY>

We can see we have a Grid install at /u01/app/11.2.0/grid. We can remove this as follows:

[root@tbird2 oraInventory]# rm -R /u01/app/11.2.0
 
Now we can remove the inventory directory – do this on all nodes:
[root@tbird2 oraInventory]# rm -R /u01/app/oraInventory
 
Now we can remove the Oracle directory and files under /etc – do this on all nodes.

[root@tbird2 ~]# rm -R /etc/oracle
[root@tbird2 ~]# rm /etc/oraInst.loc
[root@tbird2 ~]# rm /etc/oratab
 
Now we delete the files added to /usr/local/bin – do this on all nodes.

[root@tbird2 ~]# rm /usr/local/bin/dbhome
[root@tbird2 ~]# rm /usr/local/bin/oraenv
[root@tbird2 ~]# rm /usr/local/bin/coraenv
 
Reset the permissions on /u01/app – do this on all nodes.
[root@tbird2 ~]# chown oracle:dba /u01/app
 
Now we need to clear the ASM devices we created. This only needs to be done from one node, as the disk header lives on shared storage.

[root@tbird2 ~]# oracleasm deletedisk DATA
Clearing disk header: done
Dropping disk: done
 
Finally, re-stamp the devices for ASM from the first node.

[root@tbird1 ~]# oracleasm createdisk DATA /dev/sdc1
Writing disk header: done
Instantiating disk: done
 
And scan it on the secondary nodes:

[root@tbird2 ~]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "DATA"

ASM Disks

Once you attempt an installation, your ASM disks are marked as being used, so they can no longer be used as candidate disks. To revert them to candidate disks, do the following.

Overwrite the header for the relevant partitions using the "dd" command.

# dd if=/dev/zero of=/dev/sdb1 bs=1024 count=100
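If several partitions were presented to ASM, you can wipe each header in a loop; the device names here are examples, so substitute your own:

# for part in /dev/sdb1 /dev/sdc1; do dd if=/dev/zero of=$part bs=1024 count=100; done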

Remove and create the ASM disk for each partition.

# /etc/init.d/oracleasm deletedisk DATA
# /etc/init.d/oracleasm createdisk DATA /dev/sdb1

The disks will now be available as candidate disks.
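You can confirm this by listing the registered disk labels, which should include the ones you just recreated (DATA in this example):

# /etc/init.d/oracleasm listdisks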


Now that Oracle is completely removed, you can start your Grid install again.

