3 Cloning Oracle Clusterware to Create a Cluster

This chapter describes how to clone an Oracle grid infrastructure home and use the cloned home to create a cluster. You perform the cloning procedures in this chapter by running scripts in silent mode. The cloning procedures are applicable to Linux and UNIX systems. Although the examples in this chapter use Linux and UNIX commands, the cloning concepts and procedures apply generally to all platforms.

Note:

This chapter assumes that you are cloning an Oracle Clusterware 11g release 2 (11.2) installation that stores the Oracle Cluster Registry (OCR) and the voting disk on Oracle Automatic Storage Management (Oracle ASM).

This chapter contains the following topics:

  • Introduction to Cloning Oracle Clusterware

  • Preparing the Oracle Grid Infrastructure Home for Cloning

  • Creating a Cluster by Cloning Oracle Clusterware

  • Locating and Viewing Log Files Generated During Cloning

Introduction to Cloning Oracle Clusterware

Cloning is the process of copying an existing Oracle Clusterware installation to a different location and then updating the copied installation to work in the new environment. Changes made by one-off patches applied on the source Oracle grid infrastructure home are also present after cloning. During cloning, you run a script that replays the actions that installed the Oracle grid infrastructure home.

Cloning requires that you start with a successfully installed Oracle grid infrastructure home. You use this home as the basis for implementing a script that extends the Oracle grid infrastructure home to create a cluster based on the original Grid home.

Manually creating the cloning script can be error prone because you prepare the script without interactive checks to validate your input. Despite this, the initial effort is worthwhile for scenarios where you run a single script to configure tens or even hundreds of clusters. If you have only one cluster to install, then you should use the traditional, automated and interactive installation methods, such as Oracle Universal Installer (OUI) or the Provisioning Pack feature of Oracle Enterprise Manager.

Note:

Cloning is not a replacement for the cloning feature of Oracle Enterprise Manager, which is part of the Provisioning Pack. During Oracle Enterprise Manager cloning, the provisioning process simplifies cloning by interactively asking for details about the Oracle home, such as the location to which you want to deploy the cloned environment, the name of the Oracle database home, and a list of the nodes in the cluster.

The Provisioning Pack feature of Oracle Enterprise Manager Grid Control provides a framework that automates the provisioning of nodes and clusters. For data centers with many clusters, the investment in creating a cloning procedure to provision new clusters and new nodes to existing clusters is worth the effort.

Cloning is useful, for example, when you must quickly provision many new clusters or add nodes to existing clusters.

A cloned installation acts the same as its source installation. For example, you can remove the cloned Oracle grid infrastructure home using OUI or patch it using OPatch. You can also use the cloned Oracle grid infrastructure home as the source for another cloning operation. You can create a cloned copy of a test, development, or production installation by using the command-line cloning scripts.

The default cloning procedure is adequate for most cases. However, you can also customize some aspects of cloning, for example, to specify custom port assignments or to preserve custom settings.

The cloning process works by copying all of the files from the source Oracle grid infrastructure home to the destination Oracle grid infrastructure home; any files used by the source instance that are located outside the source Oracle grid infrastructure home's directory structure are not copied to the destination location. You can clone either a non-shared or a shared Oracle grid infrastructure home.

The size of the binary files at the source and the destination may differ because these files are relinked as part of the cloning operation, and the operating system patch levels may also differ between the two locations. Additionally, the number of files in the cloned home increases because several files copied from the source, specifically those being instantiated, are backed up as part of the clone operation.

Preparing the Oracle Grid Infrastructure Home for Cloning

To prepare the source Oracle grid infrastructure home to be cloned, create a copy of an installed Oracle grid infrastructure home and then use it to perform the cloning procedure on other nodes. Use the following step-by-step procedure to prepare the copy of the Oracle grid infrastructure home.

Step 1: Install Oracle Clusterware

Use the detailed instructions in the Oracle Grid Infrastructure Installation Guide to perform the following steps on the source node:

  1. Install Oracle Clusterware 11g release 2 (11.2). This installation puts Oracle Cluster Registry (OCR) and the voting disk on Oracle Automatic Storage Management (Oracle ASM).

    Note:

    Either install and configure the grid infrastructure for a cluster or install just the Oracle Clusterware software, as described in your platform-specific Oracle Grid Infrastructure Installation Guide.

    If you installed and configured the grid infrastructure for a cluster, then stop Oracle Clusterware before performing the cloning procedures. If you installed only the Oracle Clusterware software, then you do not have to stop Oracle Clusterware.

  2. Install any required patch sets (for example, 11.2.0.n).

  3. Apply one-off patches, if necessary.

    See Also:

    Oracle Grid Infrastructure Installation Guide for Oracle Clusterware installation instructions

Step 2: Shut Down Running Software

Before copying the source Oracle grid infrastructure home, shut down all of the services, databases, listeners, applications, Oracle Clusterware, and Oracle ASM instances that run on the node. Use the Server Control (SRVCTL) and Oracle Clusterware Control (CRSCTL) utilities to shut down these components.
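For example, the following commands sketch the shutdown sequence on the source node (the database name orcl and node name node1 are assumptions; run crsctl stop crs as the root user):

$ srvctl stop database -d orcl
$ srvctl stop listener -n node1
# crsctl stop crs

Stopping Oracle Clusterware with crsctl stop crs also stops the Oracle ASM instance on the local node.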

Step 3: Create a Copy of the Oracle Grid Infrastructure Home

To keep the installed Oracle grid infrastructure home as a working home, make a full copy of the source Oracle grid infrastructure home for cloning. Because the Oracle grid infrastructure home contains files that are relevant only to the source node, you can optionally remove the unnecessary files from the copy.

Note:

When creating the copy, a best practice is to include the release number in the name of the file.

Use one of the following methods to create a compressed copy of the Oracle grid infrastructure home.

Method 1: Create a copy of the Oracle grid infrastructure home and remove the unnecessary files from the copy:

  1. On the source node, create a full copy of the source Oracle grid infrastructure home, so that the installed Oracle grid infrastructure home remains a working home. For example, as root on Linux systems, run the cp command:

    # cp -prf Grid_home location_of_the_copy_of_Grid_home
    
  2. Delete unnecessary files from the copy.

    The Oracle grid infrastructure home contains files that are relevant only to the source node, so you can remove the unnecessary files from the copy in the log, crs/init, and cdata directories. The following example for Linux and UNIX systems shows the commands to run, from within the copy of the Oracle grid infrastructure home, to remove the unnecessary files:

    [root@node1 root]# cd /opt/oracle/product/11g/crs
    [root@node1 crs]# rm -rf log/host_name
    [root@node1 crs]# rm -rf crs/init
    [root@node1 crs]# rm -rf cdata
    [root@node1 crs]# rm -rf gpnp/*
    [root@node1 crs]# rm -rf network/admin/*.ora
    [root@node1 crs]# find . -name '*.ouibak' -exec rm {} \;
    [root@node1 crs]# find . -name '*.ouibak.1' -exec rm {} \;
    [root@node1 crs]# rm -f root.sh*
    [root@node1 crs]# rm -f inventory/ContentsXML/oraclehomeproperties.xml
    [root@node1 crs]# cd cfgtoollogs
    [root@node1 cfgtoollogs]# find . -type f -exec rm -f {} \;
    
  3. Create a compressed copy of the previously copied Oracle grid infrastructure home using tar or gzip on Linux and UNIX systems. Ensure that the tool you use preserves the permissions and file timestamps. For example:

    On Linux and UNIX systems:

    [root@node1 root]# cd /opt/oracle/product/11g/crs/
    [root@node1 crs]# tar -zcvf /path_name/gridHome.tgz .
    

    In the example, the cd command changes the location to the Oracle grid infrastructure home, and the tar command creates the compressed copy named gridHome.tgz. In the tar command, path_name represents the location of the file.

    On AIX or HP-UX systems:

    tar cpf - . | compress -fv > temp_dir/gridHome.tar.Z
    

Method 2: Create a compressed copy of the Oracle grid infrastructure home using the -X option:

  1. Create a file that lists the unnecessary files in the Oracle grid infrastructure home. For example, list the following file names, using the asterisk (*) wildcard, in a file called excludeFileList:

    ./opt/oracle/product/11g/crs/log/host_name
    ./opt/oracle/product/11g/crs/root.sh*
    ./opt/oracle/product/11g/crs/gpnp
    ./opt/oracle/product/11g/crs/network/admin/*.ora
    
  2. Use the tar command on Linux and UNIX systems (or WinZip on Windows) to create a compressed copy of the Oracle grid infrastructure home, excluding the files listed in excludeFileList. For example, on Linux and UNIX systems, run the following command to archive and compress the source Oracle grid infrastructure home:

    tar cpfX - excludeFileList . | compress -fv > temp_dir/gridHome.tar.Z
    

    Note:

    Do not use the jar utility to copy and compress the Oracle grid infrastructure home.

Creating a Cluster by Cloning Oracle Clusterware

This section explains how to create a cluster by cloning a successfully installed Oracle Clusterware environment and copying it to the nodes on the destination cluster. OCR and voting disks are not shared between the two clusters after you successfully create a cluster from a clone.

For example, you can use cloning to quickly duplicate a successfully installed Oracle Clusterware environment to create a cluster. Figure 3-1 shows the result of a cloning procedure in which the Oracle grid infrastructure home on Node 1 has been cloned to Node 2 and Node 3 on Cluster 2, making Cluster 2 a new two-node cluster.

Figure 3-1 Cloning to Create an Oracle Clusterware Environment

[Figure 3-1 shows the Oracle grid infrastructure home on Node 1 of Cluster 1 being cloned to Node 2 and Node 3 on Cluster 2, making Cluster 2 a new two-node cluster.]

The steps to create a cluster through cloning are as follows:

Step 1: Prepare the New Cluster Nodes

On each destination node, perform the following preinstallation steps:

  • Specify the kernel parameters

  • Configure block devices for Oracle Clusterware devices

  • Ensure that you have set the block device permissions correctly

  • Use short, nondomain-qualified names for all of the names in the Hosts file

  • Test whether the interconnect interfaces are reachable using the ping command

  • Verify that the VIP addresses are not yet active at the start of the cloning process by using the ping command (pinging a VIP address must fail)

  • Delete all files in the Grid_home/gpnp folder

    Note:

    If the Grid_home/gpnp directory contains any files, then creation of a new cluster fails; instead, all resources are added to the existing cluster that those files define.
  • Run CVU to verify your hardware and operating system environment (the sketch following this checklist illustrates several of these checks)

Refer to your platform-specific Oracle Clusterware installation guide for the complete preinstallation checklist.
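For example, the following commands sketch several of these checks on a destination node (the node names node1 and node2, the private name node2-priv, and the VIP names are assumptions; adapt them to your environment):

$ ping -c 3 node2-priv                        # interconnect interface must be reachable
$ ping -c 3 node1-vip                         # must fail: VIP not yet active
$ ping -c 3 node2-vip                         # must fail: VIP not yet active
# rm -rf /u01/app/11.2.0/grid/gpnp/*          # Grid_home/gpnp must be empty
$ cluvfy stage -pre crsinst -n node1,node2    # CVU preinstallation check

If the Grid home has not yet been deployed on the node, run the CVU check using the runcluvfy.sh script from the installation media instead.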

Note:

Unlike traditional methods of installation, the cloning process does not validate your input during the preparation phase. (By comparison, during the traditional method of installation using OUI, various checks occur during the interview phase.) Thus, if you make errors during the hardware setup or in the preparation phase, then the cloned installation fails.

Step 2: Deploy Oracle Clusterware on the Destination Nodes

Before you begin the cloning procedure that is described in this section, ensure that you have completed the prerequisite tasks to create a copy of the Oracle grid infrastructure home, as described in the section titled "Preparing the Oracle Grid Infrastructure Home for Cloning".

  1. On each destination node, deploy the copy of the Oracle grid infrastructure home that you created in "Step 3: Create a Copy of the Oracle Grid Infrastructure Home", as follows:

    If you do not have a shared Oracle grid infrastructure home, then restore the copy of the Oracle grid infrastructure home on each node in the destination cluster, using the same directory structure that was used for the Oracle grid infrastructure home on the source node. Skip this step if you have a shared Oracle grid infrastructure home.

    For example, on Linux or UNIX systems, run commands similar to the following:

    [root@node1 root]# mkdir -p /u01/app/11.2.0/grid
    [root@node1 root]# cd /u01/app/11.2.0/grid
    [root@node1 grid]# tar -zxvf /path_name/gridHome.tgz
    

    In this example, path_name represents the directory structure in which you want to install the Oracle grid infrastructure home. Note that you can change the Grid home location as part of the clone process.

  2. Change the ownership of all of the files so that they are owned by the oracle user and the oinstall group, and create a directory for the Oracle Inventory. The following example shows the commands to do this on a Linux system:

    [root@node1 crs]# chown -R oracle:oinstall /u01/app/11.2.0/grid
    [root@node1 crs]# mkdir -p /u01/app/oraInventory
    [root@node1 crs]# chown oracle:oinstall /u01/app/oraInventory
    
  3. Before continuing, remove any Oracle network files, such as tnsnames.ora, listener.ora, and sqlnet.ora, from the /u01/app/11.2.0/grid/network/admin directory on all destination nodes, as shown in the following example.
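    For example, on Linux or UNIX systems:

    [root@node1 grid]# rm -f /u01/app/11.2.0/grid/network/admin/*.ora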

Step 3: Run the clone.pl Script on Each Destination Node

Note:

Step 3 must be run to completion before you start Step 4. Similarly, Step 4 must be run to completion before you start Step 5.

You can perform Step 3, Step 4, and Step 5 simultaneously on different nodes. Step 5 must be complete on all nodes before you can run Step 6.

To set up the new Oracle Clusterware environment, the clone.pl script requires you to provide several setup values for the script. You can provide the variable values by either supplying input on the command line when you run the clone.pl script, or by creating a file in which you can assign values to the cloning variables. The following discussions describe these options.

Supplying Input to the clone.pl Script on the Command Line

If you do not have a shared Oracle grid infrastructure home, navigate to the $ORACLE_HOME/clone/bin directory on each destination node and run the clone.pl script, which performs the main Oracle Clusterware cloning tasks. To run the script, you must supply input to several parameters. Table 3-1 describes the clone.pl script parameters.

Table 3-1 Parameters for the clone.pl Script

Parameter    Description

ORACLE_BASE=ORACLE_BASE

The complete path to the Oracle base to be cloned. If you specify an invalid path, then the script exits. This parameter is required.

ORACLE_HOME=GRID_HOME

The complete path to the grid infrastructure home for cloning. If you specify an invalid path, then the script exits. This parameter is required.

ORACLE_HOME_NAME=Oracle_home_name (or) -defaultHomeName

The Oracle home name of the home to be cloned. Optionally, you can specify the -defaultHomeName flag. This parameter is not required.

INVENTORY_LOCATION=location_of_inventory

The location for the Oracle Inventory.

-O'"CLUSTER_NODES={node1, node2}"'

The short node names for the nodes that are to be part of this new cluster.

-O'"LOCAL_NODE=node1"'

The short node name for the node on which clone.pl is running.

-debug

Specify this option to run the clone.pl script in debug mode.

-help

Specify this option to obtain help for the clone.pl script.


For example, on Linux and UNIX systems:

$ perl clone.pl -silent ORACLE_BASE=/u01/app/oracle \
ORACLE_HOME=/u01/app/11.2.0/grid ORACLE_HOME_NAME=OraHome1Grid \
INVENTORY_LOCATION=/u01/app/oraInventory \
-O'"CLUSTER_NODES={node1, node2}"' -O'"LOCAL_NODE=node1"'

Refer to Table 3-2 and Table 3-3 for descriptions of the various variables in the preceding examples.

If you have a shared Oracle grid infrastructure home, then append the -cfs option to the command example in this step and provide a complete path location for the cluster file system.

Supplying Input to the clone.pl Script in a File

Because the clone.pl script is sensitive to the parameter values that it receives, you must be accurate in your use of brackets, single quotation marks, and double quotation marks. To avoid errors, create a file that is similar to the start.sh script shown in Example 3-1 in which you can specify environment variables and cloning parameters for the clone.pl script.

Example 3-1 shows an excerpt from an example script called start.sh that calls the clone.pl script; the example is configured for a cluster named crscluster. Run the script as the operating system user that installed Oracle Clusterware.

Example 3-1 Excerpt From the start.sh Script to Clone Oracle Clusterware

#!/bin/sh
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/11.2.0/grid
THIS_NODE=`hostname -s`
 
E01=ORACLE_BASE=${ORACLE_BASE}
E02=ORACLE_HOME=${GRID_HOME}
E03=ORACLE_HOME_NAME=OraGridHome1
E04=INVENTORY_LOCATION=${ORACLE_BASE}/oraInventory
 
#C00="-O'-debug'"
C01="'-O\"CLUSTER_NODES={node1,node2}\"'"
C02="'-O\"LOCAL_NODE=${THIS_NODE}\"'"
 
perl ${GRID_HOME}/clone/bin/clone.pl -silent $E01 $E02 $E03 $E04 $C01 $C02 
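After saving start.sh, make it executable and run it on each destination node as the operating system user that installed Oracle Clusterware:

$ chmod u+x start.sh
$ ./start.sh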

The start.sh script sets several environment variables and cloning parameters, as described in Table 3-2 and Table 3-3, respectively. Table 3-2 describes the environment variables E01 through E04 and the cloning parameters C01 and C02 that are shown in Example 3-1.

Table 3-2 Environment Variables Passed to the clone.pl Script

Symbol  Variable            Description

E01     ORACLE_BASE         The location of the Oracle base directory.

E02     ORACLE_HOME         The location of the Oracle grid infrastructure home. This directory location must exist and must be owned by the Oracle operating system group: oinstall.

E03     ORACLE_HOME_NAME    The name of the Oracle grid infrastructure home. This is stored in the Oracle Inventory.

E04     INVENTORY_LOCATION  The location of the Oracle Inventory. This directory location must exist and must initially be owned by the Oracle operating system group: oinstall.

C01     CLUSTER_NODES       The list of short node names for the nodes in the cluster.

C02     LOCAL_NODE          The short name of the local node.


Step 4: Prepare the crsconfig_params File on All Nodes

Prepare the /u01/app/11.2.0/grid/install/crsconfig_params file on all of the nodes in the cluster. You can copy the file from one node to all of the other nodes.
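For example, after you edit crsconfig_params on the first node, you can copy it to the other nodes (node2 is an assumption):

[root@node1 root]# scp /u01/app/11.2.0/grid/install/crsconfig_params \
node2:/u01/app/11.2.0/grid/install/crsconfig_params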

Table 3-3 Key Parameters for the CRSCONFIG_PARAMS File

Parameter Name and Value    Description

SILENT=true

OUI sets the value to true or false based on the mode of installation: true for a silent installation (runInstaller -silent) and false otherwise.

ORACLE_OWNER=oracle

OUI sets the value to the login name of the installing user.

ORA_DBA_GROUP=oinstall

OUI sets the value to the active primary group of the installing user.

ORA_ASM_GROUP=oinstall

OUI sets the value to the OSASM group that you selected during the interview (for example, asmadmin).

LANGUAGE_ID='AMERICAN_AMERICA.WE8ISO8859P1'

OUI sets the value to the LANGUAGE_TERRITORY.CHARACTERSET corresponding to the locale in which OUI is run.

ORACLE_HOME=/u01/app/11.2.0/grid

OUI sets the value to the Oracle home ('Software Location') entered during the grid installation interview.

ORACLE_BASE=/u01/app/oracle

OUI sets the value to the ORACLE_BASE ('Oracle Base') entered during the grid installation interview. For example, /u01/app/oracle.

JREDIR=/u01/app/11.2.0/grid/jdk/jre/

Set to $ORACLE_HOME/jdk/jre.

JLIBDIR=/u01/app/11.2.0/grid/jlib

Set to $ORACLE_HOME/jlib.

NETCFGJAR_NAME=netcfg.jar

Explicitly set.

EWTJAR_NAME=ewt3.jar

Explicitly set.

JEWTJAR_NAME=jewt4.jar

Explicitly set.

SHAREJAR_NAME=share.jar

Explicitly set.

HELPJAR_NAME=help4.jar

Explicitly set.

EMBASEJAR_NAME=oemlt.jar

Explicitly set.

VNDR_CLUSTER=false

Explicitly set.

OCR_LOCATIONS=NO_VAL

OUI sets the value for the OCR and mirror locations to a comma-delimited list of OCR and mirror location path names. If OCR and the voting disk are on Oracle ASM, then set it to NO_VAL.

CLUSTER_NAME=rac-cluster

OUI sets the value to the cluster name that you specified during the interview.

HOST_NAME_LIST=node1,node2

OUI sets the value to the list of nodes that you specified.

NODE_NAME_LIST=node1,node2

OUI sets the value to the list of nodes that you specified.

PRIVATE_NAME_LIST=

 

VOTING_DISKS=NO_VAL

OUI sets the value for the voting disk and mirror locations to a comma-delimited list of the voting disk location path names that you specified. If OCR and the voting disk are on Oracle ASM, then set it to NO_VAL.

#VF_DISCOVERY_STRING=%s_vfdiscoverystring%

 

ASM_UPGRADE=false

OUI sets the value to true if Oracle ASM is being upgraded in this session, otherwise false.

ASM_SPFILE=

Leave blank.

ASM_DISK_GROUP=DATA

OUI sets the value to the name that you specified during the interview for the disk group that stores OCR and the voting disk (for example, DATA).

ASM_DISCOVERY_STRING=/dev/sd?1

OUI sets the value to the disk discovery string that you specified during the interview for the disk group that stores OCR and the voting disk. If the disks are on default raw device locations, such as /dev/raw/* on Linux, then this is left empty.

ASM_DISKS=/dev/sdb1,/dev/sdc1,/dev/sdd1,/dev/sde1,/dev/sdf1

OUI sets the value to the disks that you selected during the interview for the disk group that stores OCR and the voting disk. The disks that you specify should match the discovery pattern specified.

ASM_REDUNDANCY=NORMAL

OUI sets the value to the redundancy level that you specified during the interview for the disk group that stores OCR and the voting disk.

CRS_STORAGE_OPTION=1

OUI sets the value to 1 for OCR and the voting disk on Oracle ASM, or 2 for OCR and the voting disk on a file system.

CSS_LEASEDURATION=400

Explicitly set.

CRS_NODEVIPS=node1-vip/255.255.255.0/eth0,node2-vip/255.255.255.0/eth0

OUI sets the value to the list of VIPs that you specified. If you are using DHCP, then set this to AUTO. The format of the list of VIPs is {name | ip}/netmask[/if1[|if2|...]].

See Also: Oracle Real Application Clusters Administration and Deployment Guide for more information about this format

NODELIST=node1,node2

OUI sets the value to the list of nodes that you specified.

NETWORKS="eth0"/10.10.10.0:public,"eth1"/192.168.1.0:cluster_interconnect

OUI sets the value to the list of interfaces that you selected. The format of each entry is "interface_name"/subnet:type, where type is either public or cluster_interconnect.

SCAN_NAME=rac-cluster-scan

OUI sets the value to the SCAN name that you specified.

SCAN_PORT=1521

OUI sets the value to the port number for the SCAN listener that you specified.

GPNP_PA=

 

OCFS_CONFIG=

On Windows, OUI sets the value to the list of partitions that you selected to format as OCFS. This can be left empty on Linux and UNIX.

GNS_CONF=false

OUI sets the value to true if you selected to enable GNS for this installation; otherwise, false.

GNS_ADDR_LIST=

OUI sets the value to the GNS virtual IP address that you specified.

GNS_DOMAIN_LIST=

OUI sets the value to the GNS domain that you specified.

GNS_ALLOW_NET_LIST=

 

GNS_DENY_NET_LIST=

 

GNS_DENY_ITF_LIST=

 

NEW_HOST_NAME_LIST=

 

NEW_NODE_NAME_LIST=

 

NEW_PRIVATE_NAME_LIST=

 

NEW_NODEVIPS=node1-vip/255.255.255.0/eth0,node2-vip/255.255.255.0/eth0

Same as CRS_NODEVIPS, but used only when adding a new node.

GPNPCONFIGDIR=$ORACLE_HOME

 

GPNPGCONFIGDIR=$ORACLE_HOME

 

OCRLOC=

 

OLRLOC=

 

OCRID=

 

CLUSTER_GUID=

 

CLSCFG_MISSCOUNT=

 

To use GNS, Oracle recommends that you add GNS to the cluster after your cloned cluster is running. Add GNS using the crsctl add gns command.
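For example, a minimal sketch that assumes the 11g release 2 option syntax, a GNS VIP address of 192.0.2.100, and a GNS subdomain of cluster.example.com (run crsctl add gns -h to confirm the options for your release):

# crsctl add gns -i 192.0.2.100 -d cluster.example.com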

Step 5: Run the orainstRoot.sh Script on Each Node

In the Central Inventory directory on each destination node, run the orainstRoot.sh script as the operating system user that installed Oracle Clusterware. This script populates the /etc/oraInst.loc file with the location of the central inventory. You can run the script on all destination nodes simultaneously. For example:

[root@node1 root]# /u01/app/oraInventory/orainstRoot.sh

Ensure that the orainstRoot.sh script has completed on each destination node before proceeding to the next step.

Step 6: Run the GRID_HOME/root.sh and rootcrs.pl Scripts

On each destination node, run the GRID_HOME/root.sh script, followed by the rootcrs.pl script. You must run these scripts on only one node at a time. The following example is for a Linux or UNIX system. On the first node, run the following command:

[root@node1 root]# /u01/app/11.2.0/grid/root.sh 

Then run the following command:

[root@node1 root]# /u01/app/11.2.0/grid/perl/bin/perl \
-I/u01/app/11.2.0/grid/perl/lib -I/u01/app/11.2.0/grid/crs/install \
/u01/app/11.2.0/grid/crs/install/rootcrs.pl 

Ensure that the root.sh and rootcrs.pl scripts have completed on the first node before running them on the second node and subsequent nodes. On each subsequent node, run the following command:

[root@node2 root]# /u01/app/11.2.0/grid/root.sh 

Then run the following command:

[root@node2 root]# /u01/app/11.2.0/grid/perl/bin/perl \
-I/u01/app/11.2.0/grid/perl/lib -I/u01/app/11.2.0/grid/crs/install \
/u01/app/11.2.0/grid/crs/install/rootcrs.pl

The root.sh script automatically configures the following node applications:

  • Global Services Daemon (GSD)

  • Oracle Notification Service (ONS)

  • Enhanced ONS (eONS)

  • Virtual IP (VIP) resources in the Oracle Cluster Registry (OCR)

  • Single Client Access Name (SCAN) VIPs and SCAN listeners

  • Oracle ASM
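After the root.sh and rootcrs.pl scripts complete on all nodes, you can verify that the cluster is running, for example:

$ crsctl check cluster -all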

Step 7: Run the Configuration Assistants and the Oracle Cluster Verification Utility

  1. At the end of the Oracle Clusterware installation, manually run the configuration assistants and CVU. Run the commands in this step from the first node only:

    [oracle] $ /u01/app/11.2.0/grid/bin/netca \
                    /orahome /u01/app/11.2.0/grid \
                    /orahnam OraGridHome1 \
                    /instype typical \
                    /inscomp client,oraclenet,javavm,server \
                    /insprtcl tcp \
                    /cfg local \
                    /authadp NO_VALUE \
                    /responseFile /u01/app/11.2.0/grid/network/install/netca_typ.rsp \
                    /silent
    
  2. To complete ASM configuration, run the following command:

    [oracle] $ /u01/app/11.2.0/grid/bin/asmca -silent -postConfigureASM
    -sysAsmPassword oracle -asmsnmpPassword oracle
    
  3. If you plan to run a pre-11g release 2 (11.2) database on this cluster, then run oifcfg as described in the Oracle Database 11g release 2 (11.2) documentation.

  4. To use IPMI, configure IPMI on each node with the crsctl command.

    See Also:

    "Configuration and Installation for IPMI Node Fencing" for information about configuring IPMI
  5. Run a final CVU check to confirm that your grid infrastructure home has been cloned correctly.

    [oracle] $ /u01/app/11.2.0/grid/bin/cluvfy stage -post crsinst -n node1,node2
    

Locating and Viewing Log Files Generated During Cloning

The cloning script runs multiple tools, each of which can generate log files. After the clone.pl script finishes running, you can view these log files to obtain more information about the status of your cloning procedures. Table 3-4 lists the key log files generated during cloning for diagnostic purposes:

Table 3-4 Cloning Log Files and their Descriptions

Log File Name and Location                        Description

Central_Inventory/logs/cloneActionstimestamp.log

Contains a detailed log of the actions that occur during the OUI part of the cloning.

Central_Inventory/logs/oraInstalltimestamp.err

Contains information about errors that occur when OUI is running.

Central_Inventory/logs/oraInstalltimestamp.out

Contains other miscellaneous information.

In these file names, timestamp represents the time at which the log was created.


Table 3-5 describes how to find the location of the Oracle inventory directory.

Table 3-5 Finding the Location of the Oracle Inventory Directory

Type of System                               Location of the Oracle Inventory Directory

All UNIX computers except Linux and IBM AIX  /var/opt/oracle/oraInst.loc

IBM AIX and Linux                            /etc/oraInst.loc
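For example, on Linux you can locate the Central Inventory and then list the cloning logs as follows (the inventory location /u01/app/oraInventory is an assumption; read the actual value from your oraInst.loc file):

$ grep inventory_loc /etc/oraInst.loc
$ ls -t /u01/app/oraInventory/logs/cloneActions*.log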