Oracle® Grid Infrastructure Installation Guide
11g Release 2 (11.2) for Linux

Part Number E10812-03

3 Configuring Storage for Grid Infrastructure for a Cluster and Oracle Real Application Clusters (Oracle RAC)

This chapter describes the storage configuration tasks that you must complete before you start the installer to install Oracle Clusterware and Automatic Storage Management (ASM), and that you must complete before adding an Oracle Real Application Clusters (Oracle RAC) installation to the cluster.

This chapter contains the following topics:

3.1 Reviewing Oracle Grid Infrastructure Storage Options

This section describes the supported storage options for Oracle grid infrastructure for a cluster. It contains the following sections:

See Also:

The Oracle Certify site for a list of supported vendors for Network Attached Storage options:
http://www.oracle.com/technology/support/metalink/

Refer also to the Certify site on My Oracle Support for the most current information about certified storage options:

https://metalink.oracle.com/

3.1.1 Overview of Oracle Clusterware and Oracle RAC Storage Options

There are two ways of storing Oracle Clusterware files:

  • Automatic Storage Management (Oracle ASM): You can install Oracle Clusterware files (OCR and voting disks) in Oracle ASM diskgroups.

    Oracle ASM is the required database storage option for Typical installations, and for Standard Edition Oracle RAC installations. It is an integrated, high-performance database file system and disk manager for Oracle Clusterware and Oracle Database files. It performs striping and mirroring of database files automatically.

    Automatic Storage Management Cluster File System (ACFS) provides a general purpose file system. You can place Oracle Database binaries on this system, but you cannot place Oracle data files or Oracle Clusterware files on ACFS.

    Only one Oracle ASM instance is permitted for each node regardless of the number of database instances on the node.

    Note:

    For Oracle Automatic Storage Management (Oracle ASM) 11g release 2 (11.2) for Linux, Oracle Automatic Storage Management Cluster File System (Oracle ACFS) and Oracle ASM Dynamic Volume Manager (Oracle ADVM) are only supported in the following environments:
    • Red Hat and Oracle Enterprise Linux 5, 32-bit

    • Red Hat and Oracle Enterprise Linux 5, 64-bit

    For OVM environments, Red Hat and Oracle Enterprise Linux 5 Update 4 or later is required.

    You cannot put Oracle Clusterware binaries and files on Oracle Automatic Storage Management Cluster File System (Oracle ACFS).

    You cannot put Oracle Database files on Oracle ACFS.

    You can put Oracle Database binaries on Oracle ACFS.

    If you plan to install an Oracle RAC home on a shared OCFS2 location, then you must upgrade OCFS2 to at least version 1.4.1, which supports shared writable mmaps.

    ACFS provides a general purpose file system for other files.

  • A supported shared file system: Supported file systems include the following:

    • Network File System (NFS): Note that if you intend to use NFS for your data files, then you should create partitions large enough for the database files when you create partitions for Oracle grid infrastructure. NFS mounts differ for software binaries, Oracle Clusterware files, and database files.

      Note:

      Placing Oracle grid infrastructure for a cluster binaries on a cluster file system is not supported.

      You can no longer use OUI to install Oracle Clusterware or Oracle Database files on block or raw devices.

      See Also:

      My Oracle Support for supported file systems and NFS or NAS filers

3.1.2 General Storage Considerations for Oracle Grid Infrastructure and Oracle RAC

For all installations, you must choose the storage option to use for Oracle grid infrastructure (Oracle Clusterware and Oracle ASM), and Oracle Real Application Clusters databases (Oracle RAC). To enable automated backups during the installation, you must also choose the storage option to use for recovery files (the Fast Recovery Area). You do not have to use the same storage option for each file type.

3.1.2.1 General Storage Considerations for Oracle Clusterware

Oracle Clusterware voting disks are used to monitor cluster node status, and Oracle Cluster Registry (OCR) files contain configuration information about the cluster. You can place voting disks and OCR files either in an ASM diskgroup, or on a cluster file system or shared network file system. Storage must be shared; any node that does not have access to an absolute majority of voting disks (more than half) will be restarted.

3.1.2.2 General Storage Considerations for Oracle RAC

Use the following guidelines when choosing the storage options to use for each file type:

  • You can choose any combination of the supported storage options for each file type provided that you satisfy all requirements listed for the chosen storage options.

  • Oracle recommends that you choose Oracle ASM as the storage option for database and recovery files.

  • For Standard Edition Oracle RAC installations, Oracle ASM is the only supported storage option for database or recovery files.

  • If you intend to use Oracle ASM with Oracle RAC, and you are configuring a new Oracle ASM instance, then your system must meet the following conditions:

    • All nodes on the cluster have Oracle Clusterware and Oracle ASM 11g release 2 (11.2) installed as part of an Oracle grid infrastructure for a cluster installation.

    • Any existing Oracle ASM instance on any node in the cluster is shut down.

  • Raw or block devices are supported only when upgrading an existing installation using the partitions already configured. On new installations, using raw or block device partitions is not supported by Automatic Storage Management Configuration Assistant (ASMCA) or Oracle Universal Installer (OUI), but is supported by the software if you perform manual configuration.

    See Also:

    Oracle Database Upgrade Guide for information about how to prepare for upgrading an existing database
  • If you do not have a storage option that provides external file redundancy, then you must configure at least three voting disk areas to provide voting disk redundancy.

3.1.3 Supported Storage Options

The following table shows the storage options supported for storing Oracle Clusterware and Oracle RAC files.

Note:

For information about OCFS2, refer to the following Web site:
http://oss.oracle.com/projects/ocfs2/

If you plan to install an Oracle RAC home on a shared OCFS2 location, then you must upgrade OCFS2 to at least version 1.4.1, which supports shared writable mmaps.

For OCFS2 certification status, and for other cluster file system support, refer to the Certify page on My Oracle Support.

Table 3-1 Supported Storage Options for Oracle Clusterware and Oracle RAC

Automatic Storage Management
  • OCR and voting disks: Yes
  • Oracle Clusterware binaries: No
  • Oracle RAC binaries: No
  • Oracle Database files: Yes
  • Oracle recovery files: Yes

Automatic Storage Management Cluster File System (ACFS)
  • OCR and voting disks: No
  • Oracle Clusterware binaries: No
  • Oracle RAC binaries: Yes
  • Oracle Database files: No
  • Oracle recovery files: No

NFS file system on a certified NAS filer (Note: Direct NFS does not support Oracle Clusterware files.)
  • OCR and voting disks: Yes
  • Oracle Clusterware binaries: Yes
  • Oracle RAC binaries: Yes
  • Oracle Database files: Yes
  • Oracle recovery files: Yes

Shared disk partitions (block devices or raw devices)
  • OCR and voting disks: Not supported by OUI or ASMCA, but supported by the software. They can be added or removed after installation.
  • Oracle Clusterware binaries: No
  • Oracle RAC binaries: No
  • Oracle Database files: Not supported by OUI or ASMCA, but supported by the software. They can be added or removed after installation.
  • Oracle recovery files: No


Use the following guidelines when choosing storage options:

  • You can choose any combination of the supported storage options for each file type provided that you satisfy all requirements listed for the chosen storage options.

  • You can use Oracle ASM 11g release 2 (11.2) to store Oracle Clusterware files. You cannot use prior Oracle ASM releases to do this.

  • If you do not have a storage option that provides external file redundancy, then you must configure at least three voting disk locations and at least three Oracle Cluster Registry locations to provide redundancy.

3.1.4 After You Have Selected Disk Storage Options

When you have determined your disk storage options, configure shared storage:

3.2 Shared File System Storage Configuration

The installer does not suggest a default location for the Oracle Cluster Registry (OCR) or the Oracle Clusterware voting disk. If you choose to create these files on a file system, then review the following sections to complete storage requirements for Oracle Clusterware files:

Note:

The OCR is a file that contains the configuration information and status of the cluster. The installer automatically initializes the OCR during the Oracle Clusterware installation. Database Configuration Assistant uses the OCR for storing the configurations for the cluster databases that it creates.

3.2.1 Requirements for Using a Shared File System

To use a shared file system for Oracle Clusterware, Oracle ASM, and Oracle RAC, the file system must comply with the following requirements:

  • To use an NFS file system, it must be on a certified NAS device. Log in to My Oracle Support at the following URL, and click the Certify tab to find a list of certified NAS devices.

    https://metalink.oracle.com/

  • If you choose to place your Oracle Cluster Registry (OCR) files on a shared file system, then Oracle recommends that one of the following is true:

    • The disks used for the file system are on a highly available storage device, (for example, a RAID device).

    • At least two file systems are mounted, and use the features of Oracle Clusterware 11g release 2 (11.2) to provide redundancy for the OCR.

  • If you choose to place your database files on a shared file system, then one of the following should be true:

    • The disks used for the file system are on a highly available storage device, (for example, a RAID device).

    • The file systems consist of at least two independent file systems, with the database files on one file system, and the recovery files on a different file system.

  • The user account with which you perform the installation (oracle or grid) must have write permissions to create the files in the path that you specify.

Note:

Upgrading from Oracle9i release 2 using the raw device or shared file for the OCR that you used for the SRVM configuration repository is not supported.

If you are upgrading Oracle Clusterware, and your existing cluster uses 100 MB OCR and 20 MB voting disk partitions, then you can continue to use those partition sizes.

All storage products must be supported by both your server and storage vendors.

Use Table 3-2 and Table 3-3 to determine the minimum size for shared file systems:

Table 3-2 Oracle Clusterware Shared File System Volume Size Requirements

Voting disks with external redundancy
  • Number of volumes: 3
  • Volume size: At least 280 MB for each voting disk volume

Oracle Cluster Registry (OCR) with external redundancy
  • Number of volumes: 1
  • Volume size: At least 280 MB for each OCR volume

Oracle Clusterware files (OCR and voting disks) with redundancy provided by Oracle software
  • Number of volumes: 1
  • Volume size: At least 280 MB for each OCR volume, and at least 280 MB for each voting disk volume


Table 3-3 Oracle RAC Shared File System Volume Size Requirements

Oracle Database files
  • Number of volumes: 1
  • Volume size: At least 1.5 GB for each volume

Recovery files (Note: Recovery files must be on a different volume than database files.)
  • Number of volumes: 1
  • Volume size: At least 2 GB for each volume


In Table 3-2 and Table 3-3, the total required volume size is cumulative. For example, to store all Oracle Clusterware files on the shared file system with normal redundancy, you should have at least 2 GB of storage available over a minimum of three volumes (three separate volume locations for the OCR and two OCR mirrors, and one voting disk on each volume). You should have a minimum of three physical disks, each at least 500 MB, to ensure that voting disks and OCR files are on separate physical disks. If you add Oracle RAC using one volume for database files and one volume for recovery files, then you should have at least 3.5 GB available storage over two volumes, and at least 5.5 GB available total for all volumes.

Note:

If you create partitions on shared disks with fdisk by specifying a device size, such as +300M, the actual device created may be smaller than the size requested, based on the cylinder geometry of the disk. This is due to current fdisk restrictions. Oracle recommends that you partition the entire disk that you allocate for use by Oracle ASM.
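
For example, the following is a minimal sketch of partitioning an entire (hypothetical) shared disk /dev/sdc with one primary partition that spans the whole disk; on the other cluster nodes, re-read the partition table (for example, with partprobe, if available) or restart the node so that the new partition is visible:

# /sbin/fdisk /dev/sdc
(at the fdisk prompts, enter n, then p, then 1, press Enter twice to accept the default first and last cylinders, and then enter w to write the partition table)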

3.2.2 Deciding to Use a Cluster File System for Oracle Clusterware Files

For new installations, Oracle recommends that you use Automatic Storage Management (Oracle ASM) to store voting disk and OCR files. For Linux x86 (32-bit) and x86-64 (64-bit) platforms, Oracle provides a cluster file system, OCFS2. However, Oracle does not recommend using OCFS2 for Oracle Clusterware files.

3.2.3 Deciding to Use Direct NFS for Data Files

Direct NFS is an alternative to using kernel-managed NFS. This section contains the following information about Direct NFS:

3.2.3.1 About Direct NFS Storage

With Oracle Database 11g release 2 (11.2), instead of using the operating system kernel NFS client, you can configure Oracle Database to access NFS V3 servers directly using an Oracle internal Direct NFS client.

To enable Oracle Database to use Direct NFS, the NFS file systems must be mounted and available over regular NFS mounts before you start installation. Direct NFS manages settings after installation. You should still set the kernel mount options as a backup, but for normal operation, Direct NFS will manage NFS mounts.

Refer to your vendor documentation to complete NFS configuration and mounting.

Some NFS file servers require NFS clients to connect using reserved ports. If your filer is running with reserved port checking, then you must disable it for Direct NFS to operate. To disable reserved port checking, consult your NFS file server documentation.

Note:

Use NFS servers certified for Oracle RAC. Refer to the following URL for certification information:

https://metalink.oracle.com

3.2.3.2 Using the Oranfstab File with Direct NFS

If you use Direct NFS, then you can choose to use a new configuration file that is specific to Oracle data file management, oranfstab, to specify additional options for Direct NFS. For example, you can use oranfstab to specify additional paths for a mount point. You can add the oranfstab file either to /etc or to $ORACLE_HOME/dbs.

With shared Oracle homes, when the oranfstab file is placed in $ORACLE_HOME/dbs, the entries in the file are specific to a single database. In this case, all nodes running an Oracle RAC database use the same $ORACLE_HOME/dbs/oranfstab file. In non-shared RAC installs, oranfstab must be replicated on all nodes.

When the oranfstab file is placed in /etc, it is globally available to all Oracle databases, and can contain mount points used by all Oracle databases running on nodes in the cluster, including standalone databases. However, on Oracle RAC systems, if the oranfstab file is placed in /etc, then you must replicate the /etc/oranfstab file on all nodes and keep each copy synchronized, just as you must with the /etc/fstab file.
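
For example, the following is a minimal sketch of copying /etc/oranfstab from the local node to the other cluster member nodes; the node names node2 and node3 are hypothetical:

# for n in node2 node3; do scp /etc/oranfstab ${n}:/etc/oranfstab; done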

See Also:

Section 3.2.5, "Configuring Storage NFS Mount and Buffer Size Parameters" for information about configuring /etc/fstab

In all cases, mount points must be mounted by the kernel NFS system, even when they are being served using Direct NFS.

Caution:

Direct NFS will not serve an NFS server with write size values (wtmax) less than 32768.

3.2.3.3 Mounting NFS Storage Devices with Direct NFS

Direct NFS determines mount point settings for NFS storage devices based on the configuration in /etc/mtab, which is updated when the file systems defined in /etc/fstab are mounted.

Direct NFS searches for mount entries in the following order:

  1. $ORACLE_HOME/dbs/oranfstab

  2. /etc/oranfstab

  3. /etc/mtab

Direct NFS uses the first matching entry found.
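
To see which entry Direct NFS will find first on a given node, you can check the three locations in the same order; a minimal sketch, assuming ORACLE_HOME is set in the environment:

$ ls -l $ORACLE_HOME/dbs/oranfstab /etc/oranfstab
$ grep nfs /etc/mtab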

Note:

You can have only one active Direct NFS implementation for each instance. Enabling Direct NFS on an instance prevents the use of another Direct NFS implementation on that instance.

If Oracle Database uses Direct NFS mount points configured using oranfstab, then it first verifies kernel NFS mounts by cross-checking entries in oranfstab with operating system NFS mount points. If a mismatch exists, then Direct NFS logs an informational message, and does not operate.

If Oracle Database cannot open an NFS server using Direct NFS, then Oracle Database uses the platform operating system kernel NFS client. In this case, the kernel NFS mount options must be set up as defined in "Checking NFS Mount and Buffer Size Parameters for Oracle RAC". Additionally, an informational message is logged into the Oracle alert and trace files indicating that Direct NFS could not be established. The Oracle files resident on the NFS server that are served by the Direct NFS Client are also accessible through the operating system kernel NFS client. The usual considerations for maintaining integrity of the Oracle files apply in this situation.

3.2.3.4 Specifying Network Paths with the Oranfstab File

Direct NFS can use up to four network paths defined in the oranfstab file for an NFS server. The Direct NFS client performs load balancing across all specified paths. If a specified path fails, then Direct NFS reissues I/O commands over any remaining paths.

Use the following SQL*Plus views for managing Direct NFS in a cluster environment:

  • gv$dnfs_servers: Shows a table of servers accessed using Direct NFS.

  • gv$dnfs_files: Shows a table of files currently open using Direct NFS.

  • gv$dnfs_channels: Shows a table of open network paths (or channels) to servers for which Direct NFS is providing files.

  • gv$dnfs_stats: Shows a table of performance statistics for Direct NFS.

Note:

Use v$ views for single instances, and gv$ views for Oracle Clusterware and Oracle RAC storage.
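
For example, the following is a quick sketch of querying these views from SQL*Plus on any node, connecting as a privileged user; SELECT * is used here rather than assuming specific column names:

$ sqlplus / as sysdba
SQL> SELECT * FROM gv$dnfs_servers;
SQL> SELECT * FROM gv$dnfs_channels;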

3.2.4 Deciding to Use NFS for Data Files

Network-attached storage (NAS) systems use NFS to access data. You can store data files on a supported NFS system.

NFS file systems must be mounted and available over NFS mounts before you start installation. Refer to your vendor documentation to complete NFS configuration and mounting.

Be aware that the performance of Oracle software and databases stored on NAS devices depends on the performance of the network connection between the Oracle server and the NAS device.

For this reason, Oracle recommends that you connect the server to the NAS device using a private dedicated network connection, which should be Gigabit Ethernet or better.

3.2.5 Configuring Storage NFS Mount and Buffer Size Parameters

If you are using NFS for the Grid home or Oracle RAC home, then you must set up the NFS mounts on the storage so that root on the client nodes mounting the storage is treated as root rather than being mapped to an anonymous user, and so that root on the client nodes can create files on the NFS file system that are owned by root.

On NFS, you can obtain root access for clients writing to the storage by enabling no_root_squash on the server side. For example, to set up Oracle Clusterware file storage in the path /vol/grid, with nodes node1, node2, and node3 in the domain mycluster.example.com, add a line similar to the following to the /etc/exports file:

/vol/grid/ node1.mycluster.example.com(rw,no_root_squash) node2.mycluster.example.com(rw,no_root_squash) node3.mycluster.example.com(rw,no_root_squash)

If the domain or DNS is secure so that no unauthorized system can obtain an IP address on it, then you can grant root access by domain, rather than specifying particular cluster member nodes:

For example:

/vol/grid/ *.mycluster.example.com(rw,no_root_squash)

Oracle recommends that you use a secure DNS or domain, and grant root access to cluster member nodes using the domain, as using this syntax allows you to add or remove nodes without the need to reconfigure the NFS server.

If you use Grid Naming Service (GNS), then the subdomain allocated for resolution by GNS within the cluster is a secure domain. Any server without a correctly signed Grid Plug and Play (GPnP) profile cannot join the cluster, so an unauthorized system cannot obtain or use names inside the GNS subdomain.

Caution:

Granting root access by domain can be used to obtain unauthorized access to systems. System administrators should refer to their operating system documentation for the risks associated with using no_root_squash.

After changing /etc/exports, reload the file system mount using the following command:

# /usr/sbin/exportfs -avr
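
As a quick sanity check (a sketch; command locations can vary by distribution), you can confirm that the export and its options are in effect. On the NFS server, list the active exports and their options:

# /usr/sbin/exportfs -v

On a cluster member node, confirm that the export is visible (nfs_server is a placeholder for your NFS server name):

# /usr/sbin/showmount -e nfs_server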

3.2.6 Checking NFS Mount and Buffer Size Parameters for Oracle Clusterware

On the cluster member nodes, you must set the values for the NFS buffer size parameters rsize and wsize to 32768.

The NFS client-side mount options are:

rw,bg,hard,nointr,tcp,nfsvers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0

If you have Oracle grid infrastructure binaries on an NFS mount, then you must include the suid option.

Update the /etc/fstab file on each node with an entry containing the NFS mount options for your platform. For example, if your platform is x86-64, and you are creating a mount point for Oracle Clusterware files, then update the /etc/fstab files with an entry similar to the following:

nfs_server:/vol/grid  /u02/oracle/cwfiles  nfs  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0

Note that mount point options are different for Oracle software binaries, Oracle Clusterware files (OCR and voting disks), and data files.

To create a mount point for binaries only, provide an entry similar to the following for a binaries mount point:

nfs_server:/vol/bin  /u02/oracle/grid  nfs  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0,suid

See Also:

My Oracle Support bulletin 359515.1, "Mount Options for Oracle Files When Used with NAS Devices" for the most current information about mount options, available from the following URL:

https://metalink.oracle.com

Note:

Refer to your storage vendor documentation for additional information about mount options.

3.2.7 Checking NFS Mount and Buffer Size Parameters for Oracle RAC

If you use kernel-managed NFS mounts, then you must mount NFS volumes used for storing database files with special mount options on each node that has an Oracle RAC instance. When mounting an NFS file system, Oracle recommends that you use the same mount point options that your NAS vendor used when certifying the device. Refer to your device documentation or contact your vendor for information about recommended mount-point options.

In general, most vendors recommend that you use the NFS mount options listed in Table 3-4.

Table 3-4 NFS Mount Options for Oracle RAC

hard (Mandatory): Generate a hard mount of the NFS file system. If the connection to the server fails or is temporarily lost, then connection attempts are made until the NAS device responds.

bg (Optional): Try to connect in the background if the connection fails.

rw (Mandatory): Read and write access.

tcp (Optional): Use the TCP protocol rather than UDP. TCP is more reliable than UDP.

vers=3 (Optional): Use NFS version 3. Oracle recommends that you use NFS version 3 where available, unless the performance of version 2 is higher.

suid (Optional): Allow clients to run software binaries with SUID enabled. SUID is required for all NFS mounts that contain Oracle software.

rsize (Mandatory): The number of bytes used when reading from the NAS device. This value should be set to the maximum database block size supported by this platform. A value of 8192 is often recommended for NFS version 2, and 32768 is often recommended for NFS version 3.

wsize (Mandatory): The number of bytes used when writing to the NAS device. This value should be set to the maximum database block size supported by this platform. A value of 8192 is often recommended for NFS version 2, and 32768 is often recommended for NFS version 3.

nointr (or intr) (Optional): Do not allow (or allow) keyboard interrupts to stop a process that is hung while waiting for a response on a hard-mounted file system. Note: Different vendors have different recommendations about this option. Contact your vendor for advice.

actimeo=0 (Optional): Disable attribute caching. Note: You must specify this option for NFS file systems where you want to install the software binaries. If you do not use this option, then the installer will not install the software in the directory that you specify.

actimeo (Optional): Using actimeo sets all of acregmin, acregmax, acdirmin, and acdirmax to the same value. There is no default value.

timeo (Optional): Timeout setting. Better overall performance may be achieved by increasing the timeout when mounting on a busy network, to a slow server, or through several routers or gateways. Oracle recommends that you set the timeout value to the maximum timeout for TCP, which is 600 (timeo is specified in tenths of a second, so 600 corresponds to 60 seconds).


Update the /etc/fstab file on each node with an entry similar to the following:

nfs_server:/vol/DATA/oradata  /u02/oradata  nfs  rw,bg,hard,nointr,tcp,nfsvers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0

The mandatory mount options comprise the minimum set of mount options that you must use while mounting the NFS volumes. These mount options are essential to protect the integrity of the data and to prevent any database corruption. Failure to use these mount options may result in the generation of file access errors. Refer to your operating system or NAS device documentation for more information about the specific options supported on your platform.
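
To confirm that the options actually in effect match the entry in /etc/fstab, you can check the mounted file system; a minimal sketch for the /u02/oradata mount point used in the example above (nfsstat is part of the standard NFS utilities, if installed):

$ mount | grep /u02/oradata
$ nfsstat -m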

See Also:

My Oracle Support note 359515.1 for updated NAS mount option information, available at the following URL:
https://metalink.oracle.com

3.2.8 Enabling Direct NFS Client Oracle Disk Manager Control of NFS

Complete the following procedure to enable Direct NFS:

  1. Create an oranfstab file with the following attributes for each NFS server to be accessed using Direct NFS:

    • Server: The NFS server name.

    • Local: Up to four paths on the database host, specified by IP address or by name, as displayed using the ifconfig command run on the database host

    • Path: Up to four network paths to the NFS server, specified either by IP address, or by name, as displayed using the ifconfig command on the NFS server.

    • Export: The exported path from the NFS server.

    • Mount: The corresponding local mount point for the exported volume.

    • Mnt_timeout: Specifies (in seconds) the time Direct NFS client should wait for a successful mount before timing out. This parameter is optional. The default timeout is 10 minutes (600).

    • Dontroute: Specifies that outgoing messages should not be routed by the operating system, but instead sent using the IP address to which they are bound.

    The examples that follow show three possible NFS server entries in oranfstab. A single oranfstab can have multiple NFS server entries.

    Example 3-1 Using Local and Path NFS Server Entries

    The following example uses both local and path. Because the local and path addresses are in different subnets, you do not need to specify dontroute.

    server: MyDataServer1
    local: 192.0.2.0
    path: 192.0.2.1
    local: 192.0.100.0
    path: 192.0.100.1
    export: /vol/oradata1 mount: /mnt/oradata1
    

    Example 3-2 Using Local and Path in the Same Subnet, with dontroute

    The following example shows local and path in the same subnet. dontroute is specified in this case:

    server: MyDataServer2
    local: 192.0.2.0
    path: 192.0.2.128
    local: 192.0.2.1
    path: 192.0.2.129
    dontroute
    export: /vol/oradata2 mount: /mnt/oradata2
    

    Example 3-3 Using Names in Place of IP Addresses, with Multiple Exports

    server: MyDataServer3
    local: LocalPath1
    path: NfsPath1
    local: LocalPath2
    path: NfsPath2
    local: LocalPath3
    path: NfsPath3
    local: LocalPath4
    path: NfsPath4
    dontroute
    export: /vol/oradata3 mount: /mnt/oradata3
    export: /vol/oradata4 mount: /mnt/oradata4
    export: /vol/oradata5 mount: /mnt/oradata5
    export: /vol/oradata6 mount: /mnt/oradata6
    
  2. Oracle Database uses an ODM library, libnfsodm11.so, to enable Direct NFS. To replace the standard ODM library, $ORACLE_HOME/lib/libodm11.so, with the ODM NFS library, libnfsodm11.so, complete the following steps on all nodes unless the Oracle home directory is shared:

    1. Change directory to $ORACLE_HOME/lib.

    2. Enter the following commands:

      cp libodm11.so libodm11.so_stub
      ln -sf libnfsodm11.so libodm11.so
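
To confirm the change on a node, you can check that libodm11.so now points to the Direct NFS library and that the saved copy exists; a minimal sketch, assuming ORACLE_HOME is set:

$ ls -l $ORACLE_HOME/lib/libodm11.so
$ ls -l $ORACLE_HOME/lib/libodm11.so_stub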
      

3.2.9 Creating Directories for Oracle Clusterware Files on Shared File Systems

Use the following instructions to create directories for Oracle Clusterware files. You can also configure shared file systems for the Oracle Database and recovery files.

Note:

For both NFS and OCFS2 storage, you must complete this procedure only if you want to place the Oracle Clusterware files on a separate file system from the Oracle base directory.

To create directories for the Oracle Clusterware files on separate file systems from the Oracle base directory, follow these steps:

  1. If necessary, configure the shared file systems to use and mount them on each node.

    Note:

    The mount point that you use for the file system must be identical on each node. Ensure that the file systems are configured to mount automatically when a node restarts.
  2. Use the df command to determine the free disk space on each mounted file system.

  3. From the display, identify the file systems to use. Choose a file system with a minimum of 600 MB of free disk space (one OCR and one voting disk, with external redundancy).

    If you are using the same file system for multiple file types, then add the disk space requirements for each type to determine the total disk space requirement.

  4. Note the names of the mount point directories for the file systems that you identified.

  5. If the user performing installation (typically, grid or oracle) has permissions to create directories on the storage location where you plan to install Oracle Clusterware files, then OUI creates the Oracle Clusterware file directory.

    If the user performing installation does not have write access, then you must create these directories manually using commands similar to the following to create the recommended subdirectories in each of the mount point directories and set the appropriate owner, group, and permissions on the directory. For example, where the user is oracle, and the Oracle Clusterware file storage area is cluster:

    # mkdir /mount_point/cluster
    # chown oracle:oinstall /mount_point/cluster
    # chmod 775 /mount_point/cluster
    

    Note:

    After installation, directories in the installation path for the Oracle Cluster Registry (OCR) files should be owned by root, and not writable by any account other than root.

When you have completed creating a subdirectory in the mount point directory, and set the appropriate owner, group, and permissions, you have completed OCFS2 or NFS configuration for Oracle grid infrastructure.

3.2.10 Creating Directories for Oracle Database Files on Shared File Systems

Use the following instructions to create directories for shared file systems for Oracle Database and recovery files (for example, for an Oracle RAC database).

  1. If necessary, configure the shared file systems and mount them on each node.

    Note:

    The mount point that you use for the file system must be identical on each node. Ensure that the file systems are configured to mount automatically when a node restarts.
  2. Use the df -h command to determine the free disk space on each mounted file system.

  3. From the display, identify the file systems:

    • Database files: Choose either a single file system with at least 1.5 GB of free disk space, or two or more file systems with at least 1.5 GB of free disk space in total.

    • Recovery files: Choose a file system with at least 2 GB of free disk space.

    If you are using the same file system for multiple file types, then add the disk space requirements for each type to determine the total disk space requirement.

  4. Note the names of the mount point directories for the file systems that you identified.

  5. If the user performing installation (typically, oracle) has permissions to create directories on the disks where you plan to install Oracle Database, then DBCA creates the Oracle Database file directory, and the Recovery file directory.

    If the user performing installation does not have write access, then you must create these directories manually using commands similar to the following to create the recommended subdirectories in each of the mount point directories and set the appropriate owner, group, and permissions on them:

    • Database file directory:

      # mkdir /mount_point/oradata
      # chown oracle:oinstall /mount_point/oradata
      # chmod 775 /mount_point/oradata
      
    • Recovery file directory (Fast Recovery Area):

      # mkdir /mount_point/fast_recovery_area
      # chown oracle:oinstall /mount_point/fast_recovery_area
      # chmod 775 /mount_point/fast_recovery_area
      

By making members of the oinstall group the owners of these directories, you permit them to be read by multiple Oracle homes, including those with different OSDBA groups.

When you have completed creating subdirectories in each of the mount point directories, and set the appropriate owner, group, and permissions, you have completed OCFS2 or NFS configuration for Oracle Database shared storage.

3.2.11 Disabling Direct NFS Client Oracle Disk Management Control of NFS

Use one of the following methods to disable the Direct NFS client:

  • Remove the oranfstab file.

  • Restore the stub libodm11.so file by reversing the process described in Section 3.2.8, "Enabling Direct NFS Client Oracle Disk Manager Control of NFS".

  • Remove the specific NFS server or export paths from the oranfstab file.

Note:

If you remove an NFS path that Oracle Database is using, then you must restart the database for the change to be effective.
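
To reverse the library change made in Section 3.2.8 on a node, you can restore the saved stub library; a minimal sketch (shut down the Oracle instances that use this Oracle home before swapping libraries):

$ cd $ORACLE_HOME/lib
$ rm -f libodm11.so
$ cp libodm11.so_stub libodm11.so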

3.3 Automatic Storage Management Storage Configuration

Review the following sections to configure storage for Automatic Storage Management:

3.3.1 Configuring Storage for Automatic Storage Management

This section describes how to configure storage for use with Automatic Storage Management.

3.3.1.1 Identifying Storage Requirements for Automatic Storage Management

To identify the storage requirements for using Automatic Storage Management, you must determine the number of devices and the amount of free disk space that you require. To complete this task, follow these steps:

  1. Determine whether you want to use Automatic Storage Management for Oracle Clusterware files (OCR and voting disks), Oracle Database files, recovery files, or all files except for Oracle Clusterware or Oracle Database binaries. Oracle Database files include data files, control files, redo log files, the server parameter file, and the password file.

    Note:

    You do not have to use the same storage mechanism for Oracle Clusterware, Oracle Database files and recovery files. You can use a shared file system for one file type and Automatic Storage Management for the other.

    If you choose to enable automated backups and you do not have a shared file system available, then you must choose Automatic Storage Management for recovery file storage.

    If you enable automated backups during the installation, then you can select Automatic Storage Management as the storage mechanism for recovery files by specifying an Automatic Storage Management disk group for the Fast Recovery Area. Depending on how you choose to create a database during the installation, you have the following options:

    • If you select an installation method that runs ASMCA in interactive mode (for example, by choosing the Advanced database configuration option), then you can decide whether you want to use the same Automatic Storage Management disk group for database files and recovery files, or use different disk groups for each file type.

    • If you select an installation method that runs DBCA in noninteractive mode, then you must use the same Automatic Storage Management disk group for database files and recovery files.

  2. Choose the Automatic Storage Management redundancy level to use for the Automatic Storage Management disk group.

    The redundancy level that you choose for the Automatic Storage Management disk group determines how Automatic Storage Management mirrors files in the disk group and determines the number of disks and amount of free disk space that you require, as follows:

    • External redundancy

      An external redundancy disk group requires a minimum of one disk device. The effective disk space in an external redundancy disk group is the sum of the disk space in all of its devices.

      Because Automatic Storage Management does not mirror data in an external redundancy disk group, Oracle recommends that you use external redundancy with storage devices such as RAID, or other similar devices that provide their own data protection mechanisms.

    • Normal redundancy

      In a normal redundancy disk group, to increase performance and reliability, Automatic Storage Management by default uses two-way mirroring. A normal redundancy disk group requires a minimum of two disk devices (or two failure groups). The effective disk space in a normal redundancy disk group is half the sum of the disk space in all of its devices.

      For Oracle Clusterware files, Normal redundancy disk groups provide 3 voting disk files, 1 OCR and 2 copies (one primary and one secondary mirror). With normal redundancy, the cluster can survive the loss of one failure group.

      For most installations, Oracle recommends that you select normal redundancy.

    • High redundancy

      In a high redundancy disk group, Automatic Storage Management uses three-way mirroring to increase performance and provide the highest level of reliability. A high redundancy disk group requires a minimum of three disk devices (or three failure groups). The effective disk space in a high redundancy disk group is one-third the sum of the disk space in all of its devices.

      For Oracle Clusterware files, High redundancy disk groups provide 5 voting disk files, 1 OCR and 3 copies (one primary and two secondary mirrors). With high redundancy, the cluster can survive the loss of two failure groups.

      While high redundancy disk groups do provide a high level of data protection, you should consider the greater cost of additional storage devices before deciding to select high redundancy disk groups.

  3. Determine the total amount of disk space that you require for Oracle Clusterware files, and for the database files and recovery files.

    Use Table 3-5 and Table 3-6 to determine the minimum number of disks and the minimum disk space requirements for installing Oracle Clusterware files, and installing the starter database, where you have voting disks in a separate disk group:

    Table 3-5 Total Oracle Clusterware Storage Space Required by Redundancy Type

    External redundancy
      • Minimum number of disks: 1
      • Oracle Cluster Registry (OCR) files: 280 MB
      • Voting disk files: 280 MB
      • Both file types: 560 MB

    Normal redundancy
      • Minimum number of disks: 3
      • Oracle Cluster Registry (OCR) files: 560 MB
      • Voting disk files: 840 MB
      • Both file types: 1.4 GB (see Footnote 1)

    High redundancy
      • Minimum number of disks: 5
      • Oracle Cluster Registry (OCR) files: 840 MB
      • Voting disk files: 1.4 GB
      • Both file types: 2.3 GB

    Footnote 1: If you create a disk group during installation, then it must be at least 2 GB.

    Note:

    If the voting disk files are in a disk group, be aware that disk groups with Oracle Clusterware files (OCR and voting disks) have a higher minimum number of failure groups than other disk groups.

    If you create a diskgroup as part of the installation in order to install the OCR and voting disk files, then the installer requires that you create these files on a diskgroup with at least 2 GB of available space.

    Table 3-6 Total Oracle Database Storage Space Required by Redundancy Type

    External redundancy
      • Minimum number of disks: 1
      • Database files: 1.5 GB
      • Recovery files: 3 GB
      • Both file types: 4.5 GB

    Normal redundancy
      • Minimum number of disks: 2
      • Database files: 3 GB
      • Recovery files: 6 GB
      • Both file types: 9 GB

    High redundancy
      • Minimum number of disks: 3
      • Database files: 4.5 GB
      • Recovery files: 9 GB
      • Both file types: 13.5 GB


  4. For Oracle Clusterware installations, you must also add additional disk space for the Automatic Storage Management metadata. You can use the following formula to calculate the additional disk space requirements (in MB):

    total = [2 * ausize * disks] + [redundancy * (ausize * (nodes * (clients + 1) + 30) + (64 * nodes) + 533)]

    Where:

    • redundancy = Number of mirrors: external = 1, normal = 2, high = 3.

    • ausize = Metadata AU size in megabytes.

    • nodes = Number of nodes in cluster.

    • clients = Number of database instances for each node.

    • disks = Number of disks in disk group.

    For example, for a four-node Oracle RAC installation, using three disks in a normal redundancy disk group, you require an additional 1684 MB of space:

    [2 * 1 * 3] + [2 * (1 * (4 * (4 + 1)+ 30)+ (64 * 4)+ 533)] = 1684 MB

    To ensure high availability of Oracle Clusterware files on Oracle ASM, you need to have at least 2 GB of disk space for Oracle Clusterware files in three separate failure groups, with at least three physical disks. Each disk must have at least 1 GB of capacity to ensure that there is sufficient space to create Oracle Clusterware files.

  5. For Oracle RAC installations, you must also add additional disk space for the Automatic Storage Management metadata. You can use the following formula to calculate the additional disk space requirements (in MB):

    total = [2 * ausize * disks] + [redundancy * (ausize * (nodes * (clients + 1) + 30) + (64 * nodes) + 533)]

    Where:

    • ausize = Metadata AU size in megabytes.

    • clients = Number of database instances for each node.

    • disks = Number of disks in disk group.

    • nodes = Number of nodes in cluster.

    • redundancy = Number of mirrors: external = 1, normal = 2, high = 3.

    For example, for a four-node Oracle RAC installation, using three disks in a normal redundancy disk group, you require an additional 1684 MB of disk space:

    [2 * 1 * 3] + [2 * (1 * (4 * (4+1) + 30) + (64 * 4) + 533)] = 1684 MB

    If an Automatic Storage Management instance is already running on the system, then you can use an existing disk group to meet these storage requirements. If necessary, you can add disks to an existing disk group during the installation.

  6. Optionally, identify failure groups for the Automatic Storage Management disk group devices.

    Note:

    Complete this step only if you intend to use an installation method that runs Database Configuration Assistant in interactive mode; for example, if you intend to choose the Custom installation type or the Advanced database configuration option, then complete this step. Other installation types do not enable you to specify failure groups.

    If you intend to use a normal or high redundancy disk group, then you can further protect your database against hardware failure by associating a set of disk devices in a custom failure group. By default, each device comprises its own failure group. However, if two disk devices in a normal redundancy disk group are attached to the same SCSI controller, then the disk group becomes unavailable if the controller fails. The controller in this example is a single point of failure.

    To protect against failures of this type, you could use two SCSI controllers, each with two disks, and define a failure group for the disks attached to each controller. This configuration would enable the disk group to tolerate the failure of one SCSI controller.

    Note:

    Define custom failure groups after installation, using the GUI tool ASMCA, the command-line tool asmcmd, or SQL commands (a SQL*Plus sketch appears after this procedure).

    If you define custom failure groups, then for failure groups containing database files only, you must specify a minimum of two failure groups for normal redundancy disk groups and three failure groups for high redundancy disk groups.

    For failure groups containing database files and clusterware files, including voting disks, you must specify a minimum of three failure groups for normal redundancy disk groups, and five failure groups for high redundancy disk groups.

    Disk groups containing voting files must have at least three failure groups for normal redundancy or at least five failure groups for high redundancy; for disk groups that do not contain voting files, the minimums are two and three failure groups, respectively. These minimums apply whether or not you define custom failure groups.

  7. If you are sure that a suitable disk group does not exist on the system, then install or identify appropriate disk devices to add to a new disk group. Use the following guidelines when identifying appropriate disk devices:

    • All of the devices in an Automatic Storage Management disk group should be the same size and have the same performance characteristics.

    • Do not specify multiple partitions on a single physical disk as a disk group device. Automatic Storage Management expects each disk group device to be on a separate physical disk.

    • Although you can specify a logical volume as a device in an Automatic Storage Management disk group, Oracle does not recommend their use. Logical volume managers can hide the physical disk architecture, preventing Automatic Storage Management from optimizing I/O across the physical devices. They are not supported with Oracle RAC.
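
As referenced in step 6, the following is a minimal SQL*Plus sketch of creating a normal redundancy disk group with two custom failure groups, one for each (hypothetical) controller. It assumes the environment is set for the Oracle ASM instance and that the four disks have already been labeled with ASMLIB as DISK1 through DISK4; the disk group and failure group names are illustrative:

$ sqlplus / as sysasm
SQL> CREATE DISKGROUP data NORMAL REDUNDANCY
       FAILGROUP controller1 DISK 'ORCL:DISK1', 'ORCL:DISK2'
       FAILGROUP controller2 DISK 'ORCL:DISK3', 'ORCL:DISK4';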

3.3.1.2 Creating Files on a NAS Device for Use with Automatic Storage Management

If you have a certified NAS storage device, then you can create zero-padded files in an NFS mounted directory and use those files as disk devices in an Automatic Storage Management disk group.

To create these files, follow these steps:

  1. If necessary, create an exported directory for the disk group files on the NAS device.

    Refer to the NAS device documentation for more information about completing this step.

  2. Switch user to root.

  3. Create a mount point directory on the local system. For example:

    # mkdir -p /mnt/oracleasm
    
  4. To ensure that the NFS file system is mounted when the system restarts, add an entry for the file system in the mount file /etc/fstab.

    See Also:

    My Oracle Support note 359515.1 for updated NAS mount option information, available at the following URL:
    https://metalink.oracle.com
    

    For more information about editing the mount file for the operating system, refer to the man pages. For more information about recommended mount options, refer to the section "Checking NFS Mount and Buffer Size Parameters for Oracle RAC".

  5. Enter a command similar to the following to mount the NFS file system on the local system:

    # mount /mnt/oracleasm
    
  6. Choose a name for the disk group to create. For example: sales1.

  7. Create a directory for the files on the NFS file system, using the disk group name as the directory name. For example:

    # mkdir /mnt/oracleasm/sales1
    
  8. Use commands similar to the following to create the required number of zero-padded files in this directory:

    # dd if=/dev/zero of=/mnt/oracleasm/sales1/disk1 bs=1024k count=1000
    

    This example creates 1 GB files on the NFS file system. You must create one, two, or three files respectively to create an external, normal, or high redundancy disk group (a sketch that creates three such files appears after this procedure).

  9. Enter commands similar to the following to change the owner, group, and permissions on the directory and files that you created, where the installation owner is grid, and the OSASM group is asmadmin:

    # chown -R grid:asmadmin /mnt/oracleasm
    # chmod -R 660 /mnt/oracleasm
    
  10. If you plan to install Oracle RAC or a standalone Oracle Database, then during installation, edit the Automatic Storage Management disk discovery string to specify a regular expression that matches the file names you created. For example:

    /mnt/oracleasm/sales1/*
    

    Note:

    During installation, disk paths mounted on Oracle ASM and registered on ASMLIB with the string ORCL:* are listed as default database storage candidate disks.
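
As referenced in step 8, the following is a minimal sketch that creates the three 1 GB files needed for a normal redundancy disk group in the directory created in step 7:

# for i in 1 2 3; do dd if=/dev/zero of=/mnt/oracleasm/sales1/disk$i bs=1024k count=1000; done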

3.3.1.3 Using an Existing Automatic Storage Management Disk Group

To store either database or recovery files in an existing Automatic Storage Management disk group, you have the following choices, depending on the installation method that you select:

  • If you select an installation method that runs Database Configuration Assistant in interactive mode (for example, by choosing the Advanced database configuration option), then you can decide whether you want to create a disk group, or to use an existing one.

    The same choice is available to you if you use Database Configuration Assistant after the installation to create a database.

  • If you select an installation method that runs Database Configuration Assistant in noninteractive mode, then you must choose an existing disk group for the new database; you cannot create a disk group. However, you can add disk devices to an existing disk group if it has insufficient free space for your requirements.

Note:

The Automatic Storage Management instance that manages the existing disk group can be running in a different Oracle home directory.

To determine if an existing Automatic Storage Management disk group exists, or to determine if there is sufficient disk space in a disk group, you can use the ASM command line tool (asmcmd), Oracle Enterprise Manager Grid Control or Database Control. Alternatively, you can use the following procedure:

  1. View the contents of the oratab file to determine if an Automatic Storage Management instance is configured on the system:

    $ more /etc/oratab
    

    If an Automatic Storage Management instance is configured on the system, then the oratab file should contain a line similar to the following:

    +ASM2:oracle_home_path
    

    In this example, +ASM2 is the system identifier (SID) of the Automatic Storage Management instance, with the node number appended, and oracle_home_path is the Oracle home directory where it is installed. By convention, the SID for an Automatic Storage Management instance begins with a plus sign.

  2. Set the ORACLE_SID and ORACLE_HOME environment variables to specify the appropriate values for the Automatic Storage Management instance (an example appears after this procedure).

  3. Connect to the Automatic Storage Management instance and start the instance if necessary:

    $ $ORACLE_HOME/bin/asmcmd
    ASMCMD> startup
    
  4. Enter one of the following commands to view the existing disk groups, their redundancy level, and the amount of free disk space in each one:

    ASMCMD> lsdg
    

    or:

    $ORACLE_HOME/bin/asmcmd -p lsdg
    
  5. From the output, identify a disk group with the appropriate redundancy level and note the free space that it contains.

  6. If necessary, install or identify the additional disk devices required to meet the storage requirements listed in the previous section.

    Note:

    If you are adding devices to an existing disk group, then Oracle recommends that you use devices that have the same size and performance characteristics as the existing devices in that disk group.
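
As referenced in step 2, the following is a minimal sketch of setting the environment in a bash shell; the SID and Grid home path shown are assumptions and should match the values in your oratab file:

$ export ORACLE_SID=+ASM2
$ export ORACLE_HOME=/u01/app/11.2.0/grid
$ export PATH=$ORACLE_HOME/bin:$PATH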

3.3.1.4 Configuring Disks for Automatic Storage Management with ASMLIB

The Automatic Storage Management library driver (ASMLIB) simplifies the configuration and management of the disk devices by eliminating the need to rebind disk devices used with Oracle ASM each time the system is restarted.

Without ASMLIB, in Linux kernel 2.6 and later, block device paths do not maintain permissions and path persistence unless you create a permissions or rules file on each cluster member node; a block device path that was /dev/sda can appear as /dev/sdb after a system restart. Adding new disks requires you to modify the udev file to provide permissions and path persistence for the new disk.

With ASMLIB, you define the range of disks you want to have made available as Oracle ASM disks. ASMLIB maintains permissions and disk labels that are persistent on the storage device, so that the label is available even after an operating system upgrade. You can update storage paths on all cluster member nodes by running one oracleasm command on each node.

If you intend to use Automatic Storage Management on block devices for database storage for Linux, then Oracle recommends that you install the ASMLIB driver and associated utilities, and use them to configure the disks for ASM.

To use the Automatic Storage Management library driver (ASMLIB) to configure Automatic Storage Management devices, complete the following tasks.

Note:

To create a database during the installation using the ASM library driver, you must choose an installation method that runs ASMCA in interactive mode. You must also change the default disk discovery string to ORCL:*.
3.3.1.4.1 Installing and Configuring the ASM Library Driver Software

If you are a member of the Unbreakable Linux Network, then you can install the ASMLIB rpms by subscribing to the Oracle Software for Enterprise Linux channel, and using up2date to retrieve the most current package for your system and kernel. For additional information, refer to the following URL:

http://www.oracle.com/technology/tech/linux/asmlib/uln.html

To install and configure the ASMLIB driver software manually, follow these steps:

  1. Enter the following command to determine the kernel version and architecture of the system:

    # uname -rm
    
  2. Download the required ASMLIB packages from the OTN Web site:

    http://www.oracle.com/technology/tech/linux/asmlib/index.html
    

    Note:

    You must install oracleasm-support package version 2.0.1 or later to use ASMLIB on Red Hat Enterprise Linux Advanced Server, or SUSE Linux Enterprise Server.

    You must install the following packages, where version is the version of the ASMLIB driver, arch is the system architecture, and kernel is the version of the kernel that you are using:

    oracleasm-support-version.arch.rpm
    oracleasm-kernel-version.arch.rpm
    oracleasmlib-version.arch.rpm
    
  3. Switch user to the root user:

    $ su -
    
  4. Enter a command similar to the following to install the packages:

    # rpm -Uvh oracleasm-support-version.arch.rpm \
               oracleasm-kernel-version.arch.rpm \
               oracleasmlib-version.arch.rpm
    

    For example, if you are using the Red Hat Enterprise Linux AS 4 enterprise kernel on an AMD64 system, then enter a command similar to the following:

    # rpm -Uvh oracleasm-support-2.0.1.x86_64.rpm \
               oracleasmlib-2.0.1.x86_64.rpm \
               oracleasm-2.6.9-11.EL-2.0.1.x86_64.rpm
    
  5. Enter the following command to run the oracleasm initialization script with the configure option:

    # /usr/sbin/oracleasm configure -i
    

    Note:

    The oracleasm command in /usr/sbin is the command you should use. The /etc/init.d path is not deprecated, but the oracleasm binary in that path is now typically used for internal commands.
  6. Enter the following information in response to the prompts that the script displays:

    Default user to own the driver interface:
      • Standard groups and users configuration: Specify the Oracle software owner user (for example, oracle).
      • Job role separation groups and users configuration: Specify the Grid Infrastructure software owner (for example, grid).

    Default group to own the driver interface:
      • Standard groups and users configuration: Specify the OSDBA group for the database (for example, dba).
      • Job role separation groups and users configuration: Specify the OSASM group for storage administration (for example, asmadmin).

    Start Oracle Automatic Storage Management Library driver on boot (y/n):
      • Enter y to start the Oracle Automatic Storage Management library driver when the system starts.

    Fix permissions of Oracle ASM disks on boot? (y/n):
      • Enter y to fix permissions of Oracle ASM disks when the system starts.

    The script completes the following tasks:

    • Creates the /etc/sysconfig/oracleasm configuration file

    • Creates the /dev/oracleasm mount point

    • Mounts the ASMLIB driver file system

      Note:

      The ASMLIB driver file system is not a regular file system. It is used only by the Automatic Storage Management library to communicate with the Automatic Storage Management driver.
  7. Enter the following command to load the oracleasm kernel module:

    # /usr/sbin/oracleasm init
    
  8. Repeat this procedure on all nodes in the cluster where you want to install Oracle RAC.

3.3.1.4.2 Configuring Disk Devices to Use ASM Library Driver on x86 Systems

To configure the disk devices to use in an Automatic Storage Management disk group, follow these steps:

  1. If you intend to use IDE, SCSI, or RAID devices in the Automatic Storage Management disk group, then follow these steps:

    1. If necessary, install or configure the shared disk devices that you intend to use for the disk group and restart the system.

    2. To identify the device name for the disks to use, enter the following command:

      # /sbin/fdisk -l
      

      Depending on the type of disk, the device name can vary:

      Disk type: IDE disk
      Device name format: /dev/hdxn
      Description: In this example, x is a letter that identifies the IDE disk and n is the partition number. For example, /dev/hda is the first disk on the first IDE bus.

      Disk type: SCSI disk
      Device name format: /dev/sdxn
      Description: In this example, x is a letter that identifies the SCSI disk and n is the partition number. For example, /dev/sda is the first disk on the first SCSI bus.

      To include devices in a disk group, you can specify either whole-drive device names or partition device names.

      Note:

      Oracle recommends that you create a single whole-disk partition on each disk.
    3. Use either fdisk or parted to create a single whole-disk partition on the disk devices.
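
      For example, to create a single whole-disk partition on a hypothetical device /dev/sdb with fdisk, enter n to add a new partition, p to make it a primary partition, 1 as the partition number, press Enter to accept the default first and last cylinders, and then enter w to write the partition table and exit. A session is similar to the following (prompts are abbreviated and vary with the fdisk version):

      # /sbin/fdisk /dev/sdb
      Command (m for help): n
      Command action
         e   extended
         p   primary partition (1-4)
      p
      Partition number (1-4): 1
      First cylinder (1-1305, default 1): <Enter>
      Last cylinder or +size or +sizeM or +sizeK (1-1305, default 1305): <Enter>
      Command (m for help): w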

  2. Enter a command similar to the following to mark a disk as an Automatic Storage Management disk:

    # /usr/sbin/oracleasm createdisk DISK1 /dev/sdb1
    

    In this example, DISK1 is the name you assign to the disk.

    Note:

    The disk names that you specify can contain uppercase letters, numbers, and the underscore character. They must start with an uppercase letter.

    If you are using a multi-pathing disk driver with Automatic Storage Management, then make sure that you specify the correct logical device name for the disk.

  3. To make the disk available on the other nodes in the cluster, enter the following command as root on each node:

    # /usr/sbin/oracleasm scandisks
    

    This command identifies shared disks attached to the node that are marked as Automatic Storage Management disks.
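
    After scanning, you can confirm that the marked disks are visible on each node by listing them. For example, the disk created in the previous example should appear in the output:

    # /usr/sbin/oracleasm listdisks
    DISK1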

3.3.1.4.3 Administering the ASM Library Driver and Disks

To administer the Automatic Storage Management library driver and disks, use the oracleasm initialization script with different options, as described in Table 3-7.

Table 3-7 ORACLEASM Script Options

Option: configure

Use the configure option to reconfigure the Automatic Storage Management library driver, if necessary:

# /usr/sbin/oracleasm configure -i

To see command options, enter oracleasm configure without the -i flag.

Option: enable, disable

Use the disable and enable options to change the actions of the Automatic Storage Management library driver when the system starts. The enable option causes the Automatic Storage Management library driver to load when the system starts:

# /usr/sbin/oracleasm enable

Option: start, stop, restart

Use the start, stop, and restart options to load or unload the Automatic Storage Management library driver without restarting the system:

# /usr/sbin/oracleasm restart

Option: createdisk

Use the createdisk option to mark a disk device for use with the Automatic Storage Management library driver and give it a name:

# /usr/sbin/oracleasm createdisk DISKNAME devicename

Option: deletedisk

Use the deletedisk option to unmark a named disk device:

# /usr/sbin/oracleasm deletedisk DISKNAME

Caution: Do not use this command to unmark disks that are being used by an Automatic Storage Management disk group. You must delete the disk from the Automatic Storage Management disk group before you unmark it.

Option: querydisk

Use the querydisk option to determine if a disk device or disk name is being used by the Automatic Storage Management library driver:

# /usr/sbin/oracleasm querydisk {DISKNAME | devicename}

Option: listdisks

Use the listdisks option to list the disk names of marked Automatic Storage Management library driver disks:

# /usr/sbin/oracleasm listdisks

Option: scandisks

Use the scandisks option to enable cluster nodes to identify which shared disks have been marked as Automatic Storage Management library driver disks on another node:

# /usr/sbin/oracleasm scandisks

3.3.1.5 Configuring Disk Devices Manually for Oracle ASM

By default, the 2.6 kernel device file naming scheme udev dynamically creates device file names when the server is started, and assigns ownership of them to root. If udev applies default settings, then it changes device file names and owners for voting disks or Oracle Cluster Registry partitions, corrupting them when the server is restarted. For example, a voting disk on a device named /dev/sdd owned by the user grid may be on a device named /dev/sdf owned by root after restarting the server. If you use ASMLIB, then you do not need to ensure permissions and device path persistency in udev.

If you do not use ASMLIB, then you must create a custom rules file. When udev is started, it sequentially carries out rules (configuration directives) defined in rules files. These files are in the path /etc/udev/rules.d/. Rules files are read in lexical order. For example, rules in the file 10-wacom.rules are parsed and carried out before rules in the rules file 90-ib.rules.

Where rules files describe the same devices, on Asianux, Red Hat, and Oracle Enterprise Linux, the last file read is the one that is applied. On SUSE 2.6 kernels, the first file read is the one that is applied.

To configure a permissions file for disk devices, complete the following tasks:

  1. Configure SCSI devices as trusted devices (whitelisted) by editing the /etc/scsi_id.config file and adding "options=-g" to the file. For example:

    # cat > /etc/scsi_id.config
    vendor="ATA",options=-p 0x80
    options=-g
    
  2. Using a text editor, create a UDEV rules file for the Oracle ASM devices, setting permissions to 0660 for the installation owner and the group whose members are administrators of the grid infrastructure software. For example, using the installation owner grid and using a role-based group configuration, with the OSASM group asmadmin:

    # vi /etc/udev/rules.d/99-oracle-asmdevices.rules
    
    KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id",
    RESULT=="14f70656e66696c00000000", OWNER="grid", GROUP="asmadmin", MODE="0660"
    KERNEL=="sd?2", BUS=="scsi", PROGRAM=="/sbin/scsi_id",
    RESULT=="14f70656e66696c00000000", OWNER="grid", GROUP="asmadmin", MODE="0660"
    KERNEL=="sd?3", BUS=="scsi", PROGRAM=="/sbin/scsi_id",
    RESULT=="14f70656e66696c00000000", OWNER="grid", GROUP="asmadmin", MODE="0660"
    
  3. Copy the rules.d file to all other nodes on the cluster. For example:

    # scp 99-oracle-asmdevices.rules root@node2:/etc/udev/rules.d/99-oracle-asmdevices.rules
    
  4. Load updated block device partition tables on all member nodes of the cluster, using /sbin/partprobe devicename. You must do this as root.
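
    For example, where /dev/sdb is one of the shared devices that you partitioned:

    # /sbin/partprobe /dev/sdb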

  5. Enter the command to restart the UDEV service.

    On Asianux, Oracle Enterprise Linux 5, and Red Hat Enterprise Linux 5, the commands are:

    # /sbin/udevcontrol reload_rules
    # /sbin/start_udev
    

    On SUSE Linux Enterprise Server 10, the command is:

    # /etc/init.d/boot.udev restart
    
    

    Check to ensure that your system is configured correctly: the rules should result in the Oracle ASM device files being owned by the installation owner and the OSASM group that you specified, with 0660 permissions.
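
    For example, with the rules file shown in step 2, a listing similar to the following should show the matching device files owned by the grid user and the asmadmin group:

    # ls -l /dev/sd?1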

3.3.2 Using Diskgroups with Oracle Database Files on ASM

Review the following sections to configure Automatic Storage Management storage for Oracle Clusterware and Oracle Database Files:

3.3.2.1 Identifying and Using Existing Oracle Database Diskgroups on ASM

This section describes how to identify existing disk groups and determine the free disk space that they contain; an example command follows at the end of this section.

  • Optionally, identify failure groups for the Automatic Storage Management disk group devices.

    Note:

    Complete this step only if you intend to use an installation method that runs Automatic Storage Management Configuration Assistant (ASMCA) in interactive mode; for example, if you intend to choose the Advanced database configuration option, then complete this step. Typical installation does not enable you to specify failure groups.

    If you intend to use a normal or high redundancy disk group, then you can further protect your database against hardware failure by associating a set of disk devices in a custom failure group. By default, each device comprises its own failure group. However, if two disk devices in a normal redundancy disk group are attached to the same SCSI controller, then the disk group becomes unavailable if the controller fails. The controller in this example is a single point of failure.

    To protect against failures of this type, you could use two SCSI controllers, each with two disks, and define a failure group for the disks attached to each controller. This configuration would enable the disk group to tolerate the failure of one SCSI controller.

    Note:

    If you define custom failure groups, then you must specify a minimum of two failure groups for normal redundancy and three failure groups for high redundancy.
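
For example (a minimal sketch, which assumes that the ORACLE_SID and ORACLE_HOME environment variables are set for the Oracle ASM instance in the grid home), you can list existing disk groups and the free space that they contain by running the ASMCMD utility as the grid installation owner:

$ asmcmd lsdg

The Total_MB and Free_MB columns in the output show the total size and free space of each mounted disk group.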

3.3.2.2 Creating Diskgroups for Oracle Database Data Files

If you are sure that a suitable disk group does not exist on the system, then install or identify appropriate disk devices to add to a new disk group. Use the following guidelines when identifying appropriate disk devices:

  • All of the devices in an Automatic Storage Management disk group should be the same size and have the same performance characteristics.

  • Do not specify multiple partitions on a single physical disk as a disk group device. Automatic Storage Management expects each disk group device to be on a separate physical disk.

  • Although you can specify a logical volume as a device in an Automatic Storage Management disk group, Oracle does not recommend this. Logical volume managers can hide the physical disk architecture, preventing Automatic Storage Management from optimizing I/O across the physical devices. Logical volumes are not supported with Oracle RAC.
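
As a sketch only (the disk group name, failure group names, and ASMLIB disk names shown are hypothetical), after identifying suitable devices you can create a disk group with custom failure groups by connecting to the Oracle ASM instance with the SYSASM privilege and running a statement similar to the following:

$ sqlplus / AS SYSASM

SQL> CREATE DISKGROUP data NORMAL REDUNDANCY
       FAILGROUP fg1 DISK 'ORCL:DISK1', 'ORCL:DISK2'
       FAILGROUP fg2 DISK 'ORCL:DISK3', 'ORCL:DISK4';

Alternatively, you can create disk groups interactively with ASM Configuration Assistant (ASMCA).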

3.3.3 Configuring Oracle Automatic Storage Management Cluster File System (ACFS)

Oracle ACFS is installed as part of an Oracle grid infrastructure installation (Oracle Clusterware and Automatic Storage Management) for 11g release 2 (11.2).

Note:

Oracle ACFS is supported only on Oracle Enterprise Linux 5.0 and Red Hat Enterprise Linux 5.0. Oracle ACFS is not supported on any other Linux release, even if that release is supported with Oracle grid infrastructure for a cluster 11g release 2 (11.2).

To configure Automatic Storage Management Cluster File System for an Oracle Database home for an Oracle RAC database:

  1. Install Oracle grid infrastructure for a cluster (Oracle Clusterware and Automatic Storage Management).

  2. Change directory to the grid infrastructure home. For example:

    $ cd /u01/app/11.2.0/grid
    
  3. Start ASM Configuration Assistant as the grid installation owner. For example:

    $ ./asmca
    
  4. The Configure ASM: ASM Disk Groups page shows you the Oracle ASM disk group you created during installation. Click the ASM Cluster File Systems tab.

  5. On the ASM Cluster File Systems page, right-click the Data disk, then select Create ACFS for Database Home.

  6. In the Create ACFS Hosted Database Home window, enter the following information:

    • Database Home ADVM Volume Device Name: Enter the name of the database home. The name must be unique in your enterprise. For example: dbase_01

    • Database Home Mountpoint: Enter the directory path for the mountpoint. For example: /u02/acfsmounts/dbase_01

      Make a note of this mountpoint for future reference.

    • Database Home Size (GB): Enter in gigabytes the size you want the database home to be.

    • Database Home Owner Name: Enter the name of the Oracle Database installation owner you plan to use to install the database. For example: oracle1

    • Database Home Owner Group: Enter the OSDBA group whose members you plan to provide when you install the database. Members of this group are given operating system authentication for the SYSDBA privileges on the database. For example: dba1

    • Click OK when you have completed your entries.

  7. During Oracle RAC installation, ensure that you or the DBA who installs Oracle RAC selects for the Oracle home the mountpoint you provided in the Database Home Mountpoint field (in the preceding example, /u02/acfsmounts/dbase_01).
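
After ASMCA creates and registers the Oracle ACFS file system, you can confirm on each node that it is mounted at the expected location. For example, using the example mountpoint above:

$ df -h /u02/acfsmounts/dbase_01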

See Also:

Oracle Database Storage Administrator's Guide for more information about configuring and managing your storage with Oracle ACFS

3.3.4 Migrating Existing Oracle ASM Instances

If you have an Oracle ASM installation from a prior release installed on your server, or in an existing Oracle Clusterware installation, then you can use Automatic Storage Management Configuration Assistant (ASMCA, located in the path Grid_home/bin) to upgrade the existing Oracle ASM instance to 11g release 2 (11.2), and subsequently configure failure groups, ASM volumes and Automatic Storage Management Cluster File System (ACFS).

Note:

You must first shut down all database instances and applications on the node with the existing Oracle ASM instance before upgrading it.

During installation, if you chose to use Oracle ASM and ASMCA detects that there is a prior Oracle ASM version installed in another ASM home, then after installing the Oracle ASM 11g release 2 (11.2) binaries, you can start ASMCA to upgrade the existing Oracle ASM instance. You can then choose to configure an ACFS deployment by creating ASM volumes and using the upgraded Oracle ASM to create the ACFS.

On an existing Oracle Clusterware or Oracle RAC installation, if the prior version of Oracle ASM instances on all nodes is 11g release 1, then you are provided with the option to perform a rolling upgrade of Oracle ASM instances. If the prior version of Oracle ASM instances on an Oracle RAC installation are from a release prior to 11g release 1, then rolling upgrades cannot be performed. Oracle ASM on all nodes will be upgraded to 11g release 2 (11.2).
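
For example, to start ASMCA as the grid installation owner from the grid home path used earlier in this chapter:

$ cd /u01/app/11.2.0/grid/bin
$ ./asmca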

3.3.5 Converting Standalone Oracle ASM Installations to Clustered Installations

If you have existing standalone Oracle ASM installations on one or more nodes you select as member nodes of the cluster, then OUI proceeds to install Oracle grid infrastructure for a cluster.

If you place Oracle Clusterware files (OCR and voting disks) on Oracle ASM, then ASMCA is started at the end of the clusterware installation, and provides prompts for you to migrate and upgrade the Oracle ASM instance on the local node, so that you have an Oracle ASM 11g release 2 (11.2) installation.

On remote nodes, ASMCA identifies any standalone Oracle ASM instances that are running, and prompts you to shut down those Oracle ASM instances, and any database instances that use them. ASMCA then extends clustered Oracle ASM instances to all nodes in the cluster. However, diskgroup names on the cluster-enabled Oracle ASM instances must be different from existing standalone diskgroup names.

3.4 Desupport of Block and Raw Devices

With the release of Oracle Database 11g release 2 (11.2) and Oracle RAC 11g release 2 (11.2), using Database Configuration Assistant or the installer to store Oracle Clusterware or Oracle Database files on block or raw devices is not supported.

If you intend to upgrade an existing Oracle RAC database, or an Oracle RAC database with Oracle ASM instances, then you can use an existing raw or block device partition, and perform a rolling upgrade of your existing installation. Performing a new installation using block or raw devices is not allowed.