Monday, 12 December 2011

Installing and Configuring Oracle Clusterware and Oracle RAC

This chapter explains how to install Oracle Real Application Clusters (Oracle RAC)
using Oracle Universal Installer (OUI). You must install Oracle Clusterware before
installing Oracle RAC. After your Oracle Clusterware is operational, you can use OUI
to install the Oracle Database software with the Oracle RAC components.
The example Oracle RAC environment described in this guide uses Oracle Automatic
Storage Management (ASM), so this chapter also includes instructions on how to
install ASM in its own home directory.
This chapter includes the following sections:
■ Preparing the Oracle Media Installation File
■ Installing Oracle Clusterware 10g
■ Configuring Automatic Storage Management in an ASM Home Directory
■ Installing the Oracle Database Software and Creating a Cluster Database
■ Performing Postinstallation Tasks
■ Converting an Oracle Database to an Oracle RAC Database
Preparing the Oracle Media Installation File
Oracle Clusterware is not installed as part of Oracle Database 10g, but is installed from
the Oracle Clusterware installation media. Because Oracle Clusterware works closely
with the operating system, system administrator access is required for some of the
installation tasks. In addition, some of the Oracle Clusterware processes must run as
the special operating system user, root.
The Oracle RAC Database software is installed from the Oracle Database 10g
installation media. By default, the standard Oracle Database 10g software installation
process installs the Oracle RAC option when OUI recognizes that you are performing
the installation on a cluster. OUI installs Oracle RAC into a directory structure that is
referred to as Oracle_home. This home is separate from the home directories of other
Oracle software products installed on the same server.
If the Oracle Clusterware installation software and Oracle Database installation
software are in ZIP files, create a staging directory on one node, for example,
docrac1, to store the unzipped files, as shown here:
mkdir -p /stage/oracle/10.2.0
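If you create the staging directory as the root user, give ownership of it to the
oracle user before copying files into it. For example, as the root user (assuming
the oinstall group used later in this chapter):
chown -R oracle:oinstall /stage/oracle/10.2.0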

Copy the ZIP files to this staging directory. For example, if the files were downloaded
to a directory named /home/user1, and the ZIP files are named
10201_clusterware_linux32.zip and 10201_database_linux32.zip, you
would use the following commands to copy the ZIP files to the staging directory:
cd /home/user1
cp 10201_clusterware_linux32.zip /stage/oracle/10.2.0
cp 10201_database_linux32.zip /stage/oracle/10.2.0
Then, as the oracle user on docrac1, unzip the Oracle media, as shown in the
following example:
cd /stage/oracle/10.2.0
unzip 10201_clusterware_linux32.zip
unzip 10201_database_linux32.zip
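Unzipping the media creates product subdirectories under the staging directory. As a
quick check, you can list the staging directory; the clusterware subdirectory used
later in this chapter should now be present:
[oracle] $ ls /stage/oracle/10.2.0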
If you have the Oracle Clusterware and Oracle Database software on CDs, insert the
distribution media for the database into a disk drive on your computer. Make sure the
disk drive has been mounted at the operating system level.
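On many Linux systems the disc is mounted automatically; if it is not, you can mount
it manually as the root user. The device and mount point names used here are examples
only; substitute the correct values for your system:
mount /dev/cdrom /mnt/cdrom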
Installing Oracle Clusterware 10g
The following topics describe the process of installing Oracle Clusterware:
■ Configuring the Operating System Environment
■ Verifying the Configuration Using the Cluster Verification Utility
■ Using Oracle Universal Installer to Install Oracle Clusterware
■ Completing the Oracle Clusterware Configuration
Configuring the Operating System Environment
You run Oracle Universal Installer from the oracle user account. However, before
you start Oracle Universal Installer you must configure the environment of the
oracle user. You must set the ORACLE_SID and ORACLE_BASE environment
variables to the desired values for your environment.
For example, if you want to create an Oracle database named sales on the mount
point directory /opt/oracle, you would set ORACLE_SID to sales and
ORACLE_BASE to the directory /opt/oracle/10gR2.
To modify the user environment on Red Hat Linux:
1. As the oracle user, modify the user profile in the /home/oracle directory on
both nodes using the following commands:
[oracle] $ cd $HOME
[oracle] $ vi .bash_profile
Add the following lines at the end of the file:
export ORACLE_SID=sales
export ORACLE_BASE=/opt/oracle/10gR2
export ORACLE_HOME=/opt/oracle/crs
export PATH=$ORACLE_HOME/bin:$PATH
In the previous example, the ORACLE_HOME variable has been set to the location of
the Oracle Clusterware home directory. After Oracle Clusterware has been
installed, the ORACLE_HOME environment variable will be modified to point to the
Oracle Database home directory.
2. Read and execute the changes made to the .bash_profile file:
source .bash_profile
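To confirm that the variables are set correctly in the current session, you can
display them, for example:
[oracle] $ echo $ORACLE_SID $ORACLE_BASE $ORACLE_HOME
sales /opt/oracle/10gR2 /opt/oracle/crs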
Verifying the Configuration Using the Cluster Verification Utility
If you have not configured your nodes, network, and operating system correctly, your
installation of the Oracle Clusterware or Oracle Database 10g software will not
complete successfully.
As the oracle user, change directories to the staging directory for the Oracle
Clusterware software, or to the mounted installation disk. Then, enter the following
command to verify your hardware and operating system setup, where
staging_area is the location of the installation media (for example,
/home/oracle/downloads/10gR2/10.2.0 or /dev/dvdrom):
[oracle] $ cd /staging_area/clusterware/cluvfy
[oracle] $ ./runcluvfy.sh stage -pre crsinst -n docrac1,docrac2 -verbose
The preceding command instructs the CVU to verify that the system meets all the
criteria for an Oracle Clusterware installation. It checks that all the nodes are reachable
from the local node, that proper user equivalence exists, that connectivity exists
between all the nodes through the public and private interconnects, that the user has
proper permissions to install the software, and that all system requirements (including
kernel version, kernel parameters, memory, swap space, temporary directory space, and
required software packages) are met.
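If one of the checks fails, the CVU can also verify individual components after you
correct the problem, which is often faster than rerunning the full pre-installation
stage. For example, to recheck only node connectivity between the two nodes, you could
use a command similar to the following:
[oracle] $ ./runcluvfy.sh comp nodecon -n docrac1,docrac2 -verbose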
Using Oracle Universal Installer to Install Oracle Clusterware
As the oracle user on the docrac1 node, install Oracle Clusterware. Note that OUI
uses Secure Shell (SSH) to copy the binary files from docrac1 to docrac2 during the
installation.
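Because OUI relies on SSH user equivalence between the nodes, confirm before starting
the installer that the oracle user can connect from docrac1 to docrac2 without being
prompted for a password, for example:
[oracle@docrac1 oracle]$ ssh docrac2 date
If the command returns the date from docrac2 without prompting for a password or
passphrase, user equivalence is configured correctly.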
Note: For the RMAN utility to work properly, the
$ORACLE_HOME/bin directory must appear in the PATH variable
before the /usr/X11R6/bin directory on Linux platforms.
See Also: Oracle Database Oracle Clusterware and Oracle Real
Application Clusters Administration and Deployment Guide for more
information about resolving the CVU errors
Note: If you are installing Oracle Clusterware on a server that
already has a single-instance Oracle Database 10g installation, then
stop the existing ASM instances, if any. After Oracle Clusterware is
installed, start up the ASM instances again. When you restart the
single-instance Oracle database and then the ASM instances, the ASM
instances use the Cluster Synchronization Services Daemon (CSSD)
instead of the daemon for the single-instance Oracle database.
To install Oracle Clusterware:
1. Use the following command to start Oracle Universal Installer, where
staging_area is the location of the staging area on disk, or the location of the
mounted installation disk:
cd /staging_area/clusterware
./runInstaller
2. The OUI Welcome window appears. Click Next.
3. If you have not installed any Oracle software previously on this server, the Specify
Inventory directory and credentials window appears. The path displayed for the
inventory directory should be the oraInventory subdirectory of your Oracle
base directory. For example, if you set the ORACLE_BASE environment variable to
/opt/oracle/10gR2 before starting OUI, then the path displayed is
/opt/oracle/10gR2/oraInventory. For the operating system group name,
choose oinstall. Click Next.
4. The Specify Home Details window appears. Accept the default value for the Name
field, which is the name of the Oracle home directory for this product. In the Path
field, click Browse to navigate to and select the directory /opt/oracle/crs, if this
path is not already displayed.
After you have selected the path, click Next.
5. The next window, Product-Specific Prerequisite Checks, appears after a short
period of time. When you see the message "Check complete. The overall result of
this check is: Passed", click Next.
6. The Specify Cluster Configuration window appears.
Change the default cluster name from crs to a name that is unique throughout
your entire enterprise network. For example, you might choose a name that is
based on the node names' common prefix. This guide will use the cluster name
docrac.
The local node, docrac1, appears in the Cluster Nodes section. If the cluster node
names include the domain name, click Edit and remove the domain name from
the public, private, and virtual node names. For example, if the node name is
docrac1, edit the entries so that they are displayed as docrac1, docrac1-priv,
and docrac1-vip. When you have finished removing the domain names in the
"Modify a node in the existing cluster" window, click OK.
When you are returned to the Specify Cluster Configuration window, click Add.
In the "Add a new node to the existing cluster" dialog window, enter the second
node's public name (docrac2), private name (docrac2-priv), and virtual IP
name (docrac2-vip), then click OK.
The Specify Cluster Configuration window now displays both nodes in the
Cluster Nodes section.
Click Next.
7. The Specify Network Interface Usage window appears. Verify that eth0 and eth1 are
configured correctly (the proper subnet and interface type are displayed), then click Next.
The Specify Oracle Cluster Registry (OCR) Location window appears.
8. Choose Normal Redundancy for the OCR Configuration. You will be prompted
for two file locations. In the Specify OCR Location field enter the name of the
device configured for the first OCR file. For example, /dev/raw/raw1. In the
Specify OCR Mirror Location field, enter the name of the device configured for the
OCR mirror file, for example /dev/raw/raw2. When finished, click Next. During
installation, the OCR data will be written to the specified locations.
The Specify Voting Disk Location window appears.
9. Select Normal Redundancy for the voting disk location. You will be prompted for
three file locations. For the Voting Disk Location, enter the name of the device
configured for the first voting disk file, for example, /dev/raw/raw3. Repeat this
process for the other two Voting Disk Location fields. When finished, click Next.
10. The OUI Summary window appears. Review the contents of the Summary
window and then click Install.
OUI displays a progress indicator during the installation process.
11. During the installation process, the Execute Configuration Scripts window
appears. Do not click OK until you have run the scripts.
The Execute Configuration Scripts window shows configuration scripts, and the
path where the configuration scripts are located. Run the scripts on all nodes as
directed, in the order shown. For example, on Red Hat Linux you perform the
following steps (note that for clarity, the examples show the current user, node
and directory in the prompt):
a. As the oracle user on docrac1, open a terminal window, and enter the
following commands:
[oracle@docrac1 oracle]$ cd /opt/oracle/10gR2/oraInventory
[oracle@docrac1 oraInventory]$ su
b. Enter the password for the root user, and then enter the following command
to run the first script on docrac1:
[root@docrac1 oraInventory]# ./orainstRoot.sh
c. After the orainstRoot.sh script finishes on docrac1, open another
terminal window, and as the oracle user, enter the following commands:
[oracle@docrac1 oracle]$ ssh docrac2
[oracle@docrac2 oracle]$ cd /opt/oracle/10gR2/oraInventory
[oracle@docrac2 oraInventory]$ su
d. Enter the password for the root user, and then enter the following command
to run the first script on docrac2:
[root@docrac2 oraInventory]# ./orainstRoot.sh
e. After the orainstRoot.sh script finishes on docrac2, go to the terminal
window you opened in step b. As the root user on docrac1, enter the
following commands to run the second script, root.sh:
[root@docrac1 oraInventory]# cd /opt/oracle/crs
[root@docrac1 crs]# ./root.sh
At the completion of this script, the following message is displayed:
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.
f. After the root.sh script finishes on docrac1, go to the terminal window
you opened in step c. As the root user on docrac2, enter the following
commands:
[root@docrac2 oraInventory]# cd /opt/oracle/crs
[root@docrac2 crs]# ./root.sh
After the root.sh script completes, return to the OUI window where the Installer
prompted you to run the orainstRoot.sh and root.sh scripts. Click OK.
12. The Configuration Assistants window appears. When the configuration assistants
finish, OUI displays the End of Installation window. Click Exit to complete the
installation process.
If you encounter any problems, refer to the configuration log for information. The
path to the configuration log is displayed on the Configuration Assistants
window.
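After the installation completes, you can confirm from either node that Oracle
Clusterware is running and that both nodes are registered with the cluster. The
following is a quick check that assumes the Oracle Clusterware home /opt/oracle/crs
used in this chapter:
[oracle@docrac1 oracle]$ /opt/oracle/crs/bin/olsnodes
docrac1
docrac2
[oracle@docrac1 oracle]$ /opt/oracle/crs/bin/crsctl check crs
The olsnodes command should list both cluster nodes, and crsctl check crs should
report that the Cluster Synchronization Services, Cluster Ready Services, and Event
Manager daemons are healthy.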


  Computer Programmer/(Oracle|JDEdwards)
  Syed@ju.edu.sa

http://syed7861.blogshot.com/