9 Adding Nodes and Instances

This chapter describes how to add nodes and instances in Oracle Real Application Clusters (Oracle RAC) environments. You can use these methods when configuring a new Oracle RAC cluster, or when scaling up an existing Oracle RAC cluster.

This chapter includes the following sections:

    Preparing Access to the New Node
    Extending the Oracle Clusterware Home Directory
    Extending the Oracle Automatic Storage Management Home Directory
    Extending the Oracle RAC Software Home Directory
    Creating a Listener on the New Node
    Adding a New Cluster Instance on the New Node

For this chapter, it is very important that you perform each step in the order shown.

See Also:

Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide for more information about adding and removing nodes from your cluster database

Preparing Access to the New Node

To prepare the new node prior to installing the Oracle software, refer to Chapter 2, "Preparing Your Cluster".

It is critical that you complete these configuration steps for the following procedures to work. These steps include, but are not limited to, the following:

    Adding the public and private node names for the new node to the /etc/hosts file on the existing nodes

    Verifying that the new node can be accessed (using the ping command) from the existing nodes

    Running the following command on an existing node to verify that the new node has been properly configured:

    cluvfy stage -pre crsinst -n docrac3

Extending the Oracle Clusterware Home Directory

Now that the new node has been configured to support Oracle Clusterware, you use Oracle Universal Installer (OUI) to add an Oracle Clusterware home to the node being added to your Oracle RAC cluster. This chapter assumes that you are adding a node named docrac3 and that you have already successfully installed Oracle Clusterware on docrac1 in a nonshared home, where CRS_home represents that Oracle Clusterware home.

To extend the Oracle Clusterware installation to include the new node:

  1. Verify that the $ORACLE_HOME environment variable on docrac1 points to the successfully installed Oracle Clusterware home on that node.
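
    For example, assuming the Bourne or bash shell, the following command displays the current setting; the output shown here is the Oracle Clusterware home path used in this example:

    echo $ORACLE_HOME
    /opt/oracle/crs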

  2. Go to CRS_home/oui/bin and run the addNode.sh script.

    cd /opt/oracle/crs/oui/bin
    ./addNode.sh
    
    

    OUI starts and first displays the Welcome window.

  3. Click Next.

    The Specify Cluster Nodes to Add to Installation window appears.

  4. Select the node or nodes that you want to add. After selecting docrac3, click Next.

  5. Verify the entries that OUI displays in the Summary window, then click Next.

  6. Run the rootaddNode.sh script from the CRS_home/install/ directory on docrac1 when prompted to do so.

    This script adds the node applications of the new node to the OCR configuration.
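
    For example, assuming the Oracle Clusterware home is /opt/oracle/crs, as shown in step 2, run the following commands as the root user on docrac1:

    cd /opt/oracle/crs/install
    ./rootaddNode.sh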

  7. Run the orainstRoot.sh script on the node docrac3 if OUI prompts you to do so.

  8. Run the CRS_home/root.sh script on the node docrac3 to start Oracle Clusterware on the new node.

  9. Add the new node's Oracle Notification Services (ONS) configuration information to the shared Oracle Cluster Registry (OCR). To obtain the ONS port identifier used by the new node, which you need for the next step, run the following command from the CRS_home/opmn/conf directory on the docrac1 node:

    cat ons.config
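
    The output of this command is similar to the following, where the remoteport value is the ONS port identifier you need for the next step (the port values shown here are examples only):

    localport=6113
    remoteport=6200
    loglevel=3
    useocr=on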
    
    

    After you locate the ONS port number for the new node, you must make sure that the ONS on docrac1 can communicate with the ONS on the new node, docrac3.

  10. From the CRS_home/bin directory on the node docrac1, run the Oracle Notification Services configuration utility as shown in the following example, where remote_port is the port number from step 9, and docrac3 is the name of the node that you are adding:

    ./racgons add_config docrac3:remote_port
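
    For example, if the remote port listed in the ons.config file is 6200, you would run:

    ./racgons add_config docrac3:6200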
    
    

At the end of the node addition process, you should have Oracle Clusterware running on the new node. To verify the installation of Oracle Clusterware on the new node, you can run the following command as the root user on the newly configured node, docrac3:

CRS_home/bin/cluvfy stage -post crsinst -n docrac3 -verbose

Extending the Oracle Automatic Storage Management Home Directory

To extend an existing Oracle RAC database to a new node, you must configure the shared storage for the new database instances that will be created on the new node. You must configure access to the same shared storage that is already used by the existing database instances in the cluster. For example, the sales cluster database in this guide uses Oracle Automatic Storage Management (ASM) for the database shared storage, so you must configure ASM on the node being added to the cluster.

Because you installed ASM in its own home directory, you must configure an ASM home on the new node using OUI. The procedure for adding an ASM home to the new node is very similar to the procedure you just completed for extending Oracle Clusterware to the new node.

To extend the ASM installation to include the new node:

  1. Ensure that you have successfully installed the ASM software on at least one node in your cluster environment. To use these procedures as shown, your $ASM_HOME environment variable must identify your successfully installed ASM home directory.

  2. Go to the $ASM_HOME/oui/bin directory on docrac1 and run the addNode.sh script.
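
    For example:

    cd $ASM_HOME/oui/bin
    ./addNode.sh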

  3. When OUI displays the Node Selection window, select the node to be added (docrac3), then click Next.

  4. Verify the entries that OUI displays on the Summary window, then click Next.

  5. Run the root.sh script on the new node, docrac3, from the ASM home directory on that node when OUI prompts you to do so.

You now have a copy of the ASM software on the new node.

Extending the Oracle RAC Software Home Directory

Now that you have extended the Oracle Clusterware and ASM homes to the new node, you must extend the Oracle Database home on docrac1 to docrac3. The following steps assume that you have already completed the previous tasks described in this chapter, and that docrac3 is already a member node of the cluster to which docrac1 belongs.

The procedure for adding an Oracle RAC home to the new node is very similar to the procedure you just completed for extending ASM to the new node.

To extend the Oracle RAC installation to include the new node:

  1. Ensure that you have successfully installed the Oracle RAC software on at least one node in your cluster environment. To use these procedures as shown, your $ORACLE_HOME environment variable must identify your successfully installed Oracle RAC home directory.

  2. Go to the $ORACLE_HOME/oui/bin directory on docrac1 and run the addNode.sh script.
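
    For example:

    cd $ORACLE_HOME/oui/bin
    ./addNode.sh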

  3. When OUI displays the Specify Cluster Nodes to Add to Installation window, select the node to be added (docrac3), then click Next.

  4. Verify the entries that OUI displays in the Cluster Node Addition Summary window, then click Next.

  5. Run the root.sh script on the new node, docrac3, from the $ORACLE_HOME directory on that node when OUI prompts you to do so.

After completing these steps, you should have an installed Oracle RAC home on the new node.

Creating a Listener on the New Node

To service database instance connection requests on the new node, you must create a Listener on that node. Use the Oracle Net Configuration Assistant (NETCA) to create a Listener on the new node. Before beginning this procedure, ensure that your existing nodes have the $ORACLE_HOME environment variable set correctly.

To create a new Listener on the new node using Oracle Net Configuration Assistant:

  1. Start the Oracle Net Configuration Assistant by entering netca at the system prompt from the $ORACLE_HOME/bin directory.
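
    For example:

    cd $ORACLE_HOME/bin
    ./netca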

    NETCA displays the Welcome window. Click Help on any NETCA window for additional information.

  2. Select Listener configuration, and click Next.

    NETCA displays the Listener Configuration, Listener window.

  3. Select Add to create a new Listener, then click Next.

    NETCA displays the Listener Configuration, Listener Name window.

  4. Accept the default value of LISTENER for the Listener name by clicking Next.

    NETCA displays the Listener Configuration, Select Protocols window.

  5. Choose TCP and move it to the Selected Protocols area, then click Next.

    NETCA displays the Listener Configuration, TCP/IP Protocol window.

  6. Choose Use the standard port number of 1521, then click Next.

    NETCA displays the Real Application Clusters window.

  7. Select Cluster configuration for the type of configuration to perform, then click Next.

    NETCA displays the Real Application Clusters, Active Nodes window.

  8. Select the name of the node you are adding, for example docrac3, then click Next.

    NETCA creates a Listener using the configuration information provided. You can now exit NETCA.

You should now have a Listener named LISTENER running on the new node.

At this point, you should perform any needed service configuration procedures for the new database instance as described in Chapter 7, "Managing Database Workload Using Services".

See Also:

Oracle Database Net Services Administrator's Guide for more information about configuring a Listener using Oracle Net Configuration Assistant

Adding a New Cluster Instance on the New Node

You can use the Oracle Database Configuration Assistant (DBCA) to add database instances to new nodes. Before beginning this procedure, ensure that your existing nodes have the $ORACLE_HOME environment variable set correctly.

To create a new cluster instance on the new node using DBCA:

  1. Start DBCA by entering dbca at the system prompt from the $ORACLE_HOME/bin directory.
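
    For example:

    cd $ORACLE_HOME/bin
    ./dbca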

    DBCA displays the Welcome window for Oracle RAC. Click Help on any DBCA page for additional information.

  2. Select Oracle Real Application Clusters database, and then click Next.

    DBCA displays the Operations window.

  3. Select Instance Management, and then click Next.

    DBCA displays the Instance Management window.

  4. Select Add an Instance, then click Next.

    DBCA displays the List of Cluster Databases window, which shows the databases and their current status, such as ACTIVE or INACTIVE.

  5. In the List of Cluster Databases window, select the active Oracle RAC database to which you want to add an instance, for example sales. Enter the user name and password for the database user that has SYSDBA privileges. Click Next.

    DBCA spends a few minutes performing tasks in the background, and then displays the Instance naming and node selection window.

  6. In the Instance naming and node selection window, enter the instance name in the field at the top of this window if the default instance name provided by DBCA does not match your existing instance naming scheme. For example, instead of the sales3 instance, you might want to create the sales_03 instance.

    Click Next to accept the default instance name of sales3.

    DBCA displays the Instance Storage window.

  7. In the Instance Storage window, you have the option of changing the default storage options and file locations for the new database instance. In this example, you accept all the default values and click Finish.

    DBCA displays the Summary window.

  8. Review the information in the Summary window, then click OK to start the database instance addition operation. DBCA displays a progress dialog box while it performs the instance addition.

  9. During the instance addition operation, if you are using ASM for your cluster database storage, DBCA detects the need for a new ASM instance on the new node.

    When DBCA displays a dialog box asking if you want ASM to be extended, click Yes.

    After DBCA extends ASM on the new node and completes the instance addition operation, DBCA displays a dialog box asking whether you want to perform another operation. Click No to exit DBCA.

You should now have a new cluster database instance and ASM instance running on the new node. After you terminate your DBCA session, you should run the following command to verify the administrative privileges on the new node and obtain detailed information about these privileges:

CRS_home/bin/cluvfy comp admprv -o db_config -d oracle_home -n docrac3 -verbose
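
For example, if the Oracle home for the sales database on docrac3 is /opt/oracle/product/10.2.0/db_1 (a hypothetical path; substitute your actual Oracle RAC home for oracle_home), the command would be:

CRS_home/bin/cluvfy comp admprv -o db_config -d /opt/oracle/product/10.2.0/db_1 -n docrac3 -verbose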