CONVERT NON ASM (File System) SINGLE INSTANCE DATABASE TO RAC DATABASE USING RCONFIG: Oracle 11g R2 RAC:

To convert a standalone database to a RAC database, both environments must run on the same operating system and use the same Oracle release. Oracle supports the following methods to convert a single-instance database to a RAC database:

  1. DBCA
  2. Oracle Enterprise Manager (grid control)
  3. RCONFIG
  4. Manual method

In this post, I will demonstrate how to convert a non-ASM single-instance database to a RAC database using the rconfig command-line tool. During the conversion, rconfig performs the following steps automatically:

  • Migrating the database to ASM, if specified
  • Creating RAC database instances on all specified nodes in the cluster.
  • Configuring the Listener and Net Service entries.
  • Registering services with CRS.
  • Starting up the instances and listener on all nodes.

In Oracle 11g R2, a single-instance database can either be converted to an administrator-managed cluster database or a policy-managed cluster database.

When you navigate to the $ORACLE_HOME/assistants/rconfig/sampleXMLs directory, you will find two sample XML input files.

  • ConvertToRAC_AdminManaged.xml
  • ConvertToRAC_PolicyManaged.xml

When converting a single-instance database on file system storage to a RAC database on Automatic Storage Management (ASM), rconfig invokes RMAN internally to copy the database into ASM. To make this copy faster, it is a good idea to increase the PARALLELISM in the RMAN configuration settings: more RMAN channels on the local node make the backup run faster and therefore shorten the overall conversion. For example, you may configure the following in the RMAN settings of the pawdb database on the local node:

RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 3;
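
If you want to confirm that the new setting is in effect before starting the conversion, you can check it from RMAN on the source database (an optional sanity check of mine, not something rconfig requires):

[oracle@paw-racnode1 ~]$ rman target /

RMAN> SHOW DEVICE TYPE;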

CURRENT SCENARIO:

  • RAC 2 Node Cluster setup
  • Names of nodes : paw-racnode1, paw-racnode2
  • Name of Multi instance RAC database with ASM storage : racdb
  • Name of single instance database with file system storage : pawdb
  • Source Oracle home : /u01/app/oracle/product/11.2.0/db_1
  • Target Oracle home : /u01/app/oracle/product/11.2.0/db_1

OBJECTIVE:

  • Convert pawdb single instance Non ASM database to an Admin managed RAC database running on two nodes paw-racnode1 and paw-racnode2.
  • Change storage from file system to ASM, with:
  • Data files on +PAWDB_DATA diskgroup
  • Flash recovery area on +FRA diskgroup

IMPLEMENTATION:

– Created a single-instance file system (non-ASM) database: pawdb

[oracle@paw-racnode1 ~ ]$ srvctl config database -d pawdb

Database unique name: pawdb

Database name: pawdb

Oracle home: /u01/app/oracle/product/11.2.0/db_1

Oracle user: oracle

Spfile:

Domain:

Start options: open

Stop options: immediate

Database role: PRIMARY

Management policy: AUTOMATIC

Server pools: pawdb

Database instances: pawdb

Disk Groups:

Services:

Database is administrator managed

[grid@paw-racnode1 ~]$ srvctl status database -d pawdb

Instance pawdb is running on node paw-racnode1

[oracle@paw-racnode1 ~]$ . oraenv

ORACLE_SID = [orcl] ? pawdb

The Oracle base for ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1 is /u01/app/oracle

[oracle@paw-racnode1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Sat Dec 10 16:33:52 2016

Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Connected to:

Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 – 64bit Production

With the Partitioning, Real Application Clusters, OLAP, Data Mining

and Real Application Testing options

SQL> select name from v$datafile;

NAME

--------------------------------------------------------------------------------

/u01/app/oracle/oradata/pawdb/system01.dbf

/u01/app/oracle/oradata/pawdb/sysaux01.dbf

/u01/app/oracle/oradata/pawdb/undotbs01.dbf

/u01/app/oracle/oradata/pawdb/users01.dbf

/u01/app/oracle/oradata/pawdb/example01.dbf
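
Before editing the XML, it can also be useful to note the current log mode and the total datafile size, since rconfig will copy the datafiles into ASM through RMAN. These are optional queries of my own, not a step rconfig demands:

SQL> select log_mode from v$database;

SQL> select round(sum(bytes)/1024/1024/1024,2) "SIZE_GB" from v$datafile;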

– Copy ConvertToRAC_AdminManaged.xml to another file convert.xml

[oracle@paw-racnode1 ~]$ cd $ORACLE_HOME/assistants/rconfig/sampleXMLs

[oracle@paw-racnode1 sampleXMLs]$ cp ConvertToRAC_AdminManaged.xml convert.xml

[oracle@paw-racnode1 sampleXMLs]$ cat convert.xml

<?xml version="1.0" encoding="UTF-8"?>

<n:RConfig xmlns:n="http://www.oracle.com/rconfig"

           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"

           xsi:schemaLocation="http://www.oracle.com/rconfig">

    <n:ConvertToRAC>

<!-- Verify does a precheck to ensure all pre-requisites are met, before the conversion is attempted. Allowable values are: YES|NO|ONLY -->

        <n:Convert verify="YES">

<!-- Specify current OracleHome of non-rac database for SourceDBHome -->

              <n:SourceDBHome>/oracle/product/11.2.0/db_1</n:SourceDBHome>

<!-- Specify OracleHome where the rac database should be configured. It can be same as SourceDBHome -->

              <n:TargetDBHome>/oracle/product/11.2.0/db_1</n:TargetDBHome>

<!-- Specify SID of non-rac database and credential. User with sysdba role is required to perform conversion -->

              <n:SourceDBInfo SID="orcl">

                <n:Credentials>

                  <n:User>sys</n:User>

                  <n:Password>oracle</n:Password>

                  <n:Role>sysdba</n:Role>

                </n:Credentials>

              </n:SourceDBInfo>

<!-- Specify the list of nodes that should have rac instances running for the Admin Managed Cluster Database. LocalNode should be the first node in this nodelist. -->

              <n:NodeList>

                <n:Node name="node1"/>

                <n:Node name="node2"/>

              </n:NodeList>

<!-- Instance Prefix tag is optional starting with 11.2. If left empty, it is derived from db_unique_name. -->

              <n:InstancePrefix>sales</n:InstancePrefix>

<!-- Listener details are no longer needed starting 11.2. Database is registered with default listener and SCAN listener running from Oracle Grid Infrastructure home. -->

<!-- Specify the type of storage to be used by rac database. Allowable values are CFS|ASM. The non-rac database should have same storage type. ASM credentials are not needed for conversion. -->

              <n:SharedStorage type="ASM">

<!-- Specify Database Area Location to be configured for rac database. If this field is left empty, current storage will be used for rac database. For CFS, this field will have directory path. -->

                <n:TargetDatabaseArea>+ASMDG</n:TargetDatabaseArea>

<!-- Specify Flash Recovery Area to be configured for rac database. If this field is left empty, current recovery area of non-rac database will be configured for rac database. If current database is not using recovery Area, the resulting rac database will not have a recovery area. -->

                <n:TargetFlashRecoveryArea>+ASMDG</n:TargetFlashRecoveryArea>

              </n:SharedStorage>

        </n:Convert>

    </n:ConvertToRAC>

</n:RConfig>

– Edit the convert.xml file and make the following changes:

  • Verify: performs a precheck to ensure all prerequisites are met before the conversion is attempted (allowable values: YES|NO|ONLY). In my case I have used YES (see the precheck example after this list).
  • SourceDBHome: the current Oracle home of the non-RAC database. In my case: /u01/app/oracle/product/11.2.0/db_1
  • TargetDBHome: the Oracle home where the RAC database should be configured; it can be the same as SourceDBHome. In my case: /u01/app/oracle/product/11.2.0/db_1
  • SourceDBInfo: the SID and credentials of the non-RAC database; a user with the sysdba role is required to perform the conversion. In my case: database pawdb, user sys, password sys, role sysdba.
  • NodeList: the list of nodes that should run instances of the admin-managed cluster database; the local node must be the first node in this list. In my case: paw-racnode1, paw-racnode2
  • InstancePrefix: optional starting with 11.2; if left empty, it is derived from db_unique_name. In my case: pawdb (the instance names will be pawdb1 and pawdb2)
  • Listener details are no longer needed starting with 11.2; the database is registered with the default listener and the SCAN listener running from the Oracle Grid Infrastructure home. Just check that the local and SCAN listeners are up and running.
  • SharedStorage type: the type of storage to be used by the RAC database (allowable values: CFS|ASM); ASM credentials are not needed for the conversion. In my case: ASM (because my target storage type is ASM)
  • TargetDatabaseArea: the database area location to be configured for the RAC database; if left empty, the current storage is used, and for CFS this field takes a directory path. In my case: +PAWDB_DATA (I created a separate diskgroup PAWDB_DATA to store the datafiles of the pawdb database on ASM)
  • TargetFlashRecoveryArea: the flash recovery area to be configured for the RAC database; if left empty, the current recovery area of the non-RAC database is kept, and if the current database does not use a recovery area, the resulting RAC database will not have one. In my case: +FRA
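
If you want to validate the prerequisites without converting anything yet, one option is to keep a second copy of the XML (the file name below is only an example) with verify="ONLY"; in that mode rconfig runs only the prechecks and reports any problems:

[oracle@paw-racnode1 sampleXMLs]$ cp convert.xml convert_verify.xml

(edit convert_verify.xml and set <n:Convert verify="ONLY">)

[oracle@paw-racnode1 sampleXMLs]$ rconfig convert_verify.xml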

– Modified:                                     

[oracle@paw-racnode1 sampleXMLs]$ vi convert.xml

<?xml version="1.0" encoding="UTF-8"?>

<n:RConfig xmlns:n="http://www.oracle.com/rconfig"

           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"

           xsi:schemaLocation="http://www.oracle.com/rconfig">

    <n:ConvertToRAC>

<!-- Verify does a precheck to ensure all pre-requisites are met, before the conversion is attempted. Allowable values are: YES|NO|ONLY -->

        <n:Convert verify="YES">

<!-- Specify current OracleHome of non-rac database for SourceDBHome -->

              <n:SourceDBHome>/u01/app/oracle/product/11.2.0/db_1</n:SourceDBHome>

<!-- Specify OracleHome where the rac database should be configured. It can be same as SourceDBHome -->

              <n:TargetDBHome>/u01/app/oracle/product/11.2.0/db_1</n:TargetDBHome>

<!-- Specify SID of non-rac database and credential. User with sysdba role is required to perform conversion -->

              <n:SourceDBInfo SID="pawdb">

                <n:Credentials>

                  <n:User>sys</n:User>

                  <n:Password>sys</n:Password>

                  <n:Role>sysdba</n:Role>

                </n:Credentials>

              </n:SourceDBInfo>

<!-- Specify the list of nodes that should have rac instances running for the Admin Managed Cluster Database. LocalNode should be the first node in this nodelist. -->

              <n:NodeList>

                <n:Node name="paw-racnode1"/>

                <n:Node name="paw-racnode2"/>

              </n:NodeList>

<!-- Instance Prefix tag is optional starting with 11.2. If left empty, it is derived from db_unique_name. -->

              <n:InstancePrefix>pawdb</n:InstancePrefix>

<!-- Listener details are no longer needed starting 11.2. Database is registered with default listener and SCAN listener running from Oracle Grid Infrastructure home. -->

<!-- Specify the type of storage to be used by rac database. Allowable values are CFS|ASM. The non-rac database should have same storage type. ASM credentials are not needed for conversion. -->

              <n:SharedStorage type="ASM">

<!-- Specify Database Area Location to be configured for rac database. If this field is left empty, current storage will be used for rac database. For CFS, this field will have directory path. -->

                <n:TargetDatabaseArea>+PAWDB_DATA</n:TargetDatabaseArea>

<!-- Specify Flash Recovery Area to be configured for rac database. If this field is left empty, current recovery area of non-rac database will be configured for rac database. If current database is not using recovery Area, the resulting rac database will not have a recovery area. -->

                <n:TargetFlashRecoveryArea>+FRA</n:TargetFlashRecoveryArea>

              </n:SharedStorage>

        </n:Convert>

    </n:ConvertToRAC>

</n:RConfig>

– Run rconfig to convert pawdb from a single-instance database to a two-instance RAC database:

[oracle@paw-racnode1 sampleXMLs]$ rconfig convert.xml

Converting Database "pawdb" to Cluster Database. Target Oracle Home: /u01/app/oracle/product/11.2.0/db_1. Database Role: PRIMARY.

Setting Data Files and Control Files

Adding Database Instances

Adding Redo Logs

Enabling threads for all Database Instances

Setting TEMP tablespace

Adding UNDO tablespaces

Adding Trace files

Setting Flash Recovery Area

Updating Oratab

Creating Password file(s)

Configuring Listeners

Configuring related CRS resources

Starting Cluster Database

<?xml version="1.0" ?>

<RConfig version="1.1" >

<ConvertToRAC>

    <Convert>

      <Response>

        <Result code="0" >

          Operation Succeeded

        </Result>

      </Response>

      <ReturnValue type="object">

<Oracle_Home>

         /u01/app/oracle/product/11.2.0/db_1

       </Oracle_Home>

       <Database type="ADMIN_MANAGED"  >

         <InstanceList>

           <Instance SID="pawdb1" Node="paw-racnode1"  >

           </Instance>

           <Instance SID="pawdb2" Node="paw-racnode2"  >

           </Instance>

         </InstanceList>

       </Database>     </ReturnValue>

    </Convert>

  </ConvertToRAC></RConfig>

– Check the latest rconfig log file while the conversion is in progress:

[oracle@paw-racnode1 sampleXMLs]$   ls -lrt $ORACLE_BASE/cfgtoollogs/rconfig/*.log

[oracle@paw-racnode1 sampleXMLs]$   tail  -f $ORACLE_BASE/cfgtoollogs/rconfig/*.log
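
Since rconfig writes a new log file for every run, it can be handy to follow only the most recent one; a small bash one-liner such as the following should do (assuming a bash shell):

[oracle@paw-racnode1 sampleXMLs]$ tail -f $(ls -t $ORACLE_BASE/cfgtoollogs/rconfig/*.log | head -1)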

The log generated during my conversion was rconfig_08_11_15_17_56_43.

– Note that rconfig adds the password file on all the nodes, but the tnsnames.ora entry needs to be modified on the local node (we have to use the SCAN name instead of the host name) and the same entry added on the other nodes.

Following is the entry I modified on the local node and copied to the rest of the nodes:

– Original:

PAWDB =

  (DESCRIPTION =

    (ADDRESS = (PROTOCOL = TCP)(HOST = paw-racnode1.airydba.com)(PORT = 1521))

    (CONNECT_DATA =

      (SERVER = DEDICATED)

      (SERVICE_NAME = pawdb)

    )

  )

– Modified:

PAWDB =

  (DESCRIPTION =

    (ADDRESS = (PROTOCOL = TCP)(HOST = paw-rac01-scan.airydba.com)(PORT = 1521))

    (CONNECT_DATA =

      (SERVER = DEDICATED)

      (SERVICE_NAME = pawdb)

    )

  )
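
After the modified entry is in place on all nodes, a quick way to confirm that the alias resolves through the SCAN and that connections work is shown below (system is used here purely as an example user):

[oracle@paw-racnode2 ~]$ tnsping PAWDB

[oracle@paw-racnode2 ~]$ sqlplus system@PAWDB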

– Add entry in /etc/oratab of paw-racnode1 as :

         pawdb1:/u01/app/oracle/product/11.2.0/db_1:N

 – Add entry in /etc/oratab of paw-racnode2 as :

        pawdb2:/u01/app/oracle/product/11.2.0/db_1:N
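
You can verify the entries on each node with a simple grep:

[oracle@paw-racnode1 ~]$ grep pawdb /etc/oratab

[oracle@paw-racnode2 ~]$ grep pawdb /etc/oratab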

– Check that the database has been converted successfully and 2 instances (pawdb1,pawdb2) are running on different nodes:

[oracle@paw-racnode1 sampleXMLs]$ srvctl status database -d pawdb

Instance pawdb1 is running on node paw-racnode1

Instance pawdb2 is running on node paw-racnode2

[grid@paw-racnode1 sampleXMLs]$ crsctl stat res -t

--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS 
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
 ONLINE ONLINE paw-racnode1 
 ONLINE ONLINE paw-racnode2 
ora.FRA.dg
 ONLINE ONLINE paw-racnode1 
 ONLINE ONLINE paw-racnode2 
ora.LISTENER.lsnr
 ONLINE ONLINE paw-racnode1 
 ONLINE ONLINE paw-racnode2 
ora.OCR_DG.dg
 ONLINE ONLINE paw-racnode1 
 ONLINE ONLINE paw-racnode2 
ora.PAWDB_DATA.dg
 ONLINE ONLINE paw-racnode1 
 ONLINE ONLINE paw-racnode2 
ora.asm
 ONLINE ONLINE paw-racnode1 Started 
 ONLINE ONLINE paw-racnode2 Started 
ora.eons
 ONLINE ONLINE paw-racnode1 
 ONLINE ONLINE paw-racnode2 
ora.gsd
 OFFLINE OFFLINE paw-racnode1 
 OFFLINE OFFLINE paw-racnode2 
ora.net1.network
 ONLINE ONLINE paw-racnode1 
 ONLINE ONLINE paw-racnode2 
ora.ons
 ONLINE ONLINE paw-racnode1 
 ONLINE ONLINE paw-racnode2 
ora.registry.acfs
 ONLINE ONLINE paw-racnode1 
 ONLINE ONLINE paw-racnode2 
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
 1 ONLINE ONLINE paw-racnode2 
ora.LISTENER_SCAN2.lsnr
 1 ONLINE ONLINE paw-racnode1 
ora.LISTENER_SCAN3.lsnr
 1 ONLINE ONLINE paw-racnode1 
ora.oc4j
 1 OFFLINE OFFLINE 
ora.orcl.db
 1 OFFLINE OFFLINE 
ora.paw-racnode1.vip
 1 ONLINE ONLINE paw-racnode1 
ora.paw-racnode2.vip
 1 ONLINE ONLINE paw-racnode2 
ora.pawdb.db
 1 ONLINE ONLINE paw-racnode1 Open 
 2 ONLINE ONLINE paw-racnode2 Open 
ora.racdb.db
 1 OFFLINE OFFLINE 
 2 OFFLINE OFFLINE 
ora.scan1.vip
 1 ONLINE ONLINE paw-racnode2 
ora.scan2.vip
 1 ONLINE ONLINE paw-racnode1 
ora.scan3.vip
 1 ONLINE ONLINE paw-racnode1                              

– Check that the database can be connected to remotely from the second node (paw-racnode2) and that the datafiles have been converted into ASM:

[grid@paw-racnode2 ~]$ su oracle

Password:

[oracle@paw-racnode2 grid]$ . oraenv

ORACLE_SID = [+ASM2] ? pawdb2

The Oracle base for ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1 is /u01/app/oracle

[oracle@paw-racnode2 grid]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Sun Aug 11 18:31:48 2015

Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Connected to:

Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 – 64bit Production

With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,

Data Mining and Real Application Testing options

SQL> select name from v$database;

NAME

———

PAWDB

SQL> select name from V$datafile;

NAME

————————————————————————–

+PAWDB_DATA/pawdb/datafile/system.256.930333443

+PAWDB_DATA/pawdb/datafile/sysaux.257.930333527

+PAWDB_DATA/pawdb/datafile/undotbs1.258.930333589

+PAWDB_DATA/pawdb/datafile/users.259.930333597

+PAWDB_DATA/pawdb/datafile/undotbs2.270.930333939
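
Optionally, you can also confirm that the control files and the server parameter file were relocated into ASM. The queries below are generic checks; the exact file names will differ in your environment:

SQL> select name from v$controlfile;

SQL> show parameter spfile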

So we can see that the single-instance (non-ASM) database pawdb has been successfully converted into a RAC database. I hope you enjoyed and learned something from this post. Please do comment if you liked it.

Thank you for reading… This is Airy…Enjoy Learning:)

 

 

#rac

GPnP ( Grid plug n play ) profile in Oracle 11g R2/12c RAC :

What is the GPnP profile and why is it needed?

With reference to my OCR and Voting Disk blog posts: in Oracle 11g R2 RAC we can store the OCR and voting disk in ASM, but the clusterware needs the OCR and voting disk to start the CRSD and CSSD processes. The catch is that both the OCR and voting disk are stored in ASM, which is itself a resource on the nodes; that means the CRSD and CSSD processes need the OCR and voting files before ASM starts up. So the question arises, "how does the clusterware start?" We will find the answer to this question in this same post.

To resolve this issue, Oracle introduced two new node-specific files in Oracle 11g R2: the OLR and the GPnP profile.

The GPnP profile is a new feature introduced in Oracle 11g R2. It is a small XML file named profile.xml, located in $GRID_HOME/gpnp/<hostname>/profiles/peer.


Each node of the cluster maintains a local copy of this profile, which is maintained by the GPnP daemon (GPnPd) along with the mDNS daemon. GPnPd ensures that the GPnP profile is synchronized across all the nodes in the cluster, and the clusterware uses the GPnP profile to establish the correct global personality of a node. The profile cannot be stored on ASM because it is required before ASM starts; hence it is stored locally on each node and is kept synchronized across all the nodes by GPnPd.
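
For example, on node paw-racnode1 the profile can be located and listed as shown below (the exact path depends on your Grid home and host name):

[grid@paw-racnode1 ~]$ ls -l /u01/app/11.2.0/grid/gpnp/paw-racnode1/profiles/peer/profile.xml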

How is the GPnP profile used?

When a node of an Oracle Clusterware cluster restarts, OHASD is started by platform-specific means. OHASD has access to the OLR (Oracle Local Registry) stored on the local file system, and the OLR provides the data needed to complete OHASD initialization. OHASD then brings up the GPnP daemon and the CSS daemon. The CSS daemon has access to the GPnP profile stored on the local file system, from which it reads the information about the voting disks stored in ASM.

We can even read the voting disk using the kfed utility, even if ASM is not up.

In the next step, the clusterware checks whether all the nodes have the updated GPnP profile, and the node joins the cluster based on the GPnP configuration. Whenever a node is started or added to the cluster, the clusterware software on the starting node starts a GPnP agent, which performs the following tasks:

  1. If the node is already part of the cluster, the GPnP agent reads the existing profile on that node.
  2. If the node is being added to the cluster, the GPnP agent locates an agent on an existing node using the multicast protocol (provided by mDNS) and gets the profile from that node's GPnP agent.

The voting file locations on the ASM disks are accessed by CSSD through well-known pointers in the ASM disk headers, so CSSD is able to complete initialization and start or join an existing cluster.

OHASD then starts an ASM instance, and ASM can operate now that CSSD is initialized and running.

With an ASM instance running and its diskgroup mounted, access to the clusterware's OCR is available to CRSD (CRSD needs to read the OCR to start the various resources on the node and to update it as the status of resources changes). OHASD now starts CRSD with access to the OCR in an ASM diskgroup, and thus the clusterware completes initialization and brings up the other services under its control.

The ASM instance uses special code to locate the contents of the ASM spfile, which is stored in a diskgroup.

Next, since the OCR is also on ASM, the location of the ASM spfile must be known. The search order for the ASM spfile is:

  • GPnP profile
  • ORACLE_HOME/dbs/spfile
  • ORACLE_HOME/dbs/init

The ASM spfile is stored in ASM, but to start ASM we need the spfile. Oracle knows the spfile location from the GPnP profile; it reads the spfile data flagged on the underlying disk(s) and then starts ASM.

Thus the GPnP profile stores several pieces of information. The GPnP profile, together with the information in the OLR, contains enough data to automate several tasks and ease things for administrators, and the dependency on the OCR is gradually reduced, though not eliminated.

What Information the GPnP Profile Contains:

The GPnP profile defines a node's metadata about:

  • Cluster Name
  • Network interfaces for public and private interconnect
  • ASM server parameter file Location and ASM Diskstring etc.
  • CSS voting disks Discovery String
  • Digital Signature Information

It contains the digital signature information of the provisioning authority because the profile is security sensitive; it might identify the storage to be used as the root partition of a machine. The profile is protected against modification by a wallet. In my case the wallet information can be found in /u01/app/11.2.0/grid/gpnp/paw-racnode1/wallets/peer or /u01/app/11.2.0/grid/gpnp/wallets/peer.

If you ever have to modify the profile manually, it must first be unsigned with $GRID_HOME/bin/gpnptool, modified, and then signed again with the same utility; however, it is very unlikely you will ever be required to do so.

We can use gpnptool with the get option to dump this XML file to standard output. Below is the output:

[grid@paw-racnode1 peer]$ pwd

/u01/app/11.2.0/grid/gpnp/paw-racnode1/profiles/peer

[grid@paw-racnode1 peer]$ gpnptool get

Warning: some command line parameters were defaulted. Resulting command line:

         /u01/app/11.2.0/grid/bin/gpnptool.bin get -o-

<?xml version="1.0" encoding="UTF-8"?><gpnp:GPnP-Profile Version="1.0" xmlns="http://www.grid-pnp.org/2005/11/gpnp-profile" xmlns:gpnp="http://www.grid-pnp.org/2005/11/gpnp-profile" xmlns:orcl="http://www.oracle.com/gpnp/2005/11/gpnp-profile" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.grid-pnp.org/2005/11/gpnp-profile gpnp-profile.xsd" ProfileSequence="5" ClusterUId="1c12005940a3efa8bf244ccd47060927" ClusterName="paw-rac-cluster" PALocation=""><gpnp:Network-Profile><gpnp:HostNetwork id="gen" HostName="*"><gpnp:Network id="net1" IP="192.168.75.0" Adapter="eth0" Use="public"/><gpnp:Network id="net2" IP="10.0.0.0" Adapter="eth1" Use="cluster_interconnect"/></gpnp:HostNetwork></gpnp:Network-Profile><orcl:CSS-Profile id="css" DiscoveryString="+asm" LeaseDuration="400"/><orcl:ASM-Profile id="asm" DiscoveryString="/dev/oracleasm/disks" SPFile="+DATA/paw-rac-cluster/asmparameterfile/registry.253.919259819"/><ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#"><ds:SignedInfo><ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/><ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/><ds:Reference URI=""><ds:Transforms><ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/><ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"> <InclusiveNamespaces xmlns="http://www.w3.org/2001/10/xml-exc-c14n#" PrefixList="gpnp orcl xsi"/></ds:Transform></ds:Transforms><ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/><ds:DigestValue>HIz8dOjUIFB32YPkmXW2HMVazoY=</ds:DigestValue></ds:Reference></ds:SignedInfo><ds:SignatureValue>L6GOD0rB03Hp+NoKVcIHb9/Rp3xznBKpUJGfixN/27Qo6IL8/4HkjSnzsbHf1IuK1SQfqV5624tygB0x9HJfVcW+k6E6cQWwAgZOzpPR3ltctD7XeikkXtt5TOWQ6boMvCKJ5mOwzGzuj4S/qDu7lWPBHM9EPzHAEn/8NOlDcDo=</ds:SignatureValue></ds:Signature></gpnp:GPnP-Profile>

Success.

Who updates the GPnP profile, and when?

The GPnP daemon replicates changes to the profile during:

  • installation
  • system boot
  • when the system is updated using standard cluster tools

The profile is also automatically updated whenever changes are made to the cluster during installation or with configuration tools like:

  • oifcfg (Change network),
  • crsctl (change location of voting disk),
  • asmcmd (change ASM_DISKSTRING, spfile location) etc.
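
The values these tools maintain in the profile can be cross-checked from the command line; for example, the interface configuration and the registered ASM spfile location can be displayed with the following standard Grid Infrastructure utilities:

[grid@paw-racnode1 ~]$ oifcfg getif

[grid@paw-racnode1 ~]$ asmcmd spget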

gpnptool Commands to access GPnP Profile:

[grid@paw-racnode1 peer]$ pwd

/u01/app/11.2.0/grid/gpnp/paw-racnode1/profiles/peer

[grid@paw-racnode1 peer]$ gpnptool get

(The output is the same profile.xml content shown above.)

[grid@paw-racnode1 peer]$ gpnptool getpval -asm_spf

Warning: some command line parameters were defaulted. Resulting command line:

         /u01/app/11.2.0/grid/bin/gpnptool.bin getpval -asm_spf -p=profile.xml -o-

+DATA/paw-rac-cluster/asmparameterfile/registry.253.919259819

 [grid@paw-racnode1 peer]$ gpnptool getpval -asm_dis

Warning: some command line parameters were defaulted. Resulting command line:

         /u01/app/11.2.0/grid/bin/gpnptool.bin getpval -asm_dis -p=profile.xml -o-

/dev/oracleasm/disks

[grid@paw-racnode1 peer]$ gpnptool find

 Found 2 instances of service ‘gpnp’.

        mdns:service:gpnp._tcp.local.://paw-racnode2:64098/agent=gpnpd,cname=paw-rac-cluster,host=paw-racnode2,pid=6444/gpnpd h:paw-racnode2 c:paw-rac-cluster

        mdns:service:gpnp._tcp.local.://paw-racnode1:55790/agent=gpnpd,cname=paw-rac-cluster,host=paw-racnode1,pid=6677/gpnpd h:paw-racnode1 c:paw-rac-cluster

I hope the above information will help you to understand the Grid plug and play ( GPnP ) profile.

Thank you for Reading…This is AIRY…Enjoy Learning :)

 

 

#gpnp, #rac

Voting Disk in Oracle 11g R2 RAC – Airy’s Notes:

  1. The voting disk is a shared area that Oracle Clusterware uses to verify cluster node membership and status. The voting disk maintains node membership information by collecting the heartbeats of all nodes in the cluster periodically.
  2. The voting disk must reside on ASM or on shared disk(s) accessible by all of the nodes in the cluster. Since ASM was introduced to store these files, they are also referred to as voting files.
  3. The CSSD process is responsible for collecting the heartbeats and recording them in the voting disk.
  4. The CSSD of each node registers information about its node in the voting disk with a pwrite() system call at a specific offset, and then uses a pread() system call to read the status of the other CSSD processes.
  5. Oracle Clusterware uses the voting disk to determine which instances are members of a cluster by way of a health check, and arbitrates cluster ownership among the instances in case of network failures.
  6. For high availability, Oracle recommends that you have multiple voting disks.
  7. Oracle Clusterware 10g supports up to 32 voting disks, but Oracle Clusterware 11g R2 supports only 15. Oracle recommends a minimum of 3 and a maximum of 5. If you define a single voting disk, then you should use external mirroring to provide redundancy.
  8. Oracle Clusterware can be configured to maintain multiple voting disks (multiplexing), but you must have an odd number of voting disks, such as three, five, and so on.
  9. A node must be able to access more than half of the voting disks at any time. For example, if you have 5 voting disks configured, then a node must be able to access at least 3 of the voting disks at any time. If a node cannot access the minimum required number of voting disks it is evicted, or removed, from the cluster. After the cause of the failure has been corrected and access to the voting disks has been restored, you can instruct Oracle Clusterware to recover the failed node and restore it to the cluster.
  10. As information regarding the nodes also exists in the OCR/OLR, and the system calls have nothing to do with previous calls, there is no useful data kept in the voting disk except the heartbeats. So, if you lose voting disks, you can simply add them back without losing any data. But, of course, losing voting disks can lead to node reboots.
  11. If you lose all voting disks, then you will have to keep the CRS daemons down; only then can you add the voting disks back.
  12. All nodes in the RAC cluster register their heartbeat information in the voting disks/files. The RAC heartbeat is the polling mechanism that is sent over the cluster interconnect to ensure all RAC nodes are available.
  13. The primary function of the voting disk is to manage node membership and prevent what is known as Split Brain Syndrome, in which two or more instances attempt to control the RAC database. This can occur in cases where there is a break in communication between nodes through the interconnect.
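
To see the voting files configured in your own cluster, you can list them from the Grid Infrastructure home; the command shows each voting file together with the ASM disk it resides on and its state:

[grid@paw-racnode1 ~]$ crsctl query css votedisk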

Now, to understand the whole concept of the voting disk, we need to know what type of data the voting disk contains, what voting is, how voting happens, what I/O fencing is, what the network and disk heartbeats are, what split brain syndrome is, and the concept of the simple majority rule.

What type of data does the voting disk contain?

Voting disk consists of two types of data:

  1. Static data: Information about the nodes in cluster.
  2. Dynamic data: Disk heartbeat logging.

The voting disk/files contain the important details of cluster node membership, like:

  1. Node membership information.
  2. Heartbeat information of all nodes in the cluster.
  3. How many nodes in the cluster.
  4. Who is joining the cluster?
  5. Who is leaving the cluster?

What is Voting in Cluster Environment:

  1. The CGS (Cluster Group Services) is responsible for checking whether members are valid.
  2. To determine periodically whether all members are alive, a voting mechanism is used to check the validity of each member.
  3. All members in the database group vote by providing details of what they presume the instance membership bitmap looks like, and the bitmap is stored in the GRD (Global Resource Directory).
  4. A predetermined master member tallies the vote flags of the status flag and communicates to the respective processes that the voting is done; then it waits for registration by all the members who have received the reconfigured bitmap.

How Voting Happens in Cluster Environment:

  1. The CKPT process updates the control file every 3 seconds in an operation known as the heartbeat.
  2. CKPT writes into a single block that is unique for each instance, thus intra-instance coordination is not required. This block is called the checkpoint progress record.
  3. All members attempt to obtain a lock on a control file record (the result record) for updating.
  4. The instance that obtains the lock tallies the votes from all members.
  5. The group membership must conform to the decided (voted) membership before allowing the GCS/GES (Global Enqueue Service) reconfiguration to proceed.
  6. The control file vote result record is stored in the same block as the heartbeat in the control file checkpoint progress record.

What is I/O Fencing in Cluster Environment?

  1. There will be situations where leftover write operations from failed database instances (the cluster stack failed on the nodes, but the nodes are still running at the OS level) reach the storage system after the recovery process starts.
  2. Since these write operations are no longer in the proper serial order, they can damage the consistency of the stored data.
  3. Therefore when a cluster node fails, the failed node needs to be fenced off from all the shared disk devices or disk groups. This methodology is called I/O fencing or failure fencing.
  4. I/O fencing implementation is a function of CM and depends on the clusterware vendor.
  5. I/O fencing is designed to guarantee data integrity in the case of faulty cluster communications causing a split-brain condition.

Why the voting disk is essential and needed:

The voting disk files are used by Oracle Clusterware for the overall health check.

  1. Voting disk files are used by CSS to determine which nodes are currently members of the cluster.
  2. In concert with other Cluster components such as CRS to shut down, fence, or reboot either single or multiple nodes whenever network communication is lost between any nodes within the cluster, in order to prevent the dreaded split-brain condition in which two or more instances attempt to control the RAC database. It thus protects the database information.
  3. Voting disk will be used by the CSS daemon to arbitrate with peers that it cannot see over the private interconnect in the event of an outage, allowing it to salvage the largest fully connected sub cluster for further operation.
  4. It checks the voting disk to determine if there is a failure on any other nodes in the cluster. During this operation, NM (Node Monitor) will make an entry in the voting disk to inform its vote on availability. Similar operations are performed by other instances in the cluster.
  5. The three voting disks configured also provide a method to determine who in the cluster should survive. For example, if eviction of one of the nodes is necessitated by an unresponsive action, then the node that has two voting disks will start evicting the other node. NM (Node Monitor) alternates its action between the heartbeat and the voting disk to determine the availability of other nodes in the cluster.

What are the NETWORK and DISK HEARTBEATS and how are they registered in the VOTING DISKS/FILES?

All nodes in the RAC cluster register their heartbeat information in the voting disks/files. The RAC heartbeat is the polling mechanism sent over the cluster interconnect to ensure all nodes are available. Voting disks/files are just like an attendance register where nodes mark their attendance (heartbeats).

1: NETWORK HEARTBEAT:

The network heartbeat is sent across the interconnect. The CSSD process on every node makes entries in the voting disk to ascertain the membership of the node: every second, a sending thread of CSSD sends a network TCP heartbeat to itself and to all other nodes, while a receiving thread of CSSD receives the heartbeats. That means that while marking their own presence, all the nodes also register information about their communicability with the other nodes in the voting disk. This is called the NETWORK HEARTBEAT. If network packets are dropped or corrupted, the TCP error-correction mechanism would retransmit the packet, but Oracle does not retransmit in this case. In the CSSD log you will see a WARNING message about a missing heartbeat if a node does not receive a heartbeat from another node for 15 seconds (50% of misscount). Another warning is reported in the CSSD log if the same node is missing for 22 seconds (75% of misscount), and similarly at 90% of misscount; when the heartbeat is missing for 100% of the misscount (i.e. 30 seconds by default), the node is evicted.


2: DISK HEARTBEAT:

The disk heartbeat is between the cluster nodes and the voting disk. The CSSD process on each RAC node maintains a heartbeat in a block of one OS block in size, at a specific offset in the voting disk, using read/write system calls (pread/pwrite). In addition to maintaining its own disk block, each CSSD process also monitors the disk blocks maintained by the CSSD processes running on the other cluster nodes. The written block has a header area with the node name and a counter that is incremented with every beat (pwrite) from the other nodes. The disk heartbeat is maintained in the voting disk by the CSSD processes, and if a node has not written a disk heartbeat within the I/O timeout, the node is declared dead. Nodes that are in an unknown state, i.e. cannot definitively be said to be dead, and are not in the group of nodes designated to survive, are evicted: the node's kill block is updated to indicate that it has been evicted, and a message to this effect is written in the KILL BLOCK of the node. Each node reads its KILL BLOCK once per second/beat, and if the kill block has been overwritten, the node commits suicide.

During reconfiguration (a node leaving or joining), CSSD monitors the heartbeat information of all nodes and determines which nodes have a disk heartbeat, including those with no network heartbeat. If no disk heartbeat is detected, the node is considered dead.

Summarizing the heartbeats: the network heartbeat is sent every second and nodes must respond within the css misscount time, otherwise the node is evicted. Similarly, for the disk heartbeat, a node pings (reads/writes) the voting disk every second and must receive a response within the (long/short) disk timeout.

What are the different possibilities of individual heartbeat failures?

As we know, the voting disk is the key communication mechanism within Oracle Clusterware where all nodes in the cluster read and write heartbeat information. A break in the heartbeat indicates a possible error scenario. There are a few different scenarios possible with missing heartbeats:

  1. Network heart beat is successful, but disk heart beat is missed.
  2. Disk heart beat is successful, but network heart beat is missed.
  3. Both heart beats failed.

In addition, with numerous nodes, there are other possible scenarios too. A few possible scenarios:

  1. Nodes have split into N sets of nodes, communicating within each set but not with members of other sets.
  2. Just one node is unhealthy.

Nodes with quorum will maintain active membership of the cluster and other node(s) will be fenced/rebooted.

Misscount Parameter: The CSS misscount parameter represents the maximum time, in seconds, that a network heartbeat can be missed before entering into a cluster reconfiguration to evict the node.

For the NETWORK HEARTBEAT: the CSS misscount parameter governs the network heartbeat and defaults to 30 seconds (the disk timeout is 200 seconds). If the network heartbeat is missed for the 30-second timeout, a reboot is initiated (in practice it is approximately 34 seconds), regardless of what happens with the disk heartbeat.

For the DISK HEARTBEAT (voting disk): if the heartbeat does not complete within 200 seconds, the node will be rebooted. If the disk heartbeat completes in under 200 seconds, the reboot will not happen as long as the network heartbeat is successful. This behaves a little differently at cluster reconfiguration time.

By default Misscount is less than Disktimeout seconds.

Also, if there is a vendor clusterware in play, then misscount is set to 600.

The following are the default values in seconds for the misscount parameter and their respective versions when using Oracle Clusterware:

Operating System     Oracle RAC 10g R1 and R2     Oracle RAC 11g R1 and R2
Windows              30                           30
Linux                60                           30
Unix                 30                           30
VMS                  30                           30
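
You can read the values in effect on your own cluster with crsctl; both parameters are exposed as CSS settings:

[grid@paw-racnode1 ~]$ crsctl get css misscount

[grid@paw-racnode1 ~]$ crsctl get css disktimeout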

The table below also shows the different possibilities of individual heartbeat failures on the basis of misscount.

Network Ping                           Disk Ping                                                          Reboot
Completes within misscount seconds     Completes within misscount seconds                                 N
Completes within misscount seconds     Takes more than misscount seconds but less than disktimeout        N
Completes within misscount seconds     Takes more than disktimeout seconds                                Y
Takes more than misscount seconds      Completes within misscount seconds                                 Y

What is Split Brain Condition or syndrome in cluster Environment?

  1. A split-brain occurs when cluster nodes hang or node interconnects fail, and as a result, the nodes lose the communication link between them and the cluster.
  2. Split-brain is a problem in any clustered environment and is a symptom of clustering solutions in general, not of RAC specifically.
  3. Split-brain conditions can cause database corruption when nodes become uncoordinated in their access to the shared data files.
  4. For a two-node cluster, split-brain occurs when nodes in a cluster cannot talk to each other (the internode links fail) and each node assumes it is the only surviving member of the cluster. If the nodes in the cluster have uncoordinated access to the shared storage area, they would end up overwriting each other's data, causing data corruption, because each node assumes ownership of the shared data.
  5. To prevent data corruption, one node must be asked to leave the cluster or should be forced out immediately. This is where IMR (Instance Membership Recovery) comes in.
  6. Many internal (hidden) parameters control IMR (Instance Membership Recovery) and determine when it should start.
  7. If a vendor clusterware is used, split-brain resolution is left to it, and Oracle would have to wait for the clusterware to provide a consistent view of the cluster and resolve the split-brain issue. This can potentially cause a delay (and a hang in the whole cluster) because each node can potentially think it is the master and try to own all the shared resources. Still, Oracle relies on the clusterware for resolving these challenging issues.
  8. Note that Oracle does not wait indefinitely for the clusterware to resolve a split-brain issue; a timer is used to trigger an IMR-based node eviction. These internal timers are also controlled using hidden parameters. The default values of these hidden parameters are not to be touched, as that can cause severe performance or operational issues with the cluster.
  9. As mentioned time and again, Oracle completely relies on the cluster software to provide cluster services, and if something is awry, Oracle, in its overzealous quest to protect data integrity, evicts nodes or aborts an instance and assumes that something is wrong with the cluster.

Split Brain Syndrome in Oracle RAC:

In an Oracle RAC environment, all the instances/servers communicate with each other using high-speed interconnects on the private network. This private network interface, or interconnect, is redundant and is used only for inter-instance Oracle data block transfers. In the context of Oracle RAC systems, split-brain occurs when the instance members in a RAC fail to ping/connect to each other via this private interconnect, but the servers are all physically up and running and the database instance on each of these servers is also running. These individual nodes are running fine and can conceptually accept user connections and work independently. Basically, due to the lack of communication, each instance thinks that the other instance it cannot reach is down, and that it needs to do something about the situation. The problem is that if we leave these instances running, the same block might be read and updated in these individual instances and there would be a data integrity issue, as the blocks changed in one instance will not be locked and could be overwritten by another instance. This situation is termed Split Brain Syndrome.

[Figure: Split Brain Syndrome in a 3-node cluster]

Now, in the given picture, consider a 3-node cluster with a network error; without a voting disk, a split-brain problem would occur. Suppose node1 has lost its network connection to the interconnect. Node1 cannot use the interconnect anymore, but it can still access the voting disk. Nodes 2 and 3 still see each other's heartbeats but no longer node1's, which is indicated by the green Vs and red Fs in the picture. The node with the network problem is evicted by placing a poison pill into the voting file for node1. The CSSD of node1 will then commit suicide and leave the cluster.

Simple Majority win Rule:

According to Oracle, "An absolute majority of voting disks configured (more than half) must be available and responsive at all times for Oracle Clusterware to operate." This means that to survive the loss of N voting disks, you must configure at least 2N+1 voting disks.

That means a node must be able to access more than half of the voting disks at any time. 

Example 1: Suppose we have a 2-node cluster with an even number of voting disks, say 2. Suppose Node1 is able to access only voting disk 1 and Node2 only voting disk 2. Then there is no common file where the clusterware can check the heartbeat of both nodes. Hence, if we have 2 voting disks, all the nodes in the cluster should be able to access both of them.

Example 2: If we have 3 voting disks and both nodes are able to access more than half of them, i.e. 2 voting disks, there will be at least one disk accessible by both nodes. The clusterware can use that disk to check the heartbeat of both nodes. Hence, each node should be able to access more than half the number of voting disks. A node not able to do so will have to be evicted from the cluster to maintain the integrity of the cluster. After the cause of the failure has been corrected and access to the voting disks has been restored, you can instruct Oracle Clusterware to recover the failed node and restore it to the cluster.

 Loss of more than half your voting disks will cause the entire cluster to fail.

Example 3: Suppose that in a 3-node cluster with 3 voting disks, the network heartbeat fails between Node 1 and Node 3 and between Node 2 and Node 3, whereas Node 1 and Node 2 are able to communicate via the interconnect. From the voting disk, CSSD notices that all the nodes are still able to write to the voting disks, i.e. a split brain, so the healthy nodes, Node 1 and Node 2, update the kill block in the voting disk for Node 3.


Then, during the pread() system call, the CSSD of Node 3 sees a self-kill flag set, and thus the CSSD of Node 3 evicts itself. I/O fencing follows, and finally OHASD will attempt to restart the stack after a graceful shutdown.

Example 4: Suppose that in a 2-node cluster with 3 voting disks, the disk heartbeat fails such that Node 1 can see 2 voting disks and Node 2 can see only 1. (If the number of voting disks had not been odd, both nodes could have concluded that the other node should be killed, making split-brain difficult to avoid.) Based on the simple majority rule, the CSSD process of Node 1 (2 voting disks) sends a kill request to the CSSD process of Node 2 (1 voting disk), so Node 2 evicts itself; I/O fencing follows, and finally OHASD will attempt to restart the stack after a graceful shutdown.

That’s why voting disks are configured in odd Numbers.

A node in the cluster must be able to access more than half of the voting disks at any time in order to be able to tolerate a failure of n voting disks. Therefore, it is strongly recommended that you configure an odd number of voting disks such as 3, 5, and so on.

Here is a table which represents the number of voting disks whose failure can be tolerated for different numbers of voting disks:

Total Voting Disks     No. of voting disks which should be accessible     No. whose failure can be tolerated
1                      1                                                  0
2                      2                                                  0
3                      2                                                  1
4                      3                                                  1
5                      3                                                  2
6                      4                                                  2

It can be seen that the number of voting disks whose failure can be tolerated is the same for (2n-1) as for 2n voting disks, where n can be 1, 2 or 3. Hence, to save a redundant voting disk, (2n-1), i.e. an odd number of voting disks, is desirable.

Thus the voting disk/file plays a role in both kinds of heartbeat failure, and is hence a very important file for node eviction and I/O fencing in case of a split-brain situation.

Storage Mechanism of Voting Disk/Files:

Voting disks must be stored on shared, accessible storage, because during cluster operation the voting disk must be accessible to all member nodes of the clusterware.

  1. Prior to 11g R2 RAC, it could be placed on a raw device or on a clustered filesystem supported by Oracle RAC such as OCFS, Sun Cluster, or the Veritas Cluster filesystem.
  2. You should plan on allocating 280MB for each voting disk file.

Storage Mechanism of Voting Disk/Files in Oracle 11g R2 RAC:

  1. As of Oracle 11g R2 RAC, it can be placed on ASM disks.
  2. This simplifies management and improves performance.  But this brought up a puzzle too.
  3. For a node to join the cluster, it must be able to access voting disk, but voting disk is on ASM and ASM can’t be up until node is up.
  4. To resolve this issue, Oracle ASM reserves several blocks at a fixed location for every Oracle ASM disk used for storing the voting disk.
  5. As a result, Oracle Clusterware can access the voting disks present in ASM even if the ASM instance is down, and CSS can continue to maintain the Oracle cluster even if the ASM instance has failed.
  6. The physical location of the voting files in used ASM disks is fixed, i.e. the cluster stack does not rely on a running ASM instance to access the files. The location of the file is visible in the ASM disk header.
  7. The voting disk is not striped but put as a whole on ASM Disks.
  8.  In the event that the disk containing the voting disk fails, Oracle ASM will choose another disk on which to store this data.
  9. It eliminates the need for using a third-party cluster volume manager.
  10. You can reduce the complexity of managing disk partitions for voting disks during Oracle Clusterware installations.
  11. The voting disk needs to be mirrored; if it becomes unavailable, the cluster will come down. Hence, you should maintain multiple copies of the voting disks on separate disk LUNs so that you eliminate a single point of failure (SPOF) in your Oracle 11g RAC configuration.
  12. If voting disk is stored on ASM, multiplexing level of voting disk is decided by the redundancy of the ASM diskgroup.
Redundancy of the Diskgroup     No. of copies of voting disk     Minimum # of disks in the Diskgroup
External                        1                                1
Normal                          3                                3
High                            5                                5

i. If the voting disk is on a diskgroup with external redundancy, one copy of the voting file will be stored on one disk in the diskgroup.

ii. If we store the voting disk on a diskgroup with normal redundancy, one copy of the voting file will be stored on each of 3 disks in the diskgroup. We should be able to tolerate the loss of one disk, i.e. even if we lose one disk, we should still have a sufficient number of voting disks so that the clusterware can continue.

iii. If a diskgroup with normal redundancy has only 2 disks (the minimum required for normal redundancy), we can store only 2 copies of the voting disk on it. If we lose one disk, only one copy of the voting disk will be left and the clusterware won't be able to continue, because to continue the clusterware must be able to access more than half the number of voting disks, i.e. more than (2 x 1/2) = 1, so at least 2 must be accessible. Hence, to be able to tolerate the loss of one disk, we should have 3 copies of the voting disk on a diskgroup with normal redundancy. So a normal-redundancy diskgroup holding the voting disk should have a minimum of 3 disks in it.

iv. Similarly, if we store the voting disk on a diskgroup with high redundancy, 5 voting files are placed, each on one ASM disk, i.e. a high-redundancy diskgroup should have at least 5 disks, so that even if we lose 2 disks the clusterware can continue.

13. Ensure that all the nodes participating in the cluster have read/write permissions on disks.

14. You can have a maximum of 15 voting disks. However, Oracle recommends a minimum of 3 voting disks and not going beyond 5.
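
In 11g R2, when the voting files are kept in ASM you do not add or remove them one by one; you move them as a set by pointing the cluster at a diskgroup. A minimal sketch (the diskgroup name is only an illustration, and the command should be run as the Grid owner or root):

[grid@paw-racnode1 ~]$ crsctl query css votedisk

[grid@paw-racnode1 ~]$ crsctl replace votedisk +OCR_DG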

Backing up voting disk:

  1. In previous versions of Oracle Clusterware you needed to back up the voting disks with the dd command.
  2. Starting with Oracle Clusterware 11g R2, backing up the voting disk with the dd command is not supported.
  3. An automatic backup of the voting disk and OCR happens every four hours, at the end of every day and at the end of every week. That means there is no need to take a backup of the voting disks manually (see the command shown after this list).
  4. The voting disk and OCR automatic backups are kept together in a single file.
  5. In fact, Oracle explicitly indicates that you should not use a backup tool like dd to back up or restore voting disks. Doing so can lead to the loss of the voting disk.
  6. Although the voting disk contents are not changed frequently, you will need to back up the voting disk file every time you perform the following activities:
  7. You add or remove a node from the cluster, or
  8. Immediately after you configure or upgrade a cluster.

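Because the voting disk backup is kept together with the automatic OCR backups in 11g R2, you can check when the last automatic backups were taken with ocrconfig (typically run as root or the Grid owner):

[grid@paw-racnode1 ~]$ ocrconfig -showbackup
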
    Thank you for reading… This is Airy…Enjoy Learning:)

#rac, #voting-disk

Oracle Cluster Registry (OCR) in Oracle 11gR2-RAC- Airy’s Notes:

  1. Oracle cluster Registry (OCR) is the central repository for CRS, which maintains the metadata, configuration and state information of all cluster resources defined in clusterware and cluster database.
  2. OCR is the repository of configuration information for the cluster that manages information like, the cluster node list and cluster database instance-to-node mapping information and CRS application resource profile.
  3. It is a cluster registry used to maintain application resources and their availability within the RAC environment. It also stores configuration information for CRS daemons and clusterware managed applications.
  4. This configuration information is used by many of the processes that make up the CRS, as well as other cluster-aware applications which use this repository to share information among them.
  5. OCR also maintains dependency and status information for application resources defined within CRS, specifically databases, instances, services and node applications.
  6. The OCR uses a file-based repository to store configuration information in a series of key-value pairs, using a directory tree-like structure.
  7. The OCR must reside on a shared disk(s) that is accessible by all of the nodes in the cluster.
  8. Starting with Oracle Clusterware 10g R2, we are allowed to multiplex the OCR and Oracle recommends that you use this feature to ensure cluster high availability.
  9. Oracle Clusterware allows for a maximum of 5 OCR locations; one is the primary and the others are OCR mirrors.
  10. If you define a single OCR, then you should use external mirroring to provide redundancy.
  11. The OCR can be replaced online. You can replace a failed OCR online, and you can update the OCR through supported APIs such as Enterprise Manager, the Server Control Utility (SRVCTL), or the Database Configuration Assistant (DBCA).
  12. It is highly recommended to take a backup of OCR file before making any changes.
  13. To view the contents of the OCR in a human-readable format, we have to run the ocrdump utility. This will dump the contents of the OCR into an ASCII text file named OCRDUMPFILE in the current directory (see the example after this list).
  14. The name of the configuration file is ocr.loc and the configuration file variable is ocrconfig.loc.
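For example, the OCR can be checked and dumped with the standard utilities from the Grid home (a small illustrative sketch; a full integrity check with ocrcheck is normally run as root, and the output will differ in your environment):

[root@paw-racnode1 bin]# ocrcheck

[root@paw-racnode1 bin]# ocrdump

[root@paw-racnode1 bin]# ls -l OCRDUMPFILE

ocrcheck reports the OCR version, the space used, the configured OCR locations and an integrity check, while ocrdump writes the key-value contents of the OCR into the text file OCRDUMPFILE in the current directory.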

What Information Exists in the OCR?

Some of the main components included in the OCR are:

  1. Node membership information.
  2. The cluster node list and instance-to-node mapping information.
  3. Starting with Oracle 11g, the OCR also contains information about the location of the voting disk.
  4. Server, network, RAC database, instance, node and listener UP/DOWN status and other mapping information.
  5. ASM instance and disk group information.
  6. Cluster and application resource profile information (such as RAC database, listener, instance, VIP, SCAN IP, server pool, services, etc.).
  7. Database Service characteristics like preferred and available nodes.
  8. TAF Policy, Load Balancing information.
  9. Details of the network interfaces held by the cluster network.
  10. Information about processes that Oracle Clusterware controls.
  11. Information about any third-party applications controlled by CRS.
  12. Information about OCR Backups.
  13. Software active version.
  14. OCR also maintains information regarding Dependencies, Management policy (automatic/manual), Callout Scripts and Retries.

Which Processes and Utilities Update the OCR?

  1. CSSd, at the time of cluster setup, to update the status of the servers.
  2. CSS, during node addition and node deletion.
  3. CRSd, to record the status of nodes during failures and reconfiguration.
  4. Utilities OUI, SRVCTL, CRSCTL, OEM, NETCA, DBCA, DBUA, ASMCA.

How is the OCR Updated?

  1. Oracle uses a distributed shared cache architecture during cluster management to optimize queries against the cluster repository.
  2. For better performance, each node in the cluster maintains an in-memory copy of the OCR, along with an OCR process that accesses its OCR cache.
  3. Oracle Clusterware uses a background process to access the OCR cache.
  4. The CRSd process is responsible for reading and writing to the OCR files as well as refreshing the local OCR cache and the caches on the other nodes in the cluster.
  5. Only one CRSd process (designated as the master) in the cluster performs any disk read/write activity.
  6. Once any new information is read by the master CRSd process, it performs a refresh of the local OCR cache and the OCR cache on other nodes in the cluster.
  7. Since the OCR cache is distributed across all the nodes in the cluster, when an OCR client application needs to update the OCR it communicates, through its local OCR process and the local CRSd process, with the CRSd process that performs the input/output (I/O) to the physical OCR binary file on disk.
  8. The OCR client applications are Oracle Universal Installer (OUI), SRVCTL, Enterprise Manger (EM), Database Configuration Assistant (DBCA), Database Upgrade Assistant(DBUA), NetCA and Virtual Internet Protocol Configuration assistant (VIPCA).

Where is the OCR Stored?

  1. We can find the location of the OCR in a file on each individual node of the cluster. This location varies by platform, but on Linux the location of the OCR is stored in the file /etc/oracle/ocr.loc (a sample is shown after this list).
  2. The OCR must reside on a shared disk(s) that is accessible by all of the nodes in the cluster.
  3. Prior to Oracle 11g R2, we had to create the Oracle Cluster Registry (OCR) on raw devices. In Oracle 11g R2 raw devices have been deprecated, so now we can choose between a cluster file system and an ASM disk group.
  4. The OCR and voting disk must be on a shared device so putting it on local file system is not going to work.
  5. Now, if we have to choose between a cluster file system and an ASM disk group for keeping the OCR: clustered file systems may not be an option due to high cost, and other options such as network file systems (NFS) are usually slow and unreliable. So ASM remains the best choice.
  6. The OCR and voting disks can be on any available ASM disk group, so there is no need to create disk groups exclusively for them.
  7. The OCR is striped and mirrored (if we have a redundancy other than external), similar to ordinary database files. So we can now leverage the mirroring capabilities of ASM to mirror the OCR as well, without having to use multiple raw devices for that purpose alone.
  8. The OCR is replicated across all the underlying disks of the disk group, so the failure of a disk does not cause the failure of the disk group.
  9. Considering the criticality of the OCR contents to the cluster functionality, Oracle strongly recommends us to multiplex the OCR file. In Oracle 11g R2, we can have up to five OCR copies.
  10. Due to its shared location, all the components running on all nodes and instances of Oracle can be administered from a single place, irrespective of the node on which the registry was created.
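As a reference, ocr.loc is just a tiny key-value file. A sample of what it might contain on an 11g R2 system (the diskgroup name below is only illustrative):

[root@paw-racnode1 ~]# cat /etc/oracle/ocr.loc

ocrconfig_loc=+DATA

local_only=FALSE

The ocrconfig_loc key points to the OCR location (an ASM diskgroup in 11g R2), and any additional mirrors appear as further ocrmirrorconfig_loc entries.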

What is the Need for the OCR?

  1. Oracle Clusterware reads the ocr.loc file for the location of the registry and to determine which application resources need to be started and the nodes on which to start them.
  2. It is used to bootstrap the CSS for port info, nodes in the cluster and similar info.
  3. The function of CRSd, the Oracle Clusterware daemon, is to define and manage the resources managed by Clusterware. Resources have profiles that define metadata about them, and this metadata is stored in the OCR. The CRS reads the OCR and manages and implements the following:
  •   Manages the application resources: starts, stops, monitors and manages their failover.
  •   Maintains and tracks information pertaining to the definition, availability and current state of the services.
  •   Implements the workload balancing and continuous availability features of services.
  •   Generates events during cluster state changes.
  •   Maintains configuration profiles of resources in the OCR.
  •   Records the currently known state of the cluster on a regular basis and provides the same when queried (using srvctl, crsctl, etc.).

How is the Information Stored in the OCR?

  1. The OCR uses a file-based repository to store configuration information in a series of key-value pairs, using a directory tree-like structure.
  2. It contains information pertaining to all tiers of the clustered database.
  3. Various parameters are stored as name-value pairs used and maintained at different levels of the architecture.
  4. Each tier is managed and administered by daemon processes with the appropriate privileges to manage them. For example:
  5. All SYSTEM level resource or application definitions require root or superuser privileges to start, stop and execute resources defined at this level.
  6. Those defined at the DATABASE level require dba privileges to execute.

Backup of OCR:

  1. Starting from Oracle Clusterware 11g Release 2, Oracle backs up the OCR automatically every four hours, on a schedule that depends on when the node was started (not on clock time).
  2. The default location where OCR backups are made is the GRID_HOME/cdata/<cluster_name> directory on the node performing the backups, where <cluster_name> is the name of your cluster and GRID_HOME is the home directory of your Oracle Grid installation.
  3. One node, known as the master node, is dedicated to these backups, but if the master node is down some other node may become the master. Hence, backups could be spread across nodes due to outages.
  4. These backups are named as follows:
  •    4-hour backups   (3 max) : backup00.ocr, backup01.ocr, and backup02.ocr.
  •    Daily backups     (2 max) : day.ocr and day_.ocr
  •    Weekly backups (2 max) : week.ocr and week_.ocr
  5. Oracle Clusterware maintains the last three backups, overwriting the older backups. Thus, you will have three 4-hour backups: (i) the current one, (ii) one four hours old and (iii) one eight hours old. Therefore no additional clean-up tasks are required of the DBA.
  6. Oracle Clusterware also takes a backup at the end of the day; the last two of these backups are retained. Finally, at the end of each week Oracle performs another backup, and again the last two of these are retained. You should make sure that your routine file system backups include the OCR backup location.
  7. There is no way to customize the backup frequencies or the number of files that Oracle Grid Infrastructure retains while automatically backing up the OCR.
  8. Note that RMAN does not back up the OCR.
  9. You can use the ocrconfig command to view the current OCR backups:
[root@paw-racnode1 bin]# ocrconfig -showbackup
racnode1  2016/07/21 17:16:03 /u01/app/grid/product/11.2.0/grid/cdata/rac01-scan/backup00.ocr
racnode1  2016/07/21 17:16:03 /u01/app/grid/product/11.2.0/grid/cdata/rac01-scan/day.ocr
racnode1  2016/07/21 17:16:03 /u01/app/grid/product/11.2.0/grid/cdata/rac01-scan/week.ocr
racnode1  2016/07/21 16:14:00 /u01/app/grid/product/11.2.0/grid/cdata/rac01-scan/backup_20160721_161400.ocr
racnode1  2016/07/21 15:59:15 /u01/app/grid/product/11.2.0/grid/cdata/rac01-scan/backup_20160721_155915.ocr
racnode1  2016/07/21 15:58:50 /u01/app/grid/product/11.2.0/grid/cdata/rac01-scan/backup_20160721_155850.ocr
racnode1  2016/07/21 15:58:39 /u01/app/grid/product/11.2.0/grid/cdata/rac01-scan/backup_20160721_155839.ocr

[root@paw-racnode1 app]# ocrconfig -showbackup auto
racnode1     2016/07/21 17:16:03     /u01/app/grid/product/11.2.0/grid/cdata/rac01-scan/backup00.ocr
racnode1     2016/07/21 17:16:03     /u01/app/grid/product/11.2.0/grid/cdata/rac01-scan/day.ocr
racnode1     2016/07/21 17:16:03     /u01/app/grid/product/11.2.0/grid/cdata/rac01-scan/week.ocr

[root@paw-racnode1 ~]# ocrconfig -showbackup manual
racnode1 2016/07/21 16:14:00  /u01/app/grid/product/11.2.0/grid/cdata/rac01-scan/backup_20160721_161400.ocr
racnode1 2016/07/21 15:59:15  /u01/app/grid/product/11.2.0/grid/cdata/rac01-scan/backup_20160721_155915.ocr
racnode1 2016/07/21 15:58:50  /u01/app/grid/product/11.2.0/grid/cdata/rac01-scan/backup_20160721_155850.ocr
racnode1 2016/07/21 15:58:39  /u01/app/grid/product/11.2.0/grid/cdata/rac01-scan/backup_20160721_155839.ocr
  10. It is recommended that OCR backups be placed in a shared location, which can be configured using ocrconfig -backuploc <new location>:
[root@paw-racnode1 named]# ocrconfig -backuploc /u01/app/ocr-backup
  11. If your cluster is shut down, the automatic backups will not occur (nor will the purging). The timer restarts from the beginning when the cluster is restarted, and a backup is not taken immediately at startup. Hence, if you are stopping and starting your cluster you could impact the OCR backups, and the backup interval could stretch well beyond 4 hours.
  12. If you feel that you need to back up the OCR immediately (for example, you have made a number of cluster-related changes), then you can use the ocrconfig command to perform a manual backup:
[root@paw-racnode1 named]# ocrconfig -manualbackup
racnode1  2016/07/21 17:27:02 /u01/app/ocr-backup/backup_20160721_172702.ocr
racnode1  2016/07/21 16:14:00 /u01/app/grid/product/11.2.0/grid/cdata/rac01-scan/backup_20160721_161400.ocr
racnode1  2016/07/21 15:59:15 /u01/app/grid/product/11.2.0/grid/cdata/rac01-scan/backup_20160721_155915.ocr
racnode1  2016/07/21 15:58:50 /u01/app/grid/product/11.2.0/grid/cdata/rac01-scan/backup_20160721_155850.ocr
racnode1  2016/07/21 15:58:39 /u01/app/grid/product/11.2.0/grid/cdata/rac01-scan/backup_20160721_155839.ocr
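Should you ever need to restore the OCR from one of these backups, the same ocrconfig utility is used. The following is only a sketch of the flow (the clusterware stack must be stopped on every node first; the backup file name below is simply the manual backup taken above):

[root@paw-racnode1 ~]# crsctl stop crs          # as root, on every node

[root@paw-racnode1 ~]# ocrconfig -restore /u01/app/ocr-backup/backup_20160721_172702.ocr

[root@paw-racnode1 ~]# crsctl start crs         # then restart the stack on every node

After the stack is back up, verify the result with ocrcheck before handing the cluster back to users.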

 

Thank you for reading… This is Airy…Enjoy Learning:)

 

#ocr, #rac

Four Linux Machine Creation (RHEL5.7-64Bit)- Airy’s Notes

Configuring Four Red Hat Enterprise Linux 5.7 - 64 Bit Machines: 

( 1 For Storage + DNS and 3 as Node Machines):
  
1: Machine configured for Storage and DNS ( paw-racstorage-dns ):

My Machine's Configuration:

Hard disk                : 100GB

RAM                      : 2GB

Mount Point Boot         : 100MB

Mount Point Tmp          : 6GB

Mount Point Swap         : 4GB (preferably double your machine's RAM, but not more than 16GB)

Mount Point /u01         : 18GB

Mount Point /u02         : 44 GB ( In this mount point we will create storage disks)

Mount Point /            : 28GB
 
[root@paw-racstorage-dns ~]# df -h

Filesystem            Size  Used Avail Use% Mounted on

/dev/sda7              28G  5.6G   21G  22% /

/dev/sda5             5.9G  141M  5.4G   3% /tmp

/dev/sda3              18G  3.5G   14G  21% /u01

/dev/sda2              44G   42G     0 100% /u02

/dev/sda1              99M   12M   82M  13% /boot

tmpfs                 1.8G     0  1.8G   0% /dev/shm

/dev/sr0              3.6G  3.6G     0 100% /media/SAI-OS-5U7_64bit


2: For all Node Machines ( paw-racnode1, paw-racnode2, paw-racnode3):

My Machine's Configurations:
 
Hard disk                  : 80GB

RAM                        : 3GB

Mount Point Boot           : 100MB

Mount Point Tmp            : 6GB

Mount Point Swap           : 4GB (preferably double your machine's RAM, but not more than 16GB)

Mount Point /u01           : 25GB

Mount Point /u02           : 20GB

Mount Point /              : 25GB

 
Node1: ( paw-racnode1):
 

[root@paw-racnode1 ~]# df -h

Filesystem            Size  Used Avail Use% Mounted on

/dev/sda7              25G  5.7G   18G  25% /

/dev/sda5             5.9G  141M  5.4G   3% /tmp

/dev/sda3              20G  196M   19G   2% /u02

/dev/sda2              25G  173M   23G   1% /u01

/dev/sda1              99M   12M   82M  13% /boot

tmpfs                 1.8G     0  1.8G   0% /dev/shm

/dev/sr0              3.6G  3.6G     0 100% /media/SAI-OS-5U7_64bit


Node2: ( paw-racnode2):

[root@paw-racnode2 ~]# df -h

Filesystem            Size  Used Avail Use% Mounted on

/dev/sda7              25G  5.7G   18G  25% /

/dev/sda5             5.9G  141M  5.4G   3% /tmp

/dev/sda3              20G  196M   19G   2% /u02

/dev/sda2              25G  173M   23G   1% /u01

/dev/sda1              99M   12M   82M  13% /boot

tmpfs                 1.8G     0  1.8G   0% /dev/shm

/dev/sr0              3.6G  3.6G     0 100% /media/SAI-OS-5U7_64bit


Node3: ( paw-racnode3):

[root@paw-racnode3 ~]# df -h

Filesystem            Size  Used Avail Use% Mounted on

/dev/sda7              25G  5.7G   18G  25% /

/dev/sda5             5.9G  141M  5.4G   3% /tmp

/dev/sda3              20G  196M   19G   2% /u02

/dev/sda2              25G  173M   23G   1% /u01

/dev/sda1              99M   12M   82M  13% /boot

tmpfs                 1.8G     0  1.8G   0% /dev/shm

/dev/sr0              3.6G  3.6G     0 100% /media/SAI-OS-5U7_64bit

 

Thank you for reading….This is Airy..Enjoy Learning 🙂

 

 

#linux, #rac

RAC Installation Prerequisites

RAC Installation Prerequisites:

Perform the following activities on all the nodes (paw-racnode1, paw-racnode2 and paw-racnode3):

 [root@paw-racnode1 ~]# umount tmpfs

[root@paw-racnode1 ~]# mount -t tmpfs shmfs -o size=1800m /dev/shm

[root@paw-racnode1 ~]# vi /etc/fstab

 tmpfs                   /dev/shm                tmpfs   size=1800m      0 0

[root@paw-racnode1 ~]# df -h

Filesystem            Size  Used Avail Use% Mounted on

/dev/sda7              25G  5.7G   18G  25% /

/dev/sda5             5.9G  141M  5.4G   3% /tmp

/dev/sda3              20G  196M   19G   2% /u02

/dev/sda2              25G  173M   23G   1% /u01

/dev/sda1              99M   12M   82M  13% /boot

tmpfs                 1.8G     0  1.8G   0% /dev/shm

/dev/sr0              3.6G  3.6G     0 100% /media/SAI-OS-5U7_64bit

[root@paw-racnode1 ~]# vi /etc/sysctl.conf     (or simply append the entries using the echo commands below)

# My Entries

echo kernel.shmall = 2097152 >> /etc/sysctl.conf

echo kernel.shmmax = 1054504960 >> /etc/sysctl.conf

echo kernel.shmmni = 4096 >> /etc/sysctl.conf

echo # semaphores: semmsl, semmns, semopm, semmni >> /etc/sysctl.conf

echo kernel.sem = 250 32000 100 128 >> /etc/sysctl.conf

echo fs.file-max = 6815744 >> /etc/sysctl.conf

echo net.ipv4.ip_local_port_range = 9000 65500 >> /etc/sysctl.conf

echo net.core.rmem_default = 262144 >> /etc/sysctl.conf

echo net.core.rmem_max = 4194304 >> /etc/sysctl.conf

echo net.core.wmem_default = 262144 >> /etc/sysctl.conf

echo net.core.wmem_max = 1048576 >> /etc/sysctl.conf

echo fs.aio-max-nr = 1048576 >> /etc/sysctl.conf

[root@paw-racnode1 ~]# /sbin/sysctl -p

net.ipv4.ip_forward = 0

net.ipv4.conf.default.rp_filter = 1

net.ipv4.conf.default.accept_source_route = 0

kernel.sysrq = 0

kernel.core_uses_pid = 1

net.ipv4.tcp_syncookies = 1

kernel.msgmnb = 65536

kernel.msgmax = 65536

kernel.shmmax = 68719476736

kernel.shmall = 4294967296

kernel.shmall = 4294967296

fs.aio-max-nr = 1048576

fs.file-max = 6815744

kernel.shmall = 2097152

kernel.shmmax = 1054504960

kernel.shmmni = 4096

kernel.sem = 250 32000 100 128

net.ipv4.ip_local_port_range = 9000 65500

net.core.rmem_default = 262144

net.core.rmem_max = 4194304

net.core.wmem_default = 262144

net.core.wmem_max = 1048576

[root@paw-racnode1 ~]# vi /etc/security/limits.conf

 # My Entries

oracle soft nproc 2047

oracle hard nproc 16384

oracle soft nofile 1024

oracle hard nofile 65536

“OR”

echo oracle soft nproc 2047 >> /etc/security/limits.conf

echo oracle hard nproc 16384 >> /etc/security/limits.conf

echo oracle soft nofile 1024 >> /etc/security/limits.conf

echo oracle hard nofile 65536 >> /etc/security/limits.conf

echo grid hard nproc 16384 >> /etc/security/limits.conf

echo grid hard nofile 65536 >> /etc/security/limits.conf

 [root@paw-racnode1 ~]# vi /etc/pam.d/login

 session    required     pam_limits.so

“OR”

echo session    required     /lib/security/pam_limits.so >> /etc/pam.d/login

————————————————————————

SELinux should be disabled:

 [root@paw-racnode1 ~]# vi /etc/selinux/config

# This file controls the state of SELinux on the system.

# SELINUX= can take one of these three values:

#       enforcing - SELinux security policy is enforced.

#       permissive - SELinux prints warnings instead of enforcing.

#       disabled - SELinux is fully disabled.

SELINUX=disabled

# SELINUXTYPE= type of policy in use. Possible values are:

#       targeted - Only targeted network daemons are protected.

#       strict - Full SELinux protection.

SELINUXTYPE=targeted

————————————————————————

Now add the following O/S user groups:

  [root@paw-racnode1 ~]#

      groupadd -g 501 oinstall

      groupadd -g 502 dba

      groupadd -g 503 oper

      groupadd -g 504 asmadmin

      groupadd -g 505 asmdba

      groupadd -g 506 asmoper

And create two users, 1: oracle and 2: grid:

      useradd -g oinstall  -G dba,oper,asmdba oracle

      useradd -g oinstall  -G asmadmin,asmdba,asmoper,dba grid

Change the password of both the users:

      passwd oracle

      Changing password for user oracle.

      New UNIX password:oracle

      BAD PASSWORD: it is based on a dictionary word

      Retype new UNIX password:oracle

      passwd: all authentication tokens updated successfully.

      passwd grid

      Changing password for user grid.

      New UNIX password:grid

      BAD PASSWORD: it is based on a dictionary word

      Retype new UNIX password:grid

      passwd: all authentication tokens updated successfully.
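A quick sanity check (optional) to confirm the group memberships of the two users before moving on:

[root@paw-racnode1 ~]# id oracle

[root@paw-racnode1 ~]# id grid

oracle should show oinstall as its primary group with dba, oper and asmdba as secondary groups, and grid should show oinstall with asmadmin, asmdba, asmoper and dba.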

Now create the required directories for the Grid and Oracle installations and set their ownership and permissions:

      mkdir -p /u01/app/11.2.0/grid  #GRID_HOME

      mkdir -p /u01/app/oracle/product/11.2.0/db_1  # ORACLE_HOME

      mkdir -p /u01/app/oraInventory

      mkdir -p /u01/app/grid

      mkdir -p /u01/app/oracle

      mkdir -p /u02/software

      mkdir -p /u02/RPM

      chown -R grid:oinstall /u01/app

      chown -R grid:oinstall /u01/app/11.2.0/grid

      chown -R grid:oinstall /u01/app/oraInventory

      chown -R oracle:oinstall /u01/app/oracle

      chown -R oracle:oinstall /u01/app/oracle/product/11.2.0/db_1

      chmod -R 775 /u01/app/oracle/product/11.2.0/db_1

      chmod -R 775 /u01/app/grid

      chmod -R 775 /u01/app/oracle

     

      chmod 777 /u02

      chmod 777 /u02/software

      chmod 777 /u02/RPM

 [root@paw-racnode1 ~]# vi /home/oracle/.bash_profile

TMP=/tmp; export TMP

TMPDIR=$TMP; export TMPDIR

ORACLE_HOSTNAME=paw-racnode1.airydba.com; export ORACLE_HOSTNAME

ORACLE_UNQNAME=racdb; export ORACLE_UNQNAME

ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE

GRID_BASE=/u01/app/grid; export GRID_BASE

ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1; export ORACLE_HOME

GRID_HOME=/u01/app/11.2.0/grid; export GRID_HOME

ORACLE_SID=racdb1; export ORACLE_SID

ORACLE_TERM=xterm; export ORACLE_TERM

BASE_PATH=/usr/sbin:$PATH; export BASE_PATH

PATH=$ORACLE_HOME/bin:$BASE_PATH:$GRID_HOME/bin; export PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH

CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib;

export CLASSPATH

if [ $USER = "oracle" ]; then

if [ $SHELL = "/bin/ksh" ]; then

ulimit -p 16384

ulimit -n 65536

else

ulimit -u 16384 -n 65536

fi

fi

alias grid_env='. /home/oracle/grid_env'

alias db_env='. /home/oracle/db_env'

[root@paw-racnode1 ~]# vi /home/grid/.bash_profile

TMP=/tmp; export TMP

TMPDIR=$TMP; export TMPDIR

ORACLE_HOSTNAME=paw-racnode1.airydba.com; export ORACLE_HOSTNAME

ORACLE_UNQNAME=racdb; export ORACLE_UNQNAME

ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE

GRID_BASE=/u01/app/grid; export GRID_BASE

ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1; export ORACLE_HOME

GRID_HOME=/u01/app/11.2.0/grid; export GRID_HOME

ORACLE_SID=+ASM1; export ORACLE_SID

ORACLE_TERM=xterm; export ORACLE_TERM

BASE_PATH=/usr/sbin:$PATH; export BASE_PATH

PATH=$ORACLE_HOME/bin:$BASE_PATH:$GRID_HOME/bin; export PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH

CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib;

export CLASSPATH

if [ $USER = "oracle" ]; then

if [ $SHELL = "/bin/ksh" ]; then

ulimit -p 16384

ulimit -n 65536

else

ulimit -u 16384 -n 65536

fi

fi

alias grid_env='. /home/oracle/grid_env'

alias db_env='. /home/oracle/db_env'
[root@paw-racnode1 ~]#  chmod 777 /home/oracle/.bash_profile 

[root@paw-racnode1 ~]#  . /home/oracle/.bash_profile

[root@paw-racnode1 ~]#  echo $ORACLE_HOME

[root@paw-racnode1 ~]#  echo $GRID_HOME

--------------------------------------------------------------------

[root@paw-racnode1 ~]#  chmod 777 /home/grid/.bash_profile

[root@paw-racnode1 ~]#  . /home/grid/.bash_profile 

[root@paw-racnode1 ~]#  echo $ORACLE_HOME

[root@paw-racnode1 ~]#  echo $GRID_HOME

 ———————————————————————–

[root@paw-racnode1 ~]# vi /home/oracle/db_env

ORACLE_SID=racdb1; export ORACLE_SID

ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1; export ORACLE_HOME

PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH

CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib;

export CLASSPATH

[root@paw-racnode1 ~]#  cp /home/oracle/db_env /home/grid/db_env

[root@paw-racnode1 ~]#  chmod 777 /home/oracle/db_env

[root@paw-racnode1 ~]#  chmod 777 /home/grid/db_env

————————————————————————

[root@paw-racnode1 ~]# vi /home/oracle/grid_env

ORACLE_SID=+ASM1; export ORACLE_SID

ORACLE_HOME=/u01/app/11.2.0/grid; export ORACLE_HOME

GRID_HOME=$ORACLE_HOME; export GRID_HOME

PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH

CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib;

export CLASSPATH

 [root@paw-racnode1 ~]#  cp /home/oracle/grid_env /home/grid/grid_env

 [root@paw-racnode1 ~]#  chmod 777 /home/oracle/grid_env

 [root@paw-racnode1 ~]#  chmod 777 /home/grid/grid_env

Now copy the Oracle and ASM RPMs into the directory /u01/RPM on all the nodes and perform the RPM installations given below on all the nodes (paw-racnode1, paw-racnode2, paw-racnode3).

[root@paw-racnode1 ~]# cd /u01/RPM

 [root@paw-racnode1 RPM]# ll

total 24096

-rwxrw-rw- 1 root root 3016394 Sep 16  2011 binutils-2.17.50.0.6-6.0.1.el5.i386.rpm

-rwxrw-rw- 1 root root 5477834 Nov 15  2011 gcc-4.1.2-44.el5.i386.rpm

-rwxrw-rw- 1 root root 3593086 Nov 15  2011 gcc-c++-4.1.2-44.el5.i386.rpm

-rwxrw-rw- 1 root root 4558796 Nov 15  2011 glibc-2.5-24.i386.rpm

-rwxrw-rw- 1 root root   11345 Jan 24  2012 libaio-devel-0.3.106-3.2.i386.rpm

-rwxrw-rw- 1 root root   12094 Jan 24  2012 libaio-devel-0.3.106-5.i386.rpm

-rwxrw-rw- 1 root root   11884 Jan 24  2012 libaio-devel-0.3.106-5.x86_64.rpm

-rwxrw-rw- 1 root root   95764 Nov 15  2011 libgcc-4.1.2-44.el5.i386.rpm

-rwxrw-rw- 1 root root   68422 Nov 15  2011 libgomp-4.3.2-7.el5.i386.rpm

-rwxrw-rw- 1 root root  371622 Nov 15  2011 libstdc++-4.1.2-44.el5.i386.rpm

-rwxrw-rw- 1 root root 2999656 Nov 15  2011 libstdc++-devel-4.1.2-44.el5.i386.rpm

-rwxrw-rw- 1 root root   23102 Sep 17  2011 libXp-1.0.0-8.1.el5.i386.rpm

-rwxrw-rw- 1 root root 1079629 Sep 17  2011 openmotif21-2.1.30-11.EL5.i386.rpm

-rwxrw-rw- 1 root root 1013605 Sep 16  2011 openmotif21-2.1.30-11.RHEL4.2.i386.rpm

-rwxrw-rw- 1 root root  130018 Nov 14  2011 oracleasm-2.6.18-194.el5-2.0.5-1.el5.i686.rpm

-rwxrw-rw- 1 root root  137329 Jan 24  2012 oracleasm-2.6.18-274.el5-2.0.5-1.el5.x86_64.rpm

-rwxrw-rw- 1 root root   13929 Nov 14  2011 oracleasmlib-2.0.4-1.el5.i386.rpm

-rwxrw-rw- 1 root root   85687 Nov 14  2011 oracleasm-support-2.1.7-1.el5.i386.rpm

-rwxrw-rw- 1 root root  173457 Nov 15  2011 sysstat-7.0.2-3.el5.i386.rpm

-rwxrw-rw- 1 root root  868885 Nov 14  2011 unixODBC-devel-2.2.11-7.1.i386.rpm

-rwxrw-rw- 1 root root  814677 Jan 24  2012 unixODBC-devel-2.2.11-7.1.x86_64.rpm

[root@paw-racnode1 RPM]# rpm -ivh oracleasm-support-2.1.7-1.el5.i386.rpm

warning: oracleasm-support-2.1.7-1.el5.i386.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159

Preparing...                ########################################### [100%]

   1:oracleasm-support      ########################################### [100%]

[root@paw-racnode1 RPM]# rpm -ivh oracleasm-2.6.18-274.el5-2.0.5-1.el5.x86_64.rpm

warning: oracleasm-2.6.18-274.el5-2.0.5-1.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159

Preparing...                ########################################### [100%]

   1:oracleasm-2.6.18-274.el########################################### [100%]

[root@paw-racnode1 RPM]# rpm -ivh oracleasmlib-2.0.4-1.el5.i386.rpm

warning: oracleasmlib-2.0.4-1.el5.i386.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159

Preparing...                ########################################### [100%]

   1:oracleasmlib           ########################################### [100%]

[root@paw-racnode1 RPM]# rpm -ivh unixODBC-devel-2.2.11-7.1.i386.rpm

warning: unixODBC-devel-2.2.11-7.1.i386.rpm: Header V3 DSA signature: NOKEY, key ID 652e84dc

Preparing...                ########################################### [100%]

   1:unixODBC-devel         ########################################### [100%]

 [root@paw-racnode1 RPM]# rpm -ivh unixODBC-devel-2.2.11-7.1.x86_64.rpm

warning: unixODBC-devel-2.2.11-7.1.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID e8562897

Preparing...                ########################################### [100%]

   1:unixODBC-devel         ########################################### [100%]

[root@paw-racnode1 RPM]# rpm -ivh libaio-devel-0.3.106-5.i386.rpm

warning: libaio-devel-0.3.106-5.i386.rpm: Header V3 DSA signature: NOKEY, key ID 652e84dc

Preparing...                ########################################### [100%]

   1:libaio-devel           ########################################### [100%]

[root@paw-racnode1 RPM]# rpm -ivh libaio-devel-0.3.106-5.x86_64.rpm

warning: libaio-devel-0.3.106-5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 652e84dc

Preparing...                ########################################### [100%]

   1:libaio-devel           ########################################### [100%]
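To cross-check that the ASM-related packages landed on each node, a simple query like the following can be used (run it on every node; the versions reported will match the RPMs installed above):

[root@paw-racnode1 RPM]# rpm -qa | grep -i oracleasm

The three oracleasm packages installed above (oracleasm-support, the oracleasm kernel driver and oracleasmlib) should all be listed before proceeding.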

————————————————————————–

Modify /etc/sysconfig/ntpd by adding "-x" to the "OPTIONS" line on all nodes (the -x flag makes ntpd slew the clock instead of stepping it, which Oracle Clusterware expects).

 [root@paw-racnode1 RPM]#cat /etc/sysconfig/ntpd

# Drop root to id 'ntp:ntp' by default.

OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"

# Set to 'yes' to sync hw clock after successful ntpdate

SYNC_HWCLOCK=no

# Additional options for ntpdate

NTPDATE_OPTIONS=""

[root@paw-racnode1 RPM]# service ntpd start
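It is also worth making sure ntpd comes back with the "-x" option after a reboot; this step is not part of the original list, but on RHEL 5 the usual way is simply:

[root@paw-racnode1 RPM]# chkconfig ntpd on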

 

Thank you for reading… This is Airy…Enjoy Learning:)

 

#rac, #rac-installation

Network configuration for Oracle 11g R2 – RAC using Host File – Airy’s Notes

IP Addresses and other settings using Host Files:

Storage Server Details:

paw-racstorage:

eth0 : 192.168.75.10    ---- For Public Network

gateway :

subnet : 255.255.255.0

hostname: paw-racstorage

Primary DNS :

DNS Search Path : airydba

3 Nodes Details:

Node1:

paw-racnode1:

eth0: 192.168.75.11     ---- For Public Network

eth1: 10.0.0.1          ---- For Private Network

gateway :

subnet : 255.255.255.0

hostname: paw-racnode1

Primary DNS :

DNS Search Path : airydba

Node2:

paw-racnode2:

eth0: 192.168.75.12     ---- For Public Network

eth1: 10.0.0.2          ---- For Private Network

gateway :

subnet : 255.255.255.0

hostname: paw-racnode2

Primary DNS :

DNS Search Path : airydba

Node3:

paw-racnode3

eth0: 192.168.75.13     ---- For Public Network

eth1: 10.0.0.3          ---- For Private Network

gateway :

subnet : 255.255.255.0

hostname: paw-racnode3

Primary DNS :

DNS Search Path : airydba

————————————————————————–

VIP:

192.168.75.17

192.168.75.18

192.168.75.19

Scan IP:

192.168.75.20

————————————————————————–

Update  /etc/hosts file on all 3 nodes :

[root@paw-racnode1 ~]# vi /etc/hosts

# Do not remove the following line, or various programs

# that require network functionality will fail.

127.0.0.1         localhost.localdomain localhost

::1         localhost6.airydba6 localhost6

#Public IPs

192.168.75.11 racnode1.airydba racnode1

192.168.75.12 racnode2.airydba racnode2

192.168.75.13 racnode3.airydba racnode3

# Private IPs

10.0.0.1 racnode1-priv.airydba racnode1-priv

10.0.0.2 racnode2-priv.airydba racnode2-priv

10.0.0.3 racnode3-priv.airydba racnode3-priv

#Virtual IPs

192.168.75.17 racnode1-vip.airydba racnode1-vip

192.168.75.18 racnode2-vip.airydba racnode2-vip

192.168.75.19 racnode3-vip.airydba racnode3-vip

#SCAN IP

192.168.75.20 racnode-scan.airydba racnode-scan

Now restart all the nodes and check the connectivity by pinging each other.
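A short connectivity sketch from node 1 (repeat in each direction from every node; the host names are the ones defined in /etc/hosts above):

[root@paw-racnode1 ~]# ping -c 3 racnode2

[root@paw-racnode1 ~]# ping -c 3 racnode3

[root@paw-racnode1 ~]# ping -c 3 racnode2-priv

[root@paw-racnode1 ~]# ping -c 3 racnode3-priv

The VIPs and the SCAN IP will not respond yet; they are only brought online later by Oracle Clusterware.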

 

Thank you for reading… This is Airy…Enjoy Learning:)

#dns, #linux, #public, #rac

Network Configuration For Oracle-RAC in RHEL

There are the following ways to define networking for Oracle 11g R2 – RAC:

  1. Network configuration for Oracle 11g R2 – RAC using Host File

  2. Network configuration for Oracle 11g R2 – RAC using DNS Server

    I have used a DNS server to configure my Oracle RAC machines.

 

Thank you for reading… This is Airy…Enjoy Learning:)

 

#dns, #host-file, #networking, #rac

Shared Storage Creation for RAC- Airy’s Notes:

Shared Storage Creation for Oracle 11g R2-RAC Installation:

Now copy the ClusterStorage directory from the Linux O/S CD to the location /u01 on your storage machine.

[root@paw-racstorage-dns ~]# cd /u01/ClusterStorage/

[root@paw-racstorage-dns ClusterStorage]# pwd

/u01/ClusterStorage

[root@paw-racstorage-dns ClusterStorage]# rpm -ivh perl-Config-General-2.40-1.el5.noarch.rpm

warning: perl-Config-General-2.40-1.el5.noarch.rpm: Header V3 DSA signature: NOKEY, key ID 37017186

Preparing...                ########################################### [100%]

   1:perl-Config-General    ########################################### [100%]

[root@paw-racstorage-dns ClusterStorage]# rpm -ivh scsi-target-utils-1.0.14-1.el5.x86_64.rpm

warning: scsi-target-utils-1.0.14-1.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 37017186

Preparing...                ########################################### [100%]

   1:scsi-target-utils      ########################################### [100%]

[root@paw-racstorage-dns ClusterStorage]# cd /u02

[root@paw-racstorage-dns u02]# mkdir disks

[root@paw-racstorage-dns u02]# chmod 777 disks/

[root@paw-racstorage-dns u02]# for i in 1 2 3 4 5 6 7 8 9 10; do dd if=/dev/zero of=/u02/disks/disk$i.dat bs=1M count=2048; done

2048+0 records in

2048+0 records out

2147483648 bytes (2.1 GB) copied, 2.51422 seconds, 854 MB/s

2048+0 records in

2048+0 records out

2147483648 bytes (2.1 GB) copied, 2.55864 seconds, 839 MB/s

2048+0 records in

2048+0 records out

2147483648 bytes (2.1 GB) copied, 2.82151 seconds, 761 MB/s

2048+0 records in

2048+0 records out

2147483648 bytes (2.1 GB) copied, 2.85926 seconds, 751 MB/s

2048+0 records in

2048+0 records out

2147483648 bytes (2.1 GB) copied, 2.92506 seconds, 734 MB/s

2048+0 records in

2048+0 records out

2147483648 bytes (2.1 GB) copied, 3.25308 seconds, 660 MB/s

2048+0 records in

2048+0 records out

2147483648 bytes (2.1 GB) copied, 2.88697 seconds, 744 MB/s

2048+0 records in

2048+0 records out

2147483648 bytes (2.1 GB) copied, 3.47474 seconds, 618 MB/s

2048+0 records in

2048+0 records out

2147483648 bytes (2.1 GB) copied, 3.15013 seconds, 682 MB/s

2048+0 records in

2048+0 records out

2147483648 bytes (2.1 GB) copied, 4.00997 seconds, 536 MB/s

2048+0 records in

2048+0 records out

2147483648 bytes (2.1 GB) copied, 3.58843 seconds, 598 MB/s

2048+0 records in

2048+0 records out

2147483648 bytes (2.1 GB) copied, 3.19352 seconds, 672 MB/s

[root@paw-racstorage-dns u02]# chkconfig --level 345 tgtd on

 [root@paw-racstorage-dns u02]# vi /etc/rc.d/rc.local

# Create a target

tgtadm --lld iscsi --op new --mode target --tid 1 -T pawanm-san

# Create LUNs within the target

for i in 1 2 3 4 5 6 7 8 9 10

do

tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun $i -b /u02/disks/disk$i.dat

done

# Expose target to all

tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL

[root@paw-racstorage-dns ~]# init 6

[root@paw-racstorage-dns ~]# tgtadm --lld iscsi --op show --mode target

Target 1: pawanm-san

    System information:

        Driver: iscsi

        State: ready

    I_T nexus information:

    LUN information:

        LUN: 0

            Type: controller

            SCSI ID: IET     00010000

            SCSI SN: beaf10

            Size: 0 MB, Block size: 1

            Online: Yes

            Removable media: No

            Readonly: No

            Backing store type: null

            Backing store path: None

            Backing store flags:

        LUN: 1

            Type: disk

            SCSI ID: IET     00010001

            SCSI SN: beaf11

            Size: 2147 MB, Block size: 512

            Online: Yes

            Removable media: No

            Readonly: No

            Backing store type: rdwr

            Backing store path: /u02/disks/disk1.dat

            Backing store flags:

        LUN: 2

            Type: disk

            SCSI ID: IET     00010002

            SCSI SN: beaf12

            Size: 2147 MB, Block size: 512

            Online: Yes

            Removable media: No

            Readonly: No

            Backing store type: rdwr

            Backing store path: /u02/disks/disk2.dat

            Backing store flags:

        LUN: 3

            Type: disk

            SCSI ID: IET     00010003

            SCSI SN: beaf13

            Size: 2147 MB, Block size: 512

            Online: Yes

            Removable media: No

            Readonly: No

            Backing store type: rdwr

            Backing store path: /u02/disks/disk3.dat

            Backing store flags:

        LUN: 4

            Type: disk

            SCSI ID: IET     00010004

            SCSI SN: beaf14

            Size: 2147 MB, Block size: 512

            Online: Yes

            Removable media: No

            Readonly: No

            Backing store type: rdwr

            Backing store path: /u02/disks/disk4.dat

            Backing store flags:

        LUN: 5

            Type: disk

            SCSI ID: IET     00010005

            SCSI SN: beaf15

            Size: 2147 MB, Block size: 512

            Online: Yes

            Removable media: No

            Readonly: No

            Backing store type: rdwr

            Backing store path: /u02/disks/disk5.dat

            Backing store flags:

        LUN: 6

            Type: disk

            SCSI ID: IET     00010006

            SCSI SN: beaf16

            Size: 2147 MB, Block size: 512

            Online: Yes

            Removable media: No

            Readonly: No

            Backing store type: rdwr

            Backing store path: /u02/disks/disk6.dat

            Backing store flags:

        LUN: 7

            Type: disk

            SCSI ID: IET     00010007

            SCSI SN: beaf17

            Size: 2147 MB, Block size: 512

            Online: Yes

            Removable media: No

            Readonly: No

            Backing store type: rdwr

            Backing store path: /u02/disks/disk7.dat

            Backing store flags:

        LUN: 8

            Type: disk

            SCSI ID: IET     00010008

            SCSI SN: beaf18

            Size: 2147 MB, Block size: 512

            Online: Yes

            Removable media: No

            Readonly: No

            Backing store type: rdwr

            Backing store path: /u02/disks/disk8.dat

            Backing store flags:

        LUN: 9

            Type: disk

            SCSI ID: IET     00010009

            SCSI SN: beaf19

            Size: 2147 MB, Block size: 512

            Online: Yes

            Removable media: No

            Readonly: No

            Backing store type: rdwr

            Backing store path: /u02/disks/disk9.dat

            Backing store flags:

        LUN: 10

            Type: disk

            SCSI ID: IET     0001000a

            SCSI SN: beaf110

            Size: 2147 MB, Block size: 512

            Online: Yes

            Removable media: No

            Readonly: No

            Backing store type: rdwr

            Backing store path: /u02/disks/disk10.dat

            Backing store flags:

    Account information:

    ACL information:

        ALL

[root@paw-racnode1 ~]# service iscsi status

iscsid (pid  3382) is running...

[root@paw-racnode2 ~]# service iscsi status

iscsid (pid  3386) is running...

[root@paw-racnode3 ~]# service iscsi status

iscsid (pid  3386) is running...

[root@paw-racnode1 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.75.10

192.168.75.10:3260,1 pawanm-san

[root@paw-racnode2 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.75.10

192.168.75.10:3260,1 pawanm-san

[root@paw-racnode3 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.75.10

192.168.75.10:3260,1 pawanm-san
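The reboots below simply let the iscsi service log in to the newly discovered target automatically at startup. If you prefer not to reboot, the login can also be triggered by hand on each node (a sketch; the LUNs then appear as /dev/sdb onwards, as the fdisk -l output further down shows):

[root@paw-racnode1 ~]# iscsiadm -m node --login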

[root@paw-racnode1 ~]# init 6                -----> Reboot the paw-racnode1 machine

[root@paw-racnode2 ~]# init 6                -----> Reboot the paw-racnode2 machine

[root@paw-racnode3 ~]# init 6                -----> Reboot the paw-racnode3 machine

[root@paw-racnode1 ~]# fdisk -l

Disk /dev/sda: 53.6 GB, 53687091200 bytes

255 heads, 63 sectors/track, 6527 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes


   Device Boot      Start         End      Blocks   Id  System

/dev/sda1   *           1          13      104391   83  Linux

/dev/sda2              14        1318    10482412+  83  Linux

/dev/sda3            1319        1840     4192965   83  Linux

/dev/sda4            1841        6527    37648327+   5  Extended

/dev/sda5            1841        5952    33029608+  83  Linux

/dev/sda6            5953        6527     4618656   82  Linux swap / Solaris


Disk /dev/sdb: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


Disk /dev/sdb doesn't contain a valid partition table


Disk /dev/sdc: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


Disk /dev/sdc doesn't contain a valid partition table


Disk /dev/sdd: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


Disk /dev/sdd doesn't contain a valid partition table


Disk /dev/sde: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


Disk /dev/sde doesn't contain a valid partition table


Disk /dev/sdf: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


Disk /dev/sdf doesn't contain a valid partition table


Disk /dev/sdg: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


Disk /dev/sdg doesn't contain a valid partition table


Disk /dev/sdh: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


Disk /dev/sdh doesn't contain a valid partition table


Disk /dev/sdi: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


Disk /dev/sdi doesn't contain a valid partition table


Disk /dev/sdj: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


Disk /dev/sdj doesn't contain a valid partition table


Disk /dev/sdk: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


Disk /dev/sdk doesn't contain a valid partition table

[root@paw-racnode2 ~]# fdisk -l

Disk /dev/sda: 53.6 GB, 53687091200 bytes

255 heads, 63 sectors/track, 6527 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot      Start         End      Blocks   Id  System

/dev/sda1   *           1          13      104391   83  Linux

/dev/sda2              14        1318    10482412+  83  Linux

/dev/sda3            1319        1840     4192965   83  Linux

/dev/sda4            1841        6527    37648327+   5  Extended

/dev/sda5            1841        5952    33029608+  83  Linux

/dev/sda6            5953        6527     4618656   82  Linux swap / Solaris


Disk /dev/sdb: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


Disk /dev/sdb doesn't contain a valid partition table


Disk /dev/sdc: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


Disk /dev/sdc doesn't contain a valid partition table


Disk /dev/sdd: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


Disk /dev/sdd doesn't contain a valid partition table


Disk /dev/sde: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


Disk /dev/sde doesn't contain a valid partition table


Disk /dev/sdf: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


Disk /dev/sdf doesn't contain a valid partition table


Disk /dev/sdg: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


Disk /dev/sdg doesn't contain a valid partition table


Disk /dev/sdh: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


Disk /dev/sdh doesn't contain a valid partition table


Disk /dev/sdi: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


Disk /dev/sdi doesn't contain a valid partition table


Disk /dev/sdj: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


Disk /dev/sdj doesn't contain a valid partition table

Disk /dev/sdk: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes

Disk /dev/sdk doesn't contain a valid partition table

[root@paw-racnode3 ~]# fdisk -l

 Disk /dev/sda: 53.6 GB, 53687091200 bytes

255 heads, 63 sectors/track, 6527 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot      Start         End      Blocks   Id  System

/dev/sda1   *           1          13      104391   83  Linux

/dev/sda2              14        1318    10482412+  83  Linux

/dev/sda3            1319        1840     4192965   83  Linux

/dev/sda4            1841        6527    37648327+   5  Extended

/dev/sda5            1841        5952    33029608+  83  Linux

/dev/sda6            5953        6527     4618656   82  Linux swap / Solaris

Disk /dev/sdb: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes




Disk /dev/sdb doesn't contain a valid partition table




Disk /dev/sdc: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes




Disk /dev/sdc doesn't contain a valid partition table




Disk /dev/sdd: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes




Disk /dev/sdd doesn't contain a valid partition table




Disk /dev/sde: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes




Disk /dev/sde doesn't contain a valid partition table




Disk /dev/sdf: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes




Disk /dev/sdf doesn't contain a valid partition table




Disk /dev/sdg: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes




Disk /dev/sdg doesn't contain a valid partition table




Disk /dev/sdh: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes




Disk /dev/sdh doesn't contain a valid partition table




Disk /dev/sdi: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes




Disk /dev/sdi doesn't contain a valid partition table




Disk /dev/sdj: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes




Disk /dev/sdj doesn't contain a valid partition table




Disk /dev/sdk: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes




Disk /dev/sdk doesn't contain a valid partition table

 

[root@paw-racnode1 ~]# chkconfig --level 345 iscsi on

[root@paw-racnode2 ~]# chkconfig --level 345 iscsi on

[root@paw-racnode3 ~]# chkconfig --level 345 iscsi on

 

Thank you for reading… This is Airy…..Enjoy Learning:)

 

#rac, #shared-storage

DNS Server Configuration For Oracle-RAC in RHEL 5/6/7- Airy’s notes

 

DNS Configuration:

IP Addresses and other settings:

DNS Server Details:

paw-racstorage-dns.airydba.com :

eth0 : 192.168.75.10    ---- For Public Network

gateway : 192.168.75.1

subnet : 255.255.255.0

hostname: paw-racstorage-dns.airydba.com

Primary DNS : 192.168.75.10

DNS Search Path : airydba.com

3 Nodes Details:

Node1:

paw-racnode1.airydba.com :

eth0: 192.168.75.11     ---- For Public Network

eth1: 10.0.0.1          ---- For Private Network

gateway : 192.168.75.1

subnet : 255.255.255.0

hostname: paw-racnode1.airydba.com

Primary DNS : 192.168.75.10

DNS Search Path : airydba.com

Node2:

paw-racnode2.airydba.com :

eth0: 192.168.75.12     ---- For Public Network

eth1: 10.0.0.2          ---- For Private Network

gateway : 192.168.75.1

subnet : 255.255.255.0

hostname: paw-racnode2.airydba.com

Primary DNS : 192.168.75.10

DNS Search Path : airydba.com

Node3:

paw-racnode3.airydba.com

eth0: 192.168.75.13     ---- For Public Network

eth1: 10.0.0.3          ---- For Private Network

gateway : 192.168.75.1

subnet : 255.255.255.0

hostname: paw-racnode3.airydba.com

Primary DNS : 192.168.75.10

DNS Search Path : airydba.com

Work to be performed on the machine on which DNS is to be configured:

[root@paw-racstorage-dns ~]# mkdir -p /u01/rhel5_rpms

[root@paw-racstorage-dns ~]# chmod 777 /u01/rhel5_rpms

[root@paw-racstorage-dns ~]# cp -ar /media/SAI-OS-5U7_64bit/Server/*.* /u01/rhel5_rpms

[root@paw-racstorage-dns ~]# cd /u01/rhel5_rpms
[root@paw-racstorage-dns rhel5_rpms]# rpm -ivh bind-chroot*

warning: bind-chroot-9.3.6-16.P1.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 37017186

Preparing...                ########################################### [100%]

   1:bind-chroot               ########################################### [100%]
 [root@paw-racstorage-dns rhel5_rpms]# rpm -ivh bind-libs*

warning: bind-libs-9.3.6-16.P1.el5.i386.rpm: Header V3 DSA signature: NOKEY, key ID 37017186

Preparing...                ########################################### [100%]

   1: bind-libs              ########################################### [100%]
[root@paw-racstorage-dns rhel5_rpms]# rpm -ivh ypbind-1.19-12.el5_6.1.x86_64.rpm

warning: ypbind-1.19-12.el5_6.1.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 37017186

Preparing...                ########################################### [100%]

    1: ypbind                ########################################### [100%]
[root@paw-racstorage-dns rhel5_rpms]# rpm -ivh bind-9.3.6-16.P1.el5.x86_64.rpm

warning: bind-9.3.6-16.P1.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 37017186

Preparing...                ########################################### [100%]

   1: bind-9.3.6           ########################################### [100%]
[root@paw-racstorage-dns rhel5_rpms]# rpm -ivh bind-utils-9.3.6-16.P1.el5.x86_64.rpm

warning: bind-utils-9.3.6-16.P1.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 37017186

Preparing...                ########################################### [100%]

   1: bind-utils            ########################################### [100%]
[root@paw-racstorage-dns rhel5_rpms]# rpm -ivh bind-sdb-9.3.6-16.P1.el5.x86_64.rpm


warning: bind-sdb-9.3.6-16.P1.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 37017186

Preparing...                ########################################### [100%]

   1:bind-sdb               ########################################### [100%]
[root@paw-racstorage-dns rhel5_rpms]# rpm -ivh bind-libbind-devel-9.3.6-16.P1.el5.i386.rpm

warning: bind-libbind-devel-9.3.6-16.P1.el5.i386.rpm: Header V3 DSA signature: NOKEY, key ID 37017186

Preparing...                ########################################### [100%]

   1:bind-libbind-devel     ########################################### [ 100%]
[root@paw-racstorage-dns rhel5_rpms]# rpm -ivh bind-devel-9.3.6-16.P1.el5.x86_64.rpm

warning: bind-devel-9.3.6-16.P1.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 37017186

Preparing...                ########################################### [100%]

   1:bind-devel             ########################################### [100%]
[root@paw-racstorage-dns rhel5_rpms]# rpm -ivh caching-nameserver-9.3.6-16.P1.el5.x86_64.rpm

warning: caching-nameserver-9.3.6-16.P1.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 37017186

Preparing...                ########################################### [100%]

   1:caching-nameserver     ########################################### [100%]
[root@paw-racstorage-dns ~]# rpm -qa bind*

bind-9.3.6-16.P1.el5

bind-utils-9.3.6-16.P1.el5

bind-devel-9.3.6-16.P1.el5

bind-sdb-9.3.6-16.P1.el5

bind-libbind-devel-9.3.6-16.P1.el5

bind-libs-9.3.6-16.P1.el5

bind-chroot-9.3.6-16.P1.el5
[root@paw-racstorage-dns ~]# rpm -qa system-config-bind*

system-config-bind-4.0.3-4.el5

[root@paw-racstorage-dns ~]# rpm -qa ypbind*

ypbind-1.19-12.el5_6.1

[root@paw-racstorage-dns ~]# rpm -qa caching-nameserver*

caching-nameserver-9.3.6-16.P1.el5
[root@paw-racstorage-dns ~]# cd /etc

[root@paw-racstorage-dns etc]# ll named*

lrwxrwxrwx 1 root named 52 May 11 12:21 named.caching-nameserver.conf -> /var/named/chroot//etc/named.caching-nameserver.conf

lrwxrwxrwx 1 root named 42 May 11 12:21 named.rfc1912.zones -> /var/named/chroot//etc/named.rfc1912.zones
[root@paw-racstorage-dns etc]# cp named.caching-nameserver.conf named.caching-nameserver.conf1

[root@paw-racstorage-dns etc]# mv named.caching-nameserver.conf named.conf

[root@paw-racstorage-dns etc]# ll named*

-rw-r----- 1 root root  1230 May 11 12:32 named.caching-nameserver.conf1

lrwxrwxrwx 1 root named   52 May 11 12:21 named.conf -> /var/named/chroot//etc/named.caching-nameserver.conf

lrwxrwxrwx 1 root named   42 May 11 12:21 named.rfc1912.zones -> /var/named/chroot//etc/named.rfc1912.zones

Original "/etc/named.conf" file:

[root@paw-racstorage-dns etc]# cat named.conf

//

// named.caching-nameserver.conf

//

// Provided by Red Hat caching-nameserver package to configure the

// ISC BIND named(8) DNS server as a caching only nameserver

// (as a localhost DNS resolver only).

//

// See /usr/share/doc/bind*/sample/ for example named configuration files.

//

// DO NOT EDIT THIS FILE - use system-config-bind or an editor

// to create named.conf - edits to this file will be lost on

// caching-nameserver package upgrade.

//

options {

        listen-on port 53 { 127.0.0.1; };

        listen-on-v6 port 53 { ::1; };

        directory       "/var/named";

        dump-file       "/var/named/data/cache_dump.db";

        statistics-file "/var/named/data/named_stats.txt";

        memstatistics-file "/var/named/data/named_mem_stats.txt";

        // Those options should be used carefully because they disable port

        // randomization

        // query-source    port 53;

        // query-source-v6 port 53;

        allow-query     { localhost; };

        allow-query-cache { localhost; };

};

logging {

        channel default_debug {

                file "data/named.run";

                severity dynamic;

        };

};

view localhost_resolver {

        match-clients      { localhost; };

        match-destinations { localhost; };

        recursion yes;

        include "/etc/named.rfc1912.zones";

};

 

Modified “/etc/named.conf” file :

[root@paw-racstorage-dns etc]# cat named.conf

//

// named.caching-nameserver.conf

//

// Provided by Red Hat caching-nameserver package to configure the

// ISC BIND named(8) DNS server as a caching only nameserver

// (as a localhost DNS resolver only).

//

// See /usr/share/doc/bind*/sample/ for example named configuration files.

//

// DO NOT EDIT THIS FILE - use system-config-bind or an editor

// to create named.conf - edits to this file will be lost on

// caching-nameserver package upgrade.

//

options {

        listen-on port 53 { 127.0.0.1; 192.168.75.10; };

        listen-on-v6 port 53 { ::1; };

        directory       "/var/named";

        dump-file       "/var/named/data/cache_dump.db";

        statistics-file "/var/named/data/named_stats.txt";

        memstatistics-file "/var/named/data/named_mem_stats.txt";

        // Those options should be used carefully because they disable port

        // randomization

        // query-source    port 53;

        // query-source-v6 port 53;


        allow-query     { localhost; any; };

        allow-query-cache { localhost; any; };

};

logging {

        channel default_debug {

                file "data/named.run";

                severity dynamic;

        };

};

view localhost_resolver {

        match-clients      { localhost; any; };

        match-destinations { localhost; any; };

        recursion yes;

        include "/etc/named.rfc1912.zones";

};
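Before restarting the name server it is worth validating the edited configuration; named-checkconf ships with the bind packages installed above (a quick sketch, where no output means the syntax is clean):

[root@paw-racstorage-dns etc]# named-checkconf /etc/named.conf

Any syntax error, such as a missing semicolon or brace, is reported with its line number.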

Original "/etc/named.rfc1912.zones":

[root@paw-racstorage-dns etc]# cat named.rfc1912.zones

// named.rfc1912.zones:

//

// Provided by Red Hat caching-nameserver package

//

// ISC BIND named zone configuration for zones recommended by

// RFC 1912 section 4.1 : localhost TLDs and address zones

//

// See /usr/share/doc/bind*/sample/ for example named configuration files.

//

zone "." IN {

        type hint;

        file "named.ca";

};

zone "localdomain" IN {

        type master;

        file "localdomain.zone";

        allow-update { none; };

};

zone "localhost" IN {

        type master;

        file "localhost.zone";

        allow-update { none; };

};

zone "0.0.127.in-addr.arpa" IN {

        type master;

        file "named.local";

        allow-update { none; };

};

zone "0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa" IN {

        type master;

        file "named.ip6.local";

        allow-update { none; };

};

zone "255.in-addr.arpa" IN {

        type master;

        file "named.broadcast";

        allow-update { none; };

};

zone "0.in-addr.arpa" IN {

        type master;

        file "named.zero";

        allow-update { none; };

};

Modified “/etc/named.rfc1912.zones”:

[root@paw-racstorage-dns etc]# cat /etc/named.rfc1912.zones

// named.rfc1912.zones:

//

// Provided by Red Hat caching-nameserver package

//

// ISC BIND named zone configuration for zones recommended by

// RFC 1912 section 4.1 : localhost TLDs and address zones

//

// See /usr/share/doc/bind*/sample/ for example named configuration files.

//

zone "." IN {

        type hint;

        file "named.ca";

};

zone "airydba.com" IN {

        type master;

        file "for.zone";

        allow-update { none; };

};

zone "localhost" IN {

        type master;

        file "localhost.zone";

        allow-update { none; };

};

zone "75.168.192.in-addr.arpa" IN {

        type master;

        file "rev.zone";

        allow-update { none; };

};

zone "0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa" IN {

        type master;

        file "named.ip6.local";

        allow-update { none; };

};

zone "255.in-addr.arpa" IN {

        type master;

        file "named.broadcast";

        allow-update { none; };

};

zone "0.in-addr.arpa" IN {

        type master;

        file "named.zero";

        allow-update { none; };

};
[root@paw-racstorage-dns etc]# cd /var/named/chroot/var/named/

[root@paw-racstorage-dns named]# ll

total 44

drwxrwx--- 2 named named 4096 Aug 26  2004 data

-rw-r----- 1 root  named  198 Dec  2  2010 localdomain.zone

-rw-r----- 1 root  named  195 Dec  2  2010 localhost.zone

-rw-r----- 1 root  named  427 Dec  2  2010 named.broadcast

-rw-r----- 1 root  named 1892 Dec  2  2010 named.ca

-rw-r----- 1 root  named  424 Dec  2  2010 named.ip6.local

-rw-r----- 1 root  named  426 Dec  2  2010 named.local

-rw-r----- 1 root  named  427 Dec  2  2010 named.zero

drwxrwx--- 2 named named 4096 Jul 27  2004 slaves


[root@paw-racstorage-dns named]# cp -a named.local for.zone

[root@paw-racstorage-dns named]# cp -a named.zero rev.zone

[root@paw-racstorage-dns named]# ll

total 52

drwxrwx--- 2 named named 4096 Aug 26  2004 data

-rw-r----- 1 root  named 1087 May 11 12:54 for.zone

-rw-r----- 1 root  named  198 Dec  2  2010 localdomain.zone

-rw-r----- 1 root  named  195 Dec  2  2010 localhost.zone

-rw-r----- 1 root  named  427 Dec  2  2010 named.broadcast

-rw-r----- 1 root  named 1892 Dec  2  2010 named.ca

-rw-r----- 1 root  named  424 Dec  2  2010 named.ip6.local

-rw-r----- 1 root  named  426 Dec  2  2010 named.local

-rw-r----- 1 root  named  427 Dec  2  2010 named.zero

-rw-r----- 1 root  named  953 May 11 12:59 rev.zone

drwxrwx--- 2 named named 4096 Jul 27  2004 slaves

 

Original “/var/named/chroot/var/named/for.zone” file:

[root@paw-racstorage-dns named]# cat for.zone

$TTL    86400

@       IN      SOA     localhost. root.localhost.  (

                                      1997022700 ; Serial

                                      28800      ; Refresh

                                      14400      ; Retry

                                      3600000    ; Expire

                                      86400 )    ; Minimum

        IN      NS      localhost.

1       IN      PTR     localhost.

Modified “/var/named/chroot/var/named/for.zone” file :

[root@paw-racstorage-dns named]# cat for.zone

$TTL    86400

@       IN      SOA     paw-racstorage-dns.airydba.com. root.paw-racstorage-dns.airydba.com.  (

                                      1997022700 ; Serial

                                      28800      ; Refresh

                                      14400      ; Retry

                                      3600000    ; Expire

                                      86400 )    ; Minimum

                        IN      NS      paw-racstorage-dns.airydba.com.

airydba.com.            IN      A       192.168.75.10

paw-racstorage-dns      IN      A       192.168.75.10

paw-racnode1            IN      A       192.168.75.11

paw-racnode2            IN      A       192.168.75.12

paw-racnode3            IN      A       192.168.75.13

paw-racnode1-priv       IN      A       10.0.0.1

paw-racnode2-priv       IN      A       10.0.0.2

paw-racnode3-priv       IN      A       10.0.0.3

paw-racnode1-vip        IN      A       192.168.75.21

paw-racnode2-vip        IN      A       192.168.75.22

paw-racnode3-vip        IN      A       192.168.75.23

paw-rac01-scan          IN      A       192.168.75.101

paw-rac01-scan          IN      A       192.168.75.102

paw-rac01-scan          IN      A       192.168.75.103
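
Note that names without a trailing dot (paw-racnode1, paw-rac01-scan, and so on) are relative to the zone origin, so they resolve as paw-racnode1.airydba.com, paw-rac01-scan.airydba.com, etc. Before reloading named, the forward zone file can be validated with named-checkzone (path assumes the chroot layout shown above; on success it reports the zone serial and OK):

[root@paw-racstorage-dns named]# named-checkzone airydba.com /var/named/chroot/var/named/for.zone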

 

Original “/var/named/chroot/var/named/rev.zone” file:

[root@paw-racstorage-dns named]# cat rev.zone 

$TTL    86400

@               IN SOA  localhost.      root.localhost. (

                                        42              ; serial (d. adams)

                                        3H              ; refresh

                                        15M             ; retry

                                        1W              ; expiry

                                        1D )            ; minimum

        IN      NS      localhost.

Modified “/var/named/chroot/var/named/rev.zone” file:

[root@paw-racstorage-dns named]# cat rev.zone

$TTL    86400

@               IN SOA  paw-racstorage-dns.airydba.com. root.paw-racstorage-dns.airydba.com. (

                                        42              ; serial (d. adams)

                                        3H              ; refresh

                                        15M             ; retry

                                        1W              ; expiry

                                        1D )            ; minimum

        IN      NS      paw-racstorage-dns.airydba.com.

10      IN      PTR     paw-racstorage-dns.airydba.com.

11      IN      PTR     paw-racnode1.airydba.com.

12      IN      PTR     paw-racnode2.airydba.com.

13      IN      PTR     paw-racnode3.airydba.com.

21      IN      PTR     paw-racnode1-vip.airydba.com.

22      IN      PTR     paw-racnode2-vip.airydba.com.

23      IN      PTR     paw-racnode3-vip.airydba.com.

101     IN      PTR     paw-rac01-scan.airydba.com.

102     IN      PTR     paw-rac01-scan.airydba.com.

103     IN      PTR     paw-rac01-scan.airydba.com.
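
Here the left-hand names (10, 11, 12, …) are relative to the zone origin 75.168.192.in-addr.arpa, so the entry “11” maps 192.168.75.11 back to paw-racnode1.airydba.com. The reverse zone can be checked the same way as the forward zone:

[root@paw-racstorage-dns named]# named-checkzone 75.168.192.in-addr.arpa /var/named/chroot/var/named/rev.zone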

Original “/etc/resolv.conf” file :

[root@paw-racstorage-dns named]# cat /etc/resolv.conf

; generated by /sbin/dhclient-script

search localdomain

nameserver 192.168.75.2

Modified “/etc/resolv.conf” file :

 [root@paw-racstorage-dns named]# cat /etc/resolv.conf

; generated by /sbin/dhclient-script

search airydba.com

nameserver 192.168.75.10

Original “/etc/hosts” file :

[root@paw-racstorage-dns named]# cat /etc/hosts

# Do not remove the following line, or various programs

# that require network functionality will fail.

127.0.0.1               paw-racstorage-dns.airydba.com paw-racstorage-dns localhost.localdomain localhost

::1             localhost6.localdomain6 localhost6

Modified “/etc/hosts” file :

[root@paw-racstorage-dns named]# cat /etc/hosts

# Do not remove the following line, or various programs

# that require network functionality will fail.

127.0.0.1               localhost.localdomain localhost

::1             localhost6.localdomain6 localhost6
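
With the configuration and zone files in place, named must be restarted and enabled at boot on paw-racstorage-dns before any of the lookups below will succeed. A minimal sketch using the standard RHEL 5 service tools:

[root@paw-racstorage-dns named]# service named restart

[root@paw-racstorage-dns named]# chkconfig named on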

 

Work to be performed on all nodes (paw-racnode1, paw-racnode2, paw-racnode3):

Original “/etc/resolv.conf” file :

[root@paw-racstorage-dns named]# cat /etc/resolv.conf

search localdomain

nameserver 192.168.75.2

Modified “/etc/resolv.conf” file :

 [root@paw-racnode1 named]# cat /etc/resolv.conf

search airydba.com

nameserver 192.168.75.10

 

Original “/etc/hosts” file :

[root@paw-racnode1 named]# cat /etc/hosts

# Do not remove the following line, or various programs

# that require network functionality will fail.

127.0.0.1               paw-racnode1.airydba.com paw-racnode1 localhost.localdomain localhost

::1             localhost6.localdomain6 localhost6

Modified “/etc/hosts” file :

[root@paw-racnode1 named]# cat /etc/hosts

# Do not remove the following line, or various programs

# that require network functionality will fail.

127.0.0.1               localhost.localdomain localhost

::1             localhost6.localdomain6 localhost6
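
Because /etc/resolv.conf was originally generated by dhclient, a later DHCP renewal may overwrite these edits. One way to make them persistent (assuming the public interface is eth0; adjust for your environment) is to disable peer DNS updates in the interface configuration:

[root@paw-racnode1 ~]# echo "PEERDNS=no" >> /etc/sysconfig/network-scripts/ifcfg-eth0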

 

DNS Testing :

[root@paw-racstorage-dns named]# ping paw-racstorage-dns

PING paw-racstorage-dns.airydba.com (192.168.75.10) 56(84) bytes of data.

64 bytes from paw-racstorage-dns.airydba.com (192.168.75.10): icmp_seq=1 ttl=64 time=0.014 ms

64 bytes from paw-racstorage-dns.airydba.com (192.168.75.10): icmp_seq=2 ttl=64 time=0.029 ms

64 bytes from paw-racstorage-dns.airydba.com (192.168.75.10): icmp_seq=3 ttl=64 time=0.029 ms

64 bytes from paw-racstorage-dns.airydba.com (192.168.75.10): icmp_seq=4 ttl=64 time=0.028 ms

--- paw-racstorage-dns.airydba.com ping statistics ---

4 packets transmitted, 4 received, 0% packet loss, time 3001ms

rtt min/avg/max/mdev = 0.014/0.025/0.029/0.006 ms
[root@paw-racstorage-dns named]# ping paw-racnode1

PING paw-racnode1.airydba.com (192.168.75.11) 56(84) bytes of data.

64 bytes from paw-racnode1.airydba.com (192.168.75.11): icmp_seq=1 ttl=64 time=0.659 ms

64 bytes from paw-racnode1.airydba.com (192.168.75.11): icmp_seq=2 ttl=64 time=0.238 ms

64 bytes from paw-racnode1.airydba.com (192.168.75.11): icmp_seq=3 ttl=64 time=0.245 ms

--- paw-racnode1.airydba.com ping statistics ---

3 packets transmitted, 3 received, 0% packet loss, time 2000ms

rtt min/avg/max/mdev = 0.238/0.380/0.659/0.198 ms
[root@paw-racstorage-dns named]# ping paw-racnode2

PING paw-racnode2.airydba.com (192.168.75.12) 56(84) bytes of data.

64 bytes from paw-racnode2.airydba.com (192.168.75.12): icmp_seq=1 ttl=64 time=1.13 ms

64 bytes from paw-racnode2.airydba.com (192.168.75.12): icmp_seq=2 ttl=64 time=0.235 ms

64 bytes from paw-racnode2.airydba.com (192.168.75.12): icmp_seq=3 ttl=64 time=0.227 ms

--- paw-racnode2.airydba.com ping statistics ---

3 packets transmitted, 3 received, 0% packet loss, time 1999ms

rtt min/avg/max/mdev = 0.227/0.532/1.135/0.426 ms
[root@paw-racstorage-dns named]# ping paw-racnode3

PING paw-racnode3.airydba.com (192.168.75.13) 56(84) bytes of data.

64 bytes from paw-racnode3.airydba.com (192.168.75.13): icmp_seq=1 ttl=64 time=1.14 ms

64 bytes from paw-racnode3.airydba.com (192.168.75.13): icmp_seq=2 ttl=64 time=0.229 ms

--- paw-racnode3.airydba.com ping statistics ---

2 packets transmitted, 2 received, 0% packet loss, time 1000ms

rtt min/avg/max/mdev = 0.229/0.688/1.147/0.459 ms
[root@paw-racstorage-dns named]# dig -x 192.168.75.10

; <<>> DiG 9.3.6-P1-RedHat-9.3.6-16.P1.el5 <<>> -x 192.168.75.10

;; global options:  printcmd

;; Got answer:

;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 44221

;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 1

;; QUESTION SECTION:

;10.75.168.192.in-addr.arpa.    IN      PTR

;; ANSWER SECTION:

10.75.168.192.in-addr.arpa. 86400 IN    PTR     paw-racstorage-dns.airydba.com.

;; AUTHORITY SECTION:

75.168.192.in-addr.arpa. 86400  IN      NS      paw-racstorage-dns.airydba.com.

;; ADDITIONAL SECTION:

paw-racstorage-dns.airydba.com. 86400 IN A      192.168.75.10

;; Query time: 0 msec

;; SERVER: 192.168.75.10#53(192.168.75.10)

;; WHEN: Wed May 11 16:39:53 2016

;; MSG SIZE  rcvd: 118
[root@paw-racstorage-dns named]# dig -x 192.168.75.11

; <<>> DiG 9.3.6-P1-RedHat-9.3.6-16.P1.el5 <<>> -x 192.168.75.11

;; global options:  printcmd

;; Got answer:

;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 5827

;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 1

;; QUESTION SECTION:

;11.75.168.192.in-addr.arpa.    IN      PTR

;; ANSWER SECTION:

11.75.168.192.in-addr.arpa. 86400 IN    PTR     paw-racnode1.airydba.com.

;; AUTHORITY SECTION:

75.168.192.in-addr.arpa. 86400  IN      NS      paw-racstorage-dns.airydba.com.

;; ADDITIONAL SECTION:

paw-racstorage-dns.airydba.com. 86400 IN A      192.168.75.10

;; Query time: 0 msec

;; SERVER: 192.168.75.10#53(192.168.75.10)

;; WHEN: Wed May 11 16:42:10 2016

;; MSG SIZE  rcvd: 131
[root@paw-racstorage-dns named]# dig -x 192.168.75.12

; <<>> DiG 9.3.6-P1-RedHat-9.3.6-16.P1.el5 <<>> -x 192.168.75.12

;; global options:  printcmd

;; Got answer:

;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 34370

;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 1

;; QUESTION SECTION:

;12.75.168.192.in-addr.arpa.    IN      PTR

;; ANSWER SECTION:

12.75.168.192.in-addr.arpa. 86400 IN    PTR     paw-racnode2.airydba.com.

;; AUTHORITY SECTION:

75.168.192.in-addr.arpa. 86400  IN      NS      paw-racstorage-dns.airydba.com.

;; ADDITIONAL SECTION:

paw-racstorage-dns.airydba.com. 86400 IN A      192.168.75.10

;; Query time: 0 msec

;; SERVER: 192.168.75.10#53(192.168.75.10)

;; WHEN: Wed May 11 16:42:16 2016

;; MSG SIZE  rcvd: 131
[root@paw-racstorage-dns named]# dig -x 192.168.75.13

; <<>> DiG 9.3.6-P1-RedHat-9.3.6-16.P1.el5 <<>> -x 192.168.75.13

;; global options:  printcmd

;; Got answer:

;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 38759

;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 1

;; QUESTION SECTION:

;13.75.168.192.in-addr.arpa.    IN      PTR

;; ANSWER SECTION:

13.75.168.192.in-addr.arpa. 86400 IN    PTR     paw-racnode3.airydba.com.

;; AUTHORITY SECTION:

75.168.192.in-addr.arpa. 86400  IN      NS      paw-racstorage-dns.airydba.com.

;; ADDITIONAL SECTION:

paw-racstorage-dns.airydba.com. 86400 IN A      192.168.75.10

;; Query time: 0 msec

;; SERVER: 192.168.75.10#53(192.168.75.10)

;; WHEN: Wed May 11 16:42:18 2016

;; MSG SIZE  rcvd: 131
[root@paw-racstorage-dns named]# dig -x paw-racstorage-dns

; <<>> DiG 9.3.6-P1-RedHat-9.3.6-16.P1.el5 <<>> -x paw-racstorage-dns

;; global options:  printcmd

;; Got answer:

;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 31271

;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:

;paw-racstorage-dns.in-addr.arpa. IN    PTR

;; Query time: 30 msec

;; SERVER: 192.168.75.10#53(192.168.75.10)

;; WHEN: Wed May 11 16:40:29 2016

;; MSG SIZE  rcvd: 49
[root@paw-racstorage-dns named]# dig -x paw-racstorage-dns.airydba.com

; <<>> DiG 9.3.6-P1-RedHat-9.3.6-16.P1.el5 <<>> -x paw-racstorage-dns.airydba.com

;; global options:  printcmd

;; Got answer:

;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 30867

;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:

;com.example.paw-racstorage-dns.in-addr.arpa. IN        PTR

;; Query time: 4 msec

;; SERVER: 192.168.75.10#53(192.168.75.10)

;; WHEN: Wed May 11 16:40:43 2016

;; MSG SIZE  rcvd: 61
[root@paw-racstorage-dns named]# dig -x paw-racnode1

; <<>> DiG 9.3.6-P1-RedHat-9.3.6-16.P1.el5 <<>> -x paw-racnode1

;; global options:  printcmd

;; Got answer:

;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 36154

;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:

;paw-racnode1.in-addr.arpa.     IN      PTR

;; Query time: 5 msec

;; SERVER: 192.168.75.10#53(192.168.75.10)

;; WHEN: Wed May 11 16:41:16 2016

;; MSG SIZE  rcvd: 43
[root@paw-racstorage-dns named]# dig -x paw-racnode2

; <<>> DiG 9.3.6-P1-RedHat-9.3.6-16.P1.el5 <<>> -x paw-racnode2

;; global options:  printcmd

;; Got answer:

;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 2062

;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:

;paw-racnode2.in-addr.arpa.     IN      PTR

;; Query time: 4 msec

;; SERVER: 192.168.75.10#53(192.168.75.10)

;; WHEN: Wed May 11 16:41:20 2016

;; MSG SIZE  rcvd: 43
 [root@paw-racstorage-dns named]# dig -x paw-racnode3

; <<>> DiG 9.3.6-P1-RedHat-9.3.6-16.P1.el5 <<>> -x paw-racnode3

;; global options:  printcmd

;; Got answer:

;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 19888

;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:

;paw-racnode3.in-addr.arpa.     IN      PTR

;; Query time: 4 msec

;; SERVER: 192.168.75.10#53(192.168.75.10)

;; WHEN: Wed May 11 16:41:23 2016

;; MSG SIZE  rcvd: 43
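
The four SERVFAIL responses above are expected: dig -x performs a reverse lookup and therefore takes an IP address, not a hostname. For forward lookups, plain dig can be used, and host accepts either form, for example:

[root@paw-racstorage-dns named]# dig paw-racnode1.airydba.com A +short

[root@paw-racstorage-dns named]# host 192.168.75.11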
[root@paw-racstorage-dns named]# nslookup paw-racstorage-dns.airydba.com

Server:         192.168.75.10

Address:        192.168.75.10#53

Name:   paw-racstorage-dns.airydba.com

Address: 192.168.75.10

[root@paw-racstorage-dns named]# nslookup paw-racstorage-dns

Server:         192.168.75.10

Address:        192.168.75.10#53

Name:   paw-racstorage-dns.airydba.com

Address: 192.168.75.10

[root@paw-racstorage-dns named]# nslookup paw-racnode1.airydba.com

Server:         192.168.75.10

Address:        192.168.75.10#53

Name:   paw-racnode1.airydba.com

Address: 192.168.75.11

[root@paw-racstorage-dns named]# nslookup paw-racnode1

Server:         192.168.75.10

Address:        192.168.75.10#53

Name:   paw-racnode1.airydba.com

Address: 192.168.75.11

[root@paw-racstorage-dns named]# nslookup paw-racnode2.airydba.com

Server:         192.168.75.10

Address:        192.168.75.10#53

Name:   paw-racnode2.airydba.com

Address: 192.168.75.12

[root@paw-racstorage-dns named]# nslookup paw-racnode3

Server:         192.168.75.10

Address:        192.168.75.10#53

Name:   paw-racnode3.airydba.com

Address: 192.168.75.13

[root@paw-racstorage-dns named]# nslookup paw-racnode1-priv.airydba.com

Server:         192.168.75.10

Address:        192.168.75.10#53

Name:   paw-racnode1-priv.airydba.com

Address: 10.0.0.1

[root@paw-racstorage-dns named]# nslookup paw-racnode2-priv.airydba.com

Server:         192.168.75.10

Address:        192.168.75.10#53

Name:   paw-racnode2-priv.airydba.com

Address: 10.0.0.2

[root@paw-racstorage-dns named]# nslookup paw-racnode3-priv.airydba.com

Server:         192.168.75.10

Address:        192.168.75.10#53

Name:   paw-racnode3-priv.airydba.com

Address: 10.0.0.3

[root@paw-racstorage-dns named]# nslookup paw-racnode1-vip.airydba.com

Server:         192.168.75.10

Address:        192.168.75.10#53

Name:   paw-racnode1-vip.airydba.com

Address: 192.168.75.21

[root@paw-racstorage-dns named]# nslookup paw-racnode2-vip.airydba.com

Server:         192.168.75.10

Address:        192.168.75.10#53

Name:   paw-racnode2-vip.airydba.com

Address: 192.168.75.22

[root@paw-racstorage-dns named]# nslookup paw-racnode3-vip.airydba.com

Server:         192.168.75.10

Address:        192.168.75.10#53

Name:   paw-racnode3-vip.airydba.com

Address: 192.168.75.23

[root@paw-racstorage-dns named]# nslookup paw-rac01-scan.airydba.com

Server:         192.168.75.10

Address:        192.168.75.10#53

Name:   paw-rac01-scan.airydba.com

Address: 192.168.75.101

Name:   paw-rac01-scan.airydba.com

Address: 192.168.75.102

Name:   paw-rac01-scan.airydba.com

Address: 192.168.75.103
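
Since paw-rac01-scan has three A records, Oracle expects the SCAN name to resolve to all three addresses, returned in round-robin fashion by the DNS server. A quick, informal check (assuming the default BIND rrset-order) is to repeat the query a few times and watch the order of the addresses change:

[root@paw-racstorage-dns named]# for i in 1 2 3; do dig +short paw-rac01-scan.airydba.com; echo ---; done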

========================================================================

Checking on paw-racnode1 :

[root@paw-racnode1 ~]# ping paw-racstorage-dns

PING paw-racstorage-dns.airydba.com (192.168.75.10) 56(84) bytes of data.

64 bytes from paw-racstorage-dns.airydba.com (192.168.75.10): icmp_seq=1 ttl=64 time=0.172 ms

64 bytes from paw-racstorage-dns.airydba.com (192.168.75.10): icmp_seq=2 ttl=64 time=0.217 ms

64 bytes from paw-racstorage-dns.airydba.com (192.168.75.10): icmp_seq=3 ttl=64 time=0.236 ms

--- paw-racstorage-dns.airydba.com ping statistics ---

3 packets transmitted, 3 received, 0% packet loss, time 2001ms

rtt min/avg/max/mdev = 0.172/0.208/0.236/0.029 ms

[root@paw-racnode1 ~]# ping paw-racnode1

PING paw-racnode1.airydba.com (192.168.75.11) 56(84) bytes of data.

64 bytes from paw-racnode1.airydba.com (192.168.75.11): icmp_seq=1 ttl=64 time=0.019 ms

64 bytes from paw-racnode1.airydba.com (192.168.75.11): icmp_seq=2 ttl=64 time=0.029 ms

64 bytes from paw-racnode1.airydba.com (192.168.75.11): icmp_seq=3 ttl=64 time=0.030 ms

--- paw-racnode1.airydba.com ping statistics ---

3 packets transmitted, 3 received, 0% packet loss, time 2000ms

rtt min/avg/max/mdev = 0.019/0.026/0.030/0.005 ms

[root@paw-racnode1 ~]# ping paw-racnode2

PING paw-racnode2.airydba.com (192.168.75.12) 56(84) bytes of data.

64 bytes from paw-racnode2.airydba.com (192.168.75.12): icmp_seq=1 ttl=64 time=1.44 ms

64 bytes from paw-racnode2.airydba.com (192.168.75.12): icmp_seq=2 ttl=64 time=0.529 ms

--- paw-racnode2.airydba.com ping statistics ---

2 packets transmitted, 2 received, 0% packet loss, time 999ms

rtt min/avg/max/mdev = 0.529/0.986/1.444/0.458 ms

You have new mail in /var/spool/mail/root

[root@paw-racnode1 ~]# ping paw-racnode3

PING paw-racnode3.airydba.com (192.168.75.13) 56(84) bytes of data.

64 bytes from paw-racnode3.airydba.com (192.168.75.13): icmp_seq=1 ttl=64 time=0.754 ms

64 bytes from paw-racnode3.airydba.com (192.168.75.13): icmp_seq=2 ttl=64 time=0.217 ms

--- paw-racnode3.airydba.com ping statistics ---

2 packets transmitted, 2 received, 0% packet loss, time 1000ms

rtt min/avg/max/mdev = 0.217/0.485/0.754/0.269 ms

[root@paw-racnode1 ~]# ping paw-racnode1-priv

PING paw-racnode1-priv.airydba.com (10.0.0.1) 56(84) bytes of data.

64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=1.25 ms

64 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=0.163 ms

--- paw-racnode1-priv.airydba.com ping statistics ---

2 packets transmitted, 2 received, 0% packet loss, time 999ms

rtt min/avg/max/mdev = 0.163/0.708/1.253/0.545 ms

[root@paw-racnode1 ~]# ping paw-racnode2-priv

PING paw-racnode2-priv.airydba.com (10.0.0.2) 56(84) bytes of data.

64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=1.25 ms

64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=0.236 ms

--- paw-racnode2-priv.airydba.com ping statistics ---

2 packets transmitted, 2 received, 0% packet loss, time 1000ms

rtt min/avg/max/mdev = 0.236/0.745/1.255/0.510 ms

[root@paw-racnode1 ~]# ping paw-racnode3-priv

PING paw-racnode3-priv.airydba.com (10.0.0.3) 56(84) bytes of data.

64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.018 ms

64 bytes from 10.0.0.3: icmp_seq=2 ttl=64 time=0.026 ms

--- paw-racnode3-priv.airydba.com ping statistics ---

2 packets transmitted, 2 received, 0% packet loss, time 1000ms

rtt min/avg/max/mdev = 0.018/0.022/0.026/0.004 ms


========================================================================

Checking on paw-racnode2 :

[root@paw-racnode2 ~]# ping paw-racstorage-dns

PING paw-racstorage-dns.airydba.com (192.168.75.10) 56(84) bytes of data.

64 bytes from paw-racstorage-dns.airydba.com (192.168.75.10): icmp_seq=1 ttl=64 time=0.181 ms

64 bytes from paw-racstorage-dns.airydba.com (192.168.75.10): icmp_seq=2 ttl=64 time=0.517 ms

64 bytes from paw-racstorage-dns.airydba.com (192.168.75.10): icmp_seq=3 ttl=64 time=0.219 ms

--- paw-racstorage-dns.airydba.com ping statistics ---

3 packets transmitted, 3 received, 0% packet loss, time 2000ms

rtt min/avg/max/mdev = 0.181/0.305/0.517/0.151 ms

[root@paw-racnode2 ~]# ping paw-racnode1

PING paw-racnode1.airydba.com (192.168.75.11) 56(84) bytes of data.

64 bytes from paw-racnode1.airydba.com (192.168.75.11): icmp_seq=1 ttl=64 time=1.58 ms

64 bytes from paw-racnode1.airydba.com (192.168.75.11): icmp_seq=2 ttl=64 time=0.159 ms

64 bytes from paw-racnode1.airydba.com (192.168.75.11): icmp_seq=3 ttl=64 time=0.199 ms

--- paw-racnode1.airydba.com ping statistics ---

3 packets transmitted, 3 received, 0% packet loss, time 2000ms

rtt min/avg/max/mdev = 0.159/0.649/1.589/0.664 ms

[root@paw-racnode2 ~]# ping paw-racnode2

PING paw-racnode2.airydba.com (192.168.75.12) 56(84) bytes of data.

64 bytes from paw-racnode2.airydba.com (192.168.75.12): icmp_seq=1 ttl=64 time=0.017 ms

64 bytes from paw-racnode2.airydba.com (192.168.75.12): icmp_seq=2 ttl=64 time=0.028 ms

64 bytes from paw-racnode2.airydba.com (192.168.75.12): icmp_seq=3 ttl=64 time=0.031 ms

--- paw-racnode2.airydba.com ping statistics ---

3 packets transmitted, 3 received, 0% packet loss, time 2000ms

rtt min/avg/max/mdev = 0.017/0.025/0.031/0.007 ms

[root@paw-racnode2 ~]# ping paw-racnode3

PING paw-racnode3.airydba.com (192.168.75.13) 56(84) bytes of data.

64 bytes from paw-racnode3.airydba.com (192.168.75.13): icmp_seq=1 ttl=64 time=1.66 ms

64 bytes from paw-racnode3.airydba.com (192.168.75.13): icmp_seq=2 ttl=64 time=0.217 ms

64 bytes from paw-racnode3.airydba.com (192.168.75.13): icmp_seq=3 ttl=64 time=0.219 ms

--- paw-racnode3.airydba.com ping statistics ---

6 packets transmitted, 6 received, 0% packet loss, time 5001ms

rtt min/avg/max/mdev = 0.151/0.447/1.668/0.546 ms

[root@paw-racnode2 ~]# ping paw-racnode1-priv

PING paw-racnode1-priv.airydba.com (10.0.0.1) 56(84) bytes of data.

64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=1.25 ms

64 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=0.163 ms

--- paw-racnode1-priv.airydba.com ping statistics ---

2 packets transmitted, 2 received, 0% packet loss, time 999ms

rtt min/avg/max/mdev = 0.163/0.708/1.253/0.545 ms

[root@paw-racnode2 ~]# ping paw-racnode2-priv

PING paw-racnode2-priv.airydba.com (10.0.0.2) 56(84) bytes of data.

64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=1.25 ms

64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=0.236 ms

--- paw-racnode2-priv.airydba.com ping statistics ---

2 packets transmitted, 2 received, 0% packet loss, time 1000ms

rtt min/avg/max/mdev = 0.236/0.745/1.255/0.510 ms

[root@paw-racnode2 ~]# ping paw-racnode3-priv

PING paw-racnode3-priv.airydba.com (10.0.0.3) 56(84) bytes of data.

64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.018 ms

64 bytes from 10.0.0.3: icmp_seq=2 ttl=64 time=0.026 ms

--- paw-racnode3-priv.airydba.com ping statistics ---

2 packets transmitted, 2 received, 0% packet loss, time 1000ms

rtt min/avg/max/mdev = 0.018/0.022/0.026/0.004 ms

========================================================================

Checking on paw-racnode3 :

 [root@paw-racnode3 ~]# ping paw-racstorage-dns

PING paw-racstorage-dns.airydba.com (192.168.75.10) 56(84) bytes of data.

64 bytes from paw-racstorage-dns.airydba.com (192.168.75.10): icmp_seq=1 ttl=64 time=0.109 ms

64 bytes from paw-racstorage-dns.airydba.com (192.168.75.10): icmp_seq=2 ttl=64 time=0.221 ms

--- paw-racstorage-dns.airydba.com ping statistics ---

2 packets transmitted, 2 received, 0% packet loss, time 1000ms

rtt min/avg/max/mdev = 0.109/0.165/0.221/0.056 ms

[root@paw-racnode3 ~]# ping paw-racnode1

PING paw-racnode1.airydba.com (127.0.0.1) 56(84) bytes of data.

64 bytes from paw-racnode1.airydba.com (127.0.0.1): icmp_seq=1 ttl=64 time=0.022 ms

64 bytes from paw-racnode1.airydba.com (127.0.0.1): icmp_seq=2 ttl=64 time=0.027 ms

--- paw-racnode1.airydba.com ping statistics ---

2 packets transmitted, 2 received, 0% packet loss, time 999ms

rtt min/avg/max/mdev = 0.022/0.024/0.027/0.005 ms

[root@paw-racnode3 ~]# ping paw-racnode2

PING paw-racnode2.airydba.com (192.168.75.12) 56(84) bytes of data.

64 bytes from paw-racnode2.airydba.com (192.168.75.12): icmp_seq=1 ttl=64 time=0.725 ms

64 bytes from paw-racnode2.airydba.com (192.168.75.12): icmp_seq=2 ttl=64 time=0.227 ms

--- paw-racnode2.airydba.com ping statistics ---

2 packets transmitted, 2 received, 0% packet loss, time 1000ms

rtt min/avg/max/mdev = 0.227/0.476/0.725/0.249 ms

[root@paw-racnode3 ~]# ping paw-racnode3

PING paw-racnode3.airydba.com (192.168.75.13) 56(84) bytes of data.

64 bytes from paw-racnode3.airydba.com (192.168.75.13): icmp_seq=1 ttl=64 time=0.017 ms

64 bytes from paw-racnode3.airydba.com (192.168.75.13): icmp_seq=2 ttl=64 time=0.026 ms

64 bytes from paw-racnode3.airydba.com (192.168.75.13): icmp_seq=3 ttl=64 time=0.028 ms

64 bytes from paw-racnode3.airydba.com (192.168.75.13): icmp_seq=4 ttl=64 time=0.027 ms

--- paw-racnode3.airydba.com ping statistics ---

4 packets transmitted, 4 received, 0% packet loss, time 3000ms

rtt min/avg/max/mdev = 0.017/0.024/0.028/0.006 ms

[root@paw-racnode3 ~]# ping paw-racnode1-priv

PING paw-racnode1-priv.airydba.com (10.0.0.1) 56(84) bytes of data.

64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=1.25 ms

64 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=0.163 ms

--- paw-racnode1-priv.airydba.com ping statistics ---

2 packets transmitted, 2 received, 0% packet loss, time 999ms

rtt min/avg/max/mdev = 0.163/0.708/1.253/0.545 ms

[root@paw-racnode3 ~]# ping paw-racnode2-priv

PING paw-racnode2-priv.airydba.com (10.0.0.2) 56(84) bytes of data.

64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=1.25 ms

64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=0.236 ms

--- paw-racnode2-priv.airydba.com ping statistics ---

2 packets transmitted, 2 received, 0% packet loss, time 1000ms

rtt min/avg/max/mdev = 0.236/0.745/1.255/0.510 ms

[root@paw-racnode3 ~]# ping paw-racnode3-priv

PING paw-racnode3-priv.airydba.com (10.0.0.3) 56(84) bytes of data.

64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.018 ms

64 bytes from 10.0.0.3: icmp_seq=2 ttl=64 time=0.026 ms

--- paw-racnode3-priv.airydba.com ping statistics ---

2 packets transmitted, 2 received, 0% packet loss, time 1000ms

rtt min/avg/max/mdev = 0.018/0.022/0.026/0.004 ms
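
Once Grid Infrastructure is installed on the nodes, the Cluster Verification Utility can confirm that node connectivity and SCAN name resolution meet Oracle's requirements (a sketch; cluvfy is available from the Grid Infrastructure home, or as runcluvfy.sh on the installation media):

[grid@paw-racnode1 ~]$ cluvfy comp nodecon -n paw-racnode1,paw-racnode2,paw-racnode3 -verbose

[grid@paw-racnode1 ~]$ cluvfy comp scan -verbose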

 

Thank you for reading… This is Airy…Enjoy Learning:)

#dns, #rac