ASM-Disks-Creation For RAC- Airy’s Notes

ASM Disks Creation for RAC:

[root@paw-racnode1 ~]# chown oracle:oinstall /dev/sd[b-k]

 [root@paw-racnode1 ~]# vi /etc/rc.d/rc.local
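The contents of the rc.local edit are not shown in the original session; presumably it is used to re-apply the device ownership after every reboot, since the device nodes are recreated with root ownership at boot. A minimal sketch of the lines that would typically be appended (assumed, matching the chown command above):

chown oracle:oinstall /dev/sd[b-k]
chmod 660 /dev/sd[b-k]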

 [root@paw-racnode1 ~]# fdisk -l                                       

Disk /dev/sda: 53.6 GB, 53687091200 bytes

255 heads, 63 sectors/track, 6527 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes


   Device Boot      Start         End      Blocks   Id  System

/dev/sda1   *           1          13      104391   83  Linux

/dev/sda2              14        1318    10482412+  83  Linux

/dev/sda3            1319        1840     4192965   83  Linux

/dev/sda4            1841        6527    37648327+   5  Extended

/dev/sda5            1841        5952    33029608+  83  Linux

/dev/sda6            5953        6527     4618656   82  Linux swap / Solaris


Disk /dev/sdb: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


Disk /dev/sdb doesn't contain a valid partition table


Disk /dev/sdc: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


Disk /dev/sdc doesn't contain a valid partition table


Disk /dev/sdd: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


Disk /dev/sdd doesn't contain a valid partition table


Disk /dev/sde: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


Disk /dev/sde doesn't contain a valid partition table


Disk /dev/sdf: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


Disk /dev/sdf doesn't contain a valid partition table


Disk /dev/sdg: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


Disk /dev/sdg doesn't contain a valid partition table


Disk /dev/sdh: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


Disk /dev/sdh doesn't contain a valid partition table


Disk /dev/sdi: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


Disk /dev/sdi doesn't contain a valid partition table


Disk /dev/sdj: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


Disk /dev/sdj doesn't contain a valid partition table


Disk /dev/sdk: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


Disk /dev/sdk doesn't contain a valid partition table

————————————————————————–

[root@paw-racnode1 ~]# fdisk /dev/sdb

Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel

Building a new DOS disklabel. Changes will remain in memory only,

until you decide to write them. After that, of course, the previous

content won't be recoverable.


Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)


Command (m for help): n

Command action

e   extended

p   primary partition (1-4)

p

Partition number (1-4): 1

First cylinder (1-1009, default 1):

Using default value 1

Last cylinder or +size or +sizeM or +sizeK (1-1009, default 1009):

Using default value 1009


Command (m for help): w

The partition table has been altered!


Calling ioctl() to re-read partition table.


WARNING: Re-reading the partition table failed with error 16: Device or resource busy.

The kernel still uses the old table.

The new table will be used at the next reboot.

Syncing disks.

 [root@paw-racnode1 ~]# fdisk /dev/sdc

Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel

Building a new DOS disklabel. Changes will remain in memory only,

until you decide to write them. After that, of course, the previous

content won't be recoverable.


Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)


Command (m for help): n

Command action

e   extended

p   primary partition (1-4)

p

Partition number (1-4): 1

First cylinder (1-1009, default 1):

Using default value 1

Last cylinder or +size or +sizeM or +sizeK (1-1009, default 1009):

Using default value 1009


Command (m for help): w

The partition table has been altered!


Calling ioctl() to re-read partition table.


WARNING: Re-reading the partition table failed with error 16: Device or resource busy.

The kernel still uses the old table.

The new table will be used at the next reboot.

Syncing disks.

[root@paw-racnode1 ~]# fdisk /dev/sdd

Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel

Building a new DOS disklabel. Changes will remain in memory only,

until you decide to write them. After that, of course, the previous

content won't be recoverable.


Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)


Command (m for help): n

Command action

e   extended

p   primary partition (1-4)

p

Partition number (1-4): 1

First cylinder (1-1009, default 1):

Using default value 1

Last cylinder or +size or +sizeM or +sizeK (1-1009, default 1009):

Using default value 1009


Command (m for help): w

The partition table has been altered!


Calling ioctl() to re-read partition table.


WARNING: Re-reading the partition table failed with error 16: Device or resource busy.

The kernel still uses the old table.

The new table will be used at the next reboot.

Syncing disks.

[root@paw-racnode1 ~]# fdisk /dev/sde

Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel

Building a new DOS disklabel. Changes will remain in memory only,

until you decide to write them. After that, of course, the previous

content won't be recoverable.


Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)


Command (m for help): n

Command action

e   extended

p   primary partition (1-4)

p

Partition number (1-4): 1

First cylinder (1-1009, default 1):

Using default value 1

Last cylinder or +size or +sizeM or +sizeK (1-1009, default 1009):

Using default value 1009


Command (m for help): w

The partition table has been altered!


Calling ioctl() to re-read partition table.


WARNING: Re-reading the partition table failed with error 16: Device or resource busy.

The kernel still uses the old table.

The new table will be used at the next reboot.

Syncing disks.

[root@paw-racnode1 ~]# fdisk /dev/sdf

Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel

Building a new DOS disklabel. Changes will remain in memory only,

until you decide to write them. After that, of course, the previous

content won't be recoverable.


Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)


Command (m for help): n

Command action

e   extended

p   primary partition (1-4)

p

Partition number (1-4): 1

First cylinder (1-1009, default 1):

Using default value 1

Last cylinder or +size or +sizeM or +sizeK (1-1009, default 1009):

Using default value 1009


Command (m for help): w

The partition table has been altered!


Calling ioctl() to re-read partition table.


WARNING: Re-reading the partition table failed with error 16: Device or resource busy.

The kernel still uses the old table.

The new table will be used at the next reboot.

Syncing disks.

[root@paw-racnode1 ~]# fdisk /dev/sdg

Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel

Building a new DOS disklabel. Changes will remain in memory only,

until you decide to write them. After that, of course, the previous

content won't be recoverable.


Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)


Command (m for help): n

Command action

e   extended

p   primary partition (1-4)

p

Partition number (1-4): 1

First cylinder (1-1009, default 1):

Using default value 1

Last cylinder or +size or +sizeM or +sizeK (1-1009, default 1009):

Using default value 1009


Command (m for help): w

The partition table has been altered!


Calling ioctl() to re-read partition table.


WARNING: Re-reading the partition table failed with error 16: Device or resource busy.

The kernel still uses the old table.

The new table will be used at the next reboot.

Syncing disks.

[root@paw-racnode1 ~]# fdisk /dev/sdh

Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel

Building a new DOS disklabel. Changes will remain in memory only,

until you decide to write them. After that, of course, the previous

content won't be recoverable.


Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)


Command (m for help): n

Command action

e   extended

p   primary partition (1-4)

p

Partition number (1-4): 1

First cylinder (1-1009, default 1):

Using default value 1

Last cylinder or +size or +sizeM or +sizeK (1-1009, default 1009):

Using default value 1009


Command (m for help): w

The partition table has been altered!


Calling ioctl() to re-read partition table.


WARNING: Re-reading the partition table failed with error 16: Device or resource busy.

The kernel still uses the old table.

The new table will be used at the next reboot.

Syncing disks.

[root@paw-racnode1 ~]# fdisk /dev/sdi

Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel

Building a new DOS disklabel. Changes will remain in memory only,

until you decide to write them. After that, of course, the previous

content won't be recoverable.


Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)


Command (m for help): n

Command action

e   extended

p   primary partition (1-4)

p

Partition number (1-4): 1

First cylinder (1-1009, default 1):

Using default value 1

Last cylinder or +size or +sizeM or +sizeK (1-1009, default 1009):

Using default value 1009


Command (m for help): w

The partition table has been altered!


Calling ioctl() to re-read partition table.


WARNING: Re-reading the partition table failed with error 16: Device or resource busy.

The kernel still uses the old table.

The new table will be used at the next reboot.

Syncing disks.

[root@paw-racnode1 ~]# fdisk /dev/sdj

Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel

Building a new DOS disklabel. Changes will remain in memory only,

until you decide to write them. After that, of course, the previous

content won't be recoverable.


Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)


Command (m for help): n

Command action

e   extended

p   primary partition (1-4)

p

Partition number (1-4): 1

First cylinder (1-1009, default 1):

Using default value 1

Last cylinder or +size or +sizeM or +sizeK (1-1009, default 1009):

Using default value 1009


Command (m for help): w

The partition table has been altered!


Calling ioctl() to re-read partition table.


WARNING: Re-reading the partition table failed with error 16: Device or resource busy.

The kernel still uses the old table.

The new table will be used at the next reboot.

Syncing disks.

[root@paw-racnode1 ~]# fdisk /dev/sdk

Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel

Building a new DOS disklabel. Changes will remain in memory only,

until you decide to write them. After that, of course, the previous

content won't be recoverable.


Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)


Command (m for help): n

Command action

e   extended

p   primary partition (1-4)

p

Partition number (1-4): 1

First cylinder (1-1009, default 1):

Using default value 1

Last cylinder or +size or +sizeM or +sizeK (1-1009, default 1009):

Using default value 1009


Command (m for help): w

The partition table has been altered!


Calling ioctl() to re-read partition table.


WARNING: Re-reading the partition table failed with error 16: Device or resource busy.

The kernel still uses the old table.

The new table will be used at the next reboot.

Syncing disks.
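The same single primary partition spanning the whole disk is created on every device, so the interactive fdisk dialogue above could also be scripted. A minimal sketch, assuming the default first and last cylinders are acceptable on every disk:

for DISK in /dev/sd{b..k}
do
    # n = new partition, p = primary, 1 = partition number,
    # two empty answers accept the default first/last cylinder, w = write
    echo -e "n\np\n1\n\n\nw" | fdisk "$DISK"
done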

[root@paw-racnode1 ~]# partprobe

Warning: Unable to open /dev/sr0 read-write (Read-only file system).  /dev/sr0 has been opened read-only.

 [root@paw-racnode1 ~]# fdisk -l

Disk /dev/sda: 53.6 GB, 53687091200 bytes

255 heads, 63 sectors/track, 6527 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes


   Device Boot      Start         End      Blocks   Id  System

/dev/sda1   *           1          13      104391   83  Linux

/dev/sda2              14        1318    10482412+  83  Linux

/dev/sda3            1319        1840     4192965   83  Linux

/dev/sda4            1841        6527    37648327+   5  Extended

/dev/sda5            1841        5952    33029608+  83  Linux

/dev/sda6            5953        6527     4618656   82  Linux swap / Solaris


Disk /dev/sdb: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


   Device Boot      Start         End      Blocks   Id  System

/dev/sdb1               1        1009     2095662   83  Linux


Disk /dev/sdc: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


   Device Boot      Start         End      Blocks   Id  System

/dev/sdc1               1        1009     2095662   83  Linux


Disk /dev/sdd: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


   Device Boot      Start         End      Blocks   Id  System

/dev/sdd1               1        1009     2095662   83  Linux


Disk /dev/sde: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


   Device Boot      Start         End      Blocks   Id  System

/dev/sde1               1        1009     2095662   83  Linux


Disk /dev/sdf: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


   Device Boot      Start         End      Blocks   Id  System

/dev/sdf1               1        1009     2095662   83  Linux


Disk /dev/sdg: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


   Device Boot      Start         End      Blocks   Id  System

/dev/sdg1               1        1009     2095662   83  Linux


Disk /dev/sdh: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


   Device Boot      Start         End      Blocks   Id  System

/dev/sdh1               1        1009     2095662   83  Linux


Disk /dev/sdi: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


   Device Boot      Start         End      Blocks   Id  System

/dev/sdi1               1        1009     2095662   83  Linux


Disk /dev/sdj: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


   Device Boot      Start         End      Blocks   Id  System

/dev/sdj1               1        1009     2095662   83  Linux


Disk /dev/sdk: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


   Device Boot      Start         End      Blocks   Id  System

/dev/sdk1               1        1009     2095662   83  Linux

————————————————————————–

[root@paw-racnode2 ~]# partprobe

Warning: Unable to open /dev/sr0 read-write (Read-only file system).  /dev/sr0 has been opened read-only.

[root@paw-racnode2 ~]# fdisk -l

Disk /dev/sda: 53.6 GB, 53687091200 bytes

255 heads, 63 sectors/track, 6527 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes


   Device Boot      Start         End      Blocks   Id  System

/dev/sda1   *           1          13      104391   83  Linux

/dev/sda2              14        1318    10482412+  83  Linux

/dev/sda3            1319        1840     4192965   83  Linux

/dev/sda4            1841        6527    37648327+   5  Extended

/dev/sda5            1841        5952    33029608+  83  Linux

/dev/sda6            5953        6527     4618656   82  Linux swap / Solaris


Disk /dev/sdb: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


   Device Boot      Start         End      Blocks   Id  System

/dev/sdb1               1        1009     2095662   83  Linux


Disk /dev/sdc: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


   Device Boot      Start         End      Blocks   Id  System

/dev/sdc1               1        1009     2095662   83  Linux


Disk /dev/sdd: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


   Device Boot      Start         End      Blocks   Id  System

/dev/sdd1               1        1009     2095662   83  Linux


Disk /dev/sde: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


   Device Boot      Start         End      Blocks   Id  System

/dev/sde1               1        1009     2095662   83  Linux


Disk /dev/sdf: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


   Device Boot      Start         End      Blocks   Id  System

/dev/sdf1               1        1009     2095662   83  Linux


Disk /dev/sdg: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


   Device Boot      Start         End      Blocks   Id  System

/dev/sdg1               1        1009     2095662   83  Linux


Disk /dev/sdh: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


   Device Boot      Start         End      Blocks   Id  System

/dev/sdh1               1        1009     2095662   83  Linux


Disk /dev/sdi: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


   Device Boot      Start         End      Blocks   Id  System

/dev/sdi1               1        1009     2095662   83  Linux


Disk /dev/sdj: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


   Device Boot      Start         End      Blocks   Id  System

/dev/sdj1               1        1009     2095662   83  Linux


Disk /dev/sdk: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


   Device Boot      Start         End      Blocks   Id  System

/dev/sdk1               1        1009     2095662   83  Linux

————————————————————————

[root@paw-racnode3 ~]# partprobe

Warning: Unable to open /dev/sr0 read-write (Read-only file system).  /dev/sr0 has been opened read-only.

[root@paw-racnode3 ~]# fdisk -l

Disk /dev/sda: 53.6 GB, 53687091200 bytes

255 heads, 63 sectors/track, 6527 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System

/dev/sda1   *           1          13      104391   83  Linux

/dev/sda2              14        1318    10482412+  83  Linux

/dev/sda3            1319        1840     4192965   83  Linux

/dev/sda4            1841        6527    37648327+   5  Extended

/dev/sda5            1841        5952    33029608+  83  Linux

/dev/sda6            5953        6527     4618656   82  Linux swap / Solaris


Disk /dev/sdb: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


   Device Boot      Start         End      Blocks   Id  System

/dev/sdb1               1        1009     2095662   83  Linux


Disk /dev/sdc: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


   Device Boot      Start         End      Blocks   Id  System

/dev/sdc1               1        1009     2095662   83  Linux


Disk /dev/sdd: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


   Device Boot      Start         End      Blocks   Id  System

/dev/sdd1               1        1009     2095662   83  Linux


Disk /dev/sde: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


   Device Boot      Start         End      Blocks   Id  System

/dev/sde1               1        1009     2095662   83  Linux


Disk /dev/sdf: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


   Device Boot      Start         End      Blocks   Id  System

/dev/sdf1               1        1009     2095662   83  Linux


Disk /dev/sdg: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


   Device Boot      Start         End      Blocks   Id  System

/dev/sdg1               1        1009     2095662   83  Linux


Disk /dev/sdh: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


   Device Boot      Start         End      Blocks   Id  System

/dev/sdh1               1        1009     2095662   83  Linux


Disk /dev/sdi: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


   Device Boot      Start         End      Blocks   Id  System

/dev/sdi1               1        1009     2095662   83  Linux


Disk /dev/sdj: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


   Device Boot      Start         End      Blocks   Id  System

/dev/sdj1               1        1009     2095662   83  Linux


Disk /dev/sdk: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes


   Device Boot      Start         End      Blocks   Id  System

/dev/sdk1               1        1009     2095662   83  Linux

————————————————————————

[root@paw-racnode1 RPM]# oracleasm configure -i

Configuring the Oracle ASM library driver.


This will configure the on-boot properties of the Oracle ASM library

driver.  The following questions will determine whether the driver is

loaded on boot and what permissions it will have.  The current values

will be shown in brackets ('[]').  Hitting <ENTER> without typing an

answer will keep that current value.  Ctrl-C will abort.


Default user to own the driver interface []: grid

Default group to own the driver interface []: asmadmin

Start Oracle ASM library driver on boot (y/n) [n]: y

Scan for Oracle ASM disks on boot (y/n) [y]: y

Writing Oracle ASM library driver configuration: done

[root@paw-racnode1 RPM]# /usr/sbin/oracleasm init

Creating /dev/oracleasm mount point: /dev/oracleasm

Loading module "oracleasm": oracleasm

Mounting ASMlib driver filesystem: /dev/oracleasm

[root@paw-racnode1 RPM]# /usr/sbin/oracleasm createdisk DISK1 /dev/sdb1

Writing disk header: done

Instantiating disk: done

[root@paw-racnode1 RPM]# /usr/sbin/oracleasm createdisk DISK2 /dev/sdc1

Writing disk header: done

Instantiating disk: done

[root@paw-racnode1 RPM]# /usr/sbin/oracleasm createdisk DISK3 /dev/sdd1

Writing disk header: done

Instantiating disk: done

[root@paw-racnode1 RPM]# /usr/sbin/oracleasm createdisk DISK4 /dev/sde1

Writing disk header: done

Instantiating disk: done

[root@paw-racnode1 RPM]# /usr/sbin/oracleasm createdisk DISK5 /dev/sdf1

Writing disk header: done

Instantiating disk: done

[root@paw-racnode1 RPM]# /usr/sbin/oracleasm createdisk DISK6 /dev/sdg1

Writing disk header: done

Instantiating disk: done

[root@paw-racnode1 RPM]# /usr/sbin/oracleasm createdisk DISK7 /dev/sdh1

Writing disk header: done

Instantiating disk: done

[root@paw-racnode1 RPM]# /usr/sbin/oracleasm createdisk DISK8 /dev/sdi1

Writing disk header: done

Instantiating disk: done

[root@paw-racnode1 RPM]# /usr/sbin/oracleasm createdisk DISK9 /dev/sdj1

Writing disk header: done

Instantiating disk: done

[root@paw-racnode1 RPM]# /usr/sbin/oracleasm createdisk DISK10 /dev/sdk1

Writing disk header: done

Instantiating disk: done

[root@paw-racnode1 RPM]# /usr/sbin/oracleasm listdisks

DISK1

DISK10

DISK2

DISK3

DISK4

DISK5

DISK6

DISK7

DISK8

DISK9

————————————————————————–

[root@paw-racnode2 RPM]# oracleasm configure -i

Configuring the Oracle ASM library driver.


This will configure the on-boot properties of the Oracle ASM library

driver.  The following questions will determine whether the driver is

loaded on boot and what permissions it will have.  The current values

will be shown in brackets ('[]').  Hitting <ENTER> without typing an

answer will keep that current value.  Ctrl-C will abort.


Default user to own the driver interface []: grid

Default group to own the driver interface []: asmadmin

Start Oracle ASM library driver on boot (y/n) [n]: y

Scan for Oracle ASM disks on boot (y/n) [y]: y

Writing Oracle ASM library driver configuration: done

[root@paw-racnode2 RPM]# /usr/sbin/oracleasm init

Creating /dev/oracleasm mount point: /dev/oracleasm

Loading module "oracleasm": oracleasm

Mounting ASMlib driver filesystem: /dev/oracleasm

[root@paw-racnode2 RPM]# /usr/sbin/oracleasm scandisks

Reloading disk partitions: done

Cleaning any stale ASM disks...

Scanning system for ASM disks...

Instantiating disk "DISK1"

Instantiating disk "DISK2"

Instantiating disk "DISK3"

Instantiating disk "DISK4"

Instantiating disk "DISK5"

Instantiating disk "DISK6"

Instantiating disk "DISK7"

Instantiating disk "DISK8"

Instantiating disk "DISK9"

Instantiating disk "DISK10"

 [root@paw-racnode2 RPM]# /usr/sbin/oracleasm listdisks

DISK1

DISK10

DISK2

DISK3

DISK4

DISK5

DISK6

DISK7

DISK8

DISK9

————————————————————————–

[root@paw-racnode3 RPM]# oracleasm configure -i

Configuring the Oracle ASM library driver.


This will configure the on-boot properties of the Oracle ASM library

driver.  The following questions will determine whether the driver is

loaded on boot and what permissions it will have.  The current values

will be shown in brackets ('[]').  Hitting <ENTER> without typing an

answer will keep that current value.  Ctrl-C will abort.


Default user to own the driver interface []: grid

Default group to own the driver interface []: asmadmin

Start Oracle ASM library driver on boot (y/n) [n]: y

Scan for Oracle ASM disks on boot (y/n) [y]: y

Writing Oracle ASM library driver configuration: done

[root@paw-racnode3 RPM]# /usr/sbin/oracleasm init

Creating /dev/oracleasm mount point: /dev/oracleasm

Loading module "oracleasm": oracleasm

Mounting ASMlib driver filesystem: /dev/oracleasm

[root@paw-racnode3 RPM]# /usr/sbin/oracleasm scandisks

Reloading disk partitions: done

Cleaning any stale ASM disks...

Scanning system for ASM disks...

Instantiating disk "DISK1"

Instantiating disk "DISK2"

Instantiating disk "DISK3"

Instantiating disk "DISK4"

Instantiating disk "DISK5"

Instantiating disk "DISK6"

Instantiating disk "DISK7"

Instantiating disk "DISK8"

Instantiating disk "DISK9"

Instantiating disk "DISK10"

[root@paw-racnode3 RPM]# /usr/sbin/oracleasm listdisks

DISK1

DISK10

DISK2

DISK3

DISK4

DISK5

DISK6

DISK7

DISK8

DISK9

ASM disk creation is now complete on all nodes.

Thank you for reading… This is Airy…Enjoy Learning:)

 

Automatic Storage Management Fast Mirror Resync in Oracle 11g – Airy’s Notes

Automatic Storage Management Fast Mirror Resync in Oracle 11g:

The main purpose of introducing ASM was to provide better data redundancy for the database and to reduce the dependence on RAID at the hardware level. ASM provides three types of redundancy:

1. Normal Redundancy
2. High Redundancy
3. External Redundancy.

With normal redundancy, ASM maintains one mirrored copy of each extent (two-way mirroring); with high redundancy it maintains two mirrored copies (three-way mirroring); and with external redundancy, protection is left to external sources such as hardware RAID.
In Oracle 10g, whenever a disk went offline, i.e. became inaccessible to Automatic Storage Management for reads and writes, ASM dropped the disk from the disk group and, as required by the redundancy level, started creating new copies of its extents on the remaining disks, which is a very expensive process in terms of I/O.
Suppose the disk failure was a temporary issue that was fixed within a few hours and the disk is back; ASM would then rebalance the data onto this disk again. In short, a temporary disk failure required two rebalance operations: one when the disk went offline and a second when it came back online.
In Oracle 11g, this double rebalancing is avoided by combining two things: the DISK_REPAIR_TIME attribute and the tracking of extents changed while the disk is offline. Every disk group has a default disk_repair_time (3.6 hours) for which it tolerates disk unavailability; in other words, by default Oracle ASM drops a disk 3.6 hours after it is taken offline.

The time can be specified in units of minutes (m or M) or hours (h or H). If you omit the unit, the default unit is hours.

This attribute can be modified using the commands below.

You can use ALTER DISKGROUP to set the DISK_REPAIR_TIME attribute to a specified hour or minute value, such as 4.5 hours or 270 minutes. For example:

ALTER DISKGROUP data SET ATTRIBUTE 'disk_repair_time' = '4.5h'
ALTER DISKGROUP data SET ATTRIBUTE 'disk_repair_time' = '270m'

This feature applies to normal and high redundancy disk groups. Any problem that makes a failure group temporarily unavailable is considered a transient failure that can be recovered by the Oracle ASM fast mirror resync feature. For example, transient failures can be caused by disk path malfunctions, such as cable failures, host bus adapter failures, controller failures, or disk power supply interruptions.

When a disk goes offline, Automatic Storage Management starts keeping track of the extents that change, using the redundant copies on the surviving disks. ASM writes only the changed data back to the disk once it is online again. This wait continues until disk_repair_time expires; if the disk comes back before that time, ASM only has to update the changed data on the disk.
If the disk does not come back before disk_repair_time expires, ASM drops it from the disk group and starts a rebalance operation.
With this method we can choose how long Automatic Storage Management waits for a disk to come back, and avoid the disk rebalance operations performed in the previous version.

The REPAIR_TIMER column of V$ASM_DISK shows the amount of time left (in seconds) before an offline disk is dropped. After the specified time has elapsed, Oracle ASM drops the disk.
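For example, the remaining repair time of any offline disk can be checked directly from V$ASM_DISK (the WHERE clause below is only illustrative):

SQL> SELECT name, failgroup, mode_status, repair_timer FROM v$asm_disk WHERE mode_status = 'OFFLINE';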

Query to find disk_repair_time for a disk group:

SQL> select name,value,group_number from v$asm_attribute where name like 'disk_repair%';
 
 NAME                 VALUE      GROUP_NUMBER
 -------------------- ------------ ---------------------------
 disk_repair_time     3.6h                1
 disk_repair_time     3.6h                2

After you repair the disk, run the SQL statement ALTER DISKGROUP ONLINE DISK. This statement brings a repaired disk back online to enable writes so that no new writes are missed. It also starts a procedure to copy all of the extents that are marked as stale from their redundant copies.

The following example takes disk DATA_001 offline and drops it after five minutes.

ALTER DISKGROUP data OFFLINE DISK DATA_001 DROP AFTER 5m;

The next example takes the disk DATA_001 offline and drops it after the time period designated by DISK_REPAIR_TIME elapses:

ALTER DISKGROUP data OFFLINE DISK DATA_001;

This example takes all of the disks in failure group FG2 offline and drops them after the time period designated by DISK_REPAIR_TIME elapses. If you used a DROP AFTER clause, then the disks would be dropped after the specified time:

ALTER DISKGROUP data OFFLINE DISKS IN FAILGROUP FG2;

The next example brings all of the disks in failure group FG2 online:

ALTER DISKGROUP data ONLINE DISKS IN FAILGROUP FG2;

This example brings only disk DATA_001 online:

ALTER DISKGROUP data ONLINE DISK DATA_001;

This example brings all of the disks in disk group DATA online:

ALTER DISKGROUP data ONLINE ALL;

Querying the V$ASM_OPERATION view while you run ALTER DISKGROUP ONLINE statements displays the name and state of the current operation that you are performing. For example, the following SQL query shows values in the PASS column during an online operation.

SQL> SELECT GROUP_NUMBER, PASS, STATE FROM V$ASM_OPERATION;

GROUP_NUMBER           PASS          STAT

-------------------  --------------  ---------------------

           1         RESYNC           RUN

           1         REBALANCE        WAIT

           1         COMPACT          WAIT

An offline operation does not generate a display in a V$ASM_OPERATION view query.

You can set the FAILGROUP_REPAIR_TIME and CONTENT.TYPE disk group attributes. The FAILGROUP_REPAIR_TIME disk group attribute specifies a default repair time for the failure groups in the disk group. The CONTENT.TYPE disk group attribute specifies the type of data expected to be stored in a disk group. You can set these attributes with ASMCA, ASMCMD mkdg, or the SQL CREATE DISKGROUP and ALTER DISKGROUP statements.
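A minimal sketch of setting both attributes with ALTER DISKGROUP (the values shown are illustrative, not recommendations):

ALTER DISKGROUP data SET ATTRIBUTE 'failgroup_repair_time' = '24h';
ALTER DISKGROUP data SET ATTRIBUTE 'content.type' = 'data';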

The ASMCMD lsop command shows the resync time estimate. There are separate rows in the V$ASM_OPERATION view for the different phases of rebalance: disk resync, rebalance, and data compaction.

The ASMCMD online command has a power option to specify the power for the online operation. The SQL ALTER DISKGROUP REPLACE DISK statement also has the power option.

The ASMCMD chdg command provides the replace option in addition to the add and drop tags. The ASMCMD mkdg command has an additional time parameter (-t) to specify the time to offline a failure group.

Practical Approach:

I will create a transient disk path failure, fix the failure, and finally put the failed disks back online. The objective is to set up ASM Fast Mirror Resync and monitor it before and after a transient disk path failure. In this scenario, I have used an 11gR2 Oracle Database and an 11gR2 ASM instance on RHEL 5.7. If you would like to replay this scenario yourself, then create or find a sandbox, in other words a box which has no significant importance to others. Otherwise some of the tasks performed here may cause terrible consequences that you would never want to happen to you.

Following tasks will be performed in the order given.

1. Perform prerequisite actions to use ASM fast mirror resync
2. Simulate a transient failure
3. Fix the transient failure
4. Monitor the Fast Mirror Resync in action

All the scripts used in the scenario are listed at the end of this post.
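Since only the output of chkasmdiskstat.sql appears in this post, here is a minimal sketch of what it could contain, reconstructed from the columns shown in its output (the downloadable script may differ):

set linesize 200 pagesize 100
column path format a28
SELECT disk_number, name, path, mount_status, header_status,
       mode_status, state, failgroup, repair_timer
FROM   v$asm_disk
ORDER  BY name;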

1: Perform prerequisite actions to use ASM fast mirror resync :

As mentioned above, we need to use ASM redundancy (not external redundancy). Create a disk group with normal redundancy and a tablespace with a datafile on this disk group.

[oracle@PawASM ~]$ . oraenv

ORACLE_SID = [MYASM] ? +ASM

The Oracle base for ORACLE_HOME=/u01/app/oracle/product/11.2.0/grid is /u01/app/oracle

[oracle@PawASM ~]$sqlplus / as sysasm

SQL> create diskgroup PAWDG normal redundancy

    failgroup PAWDG_FG1 disk

         '/dev/oracleasm/disks/DISK4',

         '/dev/oracleasm/disks/DISK5',

         '/dev/oracleasm/disks/DISK6'

    failgroup PAWDG_FG2 disk

         '/dev/oracleasm/disks/DISK7',

         '/dev/oracleasm/disks/DISK8',

         '/dev/oracleasm/disks/DISK9' ;

SQL> select name,header_status,mount_status,state,path from V$asm_disk;

NAME           HEADER_STATU MOUNT_S STATE    PATH

-------------- ------------ ------- -------- --------------------------------

               FORMER       CLOSED  NORMAL   /dev/oracleasm/disks/DISK10

DATA_0000      MEMBER       CACHED  NORMAL   /dev/oracleasm/disks/DISK1

DATA_0001      MEMBER       CACHED  NORMAL   /dev/oracleasm/disks/DISK2

DATA_0002      MEMBER       CACHED  NORMAL   /dev/oracleasm/disks/DISK3

PAWDG_0000     MEMBER       CACHED  NORMAL   /dev/oracleasm/disks/DISK4

PAWDG_0001     MEMBER       CACHED  NORMAL   /dev/oracleasm/disks/DISK5

PAWDG_0002     MEMBER       CACHED  NORMAL   /dev/oracleasm/disks/DISK6

PAWDG_0003     MEMBER       CACHED  NORMAL   /dev/oracleasm/disks/DISK7

PAWDG_0004     MEMBER       CACHED  NORMAL   /dev/oracleasm/disks/DISK8

PAWDG_0005     MEMBER       CACHED  NORMAL   /dev/oracleasm/disks/DISK9

[oracle@PawASM ~]$ . oraenv

ORACLE_SID = [MYASM] ? +ASM

The Oracle base for ORACLE_HOME=/u01/app/oracle/product/11.2.0/grid is /u01/app/oracle

[oracle@PawASM ~]$ asmcmd

 ASMCMD> lsdg

State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name

MOUNTED  NORMAL  N         512   4096  1048576     11460     5030             2144            1443              0             N  DATA/

MOUNTED  NORMAL  N         512   4096  1048576     22920    22810               20           11395              0             N  PAWDG/

ASMCMD>   exit

[oracle@PawASM ~]$ . oraenv

ORACLE_SID = [+ASM] ? MYASM

The Oracle base for ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1 is /u01/app/oracle

[oracle@PawASM ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Sun Jan 31 14:33:05 2016

Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Connected to:

Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production

With the Partitioning, Automatic Storage Management, OLAP, Data Mining

and Real Application Testing options

SQL> create tablespace TS_ASMSYNC datafile '+PAWDG/TS_ASMSYNC_01.dbf' size 128M;

Tablespace created.

The ASM disk group attributes COMPATIBLE.RDBMS and COMPATIBLE.ASM must be set to a minimum software release of 11.1, so a 10g database cannot use this feature. Set up Fast Mirror Resync as shown below. Setting DISK_REPAIR_TIME is optional; it has a default value of 3.6h.

[oracle@PawASM ~]$ sqlplus / as sysasm

SQL*Plus: Release 11.2.0.1.0 Production on Sun Jan 31 14:32:46 2016

Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Connected to:

Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production

With the Automatic Storage Management option

SQL> alter diskgroup PAWDG set attribute 'compatible.asm' = '11.2.0.0.0';

Diskgroup altered.

SQL> alter diskgroup PAWDG set attribute 'compatible.rdbms' = '11.2.0.0.0';

Diskgroup altered.

SQL> alter diskgroup PAWDG set attribute 'disk_repair_time'='5h';

Diskgroup altered.

List the new attributes we have just updated using the script dg_attribs.sql
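A minimal sketch of what dg_attribs.sql could look like, reconstructed from the output columns below (the actual script is in the downloadable bundle):

SELECT dg.name   "dg name",
       a.name    "attrib name",
       a.value,
       a.read_only
FROM   v$asm_attribute a
JOIN   v$asm_diskgroup dg ON dg.group_number = a.group_number
WHERE  dg.name = 'PAWDG';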

SQL> @dg_attribs.sql;

dg name              attrib name                    VALUE                READ_ON

-------------------- ------------------------------ -------------------- ---

PAWDG                 disk_repair_time               5h                   N

PAWDG                 au_size                        1048576              Y

PAWDG                 sector_size                    512                  Y

PAWDG                 access_control.umask           066                  N

PAWDG                 compatible.rdbms               11.2.0.0.0           N

PAWDG                 cell.smart_scan_capable        FALSE                N

PAWDG                 access_control.enabled         FALSE                N

PAWDG                 compatible.asm                 11.2.0.0.0           N

8 rows selected.

 2: Simulate a transient failure: 

As can be seen from the paths of the ASM disks below, I have used ASMLib disks as ASM storage. Each device corresponds to a disk in a disk group. Since we can change the file access privileges of these devices, I can create a transient disk path failure by changing the read-write permissions of the devices.

Below is the status of the ASM disks just before the failure.

SQL> @chkasmdiskstat.sql;

 DISK_NUMBER NAME       PATH                        MOUNT_S HEADER_STATU MODE_ST STATE    FAILGROUP  REPAIR_TIMER

----------- ---------- --------------------------- ------- ------------ ------- -------- ---------- ------------

          0  DATA_0000 /dev/oracleasm/disks/DISK1 CACHED  MEMBER       ONLINE  NORMAL   DATA_0000             0

          1  DATA_0001 /dev/oracleasm/disks/DISK2 CACHED  MEMBER       ONLINE  NORMAL   DATA_0001             0

          2  DATA_0002 /dev/oracleasm/disks/DISK3 CACHED  MEMBER       ONLINE  NORMAL   DATA_0002             0

          0 PAWDG_0000 /dev/oracleasm/disks/DISK4 CACHED  MEMBER       ONLINE  NORMAL   PAWDG_FG1             0

          1 PAWDG_0001 /dev/oracleasm/disks/DISK5 CACHED  MEMBER       ONLINE  NORMAL   PAWDG_FG1             0

          2 PAWDG_0002 /dev/oracleasm/disks/DISK6 CACHED  MEMBER       ONLINE  NORMAL   PAWDG_FG1             0

          3 PAWDG_0003 /dev/oracleasm/disks/DISK7 CACHED  MEMBER       ONLINE  NORMAL   PAWDG_FG2             0

          4 PAWDG_0004 /dev/oracleasm/disks/DISK8 CACHED  MEMBER       ONLINE  NORMAL   PAWDG_FG2             0

          5 PAWDG_0005 /dev/oracleasm/disks/DISK9 CACHED  MEMBER       ONLINE  NORMAL   PAWDG_FG2             0

          9           /dev/oracleasm/disks/DISK10 CLOSED  FORMER       ONLINE  NORMAL                         0

10 rows selected.

Below I am simulating a transient disk path failure by changing the read-write permissions of the disks to 000. From this point on, ASM cannot access these disks for reads or writes.

[root@PawASM ~]# chmod 000 /dev/oracleasm/disks/DISK7 /dev/oracleasm/disks/DISK8 /dev/oracleasm/disks/DISK9 

At this stage it is better to execute a workload so that some of the existing redundant extent copies in failure group PAWDG_FG1 are modified. It also makes ASM realize that some disks are missing when it tries to mirror those changes to the extents on the failed disks, as shown in the run below.
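The exact workload.sql is part of the downloadable scripts; below is a minimal sketch that would produce the output shown in the next listing (the user name, password and row count are assumptions):

connect / as sysdba
create user wl_user identified by wl_user;
grant connect, resource to wl_user;
alter user wl_user default tablespace TS_ASMSYNC quota unlimited on TS_ASMSYNC;
connect wl_user/wl_user
drop table table_workload;   -- raises ORA-00942 on the first run
create table table_workload (id number, pad varchar2(100)) tablespace TS_ASMSYNC;
begin
  for i in 1 .. 1048576 loop
    insert into table_workload values (i, rpad('x', 100, 'x'));
  end loop;
end;
/
commit;
select count(1) from table_workload;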

SQL> @workload.sql;

Connected.

User created.

Grant succeeded.

User altered.

Connected.

drop table table_workload
           *
ERROR at line 1:

ORA-00942: table or view does not exist
Table created.

PL/SQL procedure successfully completed.

Commit complete.

  COUNT(1)
----------
   1048576

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production With the Automatic Storage Management option

[oracle@PawASM ~]$ 

As can be seen below, ASM cannot access any disk in failure group PAWDG_FG2 and has taken these disks offline. The disk repair timer has also started. In addition, you can find further details about the consequences of this failure in the ASM alert.log.

[oracle@PawASM ~]$ sqlplus / as sysasm

SQL*Plus: Release 11.2.0.1.0 Production on Sun Jan 31 15:01:20 2016

Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Connected to:

Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production

With the Automatic Storage Management option

SQL> @chkasmdiskstat.sql;

DISK_NUMBER NAME       PATH                        MOUNT_S HEADER_STATU MODE_ST STATE    FAILGROUP  REPAIR_TIMER

----------- ---------- --------------------------- ------- ------------ ------- -------- ---------- ------------

          0 DATA_0000  /dev/oracleasm/disks/DISK1 CACHED  MEMBER       ONLINE  NORMAL   DATA_0000             0

          1 DATA_0001  /dev/oracleasm/disks/DISK2 CACHED  MEMBER       ONLINE  NORMAL   DATA_0001             0

          2 DATA_0002  /dev/oracleasm/disks/DISK3 CACHED  MEMBER       ONLINE  NORMAL   DATA_0002             0

          0 PAWDG_0000 /dev/oracleasm/disks/DISK4 CACHED  MEMBER       ONLINE  NORMAL   PAWDG_FG1             0

          1 PAWDG_0001 /dev/oracleasm/disks/DISK5 CACHED  MEMBER       ONLINE  NORMAL   PAWDG_FG1             0

          2 PAWDG_0002 /dev/oracleasm/disks/DISK6 CACHED  MEMBER       ONLINE  NORMAL   PAWDG_FG1             0

          3 PAWDG_0003                            MISSING UNKNOWN      OFFLINE NORMAL   PAWDG_FG2            18000

          4 PAWDG_0004                            MISSING UNKNOWN      OFFLINE NORMAL   PAWDG_FG2            18000

          5 PAWDG_0005                            MISSING UNKNOWN      OFFLINE NORMAL   PAWDG_FG2            18000

          0            /dev/oracleasm/disks/DISK7 CLOSED  MEMBER       ONLINE  NORMAL                         0

          1            /dev/oracleasm/disks/DISK8 CLOSED  MEMBER       ONLINE  NORMAL                         0

          2            /dev/oracleasm/disks/DISK9 CLOSED  MEMBER       ONLINE  NORMAL                         0

          9            /dev/oracleasm/disks/DISK10 CLOSED  FORMER       ONLINE  NORMAL                        0

13 rows selected.

3: Fix the transient failure :

The fix is simply to revert the raw device read-write permissions to 660. This makes raw devices accessible by ASM again.

[root@PawASM ~]# chmod 660 /dev/oracleasm/disks/DISK7 /dev/oracleasm/disks/DISK8 /dev/oracleasm/disks/DISK9

Now it is time to take the disks online.

SQL> alter diskgroup PAWDG online disks in failgroup PAWDG_FG2;

Diskgroup altered.

4: Monitor the Fast Mirror Resync in action:

Below it can be seen that the stale extents of the disks taken online are being resynchronized. Look at the devices below that show the value SYNCING in the MODE_ST (MODE_STATUS) column.

 SQL> @chkasmdiskstat.sql;

DISK_NUMBER NAME       PATH                        MOUNT_S HEADER_STATU MODE_ST STATE    FAILGROUP  REPAIR_TIMER

----------- ---------- --------------------------- ------- ------------ ------- -------- ---------- ------------

          0 DATA_0000  /dev/oracleasm/disks/DISK1  CACHED  MEMBER       ONLINE  NORMAL   DATA_0000             0

          1 DATA_0001  /dev/oracleasm/disks/DISK2  CACHED  MEMBER       ONLINE  NORMAL   DATA_0001             0

          2 DATA_0002  /dev/oracleasm/disks/DISK3  CACHED  MEMBER       ONLINE  NORMAL   DATA_0002             0

          0 PAWDG_0000 /dev/oracleasm/disks/DISK4  CACHED  MEMBER       ONLINE  NORMAL   PAWDG_FG1             0

          1 PAWDG_0001 /dev/oracleasm/disks/DISK5  CACHED  MEMBER       ONLINE  NORMAL   PAWDG_FG1             0

          2 PAWDG_0002 /dev/oracleasm/disks/DISK6  CACHED  MEMBER       ONLINE  NORMAL   PAWDG_FG1             0

          3 PAWDG_0003 /dev/oracleasm/disks/DISK7  CACHED  MEMBER       SYNCING NORMAL   PAWDG_FG2           17816

          4 PAWDG_0004 /dev/oracleasm/disks/DISK8  CACHED  MEMBER       SYNCING NORMAL   PAWDG_FG2           17816

          5 PAWDG_0005 /dev/oracleasm/disks/DISK9  CACHED  MEMBER       SYNCING NORMAL   PAWDG_FG2           17816

          9            /dev/oracleasm/disks/DISK10 CLOSED  FORMER       ONLINE  NORMAL                         0

10 rows selected.

The sync operation is also reported with the status ONLINE in the OPERATION column of the V$ASM_OPERATION view. See below.
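chkasmoper.sql is essentially a query against V$ASM_OPERATION; a minimal sketch based on the columns in its output (the packaged script may differ):

SELECT group_number, operation, state FROM v$asm_operation;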

SQL> @chkasmoper.sql

GROUP_NUMBER OPERA STAT

------------ ----- ----

1            ONLIN RUN

After the sync operation completes, redundancy is restored and the disks are online again. See below.

SQL> @chkasmdiskstat.sql;

DISK_NUMBER NAME       PATH                        MOUNT_S HEADER_STATU MODE_ST STATE    FAILGROUP  REPAIR_TIMER

----------- ---------- --------------------------- ------- ------------ ------- -------- ---------- ------------

          0 DATA_0000  /dev/oracleasm/disks/DISK1  CACHED  MEMBER       ONLINE  NORMAL   DATA_0000             0

          1 DATA_0001  /dev/oracleasm/disks/DISK2  CACHED  MEMBER       ONLINE  NORMAL   DATA_0001             0

          2 DATA_0002  /dev/oracleasm/disks/DISK3  CACHED  MEMBER       ONLINE  NORMAL   DATA_0002             0

          0 PAWDG_0000  /dev/oracleasm/disks/DISK4 CACHED  MEMBER       ONLINE  NORMAL   PAWDG_FG1             0

          1 PAWDG_0001  /dev/oracleasm/disks/DISK5 CACHED  MEMBER       ONLINE  NORMAL   PAWDG_FG1             0

          2 PAWDG_0002  /dev/oracleasm/disks/DISK6 CACHED  MEMBER       ONLINE  NORMAL   PAWDG_FG1             0

          3 PAWDG_0003  /dev/oracleasm/disks/DISK7 CACHED  MEMBER       ONLINE  NORMAL   PAWDG_FG2             0

          4 PAWDG_0004  /dev/oracleasm/disks/DISK8 CACHED  MEMBER       ONLINE  NORMAL   PAWDG_FG2             0

          5 PAWDG_0005  /dev/oracleasm/disks/DISK9 CACHED  MEMBER       ONLINE  NORMAL   PAWDG_FG2             0

          9            /dev/oracleasm/disks/DISK10 CLOSED  FORMER       ONLINE  NORMAL                         0

10 rows selected.

 

Click on the link to download the scripts used in "Automatic Storage Management Fast Mirror Resync in Oracle 11g".

The password to open the document is Airydba@159

Thank you for Reading…This is AIRY…Enjoy:)

Oracle ASM File Access Control for Disk Groups in Oracle 11g – Airy’s Notes

Oracle ASM File Access Control for Disk Groups in Oracle 11g:

Oracle ASM File Access Control provides optional protection for the content of Oracle ASM disk groups from accidental access by unauthorized Oracle ASM clients, such as an unauthorized database.

To set up Oracle ASM File Access Control, you must designate separate operating system groups.

You can manage Oracle ASM file access control with ASMCMD commands, Oracle Enterprise Manager, and SQL statements.

What is Oracle ASM File Access Control?

Oracle ASM File Access Control restricts the access of files to specific Oracle ASM clients that connect as SYSDBA. An Oracle ASM client is typically a database, which is identified as the user that owns the database instance home. Oracle ASM File Access Control uses this user name to identify a database. Oracle ASM File Access Control restricts access based on the operating system effective user identification number of a database owner. For example, the databases are identified as oracle1 and oracle2.

Oracle ASM uses file access control to determine the additional privileges that are given to a database that has been authenticated AS SYSDBA on the Oracle ASM instance. These additional privileges include the ability to modify and delete certain files, aliases, and user groups.

You can set up user groups to specify the list of databases that share the same access permissions to Oracle ASM files. User groups are lists of databases and any database that authenticates AS SYSDBA can create a user group. However, only the creator of a group can delete it or modify its membership list.

Each Oracle ASM file has three categories of privileges: owner, group, and other. Each category can have no permission, read-only permission, or read-write permission.

The file owner is usually the creator of the file and can assign permissions for the file in any of the owner, group, or other categories. The owner can also change the group associated with the file.

When administering Oracle ASM File Access Control, Oracle recommends that you connect as SYSDBA to the database instance that is the owner, or planned owner, of the files in the disk group.

To set up Oracle ASM File Access Control for files in a disk group, perform the following steps:

  1. Alter a new or existing disk group to set the Oracle ASM File Access Control disk group attributes.

For a newly-created disk group, you should set the disk group attributes before creating any files in the disk group.

  2. For files that exist in a disk group before setting the Oracle ASM File Access Control disk group attributes, you must explicitly set the permissions and ownership on those existing files.

Ensure that the user exists before setting ownership or permissions on a file. The file must be closed before setting the ownership or permissions. We can do this by using

ALTER DISKGROUP SET PERMISSION and ALTER DISKGROUP SET OWNERSHIP.

Optionally, you can create user groups that are groups of database users that share the same access permissions to Oracle ASM files.

Setting Disk Group Attributes for Oracle ASM File Access Control

To manage Oracle ASM File Access Control for a disk group, you must set the ACCESS_CONTROL.ENABLED and ACCESS_CONTROL.UMASK disk group attributes. You can set the attributes by altering the disk group with the ALTER DISKGROUP SQL statement, or with the ASMCMD setattr command.
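For example, a minimal sketch of setting the same attributes from ASMCMD (the disk group name is illustrative):

ASMCMD> setattr -G data access_control.enabled true
ASMCMD> setattr -G data access_control.umask 026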

When you set up file access control on an existing disk group, the files previously created remain accessible by everyone, unless you run the ALTER DISKGROUP SET PERMISSION SQL statement or the ASMCMD chmod command to restrict the permissions.

The COMPATIBLE.ASM and COMPATIBLE.RDBMS disk group attributes must be set to 11.2 or higher to enable Oracle ASM File Access Control. The disk group attributes that control Oracle ASM File Access Control are the following:

  • ACCESS_CONTROL.ENABLED

This attribute determines whether Oracle ASM File Access Control is enabled for a disk group.

The value can be true or false. The default is false.

If the attribute is set to true, accessing Oracle ASM files is subject to access control. If false, any user can access every file in the disk group. All other operations behave independently of this attribute.

  • ACCESS_CONTROL.UMASK

This attribute determines which permissions are masked out on the creation of an Oracle ASM file for the user that owns the file, users in the same user group, and others not in the user group. This attribute applies to all files on a disk group.

The values can be combinations of three digits {0|2|6} {0|2|6} {0|2|6}. The default is 066.

Setting to 0 masks out nothing. Setting to 2 masks out write permission. Setting to 6 masks out both read and write permissions.

Before setting the ACCESS_CONTROL.UMASK disk group attribute, you must set the ACCESS_CONTROL.ENABLED attribute to true to enable Oracle ASM File Access Control.

The following shows how to enable Oracle ASM File Access Control for a disk group using SQL*Plus. In the example below, the umask setting is 026, which enables read-write access for the owner, read access for users in the group, and no access for others not in the group:

ALTER DISKGROUP data1 SET ATTRIBUTE 'access_control.enabled' = 'true';

ALTER DISKGROUP data1 SET ATTRIBUTE 'access_control.umask' = '026';

SQL Statements used to Manage Oracle ASM File Access Control :

You can use the ALTER DISKGROUP SQL statement to manage file access control for Oracle ASM disk groups. These SQL statements are available for both database and Oracle ASM instances.

The SQL statements that support disk group access control are:

  • ALTER DISKGROUP ADD USERGROUP … WITH MEMBER

Adds an Oracle ASM user group to a disk group. The user group name is limited to a maximum of 30 characters. The databases identified in the MEMBER clause must be in the disk group, as shown by V$ASM_USER, or the command returns an error. Any user authenticated as SYSASM or SYSDBA can create new user groups. For example:

SQL> SELECT group_number, os_name FROM V$ASM_USER;

GROUP_NUMBER OS_NAME

------------ --------------------------------------------------------------

           1 oracle1

           1 oracle2
SQL> ALTER DISKGROUP data ADD USERGROUP 'test_grp1' WITH MEMBER 'oracle1','oracle2';
  • ALTER DISKGROUP DROP USERGROUP

Drops an Oracle ASM user group from a disk group. Dropping a group might leave some files without a valid group. For those files to have a valid group, you must manually update the group associated with those files to a valid group.

SQL> ALTER DISKGROUP data DROP USERGROUP 'test_grp1';
  • ALTER DISKGROUP MODIFY USERGROUP ADD MEMBER

Adds users to the specified user group. The users must be in the disk group, as shown by V$ASM_USER, or the command returns an error. Only the creator of the group or the Oracle ASM administrator can modify group membership.

SQL> ALTER DISKGROUP data MODIFY USERGROUP 'test_grp2' ADD MEMBER 'oracle2';
  • ALTER DISKGROUP MODIFY USERGROUP DROP MEMBER

Removes users from the specified user group. If a member is not in the user group, then an error is returned. Only the creator of the group or the Oracle ASM administrator can modify group membership.

SQL> ALTER DISKGROUP data MODIFY USERGROUP 'test_grp2' DROP MEMBER 'oracle2';
  • ALTER DISKGROUP ADD USER

Adds operating system (OS) users to an Oracle ASM disk group, so that these users can have access privileges on the disk group. The users must be existing operating system users, and their user names must have a corresponding operating system user ID or system ID. If a user exists in the disk group, as shown by V$ASM_USER, then the command records an error and continues to add other users, if any.

The operating system user of a running database instance is automatically added to a disk group when the database instance accesses that disk group and creates files. However, for a database instance to read files in a disk group without creating any files, you must use the ADD USER clause to add that database user to the disk group. You can also use this clause to add a database user to an existing disk group immediately after setting the Oracle ASM File Access Control disk group attributes and before creating new files.

SQL> ALTER DISKGROUP DATA ADD USER 'oracle1';
  • ALTER DISKGROUP DROP USER

Drops operating system users from an Oracle ASM disk group. If a user is not in the disk group, then this command records an error and continues to drop other users, if any.

If the user owns any files on the same Oracle ASM disk group, then this command fails with an error, unless the CASCADE keyword is specified. In the latter case, the user is deleted, along with all the files that the user owns.

If any files owned by the user are currently open, then the DROP USER command fails, and no files are deleted.

SQL>  ALTER DISKGROUP DATA DROP USER 'oracle1';
  • ALTER DISKGROUP SET PERMISSION

Modifies permissions of an Oracle ASM file. Setting read only permission to a file that has read write permission revokes the write permission. Only the file owner or the Oracle ASM administrator can change the permissions of a file. You cannot change the permissions on an open file.

SQL> ALTER DISKGROUP data SET PERMISSION OWNER=read write, GROUP=read only,

     OTHER=none FOR FILE '+data/controlfile.f';
  • ALTER DISKGROUP SET OWNERSHIP

Changes the owner or group of a file to the specified user or user group name, respectively. If the specified user or user group name does not exist, this command fails with an error. Only the owner of the file or the Oracle ASM administrator can run this command, and only the Oracle ASM administrator can change the owner. Also, the user group name must exist, and the owner of the file must be a member of that group. You cannot change the ownership of an open file.

SQL> ALTER DISKGROUP data SET OWNERSHIP OWNER='oracle1', GROUP='test_grp1'

     FOR FILE '+DATA/controlfile.f';
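
To cross-check ownership and permissions after such changes, you can query the ASM views. This is only a sketch, assuming the PERMISSIONS and USER_NUMBER columns of V$ASM_FILE that are populated once File Access Control is in use:

-- cross-check ASM file owner and permissions (assumes 11.2 V$ASM_FILE columns)
SQL> select a.name, f.permissions, u.os_name as owner
     from v$asm_file f
     join v$asm_alias a
       on a.group_number = f.group_number and a.file_number = f.file_number
     left join v$asm_user u
       on u.group_number = f.group_number and u.user_number = f.user_number;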

Thank you for reading… This is Airy…Enjoy Learning:)

Using ASM Scalability and Performance Enhancements in Oracle 11g – Airy’s Notes

Using ASM Scalability and Performance Enhancements in Oracle 11g:

This feature increases the maximum data file size that Oracle can support to 128 TB. ASM supports file sizes greater than 128 TB in any redundancy mode. This provides near unlimited capacity for future growth. The ASM file size limits are:

  • External redundancy – 140 PB
  • Normal redundancy – 42 PB
  • High redundancy – 15 PB

Customers can also increase the allocation unit size for a disk group in powers of 2 up to 64 MB.

These improvements reduce database startup time and memory requirements, and allow support for larger ASM files, making it feasible to implement several hundred TB or even PB Oracle databases on ASM. Larger allocation units provide better sequential read performance.

Use the following criteria to determine the number of disk groups to create:

  • Disks in a given disk group should have similar size and performance characteristics. If you have several different types of disks in terms of size and performance, then create several disk groups that contain similar characteristics.
  • Create separate disk groups for your database files and fast recovery area for backup files. This configuration allows fast recovery should a disk group failure occur.

ASM files are stored in a disk group as a collection of extents. In Oracle 10g each extent mapped directly to one allocation unit (AU), but in Oracle 11g an extent can be made up of one or more allocation units. As files get larger, the extent size can grow to reduce the size of the extent map describing the file, thus saving memory.

When the disk group compatibility attributes are set to 11.1 or higher, the extent size will automatically grow as the file grows. In 11.1, the first 20,000 extents match the allocation unit size (1*AU). The next 20,000 extents are made up of 8 allocation units (8*AU). Beyond that point, the extent size becomes 64 allocation units (64*AU). In 11.2 this behavior has changed from 1, 8, 64 to 1, 4, 16.
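As a rough worked example with a 1 MB AU in 11.1: the first 20,000 extents (1*AU) cover about 20 GB of a file, the next 20,000 extents (8*AU) cover roughly another 160 GB, and anything beyond that is addressed with 64 MB (64*AU) extents, so even multi-terabyte files need comparatively few extent map entries.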

In addition to the automatic expansion of the extent sizes, Oracle 11g also allows control over the allocation unit size using the ATTRIBUTE clause in the CREATE DISKGROUP statement, with values ranging from 1M to 64M.

  SQL> CREATE DISKGROUP disk_group_2 EXTERNAL REDUNDANCY

       DISK '/dev/sde1'

       ATTRIBUTE 'au_size' = '32M';

The combination of expanding extent sizes and larger allocation units should result in increased I/O performance for very large databases.

Specifying the Allocation Unit Size

An AU is the basic unit of allocation within an ASM disk group. In Oracle Database 10g, each extent consisted of a single AU, which created problems with memory usage: a large database with numerous default-sized allocation units needed a very large amount of memory in the shared pool to manage the extent maps. The default AU size is only 1 MB. File extents contain a minimum of one AU, and an ASM file consists of at least one extent.

In Oracle Database 11g you can have variable size extents, with extents of 1, 4, 16, 32, and 64 megabytes. The ability to use variable-size ASM extents means that ASM can now support larger files while using less memory. ASM sets thresholds for each file and, as the file grows, ASM increases the extent size based on those file size thresholds. Thus, a file can start with 1 MB extents, with ASM increasing the extent size to 4, 16, 32, or 64 megabytes as the file grows. Note that the extent size can vary among files as well as within a file. As a result of the variable extents feature, the database needs fewer extent pointers to describe an ASM file and less memory to manage the extent maps in the shared pool, making it easier to implement large ASM configurations.

Variable size extents raise the maximum possible ASM file size and also reduce shared pool memory requirements. You can periodically rebalance a disk group to avoid external fragmentation, which may occur because of the allocation and freeing of small data extents. However, the database will also automatically perform defragmentation when it has a problem finding a right-sized extent during extent allocation. ASM is also more scalable in Oracle Database 11g than in Oracle Database 10g: the maximum ASM file size for external redundancy is now 140 petabytes, instead of 35 terabytes in Oracle Database 10g. Variable extent sizes enable you to configure ASM installations that are several hundred terabytes or even several petabytes in size. You can set the au_size attribute easily in Oracle Database 11g, by using the new attribute clause in the create diskgroup command, as shown here:

SQL> create diskgroup PAWDG6
 external redundancy
 disk
 '/dev/raw/raw1'
 attribute 'au_size' = '8M';

You can also specify the au_size in bytes instead of megabytes. You can set an au_size of 1, 2, 4, 8, 16, 32, or 64 megabytes.
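
For instance, the same 8 MB allocation unit could be requested in bytes instead. This is only a sketch; the disk group name PAWDG7 and the disk path are hypothetical:

-- 8 MB expressed in bytes: 8 * 1048576 = 8388608
SQL> create diskgroup PAWDG7
 external redundancy
 disk
 '/dev/raw/raw2'
 attribute 'au_size' = '8388608';
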
You can find out the allocation unit sizes for all disk groups by executing the following query on the V$ASM_DISKGROUP view:

 SQL> select name, allocation_unit_size from v$asm_diskgroup;

NAME                                   ALLOCATION_UNIT_SIZE
 ------------------  ---------------------------------------------------
 PAWDGROUP1                               1048576
 PAWDGROUP3                               1048576
 PAWDGGROUP4                              1048576
 PAWDGGROUP2                              1048576

The variable extent sizes feature means that you can tailor the allocation unit sizes to meet the needs of your system.

The allocation unit size is specified with the disk group attribute AU_SIZE. The AU size cannot be altered after a disk group is created. The example given below shows how AU_SIZE is specified with the CREATE DISKGROUP SQL statement.

 SQL> CREATE DISKGROUP data NORMAL REDUNDANCY

     FAILGROUP controller1 DISK

    '/devices/diska1' NAME diska1,

    '/devices/diska2' NAME diska2,

    '/devices/diska3' NAME diska3,

    '/devices/diska4' NAME diska4

  FAILGROUP controller2 DISK

    '/devices/diskb1' NAME diskb1,

    '/devices/diskb2' NAME diskb2,

    '/devices/diskb3' NAME diskb3,

    '/devices/diskb4' NAME diskb4

  ATTRIBUTE 'au_size'='4M',

    'compatible.asm' = '11.2',

    'compatible.rdbms' = '11.2',

    'compatible.advm' = '11.2';

DB_FILE_MULTIBLOCK_READ_COUNT is one of the parameters you can use to minimize I/O during table scans. It specifies the maximum number of blocks read in one I/O operation during a sequential scan. The total number of I/Os needed to perform a full table scan depends on such factors as the size of the table, the multiblock read count, and whether parallel execution is being utilized for the operation.

As of Oracle Database 10g release 2, the default value of this parameter is a value that corresponds to the maximum I/O size that can be performed efficiently. This value is platform-dependent and is 1MB for most platforms. Because the parameter is expressed in blocks, it will be set to a value that is equal to the maximum I/O size that can be performed efficiently divided by the standard block size. Note that if the number of sessions is extremely large the multiblock read count value is decreased to avoid the buffer cache getting flooded with too many table scan buffers.

Even though the default value may be a large value, the optimizer will not favor large plans if you do not set this parameter. It would do so only if you explicitly set this parameter to a large value.

Online transaction processing (OLTP) and batch environments typically have values in the range of 4 to 16 for this parameter. DSS and data warehouse environments tend to benefit most from maximizing the value of this parameter. The optimizer is more likely to choose a full table scan over an index if the value of this parameter is high.

The maximum value is the operating system’s maximum I/O size expressed as Oracle blocks ((max I/O size)/DB_BLOCK_SIZE). If you set this parameter to a value greater than the maximum, Oracle uses the maximum.
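
As a quick sanity check of that formula: with a 1 MB maximum I/O size and an 8 KB block size, the effective ceiling is 1048576 / 8192 = 128 blocks per multiblock read, and any larger setting is silently capped at that value. You can confirm the value currently in effect with:

SQL> show parameter db_file_multiblock_read_count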

To change the DB_FILE_MULTIBLOCK_READ_COUNT setting, use the ALTER SYSTEM SQL statement. The example given below assumes an 8 KB block size.

Altering the DB_FILE_MULTIBLOCK_READ_COUNT Setting

SQL> ALTER SYSTEM SET db_file_multiblock_read_count=512 scope=both sid='*';

Thank you for reading… This is Airy…Enjoy Learning:)

Oracle 11g ASM Diskgroup Compatibility – Airy’s Notes

Oracle 11g ASM Diskgroup Compatibility:

In Oracle 10.2, two new columns, COMPATIBILITY and DATABASE_COMPATIBILITY, were added to the V$ASM_DISKGROUP view, but only Oracle 11g introduced the concept of diskgroup attributes and the V$ASM_ATTRIBUTE view.

New Disk Group Compatibility Attributes

Oracle 11g ASM includes two new compatibility attributes (compatible.asm and compatible.rdbms ) that determine the version of the ASM and database software that can use specific disk groups :

  • COMPATIBLE.ASM – The minimum version of the ASM software that can access the disk group. In 11g, the default setting is 10.1.
  • COMPATIBLE.RDBMS – The minimum COMPATIBLE database initialization parameter setting for any database instance that uses the disk group. In 11g, the default setting is 10.1.

The compatibility versions of a disk group can only be increased, not decreased. If you have increased the version by mistake, you will need to create a new disk group.

The disk group compatibility attributes can be set during disk group creation by adding the ATTRIBUTE clause to the CREATE DISKGROUP command.

compatible.asm is the Oracle disk group compatibility attribute. The Oracle disk group compatible.asm attribute setting defines the format of the data on the ASM disks, and should always be equal to or greater than the Oracle database compatibility parameter.

The compatible.rdbms attribute is the second attribute known as the Oracle database compatibility attribute. The compatible.rdbms attribute defines the minimum version of an Oracle database that can mount a specific disk group. The compatible.rdbms attribute should be set to a value that is equivalent to the lowest compatibility setting of any Oracle database that will be mounting that disk group.

The compatible.asm attribute must always be set to a value that is equal to or greater than the compatible.rdbms attribute. Once the compatible.asm attribute is set, it cannot be rolled back.

Once the compatible.rdbms attribute is set, it cannot be rolled back. This is because ASM will be writing to the disk and communicating with the database in a version specific format. Oracle does allow you to roll the setting forward.

Thus, there are two ways to check the diskgroup attributes in Oracle 11g:

SQL> col COMPATIBILITY for a10               

SQL> col DATABASE_COMPATIBILITY for a10

SQL> col NAME for a20

SQL> select group_number, name, compatibility, database_compatibility from v$asm_diskgroup;

GROUP_NUMBER NAME                 COMPATIBIL DATABASE_C

------------ -------------------- ---------- ----------

           1 PAWFG                 11.2.0.0.0 11.2.0.0.0

           2 DATA                 11.2.0.0.0 10.1.0.0.0

SQL> col value for a10

SQL> @dg_attribs1.sql;

dg name              attrib name                    VALUE                READ_ON

-------------------- ------------------------------ -------------------- -------

PAWFG                 disk_repair_time               5h                   N

PAWFG               au_size                        1048576              Y

PAWFG                access_control.umask           066                  N

PAWFG                access_control.enabled         FALSE                N

PAWFG               cell.smart_scan_capable        FALSE                N

PAWFG              compatible.rdbms               11.2.0.0.0           N

PAWFG                 compatible.asm                 11.2.0.0.0           N

PAWFG                 sector_size                    512                  Y

DATA                 disk_repair_time               3.6h                 N

DATA                 access_control.enabled         FALSE                N

DATA                 cell.smart_scan_capable        FALSE                N

DATA                 compatible.rdbms               10.1.0.0.0           N

DATA                 compatible.asm                 11.2.0.0.0           N

DATA                 sector_size                    512                  Y

DATA                 au_size                        1048576              Y

DATA                 access_control.umask           066                  N

16 rows selected.

Note that V$ASM_ATTRIBUTE is populated only when compatible.asm is set to 11.1 or higher. What is important is that you can only change the compatibility level upwards; there is no way to reset it back to a lower value.

Compatibility attributes can be changed online one at a time:

SQL> alter diskgroup DATA set attribute 'compatible.asm'='11.2.0.0.0';

Diskgroup altered.

SQL> alter diskgroup DATA set attribute 'compatible.rdbms'='11.2.0.0.0';

Diskgroup altered.

 SQL> select group_number, name, compatibility, database_compatibility from v$asm_diskgroup;

GROUP_NUMBER NAME                 COMPATIBIL DATABASE_C

------------ -------------------- ---------- ----------

           1 PAWFG                11.2.0.0.0 11.2.0.0.0

           2 DATA                 11.2.0.0.0 11.2.0.0.0

What are the compatibility attributes useful for? They are very handy for upgrades and migrations, when ASM diskgroups need to be available to ASM instances and database instances of different versions. Depending on the migration path, you might need to access some diskgroups from different versions of ASM and from different database instances.

It might also be useful for transportable tablespaces between 10g and 11g databases.

The compatible.asm diskgroup attribute controls the format of the ASM diskgroup metadata. Only ASM instances with a software version equal to or greater than compatible.asm can mount the diskgroup.

As we know The compatible.rdbms diskgroup attribute determines the format of ASM files themselves. The diskgroup can be accessed by any database instance with a compatible init.ora parameter set equal to or higher than the compatible.rdbms attribute.
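
On the database side, the value that is compared against compatible.rdbms is the instance's COMPATIBLE initialization parameter, which you can check with:

SQL> show parameter compatible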

Note that the compatible.rdbms attribute can be set to 10.2 as well, but we could not see any differences in the feature set, except that the compatible.rdbms=10.2 setting requires a database instance with the compatible parameter set to 10.2 or higher. compatible.asm cannot be set below 11.1, except when it is already 10.1 by default.

The compatible.rdbms attribute can be changed only if compatible.asm is advanced to 11.1. Otherwise, you get the following error message:

SQL> alter diskgroup DATA set attribute 'compatible.rdbms'='11.1';

alter diskgroup DATA set attribute 'compatible.rdbms'='11.1'

*

ERROR at line 1:

ORA-15032: not all alterations performed

ORA-15242: could not set attribute compatible.rdbms

ORA-15221: ASM operation requires compatible.asm of 11.1.0.0.0 or higher

When creating a new diskgroup, compatibility attributes can be explicitly specified. It is interesting that the default is still 10.1 for both attributes, even when the diskgroup is created from an ASM 11g instance and all connected databases are version 11g. Oracle is conservative here.

If we want to change the default compatibility, there are two hidden underscore parameters that show what the default compatibility actually is: _asm_compatibility and _rdbms_compatibility:

SQL> select i.ksppinm, v.ksppstvl from x$ksppi i, x$ksppcv v where i.ksppinm in ('_rdbms_compatibility','_asm_compatibility') and i.indx=v.indx;

KSPPINM                        KSPPSTVL

------------------------------ ----------

_asm_compatibility             10.1

_rdbms_compatibility           10.1

Let’s test.

SQL> alter system set "_asm_compatibility"='11.1' scope=spfile;

System altered.

SQL> alter system set "_rdbms_compatibility"='11.1' scope=spfile;

System altered.

SQL> shutdown immediate

ASM diskgroups dismounted

ASM instance shutdown

SQL> startup

ASM instance started

Total System Global Area  284565504 bytes

Fixed Size                  1299428 bytes

Variable Size             258100252 bytes

ASM Cache                  25165824 bytes

ASM diskgroups mounted

SQL> select i.ksppinm, v.ksppstvl from x$ksppi i, x$ksppcv v where i.ksppinm in ('_rdbms_compatibility','_asm_compatibility') and i.indx=v.indx;

KSPPINM                        KSPPSTVL

------------------------------ ----------

_asm_compatibility             11.1

_rdbms_compatibility           11.1

SQL> create diskgroup DATA1 external redundancy disk '/dev/oracleasm/disks/DISK10' ATTRIBUTE 'compatible.asm' = '11.1';

Diskgroup created.

SQL> select name, compatibility, database_compatibility from v$asm_diskgroup where name='DATA1';

NAME                 COMPATIBIL DATABASE_C

-------------------- ---------- ----------

DATA1                  11.1.0.0.0 11.1.0.0.0
SQL> CREATE DISKGROUP DATA2 DISK '/dev/oracleasm/disks/DISK11' ATTRIBUTE 'compatible.rdbms' = '11.2', 'compatible.asm' = '11.1';

Diskgroup created.

SQL> select group_number, allocation_unit_size, compatibility, database_compatibility from v$asm_diskgroup where name='DATA2';

GROUP_NUMBER ALLOCATION_UNIT_SIZE COMPATIBIL DATABASE_C

------------ -------------------- ---------- ----------

           4              2097152 11.2.0.0.0 11.1.0.0.0

Now what features are available with different compatibility settings?

 Diskgroup attributes:

compatible.asm — 11.1
compatible.rdbms — any

 Non-default allocation unit size :

For AU sizes 1-8 MB, both compatibility attributes can be 10.1.

SQL> select group_number, allocation_unit_size, compatibility, database_compatibility from v$asm_diskgroup where name='DATA3';

GROUP_NUMBER ALLOCATION_UNIT_SIZE COMPATIBIL DATABASE_C

------------ -------------------- ---------- ----------

           5             2097152 10.0.0.0.0 10.1.0.0.0

SQL> select group_number, name, value from v$asm_attribute where group_number=5;

no rows selected

The absence of the attribute au_size in V$ASM_ATTRIBUTE is confusing, but recall that attributes are displayed only when compatible.asm is set to 11.1.

SQL> select group_number, name, value from v$asm_attribute where group_number=3;

GROUP_NUMBER NAME                 VALUE

------------ -------------------- --------------------

           3 disk_repair_time     3.6h

           3 au_size              2097152

           3 compatible.asm       11.1.0.0.0

           3 compatible.rdbms     10.1.0.0.0

For AU sizes 16-64 MB both compatibility attributes must be 11.1.

@dg_attribs1.sql

-- Show disk group attributes

col "dg name" for a20
col "attrib name" for a30
col value for a20
set pagesize 20

select dg.name  "dg name"
      ,a.name   "attrib name"
      ,a.value
      ,read_only
from v$asm_diskgroup dg,
     v$asm_attribute a
where dg.group_number = a.group_number
and a.name not like 'template%';

Thank you for reading… This is Airy…Enjoy Learning:)

#asm

ASM Preferred Mirror Read in Oracle 11g R2 – Airy’s Notes

ASM Preferred Mirror Read in Oracle 11g R2:

Until Oracle 10g, Automatic Storage Management read only the primary copy of extents. This is not a problem when the nodes and failure groups are all located at the same site (that is, when the Real Application Clusters environment is not extended), but it can be inefficient for extended clusters, causing needless network traffic. In an extended RAC (where RAC nodes and storage are placed at different locations), suppose there are two sites, A (with a RAC node and storage) and B (with a RAC node and storage), located at two different places.

If node B needs an extent whose primary copy is on storage A and whose mirror copy is on storage B, node B still has to go to storage A for the primary copy, which causes heavy traffic on the network.

To overcome this, Oracle 11g introduced ASM Preferred Mirror Read. Continuing the example above: if node B has a mirror copy of the extent on storage B, it will read that mirror copy rather than going to the primary extent at site A, which improves performance. Oracle 11g allows each node to define preferred failure groups, allowing nodes in extended clusters to access local failure groups in preference to remote ones.

How to Setup ASM Preferred Mirror Read:

A new parameter, ASM_PREFERRED_READ_FAILURE_GROUPS, is introduced at the ASM instance level. Each Automatic Storage Management instance sets this parameter to point to its local failure groups.

To configure preferred read failure groups, the disk group compatibility attributes must be set to 11.1 or higher. Once the compatibility settings are correct, the ASM_PREFERRED_READ_FAILURE_GROUPS parameter is set to the preferred failure groups for each node.

Whenever a read request is for an extent that has a copy, primary or mirror, on the local disks, the read is served from the local disk, giving improved performance.

Command to change parameter:

 SQL> alter system set asm_preferred_read_failure_groups='asm_diskgroup_name.asm_failure_group_name';
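
In an extended cluster, each ASM instance would typically point at its own local failure group. A minimal sketch, assuming hypothetical failure group names SITEA and SITEB and instance names +ASM1 and +ASM2:

-- SITEA/SITEB and +ASM1/+ASM2 are hypothetical names
SQL> alter system set asm_preferred_read_failure_groups='DATA.SITEA' sid='+ASM1';

SQL> alter system set asm_preferred_read_failure_groups='DATA.SITEB' sid='+ASM2';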

Steps to follow :

SQL> select name from v$asm_diskgroup;

NAME

----------

PAWFG

DATA
SQL> select name,failgroup from v$asm_disk;

NAME                      FAILGROUP

---------- ---------------------------

DATA_0000              DATA_0000

DATA_0001              DATA_0001

DATA_0002              DATA_0002

PAWFG_0000           PAW_FG1

PAWFG_0001           PAW_FG1

PAWFG_0002           PAW_FG1

PAWFG_0003           PAW_FG2

PAWFG_0004           PAW_FG2

PAWFG_0005           PAW_FG2

10 rows selected.
SQL> alter system set asm_preferred_read_failure_groups='PAWFG.PAW_FG2';
 
 System altered.

SQL> ALTER SYSTEM SET ASM_PREFERRED_READ_FAILURE_GROUPS = 'DATA.DATA_0000', 'DATA.DATA_0001', 'DATA.DATA_0002';

Crosscheck for implementation:

SQL> SHOW PARAMETER asm_preferred_read_failure_groups;
 
 NAME                                 TYPE        VALUE
 ------------------------------------ ----------- --------------------------
 asm_preferred_read_failure_groups    string      PAWFG.PAW_FG2
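
You can also cross-check which disks are now marked as preferred for reads. This assumes the PREFERRED_READ column of V$ASM_DISK, which is populated from 11g onwards:

SQL> select name, failgroup, preferred_read from v$asm_disk;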

Thank you for reading… This is Airy…Enjoy Learning:)

#asm

ASM Enhancements-Airy’s Notes

1: ASM Optimal Disk Placement (ODP) in Oracle 11g R2-Airy’s Notes

2: ASM Preferred Mirror Read in Oracle 11g R2 – Airy’s Notes

3: ASM Restart Feature in Oracle 11g R2 – Airy’s Notes

4: Oracle 11g ASM Diskgroup Compatibility – Airy’s Notes

5: Using ASM Scalability and Performance Enhancements in Oracle 11g – Airy’s Notes

6: Oracle ASM File Access Control for Disk Groups in Oracle 11g –  Airy’s Notes

7: Automatic Storage Management Fast Mirror Resync in Oracle 11g – Airy’s Notes

 

 

Thank you for reading… This is Airy…Enjoy Learning:)

#asm

ASM Optimal Disk Placement (ODP) in Oracle 11g R2-Airy’s Notes

ASM Optimal Disk Placement (ODP) in Oracle 11gR2:

As of Oracle 11gR2 it is possible to specify in which disk regions ASM file extents will be placed. This enables us to make use of the faster disk zones on the outer regions of the disks for better I/O performance.

On a spinning disk, more data passes beneath the read/write head on the outer portion of the platter because of the greater circumference, so overall throughput is improved. If all of the data is in the outermost region, seek latency may be reduced as well, since the read/write head moves across a shorter arc.

Optimal Disk Placement is specified by associating an ASM template in which the primary extent zone is set to HOT or COLD. Files set to HOT are stored on the faster, outermost tracks of the disk and experience greater throughput and reduced latency; files set to COLD are stored on the inner and middle tracks of the disk and experience lower throughput and higher latency.

ASM Optimal Disk Placement (ODP) in Oracle 11gR2-Pic1

Oracle has worked together with storage vendors and found ways to enable ASM to make use of storage intelligence features. These settings are made on disk group templates and can be changed afterwards with an ALTER DISKGROUP statement.

Possible options are hot/mirrorhot and cold/mirrorcold.

ALTER DISKGROUP disk_group ADD TEMPLATE template

ATTRIBUTES (

MIRROR|UNPROTECTED

FINE|COARSE

HOT|COLD

MIRRORHOT|MIRRORCOLD )

 

ALTER DISKGROUP diskgroup MODIFY FILE 'filename'

ATTRIBUTE (

HOT|COLD

MIRRORHOT|MIRRORCOLD )

Intelligent Data Placement enables you to specify disk regions on Oracle ASM disks for best performance. Using the disk region settings, you can ensure that frequently accessed data is placed on the outermost (hot) tracks, which have greater speed and higher bandwidth. In addition, files with similar access patterns are located physically close together, reducing latency. Intelligent Data Placement also enables the placement of primary and mirror extents into different hot or cold regions.

ASM intelligent file placement:

Intelligent file placement has been done in non-relational databases since the 1980s, but it is new to Oracle in Oracle 11g release 2. The compatible.asm and compatible.rdbms disk group attributes must be set to 11.2 or higher to use intelligent data placement. Also see the v$asm_disk_iostat view to access this ASM information.

Manual file placement:  Middle tracks

Consider a randomized disk file layout where read/write head movement cripples response time.

As I note in the plague of super-large Oracle disks, intelligent file placement can dramatically improve throughput by minimizing the movement of the read/write head: popular data files are placed near the middle absolute track and unpopular files on the outer tracks.

Per the "zoned bit recording" architecture, the outer cylinders contain more sectors than the inner cylinders.

As shown next, high-impact data files should be placed to minimize read/write head movement. This manual method is intended to minimize the travel of the read/write head, whose movement comprises over 90% of the cost of a disk I/O. However, this approach does not hold true when you have disks with fixed read/write heads (a read/write head for every disk cylinder).

ASM Optimal Disk Placement (ODP) in Oracle 11gR2-Pic2

Oracle 11g R2 Intelligent file placement:  Outer tracks :

ASM Optimal Disk Placement (ODP) in Oracle 11gR2-Pic3

Intelligent file placement has been done in non-relational databases since the 1980s, but it is new to Oracle in Oracle 11g release 2, and it uses a different approach from the manual intelligent file placement described above.

Oracle 11g release 2 introduced "intelligent data placement" (IDP) within ASM, whereby a file, or a disk group template, can be mapped to the hot and cold areas of the disk platter.

It is the "zoned bit recording" method (whereby the outer cylinders hold more sectors) that drives intelligent data placement, leveraging the hardware geometry to optimize I/O into "hot" and "cold" areas. Remember, it is the DBA who marks the files as hot and cold; Oracle does not do this for you.

ASM Optimal Disk Placement (ODP) in Oracle 11gR2-Pic1

On large disks where the data cannot be fully cached in the data buffers, we might consider placing high I/O data files in the middle absolute track number to minimize read-write head movement.  This is a situation in which the low access data files reside on the inner and outer cylinders of the disk.


The use of intelligent disk placement can improve I/O performance by placing more frequently accessed data in regions furthest from the spindle, while reducing your cost by increasing the usable space on a disk.

The compatible.asm and compatible.rdbms disk group attributes must be set to 11.2 or higher to use intelligent data placement.

Alter diskgroup DATA set attribute 'compatible.asm'='11.2';

Alter diskgroup DATA set attribute 'compatible.rdbms'='11.2';

Here is how to mark an ASM datafile as "hot"; note the "mirrorhot" argument, which indicates that the mirrored copy will also be handled. We assume that this causes Oracle to relocate the ASM file to the outermost disk cylinders, where you get more data per revolution:

SQL> alter diskgroup mygroup
     modify file '+data/ora/datafile/users'
     attribute (hot mirrorhot);

The Oracle documentation does not say whether the above command immediately moves the data files on the disk, but it is safe to assume that this is a DBA task that should not be performed while the production database is open to end users.

The following queries can be used to inspect the hot and cold read and write statistics:

display_hot_cold_files.sql

SQL> select
       dg.name diskgroup,
       t.name,
       t.stripe,
       t.redundancy,
       t.primary_region,
       t.mirror_region
     from
       v$asm_diskgroup dg,
       v$asm_template t
     where
       dg.group_number = t.group_number
       and dg.name = 'DATA'
     order by t.name;

SQL> select
       dg.name diskgroup,
       f.file_number,
       f.primary_region,
       f.mirror_region,
       f.hot_reads,
       f.hot_writes,
       f.cold_reads,
       f.cold_writes
     from
       v$asm_diskgroup dg,
       v$asm_file f
     where
       dg.group_number = f.group_number
       and dg.name = 'DATA';

Also note that the view v$asm_disk_iostat has new columns hot_reads, hot_writes, cold_reads, and cold_writes to correspond with this disk segregation technique, which leverages the geometry of the disk platter to improve overall throughput. These columns can be queried to measure the access pattern on an ASM disk.
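
A simple way to look at those counters is a query like the following sketch, using the column names described above:

SQL> select group_number, disk_number, hot_reads, hot_writes, cold_reads, cold_writes
     from v$asm_disk_iostat;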

Thank you for reading… This is Airy…Enjoy Learning:)

 

#asm, #odp

Oracle Restart Feature in Oracle 11g

What is Oracle Restart feature in Oracle 11g:

With Oracle Real Application Clusters, the Oracle database can be configured to restart as each node in the cluster is restarted. For single-instance databases on a standalone server, the database is often not configured to restart on a server bounce.

If you install Oracle Restart by installing the Oracle Grid Infrastructure for a standalone server and then create your database, the database is automatically added to the Oracle Restart configuration, and is then automatically restarted when required. However, if you install Oracle Restart on a host computer on which a database already exists, you must manually add the database, the listener, the Oracle Automatic Storage Management (Oracle ASM) instance, and possibly other components to the Oracle Restart configuration.

In the past, Oracle recommended setting the database for restart in the /etc/oratab file and creating a script in the /etc/init.d directory to force the restart. This approach often did not work. How do we enable the restart of a single-instance database successfully?

Oracle Restart is a component of the Oracle Grid Infrastructure (GI) package that is used to configure the restart of items on the server, such as a single-instance Oracle Database. An added benefit of using Oracle Restart (Grid Infrastructure) is that Automatic Storage Management (ASM) is included. Databases that are created after Oracle Restart is installed will use ASM, while previously existing (file-system based) databases will not be configured in the Cluster Ready Services (CRS) stack. How do we add a database to the CRS stack after Oracle Restart is installed?

To add a database to the CRS stack, we need to use the Server Control (SRVCTL) command. The command below adds a database named "PAWASM" to the stack and enables it to be restarted when the server is restarted. Each of the options used can be looked up with the help option ($GRID_HOME/bin/srvctl -h).
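
A minimal sketch of that command; the Oracle home path shown is the one used in the examples later in this section, so adjust it for your environment:

# the -o path is an assumption; use your database Oracle home
$GRID_HOME/bin/srvctl add database -d PAWASM -o /u01/app/oracle/product/11.2.0/db_1 -c SINGLE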

Using Oracle Restart is a great approach to enabling single instance databases for successful restarts.  Also, the ability to use Automatic Storage Management (ASM) and Cluster Ready Services (CRS) enables a DBA to learn more about these options before moving to a full Real Application Cluster (RAC).

Configuring Oracle Restart in Oracle 11g :

1: Adding Components to the Oracle Restart Configuration:

In most cases, creating an Oracle component on a host that is running Oracle Restart automatically adds the component to the Oracle Restart configuration. The component is then automatically restarted when required.

Adding a component to the Oracle Restart configuration is also referred to as “registering a component with Oracle Restart.”

 The following are occasions when you must manually add components to the Oracle Restart configuration with SRVCTL:

You create an additional Oracle database on the same host computer using the CREATE DATABASE SQL statement.

You create a database service with the DBMS_SERVICE.CREATE_SERVICE package procedure. (The recommended way is to use SRVCTL.)

Adding a component to the Oracle Restart configuration does not start that component. You must use a srvctl start command to start it.
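
For example, assuming a database registered with a DB_UNIQUE_NAME of ORCL (as in the examples below), it could then be started with:

$ srvctl start database -d ORCL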

When you manually add a database to the Oracle Restart configuration, you must also add the Oracle grid infrastructure software owner as a member of the OSDBA group of that database. This is because the grid infrastructure components must be able to connect to the database as SYSDBA to start and stop the database.

For example, if the host user who installed the grid infrastructure home is named grid and the OSDBA group of the new database is named dba, then user grid must be a member of the dba group.
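
On Linux, one way to do this is sketched below, using the grid user and dba group named above; the exact command may differ on your platform:

# append the grid user to the dba group (names taken from the example above)
# usermod -a -G dba grid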

Manually Adding components to the Oracle Restart configuration with SRVCTL:

$srvctl add object options

Where object can be database, service, or listener.

1: Adding a Database :

This example adds a database with a DB_UNIQUE_NAME of ORCL. The mandatory -o option specifies the Oracle home location. The -c option specifies a single-instance database.

[oracle@PAWASM Desktop]$ srvctl add database -d ORCL -o /u01/app/oracle/product/11.2.0/db_1 -c SINGLE

2: Adding a Database Service :

For the database with the DB_UNIQUE_NAME of ORCL, this example both creates a new database service named SERVPAW and adds it to the Oracle Restart configuration.

[oracle@PAWASM Desktop]$ srvctl add service -d ORCL -s SERVPAW

3: Adding the Default Listener :

This example adds the default listener to the Oracle Restart configuration.

[oracle@PAWASM Desktop]$ srvctl add listener

(To add a non-default listener, specify its name with the -l option.)

Adding Components with Oracle Enterprise Manager Database Control

With Oracle Enterprise Manager Database Control (Database Control), you can add only database instances and listeners to the Oracle Restart configuration.

To add a database instance with Database Control:

  1. Access the Database Home page for the desired database instance.
  2. In the High Availability section, next to the Oracle Restart label, click the Disabled link.

Oracle_Restart

If the Oracle Restart label shows "Enabled," then the database is already being managed by Oracle Restart, and there is no need to continue.

  3. If prompted for host credentials, enter credentials for the user who installed the database Oracle home, and then click Login.
  4. On the confirmation page, click Continue.

To add a listener with Database Control:

  1. Access the Database Home page for the desired database instance.
  2. In the General section, click the link next to the Listener label.
  3. In the High Availability section, next to the Oracle Restart label, click the Disabled link.
  4. On the confirmation page, click Continue.

2: Removing Components from the Oracle Restart Configuration:

When you use an Oracle-recommended method to delete an Oracle component, the component is also automatically removed from the Oracle Restart configuration. For example, if you use Database Configuration Assistant (DBCA) to delete a database, DBCA removes the database from the Oracle Restart configuration. Likewise, if you use Oracle Net Configuration Assistant (NETCA) to delete a listener, NETCA removes the listener from the Oracle Restart configuration.

If you use a non-recommended or manual method to delete an Oracle component, you must first use SRVCTL to remove the component from the Oracle Restart configuration. Failing to do so could result in an error.

To remove a component from the Oracle Restart configuration:

$srvctl remove object [options]

Removing a Database

This example removes a database with a DB_UNIQUE_NAME of ORCL.

[oracle@PAWASM Desktop]$ srvctl remove database -d ORCL

3: Disabling and Enabling Oracle Restart Management for a Component:

You can temporarily disable Oracle Restart management for a component. One reason to do this is when you are performing maintenance on the component. For example, if a component must be repaired, then you might not want it to be automatically restarted if it fails or if the host computer is restarted.

When maintenance is complete, you can reenable management for the component.

When you disable a component:

  • It is no longer automatically restarted.
  • It is no longer automatically started through a dependency.
  • It cannot be started with SRVCTL.
  • Any component dependent on this resource is no longer automatically started or restarted.

To disable or enable automatic restart for a component:

To disable a component, enter the following command:

$srvctl disable object [options]

To enable a component, enter the following command:

$srvctl enable object [options]

Disabling Automatic Restart for a Database

This example disables automatic restart for a database with a DB_UNIQUE_NAME of ORCL.

[oracle@PAWASM Desktop]$ srvctl disable database -d ORCL

Disabling Automatic Restart for an Oracle ASM Disk Group

This example disables automatic restart for the Oracle ASM disk group named recovery.

[oracle@PAWASM Desktop]$ srvctl disable diskgroup -g recovery

Enabling Automatic Restart for an Oracle ASM Disk Group

This example reenables automatic restart for the disk group recovery.

[oracle@PAWASM Desktop]$ srvctl enable diskgroup -g recovery

4: Viewing Component Status

You can use SRVCTL to view the running status (running or not running) for any component managed by Oracle Restart. For some components, additional information is also displayed.

$srvctl status object [options]

Viewing Status of a Database

This example displays the status of the database with a DB_UNIQUE_NAME of ORCL.

[oracle@PAWASM Desktop]$ srvctl status database -d ORCL
Database is running.

5: Viewing the Oracle Restart Configuration for a Component

You can use SRVCTL to view the Oracle Restart configuration for any component. Oracle Restart maintains different configuration information for each component type. In one form of the SRVCTL command, you can obtain a list of components managed by Oracle Restart.

$srvctl config object options

Viewing a List of All Databases Managed by Oracle Restart

[oracle@PAWASM Desktop]$ srvctl config database
AIRYASM

ORCL

Viewing the Configuration of a Particular Database

This example displays the configuration of the database with a DB_UNIQUE_NAME of ORCL.

for a Component

[oracle@PAWASM Desktop]$ srvctl config database -d ORCL
Database unique name: ORCL

Database name: ORCL

Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1

Oracle user: oracle

Spfile: +DATA/ORCL/spfileORCL.ora

Domain: pawasm.example.com

Start options: open

Stop options: immediate

Database role:

Management policy: automatic

Disk Groups: DATA

Services: mfg,sales

6: Modifying the Oracle Restart Configuration

You can use SRVCTL to modify the Oracle Restart configuration of a component. For example, you can modify the port number that a listener listens on when Oracle Restart starts it, or the server parameter file (SPFILE) that Oracle Restart points to when it starts a database.

To modify the Oracle Restart configuration for a component:

[oracle@PAWASM Desktop]$ srvctl modify object options

Modifying the Oracle Restart Configuration for a Database

For the database with a DB_UNIQUE_NAME of ORCL, the following command changes the management policy to MANUAL and the start option to NOMOUNT.

[oracle@PAWASM Desktop]$ srvctl modify database -d ORCL -y MANUAL -s NOMOUNT

With a MANUAL management policy, the database is never automatically started when the database host computer is restarted. However, Oracle Restart continues to monitor the database and restarts it if a failure occurs.
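
Similarly, to change the port that the Oracle Restart-managed listener listens on, you could use something like the following sketch, assuming the default listener name LISTENER and a hypothetical port 1522:

# LISTENER is the default listener name; port 1522 is hypothetical
[oracle@PAWASM Desktop]$ srvctl modify listener -l LISTENER -p "TCP:1522"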

As with CRS on a full RAC, we can get the list of targets and check their status using the same command (crs_stat -t -v).

[oracle@PAWASM Desktop]$ crs_stat -t -v

 Name           Type           R/RA   F/FT   Target    State     Host        
-------------------------------------------------------------------------
 ora.DATA.dg    ora....up.type  0/5    0/     ONLINE    ONLINE    PAWASM      
 ora....ER.lsnr ora....er.type  0/5    0/     ONLINE    ONLINE    PAWASM      
 ora.asm        ora.asm.type    0/5    0/     ONLINE    ONLINE    PAWASM      
 ora....ORCL.db ora....se.type  0/2    0/1    ONLINE    ONLINE    PAWASM      
 ora.cssd       ora.cssd.type   0/5    0/5    ONLINE    ONLINE    PAWASM      
 ora.diskmon    ora....on.type  0/10   0/5    OFFLINE   OFFLINE               
 ora.evmd       ora.evm.type    0/10   0/5    ONLINE    ONLINE    PAWASM      
 ora.ons        ora.ons.type    0/3    0/     OFFLINE   OFFLINE

Thank you for Reading…This is Airy… Enjoy :)

Also Read Below Given Links:

1: Using Database Replay

2: Database Replay System Architecture

3: Database Replay PL/SQL Procedures -I ( Capture )

4: Database Replay PL/SQL Procedures – II ( Replay )

5: Database Replay Workflow in Enterprise Manager-I ( Capture )

6: Database Replay Workflow in Enterprise Manager-II ( Replay )

 

#asm, #oracle-restart