Hybrid Columnar Compression in Oracle 11g R2 Airy’s Notes:

Hybrid Columnar Compression (Release 2 Only on Oracle Exadata) :

In 11g Release 2 (but only on Oracle Exadata v2), compression technology has been extended in the form of Hybrid Columnar Compression (HCC).

First of all, why is there a need to extend the compression capability? The reason is simple: not all data in a database is accessed in the same way. For example, some data (such as company email) needs to be in the database purely for legal reasons; it must be accessible all the time, even if actually needed only rarely. This data needs to be stored, and storage costs money – for the disks, for power to run the disks, for cooling, for floor space. So why use expensive storage for data that is rarely, if ever, accessed?

This is where HCC comes in. Typically compression works by replacing repeating values with smaller symbols, reducing overall space consumption. For instance, suppose the rows of uncompressed data look like this (with columns separated by “|”):

Row1: Quite_a_large_value1|Quite_a_long_value1|Another_quite_long_value1 
Row2: Quite_a_large_value2|Quite_a_long_value1|Another_quite_long_value1 
Row3: Quite_a_large_value1|Quite_a_long_value2|Another_quite_long_value1 
Row4: Quite_a_large_value1|Quite_a_long_value1|Another_quite_long_value2

The total size of the four rows is 264 bytes (66 bytes per row). Note that there are actually just six distinct values in these four rows, each used several times:

Quite_a_large_value1
Quite_a_large_value2
Quite_a_long_value1
Quite_a_long_value2
Another_quite_long_value1
Another_quite_long_value2

When this block is compressed, a special value is assigned to each distinct value and the actual values are replaced by the corresponding special values. For instance, here are the assigned special values:

Original Value             Replaced by Symbol
Quite_a_large_value1       A1
Quite_a_large_value2       A2
Quite_a_long_value1        B1
Quite_a_long_value2        B2
Another_quite_long_value1  C1
Another_quite_long_value2  C2

The rows now look like:

 Row1: A1|B1|C1
 Row2: A2|B1|C1
 Row3: A1|B2|C1
 Row4: A1|B1|C2

Total size: 32 bytes, a substantial reduction from 264 – about 88%. (Of course the percentage of reduction will depend on the data, in particular how many distinct values are present, but the principle is the same.) The mapping between the symbols (A1, A2, etc.) and the values they represent (“Quite_a_large_value1”, etc.) is kept in a structure called the symbol directory, stored in the block header. Each block header must store the symbols used in that block, so the same value-to-symbol mappings are naturally repeated across blocks.

If you look at real-life data, you will notice that values typically repeat within columns, not across rows. For instance, a column called FIRST_NAME will have values like Airy, Pawan, etc., while another called CITY_NAME will have values like Kanpur or New Delhi. You would not expect to see the value New Delhi in the FIRST_NAME column. Since values repeat more frequently within columns than within rows, you can keep just one symbol directory per column. Since symbol directories take up space, the significant reduction in their number reduces overall space compared to traditional compression, where every block must carry its own symbol directory.

Hybrid Columnar Compression uses this method. It accomplishes a significant amount of compression but affects DML performance. So it is best used for tables that are queried but not subject to much DML.

Here is an example of a table created with this kind of compression:

create table trans_comp nologging compress for query low as select * from trans;

The clause “compress” compresses the table. The clause “for query” compresses it in the hybrid columnar manner. The clause “low” compresses less aggressively, so more space is consumed but compression and decompression do not consume as much CPU. More aggressive compression can be achieved by replacing the “low” clause with “high”. This type of HCC is known as warehouse compression, since it is useful for data warehouses where a lot of data is stored and queried frequently.
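For comparison, here is a sketch of the more aggressive variant, using the same example table and source as above:

-- higher compression ratio, but more CPU spent compressing and decompressing
create table trans_comp nologging compress for query high as select * from trans;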

If a table will be accessed very rarely, you can compress it even more, using a “for archive” clause:

create table trans_comp nologging compress for archive low as select * from trans;

This reduces space consumption even more, but at the cost of CPU. Like the “for query” clause, it also has two levels, high and low. This is known as archive compression and is intended for data that is accessed infrequently. Here is an example of the compression ratios on a set of representative data; of course they will vary widely with the data.

Type of Compression   Compressed Table Size
Uncompressed          100.00%
Query Low              14.70%
Query High              8.82%
Archive Low             6.62%
Archive High            4.41%

Hybrid Columnar Compression allows you to compress tables that are not subject to much DML but whose data cannot be dropped for legal or other reasons. You can now keep these tables while consuming far less storage, and possibly less CPU as well. Note again that this feature is available only on Oracle Exadata Storage Server v2.

Thank you for reading… This is Airy…Enjoy Learning:)

Storage Enhancements in Oracle 11g:

1: Table Compression Enhancements in Oracle Database 11g R1 Airy’s Notes

2: Hybrid Columnar Compression in Oracle 11g R2 Airy’s Notes

Thank you for reading… This is Airy…Enjoy Learning:)

Table Compression Enhancements in Oracle Database 11g R1 Airy’s Notes:

Table Compression Enhancements in Oracle Database 11g R1 :

Table compression was introduced in Oracle 9i as a space saving feature for data warehousing projects. In 11g it is now considered a mainstream feature that is acceptable for OLTP databases. In addition to saving storage space, compression can result in increased I/O performance and reduced memory use in the buffer cache. These advantages do come at a cost, since compression incurs a CPU overhead, so it won’t be of benefit to everyone.

 Basic table compression is a free feature of the Enterprise Edition database, but OLTP compression requires the Advanced Compression option.

The compression clause can be specified at the tablespace, table or partition level with the following options:

  • NOCOMPRESS – The table or partition is not compressed. This is the default when no compression clause is specified (turns off compression).
  • COMPRESS – This option is considered suitable for data warehouse systems. Compression is enabled on the table or partition during direct-path inserts only.
  • Extended COMPRESS clause: COMPRESS [BASIC | FOR OLTP]
  • BASIC – the default; compresses bulk-load (direct-path) operations, as in prior releases.
  • FOR OLTP – new in 11g; compresses OLTP DML as well as direct loads.
  • COMPRESS FOR DIRECT_LOAD OPERATIONS – This option has the same effect as the simple COMPRESS keyword; it is equivalent to COMPRESS and COMPRESS BASIC.
  • COMPRESS FOR ALL OPERATIONS – This option is considered suitable for OLTP systems. As the name implies, it enables compression for all operations, including regular DML statements. It is equivalent to COMPRESS FOR OLTP.

This option requires the COMPATIBLE initialization parameter to be set to 11.1.0 or higher. In 11gR2 this option has been renamed to COMPRESS FOR OLTP and the original name has been deprecated.

Restrictions Associated With Table Compression:

The restrictions associated with table compression include:

  1. Compressed tables can only have columns added or dropped if the COMPRESS FOR ALL OPERATIONS option was used.
  2. Compressed tables must not have more than 255 columns. ( BASIC and FOR OLTP are not supported for tables with more than 255 columns )
  3. Compression is not applied to lob segments. ( COMPRESS [BASIC|FOR OLTP] is not used for BasicFile LOBs.)
  4. Table compression is only valid for heap organized tables, not index organized tables. ( Compression can be different for each partition )
  5. The compression clause cannot be applied to hash or hash-list partitions. Instead, they must inherit their compression settings from the tablespace, table or partition settings.
  6. Table compression cannot be specified for external or clustered tables.
  7. Columns can be dropped from compressed tables only by marking them as unused (see the sketch after this list).
  8. New columns COMPRESSION and COMPRESS_FOR are added to the DBA_/ALL_/USER_TABLES and DBA_/ALL_/USER_TAB_PARTITIONS views.
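For example, here is restriction 7 in practice – a minimal sketch, where the table and column names are hypothetical:

-- A column on a compressed table is removed by marking it unused,
-- not by physically dropping it:
ALTER TABLE some_compressed_tab SET UNUSED (old_col);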

Turn on compression for new table :

CREATE TABLE Pawan_tab_1 (
  id            NUMBER(10)    NOT NULL,
  description   VARCHAR2(50)  NOT NULL,
  created_date  DATE          NOT NULL
)
COMPRESS FOR ALL OPERATIONS;

-------------------------------------------------

CREATE TABLE Pawan_tbl_basic COMPRESS
AS
SELECT * FROM dba_objects;

-------------------------------------------------

CREATE TABLE Pawan_tbl_oltp COMPRESS FOR OLTP
AS
SELECT * FROM dba_objects;

-------------------------------------------------

CREATE TABLE Pawan_tbl_nc NOCOMPRESS
AS
SELECT * FROM dba_objects;

Enable compression for existing table:

ALTER TABLE Pawan_tbl_nc COMPRESS FOR OLTP;

Turn off compression for existing table:

ALTER TABLE Pawan_tbl_nc NOCOMPRESS;
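Note that ALTER TABLE ... COMPRESS/NOCOMPRESS only changes the setting for future operations; rows already stored keep their current format. To physically rewrite the existing rows, move the table (a sketch; the index name is hypothetical):

ALTER TABLE Pawan_tbl_nc MOVE NOCOMPRESS;
-- a move marks the table's indexes UNUSABLE; rebuild them afterwards
ALTER INDEX pawan_tbl_nc_idx REBUILD;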

Checking compression option for table:

Table-level compression settings are reflected in the COMPRESSION and COMPRESS_FOR columns of the [DBA|ALL|USER]_TABLES views.

SELECT table_name, compression, compress_for
FROM user_tables
WHERE table_name IN ('PAWAN_TBL_BASIC', 'PAWAN_TBL_OLTP', 'PAWAN_TBL_NC', 'PAWAN_TAB_1');

TABLE_NAME        COMPRESSION  COMPRESS_FOR
----------------  -----------  ------------------
PAWAN_TBL_BASIC   ENABLED      BASIC
PAWAN_TBL_NC      DISABLED
PAWAN_TBL_OLTP    ENABLED      OLTP
PAWAN_TAB_1       ENABLED      FOR ALL OPERATIONS

Specifying compression on partition level

CREATE TABLE PAWAN_TBL_PART
(id1 NUMBER,
 id2 NUMBER)
PARTITION BY RANGE(id1)
(
  PARTITION P1 VALUES LESS THAN(10),
  PARTITION P2 VALUES LESS THAN(20) NOCOMPRESS,
  PARTITION P3 VALUES LESS THAN(30) COMPRESS,
  PARTITION P4 VALUES LESS THAN(40) COMPRESS BASIC,
  PARTITION P5 VALUES LESS THAN(50) COMPRESS FOR OLTP,
  PARTITION P6 VALUES LESS THAN(60) COMPRESS FOR ALL OPERATIONS
);
CREATE TABLE PAWAN_TAB_2 (
  id            NUMBER(10)    NOT NULL,
  description   VARCHAR2(50)  NOT NULL,
  created_date  DATE          NOT NULL
)
PARTITION BY RANGE (created_date) (
  PARTITION PAWAN_TAB_Q1 VALUES LESS THAN (TO_DATE('01/01/2008', 'DD/MM/YYYY')) COMPRESS,
  PARTITION PAWAN_TAB_Q2 VALUES LESS THAN (TO_DATE('01/04/2008', 'DD/MM/YYYY')) COMPRESS FOR DIRECT_LOAD OPERATIONS,
  PARTITION PAWAN_TAB_Q3 VALUES LESS THAN (TO_DATE('01/07/2008', 'DD/MM/YYYY')) COMPRESS FOR ALL OPERATIONS,
  PARTITION PAWAN_TAB_Q4 VALUES LESS THAN (MAXVALUE) NOCOMPRESS
);
SELECT table_name, partition_name, compression, compress_for
FROM user_tab_partitions
WHERE table_name IN ('PAWAN_TBL_PART', 'PAWAN_TAB_2');

TABLE_NAME        PARTITION_NAME  COMPRESSION  COMPRESS_FOR
----------------  --------------  -----------  ------------------
PAWAN_TBL_PART    P1              DISABLED
PAWAN_TBL_PART    P2              DISABLED
PAWAN_TBL_PART    P3              ENABLED      BASIC
PAWAN_TBL_PART    P4              ENABLED      BASIC
PAWAN_TBL_PART    P5              ENABLED      OLTP
PAWAN_TBL_PART    P6              ENABLED      OLTP
PAWAN_TAB_2       PAWAN_TAB_Q1    ENABLED      DIRECT LOAD ONLY
PAWAN_TAB_2       PAWAN_TAB_Q2    ENABLED      DIRECT LOAD ONLY
PAWAN_TAB_2       PAWAN_TAB_Q3    ENABLED      FOR ALL OPERATIONS
PAWAN_TAB_2       PAWAN_TAB_Q4    DISABLED

10 rows selected.
ALTER TABLE PAWAN_TAB_2 MODIFY PARTITION PAWAN_TAB_Q4 COMPRESS FOR ALL OPERATIONS;

Specifying compression on Tablespace level

Default compression settings can be specified at the tablespace level using the CREATE TABLESPACE and ALTER TABLESPACE commands. The current settings are displayed in the DEF_TAB_COMPRESSION and COMPRESS_FOR columns of the DBA_TABLESPACES view.

CREATE TABLESPACE test_ts
  DATAFILE '/u01/app/oracle/oradata/DB11G/test_ts01.dbf'
  SIZE 1M
  DEFAULT COMPRESS FOR ALL OPERATIONS;
SELECT def_tab_compression, compress_for FROM dba_tablespaces
WHERE tablespace_name = 'TEST_TS';

DEF_TAB_  COMPRESS_FOR
--------  ------------------
ENABLED   FOR ALL OPERATIONS

1 row selected.
ALTER TABLESPACE test_ts DEFAULT NOCOMPRESS;
SELECT def_tab_compression, compress_for FROM dba_tablespaces
WHERE tablespace_name = 'TEST_TS';

DEF_TAB_  COMPRESS_FOR
--------  ------------------
DISABLED

1 row selected.
DROP TABLESPACE test_ts INCLUDING CONTENTS AND DATAFILES;

When compression is specified at multiple levels, the most specific setting is always used. As such, partition settings always override table settings, which always override tablespace settings.

[Figure: lifecycle of an OLTP-compressed block, as described below]

In the beginning the block is empty (Empty Block). As data is inserted, it is stored in uncompressed format (Initially Uncompressed Block). When the block fills up to its PCTFREE threshold, compression is triggered, leaving the block with compressed data plus free space for further inserts (Compressed Block). Subsequent loads again store uncompressed data (Partially Compressed Block) until the block once more fills to PCTFREE, which triggers compression again (Compressed Block).

NOTE – compression eliminates holes created due to deletions and maximizes contiguous free space.
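PCTFREE itself is set on the table in the usual way; a minimal sketch combining it with OLTP compression (the table and column names are hypothetical):

CREATE TABLE Pawan_tab_oltp_demo (
  id   NUMBER,
  txt  VARCHAR2(100)
)
PCTFREE 10            -- free-space threshold; per the text above, block compression fires as the block fills to this limit
COMPRESS FOR OLTP;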

Compression advisor DBMS_COMPRESSION:

The DBMS_COMPRESSION package provides the following subprograms:

GET_COMPRESSION_RATIO – analyzes the compression ratio of a table and reports how compressible it is.

GET_COMPRESSION_TYPE – returns the compression type for a specified row.

Example: analysis of an uncompressed table for COMPRESS FOR OLTP.

set serveroutput on

DECLARE
   blkcnt_cmp        NUMBER;
   blkcnt_uncmp      NUMBER;
   row_perblk_cmp    NUMBER;
   row_perblk_uncmp  NUMBER;
   cmp_ratio         NUMBER;
   comptype_str      VARCHAR2(60);
BEGIN
  dbms_compression.get_compression_ratio
  (
   'USERS',                           -- scratch tablespace for the analysis
   USER,                              -- schema owning the table
   'TEST_TBL_NC',                     -- table to analyze
   NULL,                              -- partition name (NULL = whole table)
   DBMS_COMPRESSION.COMP_FOR_OLTP,    -- compression type to test
   blkcnt_cmp,
   blkcnt_uncmp,
   row_perblk_cmp,
   row_perblk_uncmp,
   cmp_ratio,
   comptype_str
  );
  dbms_output.put_line
  (
    'Number of blocks '                 ||chr(10)||
    '  in compressed table:   '         ||blkcnt_cmp||chr(10)||
    '  in uncompressed table: '         ||blkcnt_uncmp||chr(10)||
    'Number of rows per block'          ||chr(10)||
    '  in compressed table:   '         ||row_perblk_cmp||chr(10)||
    '  in uncompressed table: '         ||row_perblk_uncmp||chr(10)||
    'Test done for compression type: '  ||comptype_str||chr(10)||
    'Expected compression ratio: '      ||cmp_ratio||' to 1'
  );
END;
/

anonymous block completed
Number of blocks
  in compressed table:   344
  in uncompressed table: 1058
Number of rows per block
  in compressed table:   216
  in uncompressed table: 70
Test done for compression type: "Compress For OLTP"
Expected compression ratio: 3 to 1

Example: how to check the compression type for a row.

SELECT
  dbms_compression.get_compression_type(
   USER,
   'TEST_TBL_OLTP',
   ROWID) compression_type
FROM test_tbl_oltp
WHERE ROWNUM < 3;

COMPRESSION_TYPE
----------------
               2
               2

Constants in DBMS_COMPRESSION package

1 – COMP_NOCOMPRESS – no compression

2 – COMP_FOR_OLTP – OLTP compression

4 – COMP_FOR_QUERY_HIGH – exadata high compression for query operations

8 – COMP_FOR_QUERY_LOW – exadata low compression for query operations

16 – COMP_FOR_ARCHIVE_HIGH – exadata high compression for archive operations

32 – COMP_FOR_ARCHIVE_LOW – exadata low compression for archive operations
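Using these constants, the numeric COMPRESSION_TYPE values returned above can be decoded into readable labels; a small sketch against the same example table:

SELECT DECODE(dbms_compression.get_compression_type(USER, 'TEST_TBL_OLTP', ROWID),
              1,  'NOCOMPRESS',
              2,  'OLTP',
              4,  'QUERY HIGH',
              8,  'QUERY LOW',
              16, 'ARCHIVE HIGH',
              32, 'ARCHIVE LOW') AS compression_type
FROM   test_tbl_oltp
WHERE  ROWNUM < 3;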

Thank you for reading… This is Airy…Enjoy Learning:)

Automatic Storage Management Fast Mirror Resync in Oracle 11g – Airy’s Notes

Automatic Storage Management Fast Mirror Resync in Oracle 11g:

The main purpose of introducing ASM was to provide better data redundancy for the database and to discourage RAID at the hardware level. ASM provides three types of redundancy:

1. Normal Redundancy
2. High Redundancy
3. External Redundancy.

In normal redundancy, ASM maintains one mirror copy of each extent (two-way mirroring); high redundancy maintains two mirror copies on other disks (three-way mirroring); and external redundancy leaves protection to external sources such as hardware RAID.
In Oracle 10g, whenever a disk went offline, i.e. became inaccessible to Automatic Storage Management for reads and writes, ASM dropped the disk from the disk group and, as per the redundancy level, started making new copies of its extents on other disks – a very expensive process in terms of I/O.
Suppose the disk failure was a temporary issue, fixed within a few hours, and the disk came back: ASM would then rebalance the data onto this disk again. In short, a temporary disk failure required two rebalance operations, the first when the disk went offline and the second when it came back online.
In Oracle 11g, this double rebalancing is avoided using two things: the DISK_REPAIR_TIME attribute and the tracking of extents changed while a disk is offline. Every disk group has a default disk_repair_time (3.6 hours) for which it tolerates disk unavailability; in other words, by default Oracle ASM drops a disk 3.6 hours after it is taken offline.

The time can be specified in units of minutes (m or M) or hours (h or H). If you omit the unit, the default unit is hours.

This attribute can be modified using ALTER DISKGROUP to set DISK_REPAIR_TIME to a specified hour or minute value, such as 4.5 hours or 270 minutes. For example:

ALTER DISKGROUP data SET ATTRIBUTE 'disk_repair_time' = '4.5h'
ALTER DISKGROUP data SET ATTRIBUTE 'disk_repair_time' = '270m'

This feature is applicable to normal and high redundancy disk groups. Any problems that make a failure group temporarily unavailable are considered transient failures that can be recovered by the Oracle ASM fast mirror resync feature. For example, transient failures can be caused by disk path malfunctions, such as cable failures, host bus adapter failures, controller failures, or disk power supply interruptions.

As soon as a disk goes offline, Automatic Storage Management starts keeping track of the changed extents using the redundant copies. When the disk comes back online, ASM writes only the changed data to it. This tracking continues until disk_repair_time expires: if the disk returns before then, ASM updates only the changed extents on it.
If the disk does not come back within disk_repair_time, ASM drops it from the disk group and starts a rebalance operation.
Using this method, we can choose how long Automatic Storage Management waits for a disk to come back, and avoid the double rebalance operations performed in the previous version.

The REPAIR_TIMER column of V$ASM_DISK shows the amount of time left (in seconds) before an offline disk is dropped. After the specified time has elapsed, Oracle ASM drops the disk.

Query to find disk_repair_time for a disk group:

SQL> select name,value,group_number from v$asm_attribute where name like 'disk_repair%';

NAME              VALUE  GROUP_NUMBER
----------------  -----  ------------
disk_repair_time  3.6h              1
disk_repair_time  3.6h              2

After you repair the disk, run the SQL statement ALTER DISKGROUP ... ONLINE DISK. This statement brings a repaired disk back online to enable writes so that no new writes are missed. It also starts a procedure to copy all of the extents that are marked as stale from their redundant copies.

The following example takes disk DATA_001 offline and drops it after five minutes.

ALTER DISKGROUP data OFFLINE DISK DATA_001 DROP AFTER 5m;

The next example takes the disk DATA_001 offline and drops it after the time period designated by DISK_REPAIR_TIME elapses:

ALTER DISKGROUP data OFFLINE DISK DATA_001;

This example takes all of the disks in failure group FG2 offline and drops them after the time period designated by DISK_REPAIR_TIME elapses. If you used a DROP AFTER clause, then the disks would be dropped after the specified time:

ALTER DISKGROUP data OFFLINE DISKS IN FAILGROUP FG2;

The next example brings all of the disks in failure group FG2 online:

ALTER DISKGROUP data ONLINE DISKS IN FAILGROUP FG2;

This example brings only disk DATA_001 online:

ALTER DISKGROUP data ONLINE DISK DATA_001;

This example brings all of the disks in disk group DATA online:

ALTER DISKGROUP data ONLINE ALL;

Querying the V$ASM_OPERATION view while you run ALTER DISKGROUP ONLINE statements displays the name and state of the current operation that you are performing. For example, the following SQL query shows values in the PASS column during an online operation.

SQL> SELECT GROUP_NUMBER, PASS, STATE FROM V$ASM_OPERATION;

GROUP_NUMBER  PASS       STAT
------------  ---------  ----
           1  RESYNC     RUN
           1  REBALANCE  WAIT
           1  COMPACT    WAIT

An offline operation does not generate a display in a V$ASM_OPERATION view query.

You can set the FAILGROUP_REPAIR_TIME and CONTENT.TYPE disk group attributes. The FAILGROUP_REPAIR_TIME disk group attribute specifies a default repair time for the failure groups in the disk group. The CONTENT.TYPE disk group attribute specifies the type of data expected to be stored in a disk group. You can set these attributes with ASMCA, the ASMCMD mkdg command, or the SQL CREATE DISKGROUP and ALTER DISKGROUP statements.
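A sketch of setting these two attributes with SQL (the disk group name follows the examples above; these attributes assume a sufficiently recent ASM compatibility setting):

ALTER DISKGROUP data SET ATTRIBUTE 'failgroup_repair_time' = '24h';
ALTER DISKGROUP data SET ATTRIBUTE 'content.type' = 'recovery';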

The ASMCMD lsop command shows the resync time estimate. There are separate rows in the V$ASM_OPERATION view for the different phases of rebalance: disk resync, rebalance, and data compaction.

The ASMCMD online command has a power option to specify the power for the online operation. The SQL ALTER DISKGROUP REPLACE DISK statement also has the power option.

The ASMCMD chdg command provides the replace option in addition to the add and drop tags. The ASMCMD mkdg command has an additional time parameter (-t) to specify the time to offline a failure group.

Practical Approach:

I will create a transient disk path failure, fix the failure, and finally bring the failed disks back online. The objective is to set up ASM Fast Mirror Resync and monitor it before and after a transient disk path failure. In this scenario, I have used an 11gR2 Oracle Database and an 11gR2 ASM instance on RHEL 5.7. If you would like to replay this scenario yourself, create or find a sandbox – in other words, a box which does not have any significant importance to others. Otherwise some tasks performed here may cause terrible consequences that you would never want to happen to you.

Following tasks will be performed in the order given.

1. Perform prerequisite actions to use ASM fast mirror resync
2. Simulate a transient failure
3. Fix the transient failure
4. Monitor the Fast Mirror Resync in action

All the scripts used in the scenario are listed at the end of this post.
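(The scripts themselves are in the download linked at the end. As an illustration, chkasmdiskstat.sql is presumably close to the following query over V$ASM_DISK – a reconstruction from its output columns, not necessarily the author's exact script.)

select disk_number, name, path, mount_status, header_status,
       mode_status, state, failgroup, repair_timer
from   v$asm_disk
order  by name, disk_number;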

1: Perform prerequisite actions to use ASM fast mirror resync :

As mentioned above, we need to use ASM redundancy (not external redundancy). Create a disk group with normal redundancy, and a tablespace with a datafile in this disk group.

[oracle@PawASM ~]$ . oraenv
ORACLE_SID = [MYASM] ? +ASM
The Oracle base for ORACLE_HOME=/u01/app/oracle/product/11.2.0/grid is /u01/app/oracle
[oracle@PawASM ~]$ sqlplus / as sysasm

SQL> create diskgroup PAWDG normal redundancy
     failgroup PAWDG_FG1 disk
          '/dev/oracleasm/disks/DISK4',
          '/dev/oracleasm/disks/DISK5',
          '/dev/oracleasm/disks/DISK6'
     failgroup PAWDG_FG2 disk
          '/dev/oracleasm/disks/DISK7',
          '/dev/oracleasm/disks/DISK8',
          '/dev/oracleasm/disks/DISK9';

SQL> select name,header_status,mount_status,state,path from V$asm_disk;

NAME        HEADER_STATU MOUNT_S STATE    PATH
----------- ------------ ------- -------- --------------------------------
            FORMER       CLOSED  NORMAL   /dev/oracleasm/disks/DISK10
DATA_0000   MEMBER       CACHED  NORMAL   /dev/oracleasm/disks/DISK1
DATA_0001   MEMBER       CACHED  NORMAL   /dev/oracleasm/disks/DISK2
DATA_0002   MEMBER       CACHED  NORMAL   /dev/oracleasm/disks/DISK3
PAWDG_0000  MEMBER       CACHED  NORMAL   /dev/oracleasm/disks/DISK4
PAWDG_0001  MEMBER       CACHED  NORMAL   /dev/oracleasm/disks/DISK5
PAWDG_0002  MEMBER       CACHED  NORMAL   /dev/oracleasm/disks/DISK6
PAWDG_0003  MEMBER       CACHED  NORMAL   /dev/oracleasm/disks/DISK7
PAWDG_0004  MEMBER       CACHED  NORMAL   /dev/oracleasm/disks/DISK8
PAWDG_0005  MEMBER       CACHED  NORMAL   /dev/oracleasm/disks/DISK9

[oracle@PawASM ~]$ . oraenv
ORACLE_SID = [MYASM] ? +ASM
The Oracle base for ORACLE_HOME=/u01/app/oracle/product/11.2.0/grid is /u01/app/oracle
[oracle@PawASM ~]$ asmcmd

ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  NORMAL  N         512   4096  1048576     11460     5030             2144            1443              0             N  DATA/
MOUNTED  NORMAL  N         512   4096  1048576     22920    22810               20           11395              0             N  PAWDG/
ASMCMD> exit

[oracle@PawASM ~]$ . oraenv
ORACLE_SID = [+ASM] ? MYASM
The Oracle base for ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1 is /u01/app/oracle
[oracle@PawASM ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Sun Jan 31 14:33:05 2016
Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options

SQL> create tablespace TS_ASMSYNC datafile '+PAWDG/TS_ASMSYNC_01.dbf' size 128M;

Tablespace created.

ASM disk group attributes COMPATIBLE.RDBMS and COMPATIBLE.ASM must be set to a minimum software release of 11.1, so a 10g database cannot use this feature. Set up Fast Mirror Resync as below. Setting DISK_REPAIR_TIME is optional; it has a default value of 3.6h.

[oracle@PawASM ~]$ sqlplus / as sysasm

SQL*Plus: Release 11.2.0.1.0 Production on Sun Jan 31 14:32:46 2016
Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Automatic Storage Management option

SQL> alter diskgroup PAWDG set attribute 'compatible.asm' = '11.2.0.0.0';
Diskgroup altered.

SQL> alter diskgroup PAWDG set attribute 'compatible.rdbms' = '11.2.0.0.0';
Diskgroup altered.

SQL> alter diskgroup PAWDG set attribute 'disk_repair_time'='5h';
Diskgroup altered.

List the new attributes we have just updated using the script dg_attribs.sql
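(dg_attribs.sql is not shown inline; a minimal reconstruction matching its output columns – an assumption, not the author's exact script – would be:)

select dg.name "dg name", a.name "attrib name", a.value, a.read_only
from   v$asm_attribute a, v$asm_diskgroup dg
where  a.group_number = dg.group_number
and    dg.name = 'PAWDG';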

SQL> @dg_attribs.sql;

dg name  attrib name              VALUE       READ_ON
-------  -----------------------  ----------  -------
PAWDG    disk_repair_time         5h          N
PAWDG    au_size                  1048576     Y
PAWDG    sector_size              512         Y
PAWDG    access_control.umask     066         N
PAWDG    compatible.rdbms         11.2.0.0.0  N
PAWDG    cell.smart_scan_capable  FALSE       N
PAWDG    access_control.enabled   FALSE       N
PAWDG    compatible.asm           11.2.0.0.0  N

8 rows selected.

 2: Simulate a transient failure: 

Below, as can be seen from the paths of the ASM disks, I have used ASMLib disks as ASM storage. Each raw device corresponds to a disk in a disk group. Since we can change the file access privileges of raw devices, I can create a transient disk path failure by changing the read-write permissions of the raw devices.

Below is the status of the ASM disks just before the failure.

SQL> @chkasmdiskstat.sql;

DISK_NUMBER NAME       PATH                        MOUNT_S HEADER_STATU MODE_ST STATE    FAILGROUP  REPAIR_TIMER
----------- ---------- --------------------------- ------- ------------ ------- -------- ---------- ------------
          0 DATA_0000  /dev/oracleasm/disks/DISK1  CACHED  MEMBER       ONLINE  NORMAL   DATA_0000             0
          1 DATA_0001  /dev/oracleasm/disks/DISK2  CACHED  MEMBER       ONLINE  NORMAL   DATA_0001             0
          2 DATA_0002  /dev/oracleasm/disks/DISK3  CACHED  MEMBER       ONLINE  NORMAL   DATA_0002             0
          0 PAWDG_0000 /dev/oracleasm/disks/DISK4  CACHED  MEMBER       ONLINE  NORMAL   PAWDG_FG1             0
          1 PAWDG_0001 /dev/oracleasm/disks/DISK5  CACHED  MEMBER       ONLINE  NORMAL   PAWDG_FG1             0
          2 PAWDG_0002 /dev/oracleasm/disks/DISK6  CACHED  MEMBER       ONLINE  NORMAL   PAWDG_FG1             0
          3 PAWDG_0003 /dev/oracleasm/disks/DISK7  CACHED  MEMBER       ONLINE  NORMAL   PAWDG_FG2             0
          4 PAWDG_0004 /dev/oracleasm/disks/DISK8  CACHED  MEMBER       ONLINE  NORMAL   PAWDG_FG2             0
          5 PAWDG_0005 /dev/oracleasm/disks/DISK9  CACHED  MEMBER       ONLINE  NORMAL   PAWDG_FG2             0
          9            /dev/oracleasm/disks/DISK10 CLOSED  FORMER       ONLINE  NORMAL                         0

10 rows selected.

Below I am simulating a transient disk path failure by changing the read-write permissions of the disks to 000. From this point on, ASM cannot read from or write to these disks.

[root@PawASM ~]# chmod 000 /dev/oracleasm/disks/DISK7 /dev/oracleasm/disks/DISK8 /dev/oracleasm/disks/DISK9 

At this stage it is better to execute a workload to make some modifications to the existing redundant extent copies in failure group PAWDG_FG1. It will also make ASM realize that some disks are missing when it tries to mirror the redundant copies to the extents of the failed disks.

SQL> @workload.sql;

Connected.
User created.
Grant succeeded.
User altered.
Connected.

drop table table_workload
           *
ERROR at line 1:
ORA-00942: table or view does not exist

Table created.
PL/SQL procedure successfully completed.
Commit complete.

  COUNT(1)
----------
   1048576

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Automatic Storage Management option

[oracle@PawASM ~]$

As can be seen below, ASM cannot access any disk in failure group PAWDG_FG2 and has taken these disks offline. The disk repair timer has also started. Besides this information, you can see further details about the consequences of this failure in the ASM alert.log if you want.

[oracle@PawASM ~]$ sqlplus / as sysasm

SQL*Plus: Release 11.2.0.1.0 Production on Sun Jan 31 15:01:20 2016
Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Automatic Storage Management option

SQL> @chkasmdiskstat.sql;

DISK_NUMBER NAME       PATH                        MOUNT_S HEADER_STATU MODE_ST STATE    FAILGROUP  REPAIR_TIMER
----------- ---------- --------------------------- ------- ------------ ------- -------- ---------- ------------
          0 DATA_0000  /dev/oracleasm/disks/DISK1  CACHED  MEMBER       ONLINE  NORMAL   DATA_0000             0
          1 DATA_0001  /dev/oracleasm/disks/DISK2  CACHED  MEMBER       ONLINE  NORMAL   DATA_0001             0
          2 DATA_0002  /dev/oracleasm/disks/DISK3  CACHED  MEMBER       ONLINE  NORMAL   DATA_0002             0
          0 PAWDG_0000 /dev/oracleasm/disks/DISK4  CACHED  MEMBER       ONLINE  NORMAL   PAWDG_FG1             0
          1 PAWDG_0001 /dev/oracleasm/disks/DISK5  CACHED  MEMBER       ONLINE  NORMAL   PAWDG_FG1             0
          2 PAWDG_0002 /dev/oracleasm/disks/DISK6  CACHED  MEMBER       ONLINE  NORMAL   PAWDG_FG1             0
          3 PAWDG_0003                             MISSING UNKNOWN      OFFLINE NORMAL   PAWDG_FG2         18000
          4 PAWDG_0004                             MISSING UNKNOWN      OFFLINE NORMAL   PAWDG_FG2         18000
          5 PAWDG_0005                             MISSING UNKNOWN      OFFLINE NORMAL   PAWDG_FG2         18000
          0            /dev/oracleasm/disks/DISK7  CLOSED  MEMBER       ONLINE  NORMAL                         0
          1            /dev/oracleasm/disks/DISK8  CLOSED  MEMBER       ONLINE  NORMAL                         0
          2            /dev/oracleasm/disks/DISK9  CLOSED  MEMBER       ONLINE  NORMAL                         0
          9            /dev/oracleasm/disks/DISK10 CLOSED  FORMER       ONLINE  NORMAL                         0

13 rows selected.

3: Fix the transient failure :

The fix is simply to revert the raw device read-write permissions to 660. This makes the raw devices accessible by ASM again.

[root@PawASM ~]# chmod 660 /dev/oracleasm/disks/DISK7 /dev/oracleasm/disks/DISK8 /dev/oracleasm/disks/DISK9

Now it is time to take the disks online.

SQL> alter diskgroup PAWDG online disks in failgroup PAWDG_FG2;

Diskgroup altered.

4: Monitor the Fast Mirror Resync in action:

Below it can be seen that the stale extents of the disks taken online are being resynchronized. Look at the devices below with the value SYNCING in the MODE_ST column.

SQL> @chkasmdiskstat.sql;

DISK_NUMBER NAME       PATH                        MOUNT_S HEADER_STATU MODE_ST STATE    FAILGROUP  REPAIR_TIMER
----------- ---------- --------------------------- ------- ------------ ------- -------- ---------- ------------
          0 DATA_0000  /dev/oracleasm/disks/DISK1  CACHED  MEMBER       ONLINE  NORMAL   DATA_0000             0
          1 DATA_0001  /dev/oracleasm/disks/DISK2  CACHED  MEMBER       ONLINE  NORMAL   DATA_0001             0
          2 DATA_0002  /dev/oracleasm/disks/DISK3  CACHED  MEMBER       ONLINE  NORMAL   DATA_0002             0
          0 PAWDG_0000 /dev/oracleasm/disks/DISK4  CACHED  MEMBER       ONLINE  NORMAL   PAWDG_FG1             0
          1 PAWDG_0001 /dev/oracleasm/disks/DISK5  CACHED  MEMBER       ONLINE  NORMAL   PAWDG_FG1             0
          2 PAWDG_0002 /dev/oracleasm/disks/DISK6  CACHED  MEMBER       ONLINE  NORMAL   PAWDG_FG1             0
          3 PAWDG_0003 /dev/oracleasm/disks/DISK7  CACHED  MEMBER       SYNCING NORMAL   PAWDG_FG2         17816
          4 PAWDG_0004 /dev/oracleasm/disks/DISK8  CACHED  MEMBER       SYNCING NORMAL   PAWDG_FG2         17816
          5 PAWDG_0005 /dev/oracleasm/disks/DISK9  CACHED  MEMBER       SYNCING NORMAL   PAWDG_FG2         17816
          9            /dev/oracleasm/disks/DISK10 CLOSED  FORMER       ONLINE  NORMAL                         0

10 rows selected.

The syncing operation is also represented by the value ONLIN in the OPERATION column of the V$ASM_OPERATION view. See below.

SQL> @chkasmoper.sql

GROUP_NUMBER OPERA STAT
------------ ----- ----
           1 ONLIN RUN

After the syncing operation completes, redundancy is restored and the disks are online again. See below.

SQL> @chkasmdiskstat.sql;

DISK_NUMBER NAME       PATH                        MOUNT_S HEADER_STATU MODE_ST STATE    FAILGROUP  REPAIR_TIMER
----------- ---------- --------------------------- ------- ------------ ------- -------- ---------- ------------
          0 DATA_0000  /dev/oracleasm/disks/DISK1  CACHED  MEMBER       ONLINE  NORMAL   DATA_0000             0
          1 DATA_0001  /dev/oracleasm/disks/DISK2  CACHED  MEMBER       ONLINE  NORMAL   DATA_0001             0
          2 DATA_0002  /dev/oracleasm/disks/DISK3  CACHED  MEMBER       ONLINE  NORMAL   DATA_0002             0
          0 PAWDG_0000 /dev/oracleasm/disks/DISK4  CACHED  MEMBER       ONLINE  NORMAL   PAWDG_FG1             0
          1 PAWDG_0001 /dev/oracleasm/disks/DISK5  CACHED  MEMBER       ONLINE  NORMAL   PAWDG_FG1             0
          2 PAWDG_0002 /dev/oracleasm/disks/DISK6  CACHED  MEMBER       ONLINE  NORMAL   PAWDG_FG1             0
          3 PAWDG_0003 /dev/oracleasm/disks/DISK7  CACHED  MEMBER       ONLINE  NORMAL   PAWDG_FG2             0
          4 PAWDG_0004 /dev/oracleasm/disks/DISK8  CACHED  MEMBER       ONLINE  NORMAL   PAWDG_FG2             0
          5 PAWDG_0005 /dev/oracleasm/disks/DISK9  CACHED  MEMBER       ONLINE  NORMAL   PAWDG_FG2             0
          9            /dev/oracleasm/disks/DISK10 CLOSED  FORMER       ONLINE  NORMAL                         0

10 rows selected.

Click on the link to download the scripts used: Automatic Storage Management Fast Mirror Resync in Oracle 11g – Scripts used.

The password to open the document is Airydba@159.

Thank you for Reading…This is AIRY…Enjoy:)

Oracle ASM File Access Control for Disk Groups in Oracle 11g – Airy’s Notes

Oracle ASM File Access Control for Disk Groups in Oracle 11g:

Oracle ASM File Access Control provides optional protection for the content of Oracle ASM disk groups from accidental access by unauthorized Oracle ASM clients, such as an unauthorized database.

To set up Oracle ASM File Access Control, you must designate separate operating system groups.

You can manage Oracle ASM file access control with ASMCMD commands, Oracle Enterprise Manager, and SQL statements.

What is Oracle ASM File Access Control?

Oracle ASM File Access Control restricts the access of files to specific Oracle ASM clients that connect as SYSDBA. An Oracle ASM client is typically a database, which is identified as the user that owns the database instance home. Oracle ASM File Access Control uses this user name to identify a database. Oracle ASM File Access Control restricts access based on the operating system effective user identification number of a database owner. For example, the databases are identified as oracle1 and oracle2.

Oracle ASM uses file access control to determine the additional privileges that are given to a database that has been authenticated AS SYSDBA on the Oracle ASM instance. These additional privileges include the ability to modify and delete certain files, aliases, and user groups.

You can set up user groups to specify the list of databases that share the same access permissions to Oracle ASM files. User groups are lists of databases and any database that authenticates AS SYSDBA can create a user group. However, only the creator of a group can delete it or modify its membership list.

Each Oracle ASM file has three categories of privileges: owner, group, and other. Each category can have no permission, read-only permission, or read-write permission.

The file owner is usually the creator of the file and can assign permissions for the file in any of the owner, group, or other categories. The owner can also change the group associated with the file.

When administering Oracle ASM File Access Control, Oracle recommends that you connect as SYSDBA to the database instance that is the owner, or planned owner, of the files in the disk group.

To set up Oracle ASM File Access Control for files in a disk group, perform the following steps:

  1. Alter a new or existing disk group to set the Oracle ASM File Access Control disk group attributes. For a newly-created disk group, you should set the disk group attributes before creating any files in the disk group.

  2. For files that exist in a disk group before setting the Oracle ASM File Access Control disk group attributes, you must explicitly set the permissions and ownership on those existing files.

Ensure that the user exists before setting ownership or permissions on a file. The file must be closed before setting the ownership or permissions. We can do this by using

ALTER DISKGROUP SET PERMISSION and ALTER DISKGROUP SET OWNERSHIP.

Optionally, you can create user groups that are groups of database users that share the same access permissions to Oracle ASM files.

Setting Disk Group Attributes for Oracle ASM File Access Control

To manage Oracle ASM File Access Control for a disk group, you must set the ACCESS_CONTROL.ENABLED  and  ACCESS_CONTROL.UMASK  disk group attributes. You can set the attributes by altering the disk group with the ALTER DISKGROUP SQL statement or you can set the attributes with the ASMCMD setattr command. For information about the ASMCMD setattr command.

When you set up file access control on an existing disk group, the files previously created remain accessible by everyone, unless you run the ALTER DISKGROUP SET PERMISSION SQL statement or the ASMCMD chmod command to restrict the permissions.

The COMPATIBLE.ASM and COMPATIBLE.RDBMS disk group attributes must be set to 11.2 or higher to enable Oracle ASM File Access Control. The disk group attributes that control Oracle ASM File Access Control are the following:

  • ACCESS_CONTROL.ENABLED

This attribute determines whether Oracle ASM File Access Control is enabled for a disk group.

The value can be true or false. The default is false.

If the attribute is set to true, accessing Oracle ASM files is subject to access control. If false, any user can access every file in the disk group. All other operations behave independently of this attribute.

  • ACCESS_CONTROL.UMASK

This attribute determines which permissions are masked out on the creation of an Oracle ASM file for the user that owns the file, users in the same user group, and others not in the user group. This attribute applies to all files on a disk group.

The values can be combinations of three digits {0|2|6} {0|2|6} {0|2|6}. The default is 066.

Setting to 0 masks out nothing. Setting to 2 masks out write permission. Setting to 6 masks out both read and write permissions.

Before setting the ACCESS_CONTROL.UMASK disk group attribute, you must set the ACCESS_CONTROL.ENABLED attribute to true to enable Oracle ASM File Access Control.

Here is how to enable Oracle ASM File Access Control for a disk group using SQL*Plus. In the example below, the permission setting is 026, which enables read-write access for the owner, read access for users in the group, and no access for others not in the group:

ALTER DISKGROUP data1 SET ATTRIBUTE 'access_control.enabled' = 'true';

ALTER DISKGROUP data1 SET ATTRIBUTE 'access_control.umask' = '026';
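To confirm the settings took effect, the attributes can be read back from V$ASM_ATTRIBUTE (a sketch; the disk group name matches the example above):

SELECT a.name, a.value
FROM   v$asm_attribute a, v$asm_diskgroup dg
WHERE  a.group_number = dg.group_number
AND    dg.name = 'DATA1'
AND    a.name LIKE 'access_control%';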

SQL Statements used to Manage Oracle ASM File Access Control :

You can use the ALTER DISKGROUP SQL statement to manage file access control for Oracle ASM disk groups. These SQL statements are available for both database and Oracle ASM instances.

The SQL statements that support disk group access control are:

  • ALTER DISKGROUP ADD USERGROUP … WITH MEMBER

Adds an Oracle ASM user group to a disk group. The user group name is limited to a maximum of 30 characters. The databases identified in the MEMBER clause must be in the disk group, as shown by V$ASM_USER, or the command returns an error. Any users authenticated as SYSASM or SYSDBA can create new user groups. For example:

SQL> SELECT group_number, os_name FROM V$ASM_USER;

GROUP_NUMBER OS_NAME
------------ ------------------------------
           1 oracle1
           1 oracle2

SQL> ALTER DISKGROUP data ADD USERGROUP 'test_grp1' WITH MEMBER 'oracle1','oracle2';
  • ALTER DISKGROUP DROP USERGROUP

Drops an Oracle ASM user group from a disk group. Dropping a group might leave some files without a valid group. For those files to have a valid group, you must manually update the group associated with those files to a valid group.

SQL> ALTER DISKGROUP data DROP USERGROUP 'test_grp1';
  • ALTER DISKGROUP MODIFY USERGROUP ADD MEMBER

Adds users to the specified user group. The users must be in the disk group, as shown by V$ASM_USER, or the command returns an error. Only the creator of the group or the Oracle ASM administrator can modify group membership.

SQL> ALTER DISKGROUP data MODIFY USERGROUP 'test_grp2' ADD MEMBER 'oracle2';
  • ALTER DISKGROUP MODIFY USERGROUP DROP MEMBER

Removes users from the specified user group. If a member is not in the user group, then an error is returned. Only the creator of the group or the Oracle ASM administrator can modify group membership.

SQL> ALTER DISKGROUP data MODIFY USERGROUP 'test_grp2' DROP MEMBER 'oracle2';
  • ALTER DISKGROUP ADD USER

Adds operating system (OS) users to an Oracle ASM disk group, so that these users can have access privileges on the disk group. The users must be existing operating system users, and their user names must have a corresponding operating system user ID or system ID. If a user exists in the disk group, as shown by V$ASM_USER, then the command records an error and continues to add other users, if any.

The operating system user of a running database instance is automatically added to a disk group when the database instance accesses that disk group and creates files. However, for a database instance to read files in a disk group without creating any files, you must use the ADD USER clause to add that database user to the disk group. Also, you can use this clause to add a database user to an existing disk group immediately after setting the Oracle ASM File Access Control disk group attributes and before creating new files.

SQL> ALTER DISKGROUP DATA ADD USER 'oracle1';
  • ALTER DISKGROUP DROP USER

Drops operating system users from an Oracle ASM disk group. If a user is not in the disk group, then this command records an error and continues to drop other users, if any.

If the user owns any files on the same Oracle ASM disk group, then this command fails with an error unless the CASCADE keyword is specified. In the latter case, the user is deleted, along with all the files that the user owns.

If any files owned by the user are currently open, then the DROP USER command fails, and no files are deleted.

SQL>  ALTER DISKGROUP DATA DROP USER 'oracle1';
  • ALTER DISKGROUP SET PERMISSION

Modifies permissions of an Oracle ASM file. Setting read only permission to a file that has read write permission revokes the write permission. Only the file owner or the Oracle ASM administrator can change the permissions of a file. You cannot change the permissions on an open file.

SQL> ALTER DISKGROUP data SET PERMISSION OWNER=read write, GROUP=read only,

     OTHER=none FOR FILE '+data/controlfile.f';
  • ALTER DISKGROUP SET OWNERSHIP

Changes the owner or group of a file to the specified user or user group name, respectively. If the specified user or user group name does not exist, this command fails with an error. Only the owner of the file or the Oracle ASM administrator can run this command, and only the Oracle ASM administrator can change the owner. Also, the user group name must exist, and the owner of the file must be a member of that group. You cannot change the ownership of an open file.

SQL> ALTER DISKGROUP data SET OWNERSHIP OWNER='oracle1', GROUP='test_grp1'

     FOR FILE '+DATA/controlfile.f';

Thank you for reading… This is Airy…Enjoy Learning:)

Using ASM Scalability and Performance Enhancements in Oracle 11g – Airy’s Notes

Using ASM Scalability and Performance Enhancements in Oracle 11g:

This feature increases the maximum data file size that Oracle can support to 128 TB. ASM supports file sizes greater than 128 TB in any redundancy mode. This provides near unlimited capacity for future growth. The ASM file size limits are:

  • External redundancy – 140 PB
  • Normal redundancy – 42 PB
  • High redundancy – 15 PB

Customers can also increase the allocation unit size for a disk group in powers of 2 up to 64 MB.

These improvements reduce database startup time and memory requirements, and allow support for larger ASM files, making it feasible to implement several hundred TB or even PB Oracle databases on ASM. Larger allocation units provide better sequential read performance.

Use the following criteria to determine the number of disk groups to create:

  • Disks in a given disk group should have similar size and performance characteristics. If you have several different types of disks in terms of size and performance, then create several disk groups that contain similar characteristics.
  • Create separate disk groups for your database files and fast recovery area for backup files. This configuration allows fast recovery should a disk group failure occur.

ASM files are stored in a disk group as a collection of extents. In Oracle 10g each extent mapped directly to one allocation unit (AU), but in Oracle 11g an extent can be made up of one or more allocation units. As files get larger, the extent size can grow to reduce the size of the extent map describing the file, thus saving memory.

When the disk group compatibility attributes are set to 11.1 or higher, the extent size will automatically grow as the file grows. In 11.1, the first 20,000 extents match the allocation unit size (1*AU). The next 20,000 extents are made up of 8 allocation units (8*AU). Beyond that point, the extent size becomes 64 allocation units (64*AU). In 11.2 this behavior has changed from 1, 8, 64 to 1, 4, 16.

In addition to the automatic expansion of the extent sizes, Oracle 11g also allows control over the allocation unit size using the ATTRIBUTE clause in the CREATE DISKGROUP statement, with values ranging from 1M to 64M.

  SQL> CREATE DISKGROUP disk_group_2 EXTERNAL REDUNDANCY
       DISK '/dev/sde1'
       ATTRIBUTE 'au_size' = '32M';

The combination of expanding extent sizes and larger allocation units should result in increased I/O performance for very large databases.

Specifying the Allocation Unit Size

An AU is the basic unit of allocation within an ASM disk group. In Oracle Database 10g, each AU had a single extent, which created problems with memory usage: if you have a large database with numerous default-sized allocation units, the database would need a very large amount of memory in the shared pool. The default AU size is only 1MB. File extents contain a minimum of one AU, and an ASM file consists of at least one extent.

You can set variable size extents with extents of size 1, 4, 16, 32, and 64 megabytes. The ability to set variable-size ASM extents means that ASM can now support larger file size extents while using less memory. ASM sets thresholds for each file and, as a file grows, ASM will increase the extent size based on the file size thresholds. Thus, a file can start with 1MB extents, with ASM increasing the extent size to 4, 16, 32, or 64 megabytes as the file size grows. Note that the size of an extent can vary among files as well as within a file. As a result of the variable extents feature, the database needs fewer extent pointers to describe an ASM file and less memory to manage the extent maps in the shared pool, thus making it easier to implement large ASM configurations.

Variable size extents raise the maximum possible ASM file size and also reduce shared pool memory requirements. You can periodically rebalance a disk group to avoid external fragmentation, which may occur because of the allocating and freeing of small data extents. However, the database will also automatically perform defragmentation when it has a problem finding an extent of the right size during extent allocation.

ASM is also more scalable in Oracle Database 11g than in Oracle Database 10g. The maximum ASM file size for external redundancy is now 140 petabytes, instead of 35 terabytes in Oracle Database 10g. Variable extent sizes enable you to configure ASM installations that are several hundred terabytes or even several petabytes in size.

You can set the au_size attribute easily in Oracle Database 11g, by using the new attribute clause in the create diskgroup command, as shown here:

SQL> create diskgroup PAWDG6
 external redundancy
 disk
 '/dev/raw/raw1'
 attribute 'au_size' = '8M';

You can also specify the au_size in bytes instead of megabytes. You can set an au_size of 1, 2, 4, 8, 16, 32, or 64 megabytes.
You can find out the allocation unit sizes for all disk groups by executing the following query on the V$ASM_DISKGROUP view:

SQL> select name, allocation_unit_size from v$asm_diskgroup;

NAME          ALLOCATION_UNIT_SIZE
------------  --------------------
PAWDGROUP1                 1048576
PAWDGROUP3                 1048576
PAWDGGROUP4                1048576
PAWDGGROUP2                1048576
The variable extent sizes feature means that you can tailor the allocation unit sizes to meet the needs of your system.

The allocation unit size is specified with the disk group attribute AU_SIZE and cannot be altered after the disk group is created. The example below shows how AU_SIZE is specified with the CREATE DISKGROUP SQL statement.

SQL> CREATE DISKGROUP data NORMAL REDUNDANCY
  FAILGROUP controller1 DISK
    '/devices/diska1' NAME diska1,
    '/devices/diska2' NAME diska2,
    '/devices/diska3' NAME diska3,
    '/devices/diska4' NAME diska4
  FAILGROUP controller2 DISK
    '/devices/diskb1' NAME diskb1,
    '/devices/diskb2' NAME diskb2,
    '/devices/diskb3' NAME diskb3,
    '/devices/diskb4' NAME diskb4
  ATTRIBUTE 'au_size'='4M',
    'compatible.asm' = '11.2',
    'compatible.rdbms' = '11.2',
    'compatible.advm' = '11.2';

DB_FILE_MULTIBLOCK_READ_COUNT is one of the parameters you can use to minimize I/O during table scans. It specifies the maximum number of blocks read in one I/O operation during a sequential scan. The total number of I/Os needed to perform a full table scan depends on such factors as the size of the table, the multiblock read count, and whether parallel execution is being utilized for the operation.

As of Oracle Database 10g release 2, the default value of this parameter is a value that corresponds to the maximum I/O size that can be performed efficiently. This value is platform-dependent and is 1MB for most platforms. Because the parameter is expressed in blocks, it will be set to a value that is equal to the maximum I/O size that can be performed efficiently divided by the standard block size. Note that if the number of sessions is extremely large the multiblock read count value is decreased to avoid the buffer cache getting flooded with too many table scan buffers.

Even though the default value may be a large value, the optimizer will not favor large plans if you do not set this parameter. It would do so only if you explicitly set this parameter to a large value.

Online transaction processing (OLTP) and batch environments typically have values in the range of 4 to 16 for this parameter. DSS and data warehouse environments tend to benefit most from maximizing the value of this parameter. The optimizer is more likely to choose a full table scan over an index if the value of this parameter is high.

The maximum value is the operating system’s maximum I/O size expressed as Oracle blocks ((max I/O size)/DB_BLOCK_SIZE). If you set this parameter to a value greater than the maximum, Oracle uses the maximum.
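For example, on a platform whose maximum efficient I/O size is 1 MB, an 8 KB block size gives 1048576 / 8192 = 128 blocks, so a requested value of 512 would effectively be capped at 128 on that platform.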

To change the DB_FILE_MULTIBLOCK_READ_COUNT setting, use the ALTER SYSTEM SQL statement. The example below assumes an 8K block size.

Altering the DB_FILE_MULTIBLOCK_READ_COUNT Setting

SQL> ALTER SYSTEM SET db_file_multiblock_read_count=512 scope=both sid='*';

Thank you for reading… This is Airy…Enjoy Learning:)

Oracle 11g ASM Diskgroup Compatibility – Airy’s Notes

Oracle 11g ASM Diskgroup Compatibility:

In Oracle 10.2, two new columns, COMPATIBILITY and DATABASE_COMPATIBILITY, were added to the V$ASM_DISKGROUP view, but only Oracle 11g introduces the concept of disk group attributes and the V$ASM_ATTRIBUTE view.

New Disk Group Compatibility Attributes

Oracle 11g ASM includes two new compatibility attributes (compatible.asm and compatible.rdbms) that determine the version of the ASM and database software that can use specific disk groups:

  • COMPATIBLE.ASM – The minimum version of the ASM software that can access the disk group. In 11g, the default setting is 10.1.
  • COMPATIBLE.RDBMS – The minimum COMPATIBLE database initialization parameter setting for any database instance that uses the disk group. In 11g, the default setting is 10.1.

The compatibility versions of a disk group can only be increased, not decreased. If you have increased the version by mistake, you will need to create a new disk group.

The disk group compatibility attributes can be set during disk group creation by adding the ATTRIBUTE clause to the CREATE DISKGROUP command.

compatible.asm is the ASM disk group compatibility attribute. It defines the format of the data on the ASM disks, and should always be equal to or greater than the database compatibility attribute.

The second attribute, compatible.rdbms, is known as the database compatibility attribute. It defines the minimum version of an Oracle database that can mount a specific disk group, and should be set to a value equivalent to the lowest COMPATIBLE setting of any Oracle database that will be mounting that disk group.
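For instance, a minimal sketch (a hypothetical mix of databases, and it assumes compatible.asm on the disk group is already at 11.1 or higher): if databases running with COMPATIBLE=10.2 and COMPATIBLE=11.2 both need to mount the disk group, compatible.rdbms should be left at the lowest setting:

SQL> alter diskgroup DATA set attribute 'compatible.rdbms' = '10.2';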

The compatible.asm attribute must always be set to a value that is equal to or greater than the compatible.rdbms attribute. Once the compatible.asm attribute is set, it cannot be rolled back.

Once the compatible.rdbms attribute is set, it cannot be rolled back either, because ASM writes to the disks and communicates with the database in a version-specific format. Oracle does, however, allow you to roll the setting forward.

There are two ways to check the disk group attributes in Oracle 11g:

SQL> col COMPATIBILITY for a10
SQL> col DATABASE_COMPATIBILITY for a10
SQL> col NAME for a20

SQL> select group_number, name, compatibility, database_compatibility from v$asm_diskgroup;

GROUP_NUMBER NAME                 COMPATIBIL DATABASE_C
------------ -------------------- ---------- ----------
           1 PAWFG                11.2.0.0.0 11.2.0.0.0
           2 DATA                 11.2.0.0.0 10.1.0.0.0

SQL> col value for a10

SQL> @dg_attribs1.sql;

dg name              attrib name                    VALUE                READ_ON
-------------------- ------------------------------ -------------------- -------
PAWFG                disk_repair_time               5h                   N
PAWFG                au_size                        1048576              Y
PAWFG                access_control.umask           066                  N
PAWFG                access_control.enabled         FALSE                N
PAWFG                cell.smart_scan_capable        FALSE                N
PAWFG                compatible.rdbms               11.2.0.0.0           N
PAWFG                compatible.asm                 11.2.0.0.0           N
PAWFG                sector_size                    512                  Y
DATA                 disk_repair_time               3.6h                 N
DATA                 access_control.enabled         FALSE                N
DATA                 cell.smart_scan_capable        FALSE                N
DATA                 compatible.rdbms               10.1.0.0.0           N
DATA                 compatible.asm                 11.2.0.0.0           N
DATA                 sector_size                    512                  Y
DATA                 au_size                        1048576              Y
DATA                 access_control.umask           066                  N

16 rows selected.

Note that V$ASM_ATTRIBUTE is populated only when compatible.asm is set to 11.1 or higher. What is important is that you can only change the compatibility level upwards; there is no way to reset it back to a lower value.

Compatibility attributes can be changed online one at a time:

SQL> alter diskgroup DATA set attribute 'compatible.asm'='11.2.0.0.0';

Diskgroup altered.

SQL> alter diskgroup DATA set attribute 'compatible.rdbms'='11.2.0.0.0';

Diskgroup altered.

SQL> select group_number, name, compatibility, database_compatibility from v$asm_diskgroup;

GROUP_NUMBER NAME                 COMPATIBIL DATABASE_C
------------ -------------------- ---------- ----------
           1 PAWFG                11.2.0.0.0 11.2.0.0.0
           2 DATA                 11.2.0.0.0 11.2.0.0.0

What are compatibility attributes useful for? They are very handy for upgrades and migrations, when ASM disk groups need to be available to ASM instances and database instances of different versions. Depending on the migration path, you might need to access some disk groups from different versions of ASM and different database instances.

They might also be useful for transportable tablespaces between 10g and 11g databases.

The compatible.asm diskgroup attribute controls the format of the ASM diskgroup metadata. Only ASM instances with a software version equal to or greater than compatible.asm can mount the diskgroup.

As noted earlier, the compatible.rdbms disk group attribute determines the format of the ASM files themselves. The disk group can be accessed by any database instance whose COMPATIBLE init.ora parameter is set equal to or higher than the compatible.rdbms attribute.
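A quick way to check the database side of this requirement (run from the database instance, not the ASM instance):

SQL> show parameter compatible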

Note that the compatible.rdbms attribute can be set to 10.2 as well, but we could not see any difference in the feature set, except that compatible.rdbms=10.2 requires a database instance with the COMPATIBLE parameter set to 10.2 or higher. compatible.asm cannot be set below 11.1, except when it is already 10.1 by default.

The compatible.rdbms attribute can be changed only if compatible.asm has been advanced to 11.1 or higher. Otherwise, you get the following error message:

SQL> alter diskgroup DATA set attribute 'compatible.rdbms'='11.1';

alter diskgroup DATA set attribute 'compatible.rdbms'='11.1'

*

ERROR at line 1:

ORA-15032: not all alterations performed

ORA-15242: could not set attribute compatible.rdbms

ORA-15221: ASM operation requires compatible.asm of 11.1.0.0.0 or higher

When creating a new disk group, the compatibility attributes can be explicitly specified. Interestingly, the default is still 10.1 for both attributes, even when the disk group is created from an 11g ASM instance and all connected databases are 11g. Oracle is conservative here.

If we want to change the default compatibility, there are two hidden (underscore) parameters that show exactly what the defaults are: _asm_compatibility and _rdbms_compatibility:

SQL> select i.ksppinm, v.ksppstvl from x$ksppi i, x$ksppcv v where i.ksppinm in ('_rdbms_compatibility','_asm_compatibility') and i.indx=v.indx;

KSPPINM                        KSPPSTVL

------------------------------ ----------

_asm_compatibility             10.1

_rdbms_compatibility           10.1

Let’s test.

SQL> alter system set "_asm_compatibility"='11.1' scope=spfile;

System altered.

SQL> alter system set "_rdbms_compatibility"='11.1' scope=spfile;

System altered.

SQL> shutdown immediate

ASM diskgroups dismounted

ASM instance shutdown

SQL> startup

ASM instance started

Total System Global Area  284565504 bytes

Fixed Size                  1299428 bytes

Variable Size             258100252 bytes

ASM Cache                  25165824 bytes

ASM diskgroups mounted

SQL> select i.ksppinm, v.ksppstvl from x$ksppi i, x$ksppcv v where i.ksppinm in ('_rdbms_compatibility','_asm_compatibility') and i.indx=v.indx;

KSPPINM                        KSPPSTVL

------------------------------ ----------

_asm_compatibility             11.1

_rdbms_compatibility           11.1

SQL> create diskgroup DATA1 external redundancy disk '/dev/oracleasm/disks/DISK10' ATTRIBUTE 'compatible.asm' = '11.1';

Diskgroup created.

SQL> select name, compatibility, database_compatibility from v$asm_diskgroup where name='DATA1';

NAME                 COMPATIBIL DATABASE_C

-------------------- ---------- ----------

DATA1                11.1.0.0.0 11.1.0.0.0

SQL> CREATE DISKGROUP DATA2 DISK '/dev/oracleasm/disks/DISK11' ATTRIBUTE 'compatible.asm' = '11.2', 'compatible.rdbms' = '11.1';

Diskgroup created.

SQL> select group_number, allocation_unit_size, compatibility, database_compatibility from v$asm_diskgroup where name='DATA2';

GROUP_NUMBER ALLOCATION_UNIT_SIZE COMPATIBIL DATABASE_C

------------ -------------------- ---------- ----------

           4              2097152 11.2.0.0.0 11.1.0.0.0

Now, what features are available with different compatibility settings?

 Diskgroup attributes:

compatible.asm — 11.1
compatible.rdbms — any

Non-default allocation unit size:

For AU sizes of 1-8 MB, both compatibility attributes can be 10.1, as the disk group DATA3 (created with a 2 MB AU) shows:

SQL> select group_number, allocation_unit_size, compatibility, database_compatibility from v$asm_diskgroup where name='DATA3';

GROUP_NUMBER ALLOCATION_UNIT_SIZE COMPATIBIL DATABASE_C

------------ -------------------- ---------- ----------

           5              2097152 10.1.0.0.0 10.1.0.0.0

SQL> select group_number, name, value from v$asm_attribute where group_number=4;

no rows selected

The absence of the au_size attribute in V$ASM_ATTRIBUTE is confusing, but recall that attributes are displayed only when compatible.asm is set to 11.1 or higher. For a disk group whose compatible.asm has been advanced, the attributes do appear:

SQL> select group_number, name, value from v$asm_attribute where group_number=3;

GROUP_NUMBER NAME                 VALUE

------------ -------------------- --------------------

           3 disk_repair_time     3.6h

           3 au_size              2097152

           3 compatible.asm       11.1.0.0.0

           3 compatible.rdbms     10.1.0.0.0

For AU sizes 16-64 MB both compatibility attributes must be 11.1.

@dg_attribs1.sql

--Show disk group attributes
col "dg name" for a20
col "attrib name" for a30
col value for a20
set pagesize 20
select dg.name  "dg name"
      ,a.name   "attrib name"
      ,a.value
      ,read_only
from v$asm_diskgroup dg,
     v$asm_attribute a
where dg.group_number = a.group_number
and a.name not like 'template%';

Thank you for reading… This is Airy…Enjoy Learning:)

#asm

ASM Preferred Mirror Read in Oracle 11g R2 – Airy’s Notes

ASM Preferred Mirror Read in Oracle 11g R2:

Up to and including Oracle 10g, Automatic Storage Management always read the primary copy of a mirrored extent. This is not a problem when the nodes and failure groups are all located at the same site (i.e. when the Real Application Clusters environment is not extended), but it can be inefficient for extended clusters, causing needless network traffic. In an extended RAC (where the RAC nodes and storage are placed at different locations), suppose there are two sites, A (with a RAC node and storage) and B (with a RAC node and storage), at two different locations.

If node B needs an extent whose primary copy is on storage A and whose mirror copy is on storage B, node B has to go to storage A for the primary copy, which causes heavy traffic on the network.

To overcome this, Oracle 11g introduced ASM Preferred Mirror Read. Continuing the example above: if node B has a mirror copy of the extent on storage B, it reads that mirror copy instead of going to the primary extent at site A, improving performance. Oracle 11g allows each node to define a preferred failure group, so that nodes in extended clusters can access local failure groups in preference to remote ones.

How to Setup ASM Preferred Mirror Read:

A new parameter, ASM_PREFERRED_READ_FAILURE_GROUPS, is introduced at the ASM instance level. Each ASM instance sets this parameter to point to its local failure groups.

To configure preferred read failure groups, the disk group compatibility attributes must be set to 11.1 or higher. Once the compatibility settings are correct, the ASM_PREFERRED_READ_FAILURE_GROUPS parameter is set to the preferred failure groups for each node.
Whenever there is a read request and the extent exists on local disks, whether as a primary or a mirror copy, it is read from the local disk, giving improved performance.

Command to change parameter:

SQL> alter system set asm_preferred_read_failure_groups='diskgroup_name.failure_group_name';

Steps to follow :

SQL> select name from v$asm_diskgroup;

NAME

----------

PAWFG

DATA
SQL> select name, failgroup from v$asm_disk;

NAME                           FAILGROUP
------------------------------ ------------------------------
DATA_0000                      DATA_0000
DATA_0001                      DATA_0001
DATA_0002                      DATA_0002
PAWFG_0000                     PAW_FG1
PAWFG_0001                     PAW_FG1
PAWFG_0002                     PAW_FG1
PAWFG_0003                     PAW_FG2
PAWFG_0004                     PAW_FG2
PAWFG_0005                     PAW_FG2

9 rows selected.
SQL> alter system set asm_preferred_read_failure_groups='PAWFG.PAW_FG2';
 
 System altered.

Multiple preferred failure groups can also be listed in a single command, for example:

SQL> ALTER SYSTEM SET ASM_PREFERRED_READ_FAILURE_GROUPS = 'DATA.DATA_0000', 'DATA.DATA_0001', 'DATA.DATA_0002';

Crosscheck for implementation:

SQL> SHOW PARAMETER asm_preferred_read_failure_groups;
 
 NAME                                 TYPE        VALUE
 ------------------------------------ ----------- --------------------------
 asm_preferred_read_failure_groups    string      PAWFG.PAW_FG2
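As a further check, the PREFERRED_READ column of V$ASM_DISK shows Y for disks that belong to a preferred read failure group (a minimal sketch; the output will vary by environment):

SQL> select name, failgroup, preferred_read from v$asm_disk;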

Thank you for reading… This is Airy…Enjoy Learning:)

#asm

ASM Enhancements-Airy’s Notes

1: ASM Optimal Disk Placement (ODP) in Oracle 11g R2-Airy’s Notes

2: ASM Preferred Mirror Read in Oracle 11g R2 – Airy’s Notes

3: ASM Restart Feature in Oracle 11g R2 – Airy’s Notes

4: Oracle 11g ASM Diskgroup Compatibility – Airy’s Notes

5: Using ASM Scalability and Performance Enhancements in Oracle 11g – Airy’s Notes

6: Oracle ASM File Access Control for Disk Groups in Oracle 11g –  Airy’s Notes

7: Automatic Storage Management Fast Mirror Resync in Oracle 11g – Airy’s Notes

Thank you for reading… This is Airy…Enjoy Learning:)

#asm

ASM Optimal Disk Placement (ODP) in Oracle 11g R2-Airy’s Notes

ASM Optimal Disk Placement (ODP) in Oracle 11gR2:

As of Oracle 11gR2, it is possible to specify in which disk regions ASM files will be placed. This enables us to make use of the faster disk zones on the outer regions of the disks for better I/O performance.

On a spinning disk, more data passes beneath the read/write head per revolution on the outer portion of the platter because of its greater circumference, so overall throughput is higher there. If frequently accessed data is kept in the outermost zone, seek latency may be reduced as well, since the read/write head travels a shorter arc.

Optimal Disk Placement is specified by associating an ASM template in which the primary extent zone is set to HOT or COLD. If set to HOT, the files are stored on the faster, outermost tracks of the disk and experience greater throughput and reduced latency. If set to COLD, the files are stored on the middle tracks of the disk and experience lower throughput and higher latency.

ASM Optimal Disk Placement (ODP) in Oracle 11gR2-Pic1

Oracle has worked with storage vendors and found ways to enable ASM to make use of storage intelligence features. These settings are made on disk group templates and are changeable afterwards with an ALTER DISKGROUP statement.

Possible options are hot/mirrorhot and cold/mirrorcold.

ALTER DISKGROUP disk_group ADD TEMPLATE template
ATTRIBUTES (
  MIRROR|UNPROTECTED
  FINE|COARSE
  HOT|COLD
  MIRRORHOT|MIRRORCOLD );
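For example, a minimal sketch (the template name hot_data is hypothetical):

SQL> ALTER DISKGROUP DATA ADD TEMPLATE hot_data ATTRIBUTES (HOT MIRRORHOT);

Files subsequently created with this template have their primary extents placed in the hot region and their mirror extents in the mirror-hot region.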

 

ALTER DISKGROUP diskgroup MODIFY FILE 'filename'
ATTRIBUTE (
  HOT|COLD
  MIRRORHOT|MIRRORCOLD );

Intelligent Data Placement enables you to specify disk regions on Oracle ASM disks for best performance. Using the disk region settings, you can ensure that frequently accessed data is placed on the outermost (hot) tracks, which have greater speed and higher bandwidth. In addition, files with similar access patterns are located physically close to each other, reducing latency. Intelligent Data Placement also enables the placement of primary and mirror extents into different hot or cold regions.

ASM intelligent file placement:

Intelligent file placement has been done in non-relational databases since the 1980s, but it is new to Oracle in Oracle 11g release 2. The compatible.asm and compatible.rdbms disk group attributes must be set to 11.2 or higher to use Intelligent Data Placement. Also see the v$asm_disk_iostat view to access this ASM information.

Manual file placement: Middle tracks

Consider a randomly placed set of disk files, where read/write head movement cripples response time.

As I note in the plague of super-large Oracle disks, intelligent file placement can dramatically improve throughput by minimizing the movement of the read/write head: popular data files are placed near the middle absolute track, and unpopular files on the outer tracks.

Per the “zoned bit recording” architecture, the outer cylinders contain more sectors than the inner cylinders.

As shown next, high-impact data files should be placed to minimize read/write head movement. This manual method is intended to minimize the travel of the read/write head, whose movement comprises over 90% of the cost of a disk I/O. However, this approach does not hold true for disks with fixed read/write heads (a read/write head for every disk cylinder).

ASM Optimal Disk Placement (ODP) in Oracle 11gR2-Pic2

Oracle 11g R2 Intelligent file placement: Outer tracks

ASM Optimal Disk Placement (ODP) in Oracle 11gR2-Pic3

As noted above, intelligent file placement has been done in non-relational databases since the 1980s, but it is new to Oracle in 11g release 2, and it uses a different approach than the manual intelligent file placement described above.

Oracle 11g release 2 introduced “Intelligent Data Placement” (IDP) within ASM, whereby a file, or a disk group template, can be mapped to these hot and cold areas of the platter.

It is the “zoned bit recording” method (whereby the outer cylinders hold more sectors) that drives Intelligent Data Placement, leveraging the hardware geometry to optimize I/O into “hot” and “cold” areas. Remember, it is the DBA who marks the files as hot or cold; Oracle does not do this for you.


On large disks where the data cannot be fully cached in the data buffers, we might consider placing high-I/O data files at the middle absolute track number to minimize read/write head movement, a layout in which the low-access data files reside on the inner and outer cylinders of the disk.

The use of intelligent disk placement can improve I/O performance by placing more frequently accessed data in regions furthest from the spindle, while reducing your cost by increasing the usable space on a disk.

Here is how to mark an ASM datafile as “hot”; note the “mirrorhot” argument, indicating that the mirrored copy is also placed in the hot region. We assume that this causes Oracle to relocate the ASM file to the outermost disk cylinders, where you get more data per revolution:

SQL> alter diskgroup mygroup
  modify file '+data/ora/datafile/users'
  attribute (hot mirrorhot);

The Oracle documentation does not say whether the above command immediately moves the data files on disk, but it is safe to assume that this is a DBA task that should not be done while the production database is open to end users.


The compatible.asm and compatible.rdbms disk group attributes must be set to 11.2 or higher to use intelligent data placement.

SQL> alter diskgroup DATA set attribute 'compatible.asm'='11.2';

SQL> alter diskgroup DATA set attribute 'compatible.rdbms'='11.2';


The following queries can be used to inspect the template region settings and the hot and cold reads and writes:

display_hot_cold_files.sql

SQL> select
  dg.name diskgroup,
  t.name,
  t.stripe,
  t.redundancy,
  t.primary_region,
  t.mirror_region
from
  v$asm_diskgroup dg,
  v$asm_template t
where
  dg.group_number = t.group_number
and dg.name = 'DATA'
order by t.name;

SQL> select
  dg.name diskgroup,
  f.file_number,
  f.primary_region,
  f.mirror_region,
  f.hot_reads,
  f.hot_writes,
  f.cold_reads,
  f.cold_writes
from
  v$asm_diskgroup dg,
  v$asm_file f
where
  dg.group_number = f.group_number
and dg.name = 'DATA';

Also note that the view v$asm_disk_iostat has new columns hot_reads, hot_writes, cold_reads, and cold_writes to correspond with this disk segregation technique, which leverages the geometry of the disk platter to improve overall throughput. A query to measure these access counts on the ASM disks is sketched below.
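A minimal sketch (assuming the 11.2 column names; v$asm_disk_iostat returns one row per database client per disk, so counts may repeat across clients):

SQL> select instname, group_number, disk_number,
            hot_reads, hot_writes, cold_reads, cold_writes
     from v$asm_disk_iostat
     order by group_number, disk_number;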

Thank you for reading… This is Airy…Enjoy Learning:)

#asm, #odp