RE: ASM - storage virtualization solution

  • From: D'Hooge Freek <Freek.DHooge@xxxxxxxxx>
  • To: "melkin4u@xxxxxxxxx" <melkin4u@xxxxxxxxx>, oracle-l <oracle-l@xxxxxxxxxxxxx>
  • Date: Fri, 6 Aug 2010 17:00:28 +0200

Hi,

Actually, when you have 2 storage boxes and 1 of them fails, you only have a
50% chance that your failover will work. This is because one of the storage
boxes will hold 2 of the 3 voting disks, and if that SAN fails the entire RAC
cluster will fail: the surviving box then holds only 1 vote, less than the
majority of 2 that CSS requires. (If the SAN holding the single voting disk
fails instead, 2 of the 3 votes survive and the cluster stays up.)

With 11gR2 the voting disks are placed on the ASM disks, but are not visible
as ASM files.
Instead (to my knowledge) they are placed in a fixed region of the disk.
When installing the clusterware using the Universal Installer, Oracle will ask
you to specify 3 ASM disks to store the voting disks when selecting normal
redundancy (if you choose to store the voting disks on ASM).
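
As a quick check: in 11gR2 V$ASM_DISK exposes a VOTING_FILE column, so a
query along these lines (a sketch, not output from the cluster below) should
list the ASM disks whose reserved region holds a voting file:

-- ASM disks that currently carry a voting file (11.2).
-- VOTING_FILE = 'Y' marks the disks; with normal redundancy you
-- should see one per failure group, 3 in total.
select name, path, failgroup, voting_file
from   v$asm_disk
where  voting_file = 'Y';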

The following output comes from an 11gR2 RAC cluster:

[grid@arsvorats1 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   7855e73b3d2a4fb3bf901beecad62093 (/dev/oracle/asm_clfilests1p1) [DG_CLUSTER]
 2. ONLINE   213259f135934f27bf86b91559a44df5 (/dev/oracle/asm_clfilests2p1) [DG_CLUSTER]
 3. ONLINE   1b12d9dc6d2b4fb8bfe63b365e2db555 (/dev/oracle/asm_clfilests3p1) [DG_CLUSTER]
Located 3 voting disk(s).


[grid@arsvorats1 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       2800
         Available space (kbytes) :     259320
         ID                       : 1074808153
         Device/File Name         : +DG_CLUSTER
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check bypassed due to non-privileged user


[grid@arsvorats1 ~]$ export ORACLE_SID=+ASM1
[grid@arsvorats1 ~]$ asmcmd
ASMCMD> ls
DG_CLUSTER/
DG_DWHTS_DATA/
DG_FRA/
ASMCMD> cd DG_CLUSTER
ASMCMD> ls
arclorats/
ASMCMD> cd arclorats
ASMCMD> ls
ASMPARAMETERFILE/
OCRFILE/
ASMCMD> cd OCRFILE
ASMCMD> ls -l
Type     Redund  Striped  Time             Sys  Name
OCRFILE  MIRROR  COARSE   JUL 10 00:00:00  Y    REGISTRY.255.721844181


SQL> set linesize 120
SQL> column full_alias_path format a70
SQL> column file_type format a15
SQL> select concat('+'||gname, sys_connect_by_path(aname, '/')) full_alias_path,
  2         system_created, alias_directory, file_type
  3  from ( select b.name gname, a.parent_index pindex, a.name aname,
  4                a.reference_index rindex, a.system_created, a.alias_directory,
  5                c.type file_type
  6         from v$asm_alias a, v$asm_diskgroup b, v$asm_file c
  7         where a.group_number = b.group_number
  8               and a.group_number = c.group_number(+)
  9               and a.file_number = c.file_number(+)
 10               and a.file_incarnation = c.incarnation(+)
 11       )
 12  where alias_directory = 'N'
 13        and system_created = 'Y'
 14        and file_type like 'OCR%'
 15  start with (mod(pindex, power(2, 24))) = 0
 16              and rindex in
 17                  ( select a.reference_index
 18                    from v$asm_alias a, v$asm_diskgroup b
 19                    where a.group_number = b.group_number
 20                          and (mod(a.parent_index, power(2, 24))) = 0
 21                  )
 22  connect by prior rindex = pindex;

FULL_ALIAS_PATH                                                        S A FILE_TYPE
---------------------------------------------------------------------- - - ---------------
+DG_CLUSTER/arclorats/OCRFILE/REGISTRY.255.721844181                   Y N OCRFILE


Regards,

Freek D'Hooge
Uptime
Oracle Database Administrator
email: freek.dhooge@xxxxxxxxx
tel +32(0)3 451 23 82
http://www.uptime.be
disclaimer: www.uptime.be/disclaimer
--
From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx] On 
Behalf Of Michael Elkin
Sent: Thursday, August 5, 2010 6:53
To: oracle-l
Subject: ASM - storage virtualization solution

Dear list members,

I would like to know if someone has experience with the following solution.

Configuration example: 2 IBM servers located on different blade chassis are
connected to 2 separate IBM storage arrays.
Each blade server has a fiber connection to each storage array.
On each array we allocate a LUN, and each server can see all allocated LUNs
on both arrays.

The idea is to use ASM on each node with mirroring and create a disk group
with ASM normal redundancy using LUNs from both storage arrays (a minimal
sketch follows the list below).
This can give us maximum availability:
1. For DB or node failure we use RAC.
2. The blade servers are located on different chassis, so we do not lose all
RAC nodes in case of a complete blade chassis failure.
3. In case of a storage failure we have a second storage array that is
mirrored by ASM and, most importantly, is always active and connected to all
RAC servers. This means there is no need to perform any manual operations to
switch this array to active mode and connect the RAC servers to it.
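
Such a disk group could look like the following sketch (the disk group name,
failgroup names and LUN paths are made up for illustration):

-- Normal redundancy mirrors each extent across failure groups, so putting
-- each array's LUN in its own failgroup makes ASM mirror across the arrays.
CREATE DISKGROUP dg_data NORMAL REDUNDANCY
  FAILGROUP storage1 DISK '/dev/mapper/san1_lun1'
  FAILGROUP storage2 DISK '/dev/mapper/san2_lun1';

With this layout either array can fail without data loss, since ASM keeps one
copy of every extent on each array.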

In general ASM can serve here as a storage virtualization solution. 

Thank you.

-- 
Best Regards
Michael Elkin
--
//www.freelists.org/webpage/oracle-l

