RE: OCR / VD external vs. normal redundancy using NFS.

  • From: D'Hooge Freek <Freek.DHooge@xxxxxxxxx>
  • To: "exriscer@xxxxxxxxx" <exriscer@xxxxxxxxx>, "david.robillard@xxxxxxxxx" <david.robillard@xxxxxxxxx>
  • Date: Thu, 1 Jul 2010 08:02:26 +0200

This is not entirely true.
When setting up a new RAC, raw or block devices are no longer supported for the 
voting disks or the cluster registry, but that does not mean you can't use a 
cluster filesystem other than ASM to hold them.
It is perfectly valid to use NFS as a clustered filesystem (except when you use 
Standard Edition RAC).

Also, when you upgrade an existing environment, it is still possible to keep 
your voting disks and/or cluster registry on raw/block devices.
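For what it's worth, after an upgrade you can verify where the OCR and the
voting disks currently live with the standard clusterware tools. A quick
sketch (the Grid home path below is an assumption; adjust it to your
environment):

```shell
# Show the configured OCR locations and their integrity status
# (run as root for the full check; the grid owner gets a read-only view).
/u01/app/11.2.0/grid/bin/ocrcheck

# List the voting disks currently in use by CSS.
/u01/app/11.2.0/grid/bin/crsctl query css votedisk
```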


Regards,


Freek D'Hooge
Uptime
Oracle Database Administrator
email: freek.dhooge@xxxxxxxxx
tel +32(0)3 451 23 82
http://www.uptime.be
disclaimer: www.uptime.be/disclaimer
________________________________________
From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx] On 
Behalf Of LS Cheng
Sent: donderdag 1 juli 2010 7:41
To: david.robillard@xxxxxxxxx
Cc: oracle-l mailing list
Subject: Re: OCR / VD external vs. normal redundancy using NFS.

Hi

In 11gR2 Grid Infrastructure you have to store the OCR and voting disks inside 
ASM disk groups; there is no other storage option you can use during 
installation.

The only exception is when you are upgrading from 10g to 11gR2.

Thanks

--
LSC
On Wed, Jun 16, 2010 at 4:43 PM, David Robillard <david.robillard@xxxxxxxxx> 
wrote:
Hello everyone,

I'd like to know how each of us configures redundancy for both the
Grid Infrastructure's Oracle Cluster Registry (OCR) and Voting disks
(VD) when they're stored over NFSv3 on an enterprise grade storage
array with RAID disks. Do you use external or normal redundancy for
OCR and VD?

I'm looking to install Grid Infrastructure 11gR2 on RedHat Enterprise
Linux 5 x86_64 over NFSv3 on a clustered Sun/Oracle Unified Storage
7410. The storage array has built-in redundancy on every component.
Each cluster node has two quad-port Ethernet network interface cards (NICs).
Each bonded interface is built from two ports, one on each NIC, to
create a private (bond0), a public (bond1) and a storage (bond2)
interface. There are two redundant GbE switches dedicated to a
storage-only, non-routable subnet.

Quick note: I'm not using ASM over iSCSI for various reasons. Mostly
because every paper I've read from NetApp/Sun/Oracle/EMC says the
performance of ASM with a software-initiator iSCSI is not as good as
with NFS. Also because it's easier to manage the storage array's total disk
space when using NFS instead of iSCSI. I'm also not using Oracle's
dNFS feature simply because I haven't had the time to look at it and
I've been working with NFS for over 10 years. Plus I'm also lucky (?)
enough to be the UNIX sysadmin, the storage array administrator and
the DBA, so I don't need to configure everything from the Oracle
stand-point (i.e. I can't do finger pointing in case things go wrong,
I'm the only one to blame).

With that in mind, which option would you choose and why?

Option A)

Create an NFS share for OCR and another one for VD then use external
redundancy. That would generate the following mount points:

OCR = /u01/ocr/cluster.registry
VD = /u01/vd/voting.disk

-or-

Option B)

Create three different NFS shares for OCR and three other shares for
VD then use normal redundancy. That would create the following mount
points:

OCR 1 = /u01/ocr/ocr1/cluster.registry
OCR 2 = /u02/ocr/ocr2/cluster.registry
OCR 3 = /u03/ocr/ocr3/cluster.registry
VD 1 = /u01/vd/vd1/voting.disk
VD 2 = /u02/vd/vd2/voting.disk
VD 3 = /u03/vd/vd3/voting.disk
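Whichever option you pick, the NFS mounts holding clusterware files need the
mount options Oracle recommends for RAC on Linux (hard mounts over TCP, and
actimeo=0 so attribute caching doesn't hide writes from the other nodes). A
sketch of the /etc/fstab entries for Option B, assuming a filer hostname and
export paths that are purely illustrative:

```shell
# /etc/fstab sketch -- hypothetical server name (nas7410) and export paths.
# rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0
# are the options Oracle's RAC-on-NFS notes recommend for OCR/voting files.
nas7410:/export/ocr1  /u01/ocr/ocr1  nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0
nas7410:/export/ocr2  /u02/ocr/ocr2  nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0
nas7410:/export/ocr3  /u03/ocr/ocr3  nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0
nas7410:/export/vd1   /u01/vd/vd1    nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0
nas7410:/export/vd2   /u02/vd/vd2    nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0
nas7410:/export/vd3   /u03/vd/vd3    nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0
```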

Of course there are other options and variations. I welcome all
comments and critiques on that setup :)

Many thanks,

David
--
David Robillard, UNIX team leader and Oracle DBA
CISSP, RHCE, SCSA & SCSECA
--
//www.freelists.org/webpage/oracle-l



