RE: Oracle 10g RAC on Linux - hardware confusion #@$!

  • From: "Peter Miller" <Peter.Miller@xxxxxxxxxxxxxx>
  • To: <oracle-l@xxxxxxxxxxxxx>
  • Date: Mon, 14 Jun 2004 16:12:47 +0100

Matt,

Thanks for your reply. You mention a disk array in your first paragraph.
Could you describe a hardware shopping list of the kit required to add a
shared disk array to two or more generic Linux servers (ours happen to be
Dell 1600SC servers)? Would something like a Dell PowerVault 220 be
suitable for a two-node system?

I must admit I feel a bit of a fool, having believed all the marketing
hype when told at the various 10g conferences that RAC was built on
simple, cheap Linux boxes - they forgot to mention that the heart of it
(the database cluster) still resides on expensive SCSI disk arrays.
Still, I guess it's all relative. Relative to Sun kit, that is.

Regards

Peter Miller

-----Original Message-----
From: Matthew Zito [mailto:mzito@xxxxxxxxxxx]
Sent: 14 June 2004 13:38
To: oracle-l@xxxxxxxxxxxxx
Subject: Re: Oracle 10g RAC on Linux - hardware confusion #@$!



For 10g (and 9i, for that matter), the database files can be located on 
a physically separate server over a network, which is NAS.  However, 
that method doesn't use OCFS - it uses NFS.  OCFS is used when you have 
a disk array (or individual disk) directly attached to multiple nodes.  
While you can mix and match (some datafiles on OCFS, some datafiles on 
NAS), you only need one or the other to have a working RAC database.

For OCFS testing, the cheapest thing to get is FireWire drives - any of 
the LaCie FireWire drives will work well for this.  Then hook both of 
your servers up to the drive and use it as a shared storage point.  
There are some good docs on this floating around the net.
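
For reference, the OCFS side of that setup is pretty short. A rough 
sketch of the commands involved, assuming the shared FireWire disk 
shows up as /dev/sda1 and you want it mounted at /u02 (check the OCFS 
user guide for the exact mkfs.ocfs flags in your release):

    # on one node only: format the shared partition for OCFS
    mkfs.ocfs -b 128 -L /u02 -m /u02 -u oracle -g dba -p 0775 /dev/sda1

    # on every node: load the ocfs module, then mount the volume
    mount -t ocfs /dev/sda1 /u02

You also need a small /etc/ocfs.conf on each node (ocfstool will 
generate one for you) before the module will load.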

For NAS, it's tempting to use a Linux box as storage and share 
filesystems over NFS, but it's not guaranteed to work.  Here at GridApp, 
we spent quite a while about a year ago trying to validate Linux as an 
NFS storage platform for RAC.  Given that we have what could be 
described as a troubling number of RAC databases in our lab, the 
ability to just throw cheap Linux boxes at our clusters for storage 
was appealing.  However, we kept finding bugs in the Linux NFS 
code that would result in cluster crashes.  We eventually abandoned it 
and standardized on Netapp.  You can find very old and cheap 
Netapp boxes on eBay for a few thousand dollars - $8,000 can get you 
a top-of-the-line last-generation filer, used.  Newer versions of the 
Linux kernel might have fixed these NFS bugs - since Linux isn't a 
supported NFS server storage platform for RAC anyway, we never invested 
huge amounts of effort in getting them fixed.
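
If you do go the NFS route, the mount options matter a great deal - 
RAC wants hard mounts and attribute caching disabled. As an 
illustration only (the filer name and export path here are made up, 
and you should check the current Netapp/Oracle best-practice notes 
for your versions), the mount would look something like:

    # mount the filer's export for RAC datafiles - illustrative only
    mount -t nfs -o \
      rw,bg,hard,nointr,tcp,vers=3,rsize=32768,wsize=32768,timeo=600,actimeo=0 \
      filer1:/vol/oradata /u02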

The last option for directly attached RAC storage is ASM.  
Yes, it's kind of a pleasant veneer on top of raw devices, but it works 
very well, at least in our lab.  It's not quite as elegant as Oracle 
makes it out to be, but we should all be fairly accustomed to that by 
now.
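
To give a flavour of what "veneer on top of raw devices" means in 
practice: you bind the shared partitions to raw devices on each node, 
tell the ASM instance where to look, and create a disk group. A minimal 
sketch - the device names and disk group name are just examples:

    # on each node: bind the shared partitions to raw devices
    # (make these persistent via /etc/sysconfig/rawdevices on Red Hat)
    raw /dev/raw/raw1 /dev/sdb1
    raw /dev/raw/raw2 /dev/sdc1

Then, from the ASM instance (with asm_diskstring set to '/dev/raw/raw*'):

    CREATE DISKGROUP data NORMAL REDUNDANCY
      DISK '/dev/raw/raw1', '/dev/raw/raw2';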

Let me know if any of this was unclear, and I hope this helps.

Thanks,
Matt

--
Matthew Zito
GridApp Systems
Email: mzito@xxxxxxxxxxx
Cell: 646-220-3551
Phone: 212-358-8211 x 359
http://www.gridapp.com


On Jun 14, 2004, at 6:30 AM, Peter Miller wrote:

> Hi All,
>
> I am trying to set up 10g RAC on Linux Red Hat AS 2.1.
>
> I have 2 servers for the database instances, and another 1 for the
> database files. I have got it into my head that the database files can
> be located on a physically separate server over a network (like a NAS)
> using OCFS. However, when I got to the point of "sharing" the
> partition on the "database" server, it seems that I have got
> completely the wrong end of the stick.
>
> The "Cluster" files need to be on a shared disk system that is
> physically connected to the instance servers - is this the case?
>
> If so, can someone recommend a scalable bit of hardware (preferably
> Dell) that I can use with 2+ nodes to support the database files of
> the RAC.
>
> If not, can someone please enlighten me.

----------------------------------------------------------------
Please see the official ORACLE-L FAQ: http://www.orafaq.com
----------------------------------------------------------------
To unsubscribe send email to:  oracle-l-request@xxxxxxxxxxxxx
put 'unsubscribe' in the subject line.
--
Archives are at //www.freelists.org/archives/oracle-l/
FAQ is at //www.freelists.org/help/fom-serve/cache/1.html
-----------------------------------------------------------------


