RE: RAC 10g interconnect resilient configuration

  • From: "Pete Sharman" <peter.sharman@xxxxxxxxxx>
  • To: "bthomas@xxxxxxxxxxxxxx" <bthomas@xxxxxxxxxxxxxx>, "Luca.Canali@xxxxxxx" <Luca.Canali@xxxxxxx>, "oracle-l@xxxxxxxxxxxxx" <oracle-l@xxxxxxxxxxxxx>
  • Date: Tue, 21 Feb 2006 09:21:06 +1100

Bryan

Hopefully you've logged those VIP failover issues as bugs.  ;)

Going back to Luca's original question, depending on your configuration, NIC 
bonding may not be the best possible path for you.  I know in some of our 
testing certain OSes didn't work well with NIC bonding.  If you find yourself 
in that situation, you can always fall back on the CLUSTER_INTERCONNECTS 
parameter.
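
For illustration, a minimal sketch of what that fallback can look like (the 
instance names and private IP addresses are made up for the example; 
CLUSTER_INTERCONNECTS takes one or more colon-separated private IPs and is 
set per instance in the spfile):

    -- Hypothetical two-node cluster, instances RAC1 and RAC2, with two
    -- private NICs per node on the 10.0.0.x and 10.0.1.x subnets.
    ALTER SYSTEM SET cluster_interconnects = '10.0.0.1:10.0.1.1'
      SCOPE=SPFILE SID='RAC1';
    ALTER SYSTEM SET cluster_interconnects = '10.0.0.2:10.0.1.2'
      SCOPE=SPFILE SID='RAC2';
    -- The parameter is not dynamic, so the instances need a restart.

If I remember the docs correctly, CLUSTER_INTERCONNECTS spreads traffic 
across the listed NICs but doesn't fail over between them, which is part of 
why bonding is usually the first choice.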
 
Pete
 
"Controlling developers is like herding cats."
Kevin Loney, Oracle DBA Handbook
 
"Oh no, it's not.  It's much harder than that!"
Bruce Pihlamae, long-term Oracle DBA

-----Original Message-----
From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx] On 
Behalf Of Bryan Thomas
Sent: Tuesday, 21 February 2006 8:06 AM
To: Luca.Canali@xxxxxxx; oracle-l@xxxxxxxxxxxxx
Subject: RE: RAC 10g interconnect resilient configuration

Luca,

I would highly recommend that anyone setting up RAC use bonding for the
interconnect.

We have seen problems with the VIP not failing over properly in certain
circumstances.

Dell has a good document that describes the NIC bonding setup, located here:

http://support.dell.com/support/edocs/software/appora10/linEM64T/en/index.htm
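
As a rough sketch of what an active-backup bond looks like on RHEL 3 (the
interface names, IP address, and bonding mode below are assumptions for the
example, not taken from the Dell doc):

    # /etc/modules.conf -- load the bonding driver for bond0
    # (mode=1 is active-backup, miimon=100 checks link state every 100 ms)
    alias bond0 bonding
    options bond0 mode=1 miimon=100

    # /etc/sysconfig/network-scripts/ifcfg-bond0 -- the interconnect address
    DEVICE=bond0
    IPADDR=10.0.0.1
    NETMASK=255.255.255.0
    ONBOOT=yes
    BOOTPROTO=none

    # /etc/sysconfig/network-scripts/ifcfg-eth1 (and likewise ifcfg-eth2)
    DEVICE=eth1
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes
    BOOTPROTO=none

Mode 1 only gives failover; if you want the bond itself to balance traffic,
you would look at one of the balancing modes instead, keeping in mind that
some of them need matching switch configuration.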


Bryan Thomas

Senior Performance Consultant
Performance Tuning Corp.
http://www.perftuning.com
Email: bthomas@xxxxxxxxxxxxxx


-----Original Message-----
From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx]
On Behalf Of Luca Canali
Sent: Monday, February 20, 2006 2:53 PM
To: oracle-l@xxxxxxxxxxxxx
Subject: RAC 10g interconnect resilient configuration

Hi,

I am trying to find a good HA configuration for the interconnect network
of RAC on Linux (10gR2 on RHEL 3). I have redundant switches and NICs
for the interconnect. From my tests and from the Oracle documentation I see
that RAC can handle network (switch) failures by simply bypassing the
failed network, while unfortunately the 10gR2 clusterware can only be
configured to use one network (if that fails, all RAC nodes but one go
down).
From Metalink Note:220970.1 I found a reference to using NIC bonding
(unfortunately the link to the doc is broken, which is not a good start
for an HA doc).
I wonder whether implementing bonding is a good idea for the RAC
interconnect. Besides the extra complexity, I am afraid of losing
scalability, i.e. the ability of RAC (cache fusion) to load-balance over
the NICs as it does in the 'normal' configuration.
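
(For reference, a quick way to see which interface each instance is actually
using for the interconnect, and where that setting comes from, is to query
the interconnect views on 10g; a minimal example:)

    -- Interfaces the running instances are actually using for cache fusion;
    -- the SOURCE column shows where the definition came from.
    SELECT * FROM gv$cluster_interconnects;

    -- All interconnects that were detected or configured, in use or not.
    SELECT * FROM gv$configured_interconnects;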

Any thoughts/ experiences to share on this?

Thanks,
L.

-----------
Luca Canali
Information Technology Department
CERN - European Organization for Nuclear Research
Geneva, Switzerland
--
//www.freelists.org/webpage/oracle-l