Re: 9iR2 RAC networks

  • From: Tim Gorman <tim@xxxxxxxxx>
  • To: "oracle-l@xxxxxxxxxxxxx" <oracle-l@xxxxxxxxxxxxx>
  • Date: Tue, 03 May 2005 10:17:28 -0600

Solaris has something called IPMP (IP Multi-Pathing) which can be used under
10gRAC (perhaps 9iRAC also?) to implement this kind of interconnect-failover
functionality...
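
For what it's worth, a minimal active/standby IPMP group on Solaris 8/9
might look something like the following (the ce0/ce1 interface names and
the 192.168.10.x addresses are made-up examples, not from any real
configuration):

  /etc/hostname.ce0 (data address plus a non-failover test address):
    192.168.10.1 netmask + broadcast + group rac_ipmp up
    addif 192.168.10.11 deprecated -failover netmask + broadcast + up

  /etc/hostname.ce1 (standby interface with its own test address):
    192.168.10.12 netmask + broadcast + deprecated -failover group rac_ipmp standby up

With both NICs in the same IPMP group, in.mpathd fails the data address
over to the surviving interface if a NIC or link dies, which is what
makes it attractive for the private interconnect.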

Also, the RAC parameter CLUSTER_INTERCONNECTS is apparently designed for
multi-pathing the cluster interconnect across multiple IP interfaces.  It is
documented as a performance enhancement, not as a high-availability
enhancement, leading me to believe that the failure of one of the IP
interfaces specified using this parameter might cause errors.  Might be
worth looking into, though. Be aware, however, that using
CLUSTER_INTERCONNECTS is documented as mutually exclusive with Solaris
IPMP...
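
If you do experiment with it, CLUSTER_INTERCONNECTS takes a
colon-separated list of IP addresses and is set per instance in the
init.ora/spfile. A sketch (the RACDB1/RACDB2 instance names and the
addresses are hypothetical):

  RACDB1.cluster_interconnects = "192.168.0.1:192.168.1.1"
  RACDB2.cluster_interconnects = "192.168.0.2:192.168.1.2"

Oracle then spreads interconnect traffic across the listed addresses,
which squares with its being documented as a performance feature rather
than a high-availability one.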



on 5/3/05 7:01 AM, K Gopalakrishnan at kaygopal@xxxxxxxxx wrote:

> David:
> 
> Not quite true. If the private interconnect fails (unless you have
> configured NIC failover for the private interconnect) there will be a
> node eviction. There are a couple of underscore parameters to control
> the behavior and the time before the node is evicted. Typically Oracle
> Cluster Manager detects the failure and determines from the quorum
> device whether it is a network failure or a node failure.
> 
> If there is a network failure, the other node will still be writing to
> the quorum device, but it may not be able to ping. In this case the
> Cluster Manager will detect the failure and the node which owns the
> voting disk will survive (of course there are different algorithms
> used for split-brain resolution, and the above answer is quite
> oversimplified).
> 
> Only on AIX, if one interconnect fails, the other network card is
> automatically selected (this is called TNFF, Transparent Network
> Failover/Failback).

--
//www.freelists.org/webpage/oracle-l
