Re: RAC Infiniband Questions

  • From: Dan Norris <dannorris@xxxxxxxxxxxxx>
  • To: Todd.Carlson@xxxxxxx
  • Date: Wed, 8 Sep 2010 12:21:16 -0500

Todd,

See MOS note 877012.1 for bonding instructions. The only difference
from that note is that you will be bonding ib0 and ib1 (or whichever of the
IB interfaces you are connecting) instead of eth0 and eth1; otherwise it is
just the same.
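
Translated to the IB interfaces, the layout from that note works out to
something like this on RHEL 5 (a sketch only; bond0 and the address are
placeholders, substitute your own):

  /etc/sysconfig/network-scripts/ifcfg-bond0:

      DEVICE=bond0
      IPADDR=192.168.0.1      # placeholder; use your private address
      NETMASK=255.255.255.0
      BOOTPROTO=none
      ONBOOT=yes
      USERCTL=no

  /etc/sysconfig/network-scripts/ifcfg-ib0 (ifcfg-ib1 is the same except
  DEVICE=ib1):

      DEVICE=ib0
      MASTER=bond0
      SLAVE=yes
      BOOTPROTO=none
      ONBOOT=yes
      USERCTL=no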

I suggest using mode=1 (active-backup) as it requires no configuration on
your switches. In this case, you will have failover, but not "trunking" or
aggregated bandwidth. I don't think you'll find any shortage of bandwidth
using a single IB link, since even the Oracle Database Machine uses
active-backup without issue.
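
On RHEL 5 the mode goes in the driver options in /etc/modprobe.conf,
something like:

      alias bond0 bonding
      options bond0 mode=1 miimon=100

Once the bond is up and holding the private address, the interconnect
registration can be moved over with oifcfg (192.168.0.0 being your existing
subnet):

      oifcfg setif -global bond0/192.168.0.0:cluster_interconnect
      oifcfg delif -global ib0

That is only the generic bonding recipe, though; IPoIB slaves sometimes need
extra driver options depending on the OFED stack, so test failover (pull a
cable) before you rely on it.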

Dan

On Wed, Sep 1, 2010 at 9:56 AM, Carlson,Todd <Todd.Carlson@xxxxxxx> wrote:

> Hey Matt,
>
>
>
> Thanks for the response! Below is the output from DEV. The problem we are
> having is that we have extensive experience with IPMP, but no experience
> with RHEL bonding. As a result, when we bonded 2 channels across cards, we
> really didn’t know what we were doing; we did get the cluster to install,
> but then the traffic across the interconnect stopped until we bounced the
> switches. Weird.
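>
> (One thing that might help diagnose that: the bonding driver reports which
> slave is active and the link state of each port under /proc, e.g.
>
>     cat /proc/net/bonding/bond0
>
> which would presumably show whether a failover happened when the traffic
> stopped.)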
>
>
>
> Do you know of a step-by-step process to bond the channels for RAC that we
> could follow? Also, since we have 2 cards with 2 channels each, we would end
> up with two bonded interfaces for the interconnect. When we do the cluster
> install, would we select both for the private interconnect?
>
>
>
>
>
> /u01/grid/11.2.0/bin>oifcfg iflist
>
> eth0  10.2.9.0
>
> eth1  10.2.9.0
>
> ib0  192.168.0.0
>
> ib1  192.168.0.0
>
> ib2  192.168.0.0
>
> ib3  192.168.0.0
>
> /u01/grid/11.2.0/bin>oifcfg getif
>
> eth0  10.2.9.0  global  public
>
> ib0  192.168.0.0  global  cluster_interconnect
>
>
>
> Todd
>
>
>
> *From:* Matthew Zito [mailto:mzito@xxxxxxxxxxx]
> *Sent:* Tuesday, August 31, 2010 3:37 PM
> *To:* Carlson,Todd; oracle-l@xxxxxxxxxxxxx
> *Subject:* RE: RAC Infiniband Questions
>
>
>
> You use the bonding driver – just like you do for Ethernet, iirc. What
> problem are you experiencing? What do your ifcfg- files look like?
>
>
>
> Matt
>
>
> ------------------------------
>
> *From:* oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx]
> *On Behalf Of* Carlson,Todd
> *Sent:* Tuesday, August 31, 2010 4:06 PM
> *To:* oracle-l@xxxxxxxxxxxxx
> *Subject:* RAC Infiniband Questions
>
>
>
> Hey Guys,
>
>
>
> We are building out our RAC environment on 11.2.0.1.2 with RHEL 5.5 on Sun
> x4270 servers. We are running into problems trying to bond our InfiniBand
> connections together. In SAND & DEV, we have 2-node clusters. Each node has
> 2 HCAs (Sun Dual Port 40 Gb/sec 4x InfiniBand QDR Host Channel Adapters),
> numbered 1 & 2, each with 2 ports, A & B. We have 2 Sun 36-port InfiniBand
> switches. What we are trying to do is bond 1.A with 2.A and bond 1.B with
> 2.B. However, there is very little documentation from Sun/Oracle or Red Hat
> on how to do this, and all of our attempts have failed. So, right now these
> environments are using a single connection between the nodes over UDP.
>
>
>
> We are currently building out TEST and we need to have this working by the
> 10th of Sept. We will then rebuild DEV & SAND to have the bonding working
> correctly. So, I am hoping/praying that you can guide us here on how to go
> about configuring the bonding of the IB ports. Is there some documentation
> that you could send me? We have read “RAC Support for RDS Over Infiniband
> [ID 751343.1]” and are essentially stuck at step #2.
>
>
>
> In a similar vein, once we get the channels bonded, we would then have 2
> private, bonded interfaces to use. In the cluster install (step 6 of 16),
> would we select both of them or just one?
>
>
>
> Thanks for your help here, I really appreciate it!
>
>
>
> Todd Carlson
>
> Manager – DBA/ERP & EUC Teams
>
> World Wide Technology
>
> (314) 301-2788
>
>
>
