RE: RAC interconnect packet size??

  • From: "Bobak, Mark" <Mark.Bobak@xxxxxxxxxxxx>
  • To: Greg Rahn <greg@xxxxxxxxxxxxxxxxxx>
  • Date: Wed, 22 Apr 2009 02:25:41 -0400

Hi Greg,

I agree.  Allow me to describe what we were seeing:

 - CPU spikes w/ run queues going into the 20s, very low or no %wait for I/O, 
0% idle
 - Looking at V$SESSION_WAIT, lots of waits on global cache (gc) wait 
events (see the query sketch below)
 - up to four LMS processes, burning CPU like crazy

(all this on a three-node RAC of DL-585s, 4 dual-core CPUs per node)
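
For what it's worth, the check on the second point was roughly along 
these lines -- just a sketch, not our exact script, and the cx_Oracle 
connect string below is only a placeholder:

    # Rough sketch: tally current "gc" (global cache) waits per event.
    # Connection details are placeholders; adjust for your environment.
    import cx_Oracle

    conn = cx_Oracle.connect("user", "password", "racnode1/orcl")
    cur = conn.cursor()
    cur.execute("""
        SELECT event, COUNT(*)
          FROM v$session_wait
         WHERE event LIKE 'gc%'
         GROUP BY event
         ORDER BY 2 DESC
    """)
    for event, cnt in cur:
        print(cnt, event)
    conn.close()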

The above seemed to be consistent with a system w/ a busy interconnect and no 
jumbo frames configured.

Only time will tell whether enabling jumbo frames actually solved the problem.

One other thing: assuming that all of your hardware (every NIC and every 
interconnect switch in the path) supports jumbo frames, there should 
really be no downside to enabling them.
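
The one thing worth double-checking is that the MTU change actually took 
effect on the interconnect NIC. A minimal sketch of that check on Linux 
-- the interface name "bond0" is just an example, substitute whatever 
your private interconnect uses:

    # Minimal sketch: confirm the interconnect NIC is running MTU 9000.
    # "bond0" is an example; use your private interconnect interface name.
    IFACE = "bond0"

    with open("/sys/class/net/%s/mtu" % IFACE) as f:
        mtu = int(f.read().strip())

    print("%s MTU = %d" % (IFACE, mtu))
    if mtu < 9000:
        print("Jumbo frames do not appear to be enabled on this interface.")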

-Mark


-----Original Message-----
From: Greg Rahn [mailto:greg@xxxxxxxxxxxxxxxxxx] 
Sent: Wednesday, April 22, 2009 2:09 AM
To: Bobak, Mark
Cc: TESTAJ3@xxxxxxxxxxxxxx; oracle-l@xxxxxxxxxxxxx
Subject: Re: RAC interconnect packet size??

Using MTU 9000 Jumbo Frames generally does just one thing: it lowers
system CPU time on systems that have significant amounts of
interconnect traffic.  The reduction in sys CPU time comes from not
having to break database blocks up into multiple frames (fewer system
calls = less CPU used) when they are sent over the interconnect.  It is
quite convenient that an 8k db block fits in a single Jumbo Frame.
Perhaps that is yet another reason the default 8k db block is a nice
choice.
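
Back-of-the-envelope, and ignoring Oracle's own message overhead (a
rough sketch assuming plain UDP over IPv4, i.e. 20-byte IP and 8-byte
UDP headers):

    # Rough sketch: frames needed to carry one 8k block over UDP/IPv4,
    # ignoring Oracle's own message headers.
    import math

    IP_HDR, UDP_HDR = 20, 8   # bytes
    BLOCK = 8192              # 8k database block

    def frames_needed(mtu):
        # each IP fragment carries (mtu - IP header) bytes of the datagram
        return math.ceil((BLOCK + UDP_HDR) / (mtu - IP_HDR))

    for mtu in (1500, 9000):
        print("MTU %d: %d frame(s) per block" % (mtu, frames_needed(mtu)))

So a standard 1500-byte MTU splits every 8k block across half a dozen
frames, while a 9000-byte MTU carries it in one.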

On Tue, Apr 21, 2009 at 5:52 AM, Bobak, Mark <Mark.Bobak@xxxxxxxxxxxx> wrote:
> You definitely *want* jumbo frames. --

Regards,
Greg Rahn
http://structureddata.org



