Re: Queue depth on QLogic/3par

  • From: "Rajeev Prabhakar" <rprabha01@xxxxxxxxx>
  • To: jeremy.schneider@xxxxxxxxxxxxxx
  • Date: Thu, 31 Jan 2008 05:33:21 -0500

Good Morning Jeremy,

I ran similar tests against our two-node 10g RAC OLTP environment
(RHEL 4 / Linux x86_64 / 10g RAC / ASM) a few months back, and what we
concluded was that a larger queue depth did give us increased throughput
and better IOPS numbers at higher sustained load levels. So the TPM did
improve.

Another winner was the RAID layout: plain striped/mirrored layouts (0, 1,
or 0+1) beat anything mixed with RAID 5.

We experimented with different queue depth levels and settled on the
magical figure of 16. At that depth, the IOPS/TPM numbers looked good.
(If you want precise numbers, I can certainly search the test
results/logs and get back to you.)
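For what it's worth, here is a rough sketch of how the qla2xxx queue depth
can be inspected and pinned on RHEL. The paths and modprobe syntax are from
memory, not from our actual test logs, so verify them against your driver
version first:

```shell
# Current driver-wide setting (ql2xmaxqdepth, default 32 per the qlogic docs)
cat /sys/module/qla2xxx/parameters/ql2xmaxqdepth

# Effective per-device queue depth as the SCSI layer sees it
# (sdb is just an example device)
cat /sys/block/sdb/device/queue_depth

# Make the value persistent across reboots (RHEL 4 style modprobe.conf)
echo "options qla2xxx ql2xmaxqdepth=16" >> /etc/modprobe.conf
```

The driver has to be reloaded (or the box rebooted) for the modprobe option
to take effect.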

If I recall correctly, besides other factors (e.g. the load levels of our
db transactions, the incoming transaction rate, and the I/O capacity of the
intermediate hardware layer with respect to ports, etc.), we did basic math
on the realistic I/O rate the disks under the LUN could sustain versus the
rate at which I/O would flow to the LUNs (SAN).
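That back-of-the-envelope math is essentially Little's Law: sustainable
IOPS is roughly the number of outstanding I/Os divided by the per-I/O
service time. The numbers below are purely illustrative, not our actual
test results:

```shell
# Little's Law sketch: IOPS ~= outstanding I/Os / service time
qdepth=16     # outstanding I/Os per LUN (the HBA queue depth)
svc_ms=0.5    # per-I/O service time as reported by iostat, in ms
awk -v q="$qdepth" -v s="$svc_ms" \
    'BEGIN { printf "~%.0f IOPS per LUN\n", q / (s / 1000) }'
# prints "~32000 IOPS per LUN"
```

If the disks under the LUN can only sustain a fraction of that, a deeper
queue just builds a longer line; if they can sustain more, a shallow queue
leaves throughput on the table.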

Prior to tweaking these numbers, we were certainly seeing visible I/O-level
contention (to our disbelief, because we thought our mighty SAN could take
all the I/O load thrown at it in stride, without any tuning).

HTH,
Rajeev

On Jan 30, 2008 3:53 PM, Jeremy Schneider
<jeremy.schneider@xxxxxxxxxxxxxx> wrote:
> Just wondering if anyone has done much with tweaking the queue depth on the
> HBA.  I'm running some benchmarks with Oracle's Orion tool
> (Redhat5/Opteron/QLogic/3par) and noticed that with the small random I/O
> tests the bottleneck really seems to be the queue depth - even with hundreds
> of threads, iostat reports sub-millisecond service times yet shows long queues
> and long wait times.  Since the database being migrated to this new system
> is roughly 95% "sequential read" judging from the wait events, I figure there
> will be a lot of these small I/O requests, and it seems worth tuning.
>
> There seems to be an option "ql2xmaxqdepth" on the qlogic driver that
> controls the queue depth.  It defaults to 32, but I saw some chatter on the
> VMware forums about increasing it to 64.  I also saw a note for
> Solaris/Symmetrix recommending a max queue depth less than or equal to the
> max queue depth of the LUN (perhaps configured on the SAN?) to avoid
> commands being rejected with a queue-full condition.
>
> I'm not sure exactly how all of this works internally - can different HBAs
> or SANs only handle a certain max queue depth?  Has anyone else
> experimented with different queue depths?  Does this change something on the
> HBA or just in kernel-land?  I think I'm going to try increasing it,
> re-run the Orion benchmark, and see if I get any errors or a performance
> difference - but I'm curious what others' experience has been.
>
> -Jeremy
>
> --
> Jeremy Schneider
> Chicago, IL
> http://www.ardentperf.com/category/technical
--
//www.freelists.org/webpage/oracle-l

