Re: Queue depth on QLogic/3par

  • From: "Jeremy Schneider" <jeremy.schneider@xxxxxxxxxxxxxx>
  • To: "Matthew Zito" <mzito@xxxxxxxxxxx>
  • Date: Wed, 30 Jan 2008 20:52:45 -0500

FWIW, I dug up the best practices docs I could find on 3par's customer
support website but didn't see any recommendations about increasing the
queue depth.  On the other hand, they did mention decreasing this value to
throttle requests when troubleshooting SCSI "QUEUE FULL" errors from the
target array.  Interestingly, Emulex cards offer more functionality in this
area - allowing you to throttle both the port queue depth and the LUN queue
depth; QLogic cards only let you throttle the LUN queue depth.  (That's the
ql2xmaxqdepth parameter which defaults to 32 in the current version of the
qla2xxx driver.)
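
If you want to check what a host is actually using, a quick Python sketch
along these lines should do it on Linux.  I'm assuming the usual sysfs layout
here; the exact paths (and whether the module parameter is readable at all)
can differ between kernel and driver versions.

    # Report the qla2xxx driver's ql2xmaxqdepth setting and the queue_depth
    # the SCSI layer is using for each sd device.  Paths assume a typical
    # Linux sysfs layout; adjust if your kernel/driver exposes them elsewhere.
    # (Changing the default persistently is normally done through the driver's
    # module options, e.g. in /etc/modprobe.conf on RHEL.)
    import glob
    import os

    param = "/sys/module/qla2xxx/parameters/ql2xmaxqdepth"
    if os.path.exists(param):
        print("ql2xmaxqdepth =", open(param).read().strip())

    for path in glob.glob("/sys/block/sd*/device/queue_depth"):
        dev = path.split("/")[3]
        print(dev, "queue_depth =", open(path).read().strip())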

As far as I can tell, the key to setting this parameter is understanding the
capabilities of your storage array.  The 3par docs said that the main factor
on their devices is the number of "I/O Queues" on each storage server port,
which varies between 500 and 2000 depending on the type of Storage Server HBA
used in your model.  Basically you need to know how many hosts you have that
are accessing the array and how many concurrent I/O requests these hosts can
have (which is controlled by the queue depth).  If you max out the queues on
the server port then you'll start getting errors and experiencing erratic
I/O load on the hosts.
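
Roughly, the math works out something like this - a quick Python sketch, and
every number in it is a made-up example (the port's I/O queue limit comes
from the 3par docs for your model, the rest from your own environment):

    # Worst-case outstanding commands a single storage server port could see,
    # compared against the port's I/O queue limit.  All values here are
    # hypothetical examples, not measurements.
    port_io_queues  = 1000  # per 3par port; 500-2000 depending on the port HBA type
    hosts           = 8     # hosts zoned to this port
    luns_per_host   = 4     # LUNs each host reaches through this port
    lun_queue_depth = 32    # ql2xmaxqdepth on the host HBAs

    worst_case = hosts * luns_per_host * lun_queue_depth
    print("worst-case outstanding commands:", worst_case)  # 1024 in this example

    if worst_case > port_io_queues:
        print("over the port limit - QUEUE FULL errors are possible under load")
    else:
        print("within the port's I/O queue limit")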

I tried increasing ql2xmaxqdepth on our QLogic HBAs from 32 to 64 and
re-ran the Orion benchmark, then re-ran the test a second time with each
setting to verify the results - and I saw a consistent 6-7% increase in IOPS
(and a corresponding decrease in latency).  However, this only kicked in once
I had about 150 threads.

Anywho...  I'll probably post some pics in a blog post or something.  Thanks
for the feedback!

-Jeremy


On 1/30/08, Matthew Zito <mzito@xxxxxxxxxxx> wrote:
>
>  Yeah, the biggest thing you want to avoid is getting rejected IOs from
> the storage array.  I can't speak specifically for 3par, but the Symmetrix's
> architecture has specific limitations based on how it shuffles IOs from
> front-end adapters to disk adapters.  I know 3par has some best practices
> guides I've seen for RH3, and I'm sure they've updated them at least for
> RHEL4.  I believe they included increasing the ql2xmaxqdepth parameter.
>
>
>
> Thanks,
>
> Matt
>
>
>  ------------------------------
>
> From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx] On Behalf Of Jeremy Schneider
> Sent: Wednesday, January 30, 2008 3:53 PM
> To: oracle-l
> Subject: Queue depth on QLogic/3par
>
>
>
> Just wondering if anyone has done much with tweaking the queue depth on
> the HBA.  I'm running some benchmarks with Oracle's Orion tool
> (Redhat5/Opteron/QLogic/3par) and noticed that with the small random I/O
> tests the bottleneck really seems to be the queue depth - even with hundreds
> of threads iostat reports sub-millisecond service time yet shows long queues
> and long wait times.  Since the database being migrated to this new system
> is like 95% "sequential read" judging from the wait events, I figure there
> will be a lot of these small I/O requests, so it seems worth tuning.
>
> There seems to be an option "ql2xmaxqdepth" on the QLogic driver that
> controls the queue depth.  It defaults to 32 but I saw some chatter on the
> VMware forums about increasing it to 64.  But I also saw a note for
> Solaris/Symmetrix saying something about using a max queue depth less than
> or equal to the max queue depth of the LUN (perhaps configured on the SAN?)
> to avoid commands getting rejected by a full queue condition.
>
> I'm not sure exactly how all of this works internally - can different
> HBAs or SANs only handle a certain max queue depth?  Has anyone else
> experimented with different queue depths?  Does this change something on the
> HBA or just in kernel-land?  I think I'm going to try increasing it and
> re-run the Orion benchmark to see if I get any errors or performance
> difference - but I'm curious what others' experience has been.
>
> -Jeremy
>
>


-- 
Jeremy Schneider
Chicago, IL
http://www.ardentperf.com/category/technical
