Re: Orion, NetApp, and iops

  • From: Jakub Wartak <vnulllists@xxxxxxxxxxxx>
  • To: oracle-l@xxxxxxxxxxxxx, henry@xxxxxxxxxxxxxxx
  • Date: Mon, 3 May 2010 23:21:17 +0200

On Monday, 3 May 2010 at 18:24, Henry Poras wrote:
> I was taking some preliminary Orion runs on a new system just handed off
> to us. Orion ran and all the lineshapes looked about right (iops
> increased linearly with load until saturation) and really pretty, but
> there is some stuff I can't explain.
>
> I was running against one LUN (made up of 3 aggregates) comprised of 48
> disks (15K). Each disk can provide ~180 iops so I was expecting
> saturation at ~8640 iops.
>
> orion -write 0 -num_disks 100 -duration 120 -num_large 0 (100% read, all
> 8K reads, -num_disks is high enough to crank the load, increase duration
> to allow enough aio calls in flight)
>
> So for 8K reads and 100% reads I got my iops to saturate at ~67,000
> (instead of the expected 8640).
>
> -Was it a bug in Orion?   I looked at iostat and it was consistent with
> Orion.
> -Was it read cache on the NetApp? Maybe, but I saturated at a load of
> ~35-50 which is consistent with my number of disks. This implies to me
> that I am actually hitting the disks.
> -Could it be some funny read ahead algorithm?  Maybe Orion thinks it is
> doing 8 i/o calls, but the NetApp is doing some read ahead so only one
> call is an actual PIO and the rest are from cache. How can I test this?
> Is this an anomaly of the experiment? How could I disable this to get a
> better feel for production (or maybe this would still happen in
> production?)
> -Any other thoughts?
>
> I don't like numbers without a model within which to understand them and
> storage is a bit of a black box to me.


We need more data, basically: OS and storage protocol (FC, iSCSI?), versions 
(OS, Data ONTAP), LVM configuration and NetApp model/configuration (the output 
of the sysconfig command over SSH on the NetApp too), the I/O scheduler in use 
if this is Linux, vmstat & iostat output captured during the run, etc. NetApp 
has many optimization settings (like no_update_atime on volumes), FlexScale & 
PAM & PAM-II flash accelerators (not to mention NVRAM write acceleration...), 
whether deduplication was turned on, etc.
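
If this is Linux, something along these lines would capture the OS side while 
Orion runs (sdb here is just a placeholder for whatever device your LUN shows 
up as):

    # which I/O scheduler the LUN device is using
    cat /sys/block/sdb/queue/scheduler

    # collect OS-level stats for the whole run; stop afterwards with: kill %1 %2
    vmstat 1 > vmstat_run.log &
    iostat -xk 1 > iostat_run.log &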

There are a couple of shortcut commands you can run on the filer that could 
solve the mystery: "lun show -o -i 1 /vol/path/to/lun" and "stats show 
disk:*:disk_busy". The second one will show you how heavily the disks are 
actually utilised.
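
As a rough sanity check on the figures in your mail: 67,000 reads/s at 8 KB 
each is on the order of 520 MB/s of read throughput:

    # back-of-envelope: 67000 x 8 KB reads per second, in MB/s
    echo $(( 67000 * 8 / 1024 ))    # ~523 MB/s

If disk_busy stays low while Orion reports that rate, the reads are being 
served from the filer cache / read-ahead rather than from the spindles.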

BTW: a LUN can only live inside a WAFL volume, and a volume can only sit on a 
single aggregate. So my understanding is that the LUN is in a single aggregate 
that has 3 RAID groups of 16 disks each, correct? As far as I remember, the 
IOPS calculation for RAID-DP is not as simple as 48*180.
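
If that layout is right, then as a rough sketch (assuming ~180 IOPS per disk 
as in your mail, 2 dedicated parity disks per RAID-DP group, and that the 
parity disks don't serve data reads):

    # 3 RAID-DP groups x (16 disks - 2 parity) x ~180 IOPS per 15K disk
    echo $(( 3 * (16 - 2) * 180 ))    # ~7560 random read IOPS from the spindles

which is still nowhere near 67,000, so cache / read-ahead on the filer looks 
like the more likely explanation.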

-- 
Jakub Wartak
http://jakub.wartak.pl/blog
--
//www.freelists.org/webpage/oracle-l

