Re: iostat output

  • From: Matthew Zito <mzito@xxxxxxxxxxx>
  • To: oracle-l@xxxxxxxxxxxxx
  • Date: Wed, 28 Apr 2004 21:01:56 -0400

Well, unless I mistake its shape and making quite, it is that shrewd 
and knavish sprite of an iSCSI implementation on Linux.  There are a 
couple of potential problems here, so I'll go through some diagnostic 
stuff step by step:

-Are you using a hardware iSCSI card?  Our internal testing here at 
GridApp shows that a dual-CPU host can push roughly (very roughly!) 40 
MB/sec of I/O over a dedicated GigE link without hardware offload.
-What is the CPU utilization like on your host side while this I/O is 
happening?
-How large are the volumes on the filer side that the LUNs are being 
presented from?  sysconfig -r will show this.
-Also, look at the disk utilization on the filer side.  Use the priv 
set advanced command, followed by statit -b (wait a minute or so during 
a period of heavy I/O), then run statit -e.  Take a look at the data 
you get back from the filer.
-Jumbo frames?  Those should be a requirement.
-Also, iostat on Linux is known to lie, especially with nonstandard 
block devices - are you actually seeing performance problems?  Your 
iostat output does show a _lot_ of I/O.
-Finally, why iSCSI?  Why not just use NFS?
-What type of filer?
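For the filer-side step, the capture sequence looks roughly like this at 
the ONTAP console (prompt shown as filer> here; your prompt will be your 
filer's hostname):

```
filer> priv set advanced         # enable the advanced command set
filer*> statit -b                # begin collecting per-disk statistics
    ... wait a minute or so while the heavy I/O is running ...
filer*> statit -e                # end collection and print the report
filer*> priv set admin           # drop back to normal privilege
```

The interesting parts of the statit report are the per-disk utilization 
and chain lengths - if a handful of spindles are pegged while the rest 
idle, your volume/LUN layout is the problem, not iSCSI itself.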

You can respond to me directly if you like - unless the list at large 
is interested in iSCSI on netapp.

Thanks,
Matt


--
Matthew Zito
GridApp Systems
Email: mzito@xxxxxxxxxxx
Cell: 646-220-3551
Phone: 212-358-8211 x 359
http://www.gridapp.com


On Apr 28, 2004, at 8:10 PM, Sai Selvaganesan wrote:

> hi
>
> the following is the output of iostat on a newly configured machine 
> with a NetApp filer over iSCSI, running an Oracle 9i database (1 TB 
> in size).
>
> Device:     rrqm/s  wrqm/s      r/s    w/s     rsec/s  wsec/s  avgrq-sz  avgqu-sz   await  svctm   %util
> sda         807.31    2.99   252.82   1.33    8486.38   34.55     33.53      8.50   33.07   5.23   13.29
> sda1        807.31    2.99   252.82   1.33    8486.38   34.55     33.53      8.50   33.07   5.23   13.29
> sdb        2791.03    0.33  1055.81   3.99   30790.70   34.55     29.09     44.12   41.13   8.03   85.05
> sdb1       2791.03    0.33  1055.81   3.99   30790.70   34.55     29.09     44.12   41.13   8.03   85.05
> sdc           0.00    0.00     0.00   0.00       0.00    0.00      0.00      0.00    0.00   0.00    0.00
> sda        6446.67    3.00  1444.00   1.33   63122.67   34.67     43.70     91.73   63.49   6.90   99.67
> sda1       6446.67    3.00  1444.00   1.33   63122.67   34.67     43.70     91.73   63.49   6.90   99.67
> sdb        2137.67    0.33   587.33   2.33   21826.67   21.33     37.05     32.13   54.78  13.23   78.00
> sdb1       2137.67    0.33   587.33   2.33   21826.67   21.33     37.05     32.13   54.78  13.23   78.00
> sdc           0.00    0.00     0.00   0.00       0.00    0.00      0.00      0.00    0.00   0.00    0.00
> sda         682.67    2.00   201.67   1.33    7080.00   34.67     35.05     14.90   73.23  14.61   29.67
> sda1        682.67    2.00   201.67   1.33    7080.00   34.67     35.05     14.90   73.23  14.61   29.67
> sdb       11828.67    0.00  2243.33   1.67  112573.33   13.33     50.15    125.23   55.68   4.45  100.00
> sdb1      11828.67    0.00  2243.33   1.67  112573.33   13.33     50.15    125.23   55.68   4.45  100.00
> sdc           0.00    0.00     0.00   0.00       0.00    0.00      0.00      0.00    0.00   0.00    0.00
> sda        1878.33    3.00   480.33   1.00   18864.00   32.00     39.26     36.10   75.07  20.43   98.33
> sda1       1878.33    3.00   480.33   1.00   18864.00   32.00     39.26     36.10   75.07  20.43   98.33
> sdb       11763.33    0.00  1908.00   1.00  109365.33    8.00     57.29    129.90   68.20   5.24  100.00
> sdb1      11763.33    0.00  1908.00   1.00  109365.33    8.00     57.29    129.90   68.20   5.24  100.00
> sdc           0.00    0.00     0.00   0.00       0.00    0.00      0.00      0.00    0.00   0.00    0.00
>
> i have bolded the avg wait, service time and % utilization, and i 
> think, given the above stats, the i/o is bad on this configuration. 
> the wait times are very high, and there is a problem with the way the 
> physical configuration of the disks is done and how the datafiles are 
> laid out.
>
> please advise me on this and let me know whether my observation is 
> right, and also what could be a reason for this to happen.
>
> thanks
> sai
>
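If it helps, here's a rough Python sketch of how I'd machine-scan iostat 
output like yours for devices with suspicious await or %util. The column 
layout is assumed to match your paste (11 numeric columns after the 
device name), and the thresholds (30 ms await, 80% util) are just 
illustrative, not gospel:

```python
# Quick-and-dirty filter for iostat -x style output: flag devices whose
# await or %util suggest saturation.  Columns assumed: rrqm/s wrqm/s r/s
# w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util.

SAMPLE = """\
sda      807.31   2.99  252.82  1.33   8486.38  34.55  33.53   8.50  33.07  5.23  13.29
sdb     2791.03   0.33 1055.81  3.99  30790.70  34.55  29.09  44.12  41.13  8.03  85.05
sdc        0.00   0.00    0.00  0.00      0.00   0.00   0.00   0.00   0.00  0.00   0.00
"""

def busy_devices(text, await_ms=30.0, util_pct=80.0):
    """Return (device, await, %util) for rows breaching either threshold."""
    flagged = []
    for line in text.splitlines():
        fields = line.split()
        if len(fields) != 12:
            continue  # skip headers, blanks, wrapped fragments
        dev = fields[0]
        vals = [float(f) for f in fields[1:]]
        avg_wait, util = vals[8], vals[10]
        if avg_wait > await_ms or util > util_pct:
            flagged.append((dev, avg_wait, util))
    return flagged

for dev, w, u in busy_devices(SAMPLE):
    print(f"{dev}: await={w}ms util={u}%")
```

Run against your first sample it flags sda (await 33 ms) and sdb (85% 
busy, 41 ms waits) - which matches your read that the waits are high, 
though remember the earlier caveat that Linux iostat can lie about 
nonstandard block devices.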

----------------------------------------------------------------
Please see the official ORACLE-L FAQ: http://www.orafaq.com
----------------------------------------------------------------
To unsubscribe send email to:  oracle-l-request@xxxxxxxxxxxxx
put 'unsubscribe' in the subject line.
--
Archives are at //www.freelists.org/archives/oracle-l/
FAQ is at //www.freelists.org/help/fom-serve/cache/1.html
-----------------------------------------------------------------
