RE: I/O and db_file_multiblock_read_count

  • From: "Kevin Closson" <kevinc@xxxxxxxxxxxxx>
  • To: "oracle-l" <oracle-l@xxxxxxxxxxxxx>
  • Date: Mon, 11 Dec 2006 13:19:28 -0800

OK, we should all throw our dd(1) microbenchmark results out there... This
is a DL-585 with 2Gb FCP to a PolyServe CFS mounted in direct I/O mode. The
single LUN is RAID 1+0 with a 1MB stripe width, striped across 65 15K RPM
drives (hey, I get to play with nice toys...)
 
The file is 16GB 
 
$ time dd if=f1 of=/dev/zero bs=1024k
16384+0 records in
16384+0 records out
 
real    1m47.220s
user    0m0.009s
sys     0m5.175s
$ time dd if=f1 of=/dev/zero bs=128k
131072+0 records in
131072+0 records out
 
real    2m52.157s
user    0m0.056s
sys     0m7.126s
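 
Tying those two transfer sizes back to the thread subject: assuming an 8KB
database block size (my assumption, it is not stated in this post), the 1MB
and 128KB reads line up with db_file_multiblock_read_count settings of 128
and 16 respectively:
 
$ echo "1024/8" | bc
128
$ echo "128/8" | bc
16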

For grins I threw in a huge I/O size (yes, this is actually issuing 8MB
blocking reads)
 
$
$ time dd if=f1 of=/dev/zero bs=8192k
2048+0 records in
2048+0 records out
 
real    1m32.710s
user    0m0.002s
sys     0m3.984s
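 
Back-of-the-envelope throughput from those elapsed times (16384 MB moved in
each run; the bc one-liners are just my arithmetic, not additional
measurements):
 
$ echo "scale=1; 16384/107.22" | bc
152.8
$ echo "scale=1; 16384/172.157" | bc
95.1
$ echo "scale=1; 16384/92.71" | bc
176.7
 
So roughly 153 MB/s with 1MB reads, 95 MB/s with 128KB reads, and 177 MB/s
with 8MB reads.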

 
Large I/Os get chopped up in the SCSI midlayer of Linux, but what is likely
happening if you get less throughput with larger I/Os is that you have few
drives and a stripe width that causes each disk to be hit more than once for
every I/O (that is bad).
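 
If you want to see where the midlayer will split a big request on your own
box, the block queue limits in sysfs show the per-request cap (assuming a
2.6 kernel with sysfs; substitute your own device name for sdc):
 
$ cat /sys/block/sdc/queue/max_sectors_kb
$ cat /sys/block/sdc/queue/max_hw_sectors_kb
 
The stripe math is just I/O size over stripe width, e.g. an 8MB read against
a 1MB stripe width touches 8 stripe units, so on a 4-drive stripe each disk
would get hit twice for that single I/O:
 
$ echo "8192/1024" | bc
8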
 
 
 
