Re: I/O and db_file_multiblock_read_count

  • From: "Vlad Sadilovskiy" <vlovsky@xxxxxxxxx>
  • To: kevinc@xxxxxxxxxxxxx
  • Date: Mon, 11 Dec 2006 16:29:22 -0500

How does the number of disk hits for larger I/O correspond to the number of
hits for smaller I/O, with respect to the whole process? It should not be
more, should it? Unless the same data gets read twice.

On 12/11/06, Kevin Closson <kevinc@xxxxxxxxxxxxx> wrote:

 OK, we should all throw our dd(1) microbenchmark results out there... This
is a DL-585 with 2Gb FCP to a PolyServe CFS mounted in direct I/O mode. The
single LUN is RAID 1+0, st_width 1MB, striped across 65 15K RPM drives (hey,
I get to play with nice toys...)

The file is 16GB

$ time dd if=f1 of=/dev/zero bs=1024k
16384+0 records in
16384+0 records out

real    1m47.220s
user    0m0.009s
sys     0m5.175s
$ time dd if=f1 of=/dev/zero bs=128k
131072+0 records in
131072+0 records out

real    2m52.157s
user    0m0.056s
sys     0m7.126s
For grins I threw in huge I/O sizes (yes, this is actually issuing 8MB
blocking reads)

$
$ time dd if=f1 of=/dev/zero bs=8192k
2048+0 records in
2048+0 records out

real    1m32.710s
user    0m0.002s
sys     0m3.984s

Large I/Os get chopped up in the SCSI midlayer of Linux, but what is likely
happening if you get less throughput with larger I/Os is that you have few
drives and a stripe width that causes each disk to be hit more than once for
every I/O (that is bad).
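The stripe effect described above can be sketched with a little arithmetic.
The disk count here is hypothetical (a small array, not the 65-drive LUN
above); the point is that once an I/O spans more chunks than there are
member disks, the chunks wrap around and some disks get hit twice per I/O:

```shell
# Hypothetical stripe math: member-disk hits generated by one large read
io_size=8192      # KB: one 8MB read
stripe_unit=1024  # KB: 1MB stripe unit, as on the array above
ndisks=4          # hypothetical small array
chunks=$(( (io_size + stripe_unit - 1) / stripe_unit ))
hits_per_disk=$(( (chunks + ndisks - 1) / ndisks ))
echo "$chunks chunks, up to $hits_per_disk hits per disk per I/O"
# → 8 chunks, up to 2 hits per disk per I/O
```

With 65 drives the same 8MB read touches 8 distinct disks once each, which
is why the big array in this thread does not show the penalty.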

