We also use a Hitachi SAN. I ran dd with 64K and 1M block sizes, and the
reads with the 1M block size were slightly faster. This is on Solaris 8 /
Sun Fire 15K. Strangely, the sys time is much lower with the 1M block size.
Is there an explanation for that?

Keith

orapls> time dd if=polaris_data_04.dbf of=/dev/null bs=65536
163840+1 records in
163840+1 records out

real    4m15.57s
user    0m0.85s
sys     2m37.34s

ddcspora07 - pol2p - /plsoraprd01/u03/oradata/pol2p
orapls> time dd if=polaris_data_04.dbf of=/dev/null bs=1048576
10240+1 records in
10240+1 records out

real    3m59.18s
user    0m0.02s
sys     0m21.93s

> do the "same" thing using dd. Whatever your db_block_size is, plug it in
> as follows:
>
> $ time dd if=<datafile_for_the_tablespace> of=/dev/null \
>   bs=<block_size_in_bytes*16>
>
> then re-run:
>
> $ time dd if=<datafile_for_the_tablespace> of=/dev/null \
>   bs=<block_size_in_bytes*128>
>
> please let me know what you find
>
> ________________________________
>
> From: oracle-l-bounce@xxxxxxxxxxxxx
> [mailto:oracle-l-bounce@xxxxxxxxxxxxx] On Behalf Of Kevin Lidh
> Sent: Friday, December 08, 2006 12:46 PM
> To: oracle-l
> Subject: I/O and db_file_multiblock_read_count
>
> I was reading an article about the appropriate setting for
> db_file_multiblock_read_count. I'm on an HP-UX 11.11 64-bit system with
> Oracle 9.2.0.7.0. The original value was 16; I bounced the database and
> ran a million-record full-scan test (10046 trace), then set the value to
> 128 (the max value) and re-ran the

-- //www.freelists.org/webpage/oracle-l
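On the sys-time question: with bs=1M, dd issues roughly 16x fewer read() system calls than with bs=64K for the same amount of data, and per-call kernel overhead is what sys time mostly measures, so the drop is plausible. The comparison from the thread can be scripted so both block sizes run back to back. A minimal sketch, assuming a scratch file stands in for a real datafile; the path, file size, and block sizes below are placeholders, not values from the thread, and should be replaced with your datafile and db_block_size*16 / db_block_size*128:

```shell
#!/bin/bash
# Hypothetical sketch of the dd block-size comparison. DATAFILE, the
# scratch-file size, and the block sizes are assumptions for a
# self-contained demo, not production values.
DATAFILE=${1:-/tmp/dd_bs_demo}

# Create a 64 MB scratch file if no datafile was supplied.
[ -f "$DATAFILE" ] || dd if=/dev/zero of="$DATAFILE" bs=1048576 count=64 2>/dev/null

for bs in 65536 1048576; do
    echo "--- bs=$bs ---"
    # A larger bs means fewer read() syscalls for the same data,
    # which shows up mainly as lower sys time.
    time dd if="$DATAFILE" of=/dev/null bs="$bs"
done
```

Note that a second pass over the same file may be served from the filesystem cache; for numbers comparable to the thread's, run each block size against a cold cache (or a file much larger than RAM).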