Adding fuel to the fire. This is on a somewhat busy dual-CPU Linux box. The SAN is a Clariion CX700. The disk is a dedicated RAID 10, though I'm not positive about the number of spindles; I believe there are 6. The command was run once, then both were run, to account for SAN caching.

time dd if=/u01/oradata/dv04/cimindxsmall_01.dbf of=/dev/null bs=1048576
1500+1 records in
1500+1 records out

real    0m1.207s
user    0m0.002s
sys     0m1.205s

time dd if=/u01/oradata/dv04/cimindxsmall_01.dbf of=/dev/null bs=65536
24000+1 records in
24000+1 records out

real    0m0.744s
user    0m0.014s
sys     0m0.729s

Interesting that the 64K block size significantly outperforms the 1M block size.

On 12/11/06, Keith Moore <kmoore@xxxxxxxxxxxx> wrote:
We also use Hitachi SANs. I ran dd with 64K and 1M block sizes, and the reads with the 1M block size were slightly faster. This is on Solaris 8 / Sun Fire 15K.

Strangely, the sys time is much lower with the 1M block size. Is there an explanation for that?

Keith

orapls> time dd if=polaris_data_04.dbf of=/dev/null bs=65536
163840+1 records in
163840+1 records out

real    4m15.57s
user    0m0.85s
sys     2m37.34s

ddcspora07 - pol2p - /plsoraprd01/u03/oradata/pol2p
orapls> time dd if=polaris_data_04.dbf of=/dev/null bs=1048576
10240+1 records in
10240+1 records out

real    3m59.18s
user    0m0.02s
sys     0m21.93s

> do the "same" thing using dd. Whatever your db_block_size is, plug it in
> as follows:
>
> $ time dd if=<datafile_for_the_tablespace> of=/dev/null bs=<block_size_in_bytes*16>
>
> then re-run:
>
> $ time dd if=<datafile_for_the_tablespace> of=/dev/null bs=<block_size_in_bytes*128>
>
> please let me know what you find
>
> ________________________________
>
> From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx] On Behalf Of Kevin Lidh
> Sent: Friday, December 08, 2006 12:46 PM
> To: oracle-l
> Subject: I/O and db_file_multiblock_read_count
>
> I was reading an article about the appropriate setting for
> db_file_multiblock_read_count. I'm on an HP-UX 11.11 64-bit system with
> Oracle 9.2.0.7.0. The original value was 16 and I bounced the database
> and ran a million-record full-scan test (10046 trace), then set the
> value to 128 (the max value) and re-ran the

--
//www.freelists.org/webpage/oracle-l
--
Jared Still
Certifiable Oracle DBA and Part Time Perl Evangelist