At this time, we don't have system statistics. I built a table with over a million rows and didn't give it an index. I then ran a select that was the equivalent of "SELECT *" to force a full-table scan. My goal was/is to find the optimal setting for the db_file_multiblock_read_count parameter.

I was mostly just surprised that it was faster to grab the same group of data in pieces than in one fell swoop. My guess was that the underlying physical I/O (not what Oracle perceived as an I/O) was actually multiple reads from the disk, and that the process of presenting them as a single response accounted for the additional time.

On Sat, 2006-12-09 at 00:35 -0500, Mladen Gogala wrote:
> On 12/08/2006 03:46:00 PM, Kevin Lidh wrote:
> > I was reading an article about the appropriate setting for
> > db_file_multiblock_read_count. I'm on an HP-UX 11.11 64-bit system with
> > Oracle 9.2.0.7.0. The original value was 16; I bounced the database and
> > ran a million-record full-scan test (10046 trace), then set the value to
> > 128 (max value) and re-ran the test. Although I did have fewer calls to 'db
> > file scattered read' (140 compared to 1008), the time to get the same number
> > of blocks was longer. A random example from the trace files looks like
> > this:
>
> Of course, you did run dbms_stats.gather_system_stats?
> --
//www.freelists.org/webpage/oracle-l
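For reference, a sketch of the test procedure discussed in this thread. The table name BIG_TABLE is hypothetical, and this runs everything at session level rather than bouncing the instance; syntax is per Oracle 9.2, where both the parameter and the 10046 event are session-modifiable:

```sql
-- Gather system statistics first, as Mladen suggests (workload mode,
-- sampling for a 30-minute interval; 'NOWORKLOAD' is another option):
EXEC DBMS_STATS.GATHER_SYSTEM_STATS('interval', interval => 30);

-- Try a candidate value for the read-ahead size at session level:
ALTER SESSION SET db_file_multiblock_read_count = 128;

-- Enable a 10046 trace at level 8 (includes wait events such as
-- 'db file scattered read'), then force a full-table scan:
ALTER SESSION SET events '10046 trace name context forever, level 8';
SELECT /*+ FULL(t) */ * FROM big_table t;
ALTER SESSION SET events '10046 trace name context off';
```

Repeating the scan for each candidate value and comparing the elapsed time and wait counts in the resulting trace files (e.g. via tkprof) is the comparison described above.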