On 3/30/06, Riyaj Shamsudeen <rshamsud@xxxxxxxxxxxx> wrote:
>
> BTW, I am surprised to find some inefficiencies in the dbms_stats trace
> file (this is probably old news for many folks!): Oracle is trying to run
> a few SQL statements a couple of times. For example, a SQL statement used
> a sample clause of 1%, and then Oracle went ahead and executed the same
> SQL with a sample clause of 10% (for a larger partition) or 100%. The
> bottom line is that the work is done twice. It almost looks like some
> kind of sanity check is done in the dbms_stats code to see whether the
> statistics are dependable or not, and the SQL is then executed again with
> higher precision.

Check the number of rows in the partition.

I haven't yet tried this with partitions, but on regular heap tables, the
second scan is no longer used after the table reaches about 270k rows in
size. (This was with block sampling.)

Oracle uses the second scan to get more accurate statistics when it
determines that the table is not large enough for the sample size asked
for. Try it with different numbers of rows in the table, and you will see
that the sample size of the second scan decreases with an increase in the
number of rows. The threshold seems to be row-dependent rather than
block-dependent.

--
Jared Still
Certifiable Oracle DBA and Part Time Perl Evangelist
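The behavior described above can be sketched as a small simulation: if the requested sample percentage would return too few rows for dependable statistics, rescan at a larger percentage. The MIN_SAMPLE_ROWS threshold and the 10x escalation steps are assumptions for illustration only, not Oracle's actual internal values (though 2500 rows at a 1% sample would put the single-scan cutoff near 250k rows, loosely matching the ~270k observation):

```python
# Hypothetical sketch of dbms_stats-style adaptive resampling.
# MIN_SAMPLE_ROWS and the 10x escalation are assumed values for
# illustration; they are not Oracle's documented internals.

MIN_SAMPLE_ROWS = 2500  # assumed minimum sampled rows for dependable stats

def scans_for(num_rows, requested_pct):
    """Return the sequence of sample percentages such logic might run."""
    scans = [requested_pct]
    pct = requested_pct
    # If the sample yields too few rows, rescan at a larger percentage,
    # e.g. 1% -> 10% -> 100%, stopping once the sample is big enough.
    while num_rows * pct / 100.0 < MIN_SAMPLE_ROWS and pct < 100:
        pct = min(pct * 10, 100)
        scans.append(pct)
    return scans

# A small table gets scanned more than once; a large one only once.
print(scans_for(10_000, 1))    # small table: [1, 10, 100], work done twice+
print(scans_for(500_000, 1))   # past the threshold: [1], a single scan
```

Under these assumed numbers, the second scan's sample percentage shrinks as the row count grows, consistent with the observation that the threshold is row-dependent rather than block-dependent.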