Re: Better cardinality estimate when dialing optimizer_features_enable back

  • From: Randolf Geist <info@xxxxxxxxxxxxxxxxxxxxx>
  • To: Randolf Geist <info@xxxxxxxxxxxxxxxxxxxxx>, Neil Kodner <nkodner@xxxxxxxxx>
  • Date: Wed, 25 Nov 2009 17:51:04 +0100

> Well look what I found while poking around sys.wri$_optstat_histhead_
> history
> (screenshot)
> Could this have been a function of sample size changing? Could 
> parallel have anything to do with it?

Since the density itself has so far not been used in the cardinality estimate 
calculation, this is merely an indicator that "something" has changed - the old 
density is calculated from the generated histogram. So a significant change in 
the density might indicate that the underlying histogram has changed.
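As a quick check, the current density, histogram type and sample size can be inspected in the dictionary. This is only a sketch - the owner, table and column names below are placeholders for your environment:

```sql
-- Hypothetical owner/table/column names; adjust to your environment.
select column_name,
       density,
       num_distinct,
       num_buckets,
       histogram,
       sample_size,
       last_analyzed
from   dba_tab_col_statistics
where  owner       = 'APP_OWNER'
and    table_name  = 'BATCH_TABLE'
and    column_name = 'STATUS_COL';
```

Comparing DENSITY and SAMPLE_SIZE here against the values you found in the history table should tell you whether the current statistics are the outlier.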

That was one of the questions I hadn't raised yet: what could have changed? From 
your comments I already had the impression that there was a time when the batch 
job performed better.

You can try to restore the old statistics from the statistics history to see if 
there were significant differences in the histogram generated and the sample 
size used to gather the statistics along with the number of distinct values 
gathered. This way you could also check what NewDensity got calculated with the 
old statistics and what the estimates looked like for your example queries.
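One way to do this is DBMS_STATS.RESTORE_TABLE_STATS, then re-checking the histogram endpoints. Again a sketch, not a recipe - the owner, table, column and timestamp are placeholders:

```sql
-- Hypothetical owner/table and an arbitrary point in time;
-- NO_INVALIDATE => TRUE avoids invalidating cursors while testing.
begin
  dbms_stats.restore_table_stats(
    ownname         => 'APP_OWNER',
    tabname         => 'BATCH_TABLE',
    as_of_timestamp => systimestamp - interval '30' day,
    no_invalidate   => true
  );
end;
/

-- Compare the restored histogram endpoints with the current ones
select endpoint_number, endpoint_value
from   dba_tab_histograms
where  owner       = 'APP_OWNER'
and    table_name  = 'BATCH_TABLE'
and    column_name = 'STATUS_COL'
order  by endpoint_number;
```

After restoring you can re-run the example queries (or just EXPLAIN PLAN them) to see the estimates and the NewDensity reported in a 10053 trace with the old statistics in place, and then restore the current statistics the same way.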

The slowly increasing density suggests that the underlying data might have 
gradually changed in a way that altered the shape of the height-balanced 
histogram, so that the density calculation now yields a significantly different 
result.

Oracle related stuff blog:

Co-author of the forthcoming "OakTable Expert Oracle Practices" book:

