Re: Better cardinality estimate when dialing optimizer_features_enable back

  • From: Greg Rahn <greg@xxxxxxxxxxxxxxxxxx>
  • To: Neil Kodner <nkodner@xxxxxxxxx>
  • Date: Wed, 25 Nov 2009 08:27:22 -0800

Did the dbms_stats.gather_* command change there?
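One way to check is to look at the stats history.  This is only a
sketch -- the WRI$ tables are SYS-internal and undocumented, so the
column layout can vary by version, and SCOTT/EMP are placeholders for
your own owner/table:

-- which gather operations ran recently (supported view)
select operation, target, start_time, end_time
  from dba_optstat_operations
 order by start_time desc;

-- saved column-level history: did NDV and sample size move together?
select o.name table_name, c.name column_name,
       h.savtime, h.distcnt, h.sample_size
  from sys.wri$_optstat_histhead_history h,
       sys.obj$ o,
       sys.col$ c
 where h.obj#    = o.obj#
   and c.obj#    = o.obj#
   and h.intcol# = c.intcol#
   and o.name    = 'EMP'
 order by c.name, h.savtime;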

What may be happening is that the sample size chosen does not see as
many of the values that occur only a few times.  As a result the NDV
is lower, which raises the cardinality estimate, which in turn would
likely turn the NLJ into a HJ, as seen in the examples you provided.
Using a 100% sample guarantees that all the distinct values are seen,
so the NDV would naturally be higher.  This should be fairly easy to
validate in a test environment by adjusting the dbms_stats.gather_*
parameters and looking at the NDV, cardinality and plans for the
queries.
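
Something along these lines would do it (again just a sketch --
SCOTT/EMP and the method_opt are placeholders to adapt to your own
tables and the columns driving the join):

-- gather with the suspected smaller sample (or auto_sample_size)
begin
  dbms_stats.gather_table_stats(
    ownname          => 'SCOTT',
    tabname          => 'EMP',
    estimate_percent => dbms_stats.auto_sample_size,
    method_opt       => 'for all columns size 1',
    cascade          => true);
end;
/

-- note the NDV and sample size per column
select column_name, num_distinct, sample_size, last_analyzed
  from dba_tab_col_statistics
 where owner = 'SCOTT' and table_name = 'EMP';

-- re-gather with a 100% sample, then compare the NDV and the plan
begin
  dbms_stats.gather_table_stats(
    ownname          => 'SCOTT',
    tabname          => 'EMP',
    estimate_percent => 100,
    method_opt       => 'for all columns size 1',
    cascade          => true);
end;
/

explain plan for select * from scott.emp where deptno = 10;
select * from table(dbms_xplan.display);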

I also think it is important to recognize that the goal here is not
for absolutely every query to have perfect cardinality estimates, but
rather to have stats that are representative AND yield the desired
execution plans.  Do not become micro-focused solely on the
cardinality estimates.

On Wed, Nov 25, 2009 at 7:29 AM, Neil Kodner <nkodner@xxxxxxxxx> wrote:
> Well look what I found while poking around sys.wri$_optstat_histhead_history
> http://bit.ly/75oNst   (screenshot)
> Could this have been a function of sample size changing?  Could parallel
> have anything to do with it?


-- 
Regards,
Greg Rahn
http://structureddata.org

