Re: Better cardinality estimate when dialing optimizer_features_enable back

  • From: Neil Kodner <nkodner@xxxxxxxxx>
  • To: Randolf Geist <info@xxxxxxxxxxxxxxxxxxxxx>
  • Date: Wed, 25 Nov 2009 08:17:55 -0500

Very interesting - so you're saying that if I set a value for density, even
using the existing value, it ignores the NewDensity value?

Looks like that's the case.  In my QA system, I set density
to 0.000430560539754607 and then I received a more reasonable cardinality
estimate.
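
(For reference, the density was set with DBMS_STATS - a minimal sketch of the
call below, assuming letter_bin lives in the current schema; I'm paraphrasing
rather than pasting the exact command:)

BEGIN
  -- Passing only density; the other statistic parameters are left at their
  -- NULL defaults, which should leave the existing values untouched.
  DBMS_STATS.SET_COLUMN_STATS(
    ownname       => USER,
    tabname       => 'LETTER_BIN',
    colname       => 'PRTY_ID',
    density       => 0.000430560539754607,
    no_invalidate => FALSE);
END;
/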

--but--

Is this actually progress?  I now get a cardinality estimate of 1300
regardless of how many times a prty_id actually appears in letter_bin.  For
instance, I produced a list of prty_ids that appear in letter_bin 12 times,
and every one of them received a cardinality estimate of 1300.  Did setting
density invalidate my statistics?

Truthfully, I have not tested the results against prty_ids with low
letter_bin counts.

Here's a screenshot of the two 10053 trace files compared in FileMerge:
http://bit.ly/8fqpev
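
(For anyone following along, the 10053 traces come from the standard event
syntax, roughly as below - a sketch, not the exact commands I ran:)

ALTER SESSION SET tracefile_identifier = 'ofe_test';
ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';
-- hard parse the statement of interest, e.g. EXPLAIN PLAN FOR <query>
ALTER SESSION SET EVENTS '10053 trace name context off';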

I then went back to my production system, set the OFE to 10.1.0 and ran an
ALLSTATS LAST against a prty_id with low cardinality, and received E-Rows
1674, A-Rows 12.  Explain plan results at:
http://www.neilkodner.com/low_cardinality_production.txt.  I have not
modified the value of density in my production environment.
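
(That test was the usual gather_plan_statistics / DBMS_XPLAN combination - a
minimal sketch below, with the bind and query shape assumed rather than being
the actual production query:)

SELECT /*+ gather_plan_statistics */ *
  FROM letter_bin
 WHERE prty_id = :prty_id;

-- then fetch the runtime row counts for the last execution
SELECT *
  FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));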

A few follow-up questions:
1 - Do I need to re-analyze my table or index after setting density?
1a - Will this density value get wiped out the next time I gather statistics?
(I plan to check with the query sketched below.)
2 - Is there a certain way I should be gathering statistics?  I've been
subscribing to the 'leave it to the defaults' theory as much as I can.
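
(For 1a, my plan is to re-gather statistics in QA and compare the stored
density before and after with something along these lines - table and column
names as above:)

-- USER_STATS = 'YES' indicates statistics that were set manually
SELECT column_name, num_distinct, density, histogram, last_analyzed,
       user_stats
  FROM user_tab_col_statistics
 WHERE table_name  = 'LETTER_BIN'
   AND column_name = 'PRTY_ID';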





On Wed, Nov 25, 2009 at 7:39 AM, Randolf Geist
<info@xxxxxxxxxxxxxxxxxxxxx> wrote:

> > The value in user_tab_columns.density for letter_bin.prty_id is
> > 0.000430560539754607.
> > Take a look at the screenshot http://bit.ly/6AwFE6
> > The left-hand side is OFE 10.2.0.4 and shows the density used as 4.0482e-06.
> > Compare that with the OFE 10.1.0 version on the right and you'll see
> > it uses 4.3056e-04, which is stored in the data dictionary.
> >
> > Having said that, I'm not sure what to set the density of the column
> > to - I'd appreciate some help with this.
>
> As I said previously - you should set it to the value already shown in the
> dictionary for DENSITY, which in this case would be 4.3056e-04. The other
> value, 4.0482e-06, is the NewDensity, which is not visible in the dictionary.
>
> Setting it to that value should revert to the previous handling of density -
> picking up the one from the dictionary.
>
> Of course you could then set it to different values - if that helps to come
> up with a reasonable estimate for most of your queries.
>
> > Additionally, look at the highlighted Histogram section - HtBal vs.
> > Frequency - could that be a clue?
>
> I think it is a side effect of the different optimizer code paths.
>
> > And finally, I uploaded a few examples of using the dynamic_sampling
> > hint using the simple-case query to
> > http://www.neilkodner.com/dynamic_sampling_results.txt
> >
> > Looks like dynamic_sampling didn't kick in until level 6.
>
> Possibly the sampling results are rejected with lower sampling levels -
> there are some sanity checks applied. This can be seen from the 10053
> optimizer trace file.
>
> It would be interesting to see the EXPLAIN PLAN results with more complex
> queries - in particular those that used a suboptimal plan so far.
>
> Regards,
> Randolf
>
> Oracle related stuff blog:
> http://oracle-randolf.blogspot.com/
>
> Co-author of the forthcoming "OakTable Expert Oracle Practices" book:
> http://www.apress.com/book/view/1430226684
>
> http://www.amazon.com/Expert-Oracle-Practices-Database-Administration/dp/1430226684
>
>
