Re: Better cardinality estimate when dialing optimizer_features_enable back

  • From: Neil Kodner <nkodner@xxxxxxxxx>
  • To: Kellyn Pedersen <kjped1313@xxxxxxxxx>
  • Date: Mon, 30 Nov 2009 13:22:31 -0700

We're using the default value of 2 across all of our 10g instances.  After
what I've learned/read over the last few weeks, I'm sensing that this
isn't always such a good idea.
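
(For anyone following along, checking the current setting is just this -
nothing here is specific to our environment:)

  SQL> show parameter optimizer_dynamic_sampling

  -- or, from any session with access to v$parameter:
  select name, value, isdefault
  from   v$parameter
  where  name = 'optimizer_dynamic_sampling';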

On Sun, Nov 29, 2009 at 8:11 PM, Kellyn Pedersen <kjped1313@xxxxxxxxx> wrote:

> After missing a week, I'm just too tired to go through all the responses to
> this thread, so apologies upfront if I'm repeating a question already
> posted-
>
> What is the optimizer_dynamic_sampling set to?
>
> No matter if updated stats are present or not, I keep hearing that little
> voice in my head saying, "all it takes is the DATABASE deciding a dynamic
> sample needs to be taken, due to the total pressure put on the objects
> involved by all of the code involved, and all the great stats gathering in
> the world goes right out the window..."  :)
>
> I have a puzzle box of code like this, and I had to prove to other folks
> that it was occurring.  I just happened to notice a pattern in which
> packages were being executed whenever an update statement came up with a
> different execution plan than the same statement had in the same process...
> I used this pattern to know when to trace the affected processes, traced
> them all out, and then showed the "OPT_DYN" statement that occurred in the
> trace file.  When this combination of packages occurred, and only when it
> occurred, the update statement had a 70% chance of extensive long-ops, and
> another 10% of the time the select for the cursor would estimate days to
> complete.  It was other combinations of packages and procs on the tables
> that were kicking off the dynamic sampling, which then affected the simple
> update statement at certain times on another table, using two other tables
> as the sources for the cursor.
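>
> (For anyone who wants to catch this themselves, the tracing is just the
> standard extended SQL trace - the sid/serial# values and the trace file
> name below are placeholders:)
>
>   -- enable extended SQL trace for the suspect session
>   begin
>     dbms_monitor.session_trace_enable(session_id => 1234,
>                                       serial_num => 5678,
>                                       waits      => true,
>                                       binds      => true);
>   end;
>   /
>
>   -- then search the trace file for the recursive dynamic sampling SQL;
>   -- in 10g it carries the OPT_DYN_SAMP hint
>   $ grep OPT_DYN_SAMP mydb_ora_12345.trc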
> Something to think about...:)
> Kellyn Pedersen
> Multi-Platform DBA
> I-Behavior Inc.
> http://www.linkedin.com/in/kellynpedersen
>
> "Go away before I replace you with a very small and efficient shell
> script..."
>
>
> --- On Wed, 11/25/09, Neil Kodner <nkodner@xxxxxxxxx> wrote:
>
>
> From: Neil Kodner <nkodner@xxxxxxxxx>
> Subject: Re: Better cardinality estimate when dialing
> optimizer_features_enable back
> To: "Randolf Geist" <info@xxxxxxxxxxxxxxxxxxxxx>
> Cc: "Greg Rahn" <greg@xxxxxxxxxxxxxxxxxx>, "oracle-l-freelists" <
> oracle-l@xxxxxxxxxxxxx>
> Date: Wednesday, November 25, 2009, 12:12 PM
>
>
> As far as things that might have changed statistics-wise, I'm not too sure.
>  I can ask around - there are other DBAs in the org - but I don't think
> anyone would have changed the stats gathering methods.  The
> gather_stats_job runs, but it hasn't touched any of the tables in this
> specific schema.
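>
> (For completeness, a query along these lines shows what the job has and
> hasn't touched - the schema name is a placeholder:)
>
>   select table_name, last_analyzed, stale_stats
>   from   dba_tab_statistics
>   where  owner = 'APP_SCHEMA'
>   order  by last_analyzed nulls first;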
>
> One other thing that Randolf touched upon was the single block read time
> coming out longer than the multi-block read time.  I gathered workload
> statistics again and got a similar result.  Is this worth bringing up to
> the sysadmins?  I've since deleted the system stats and computed noworkload
> statistics instead.
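>
> (For reference, the checks themselves are straightforward - the sanity
> check just reads sys.aux_stats$, where SREADTIM would normally be smaller
> than MREADTIM:)
>
>   select pname, pval1
>   from   sys.aux_stats$
>   where  pname in ('SREADTIM', 'MREADTIM', 'MBRC', 'CPUSPEED');
>
>   -- then the delete and the noworkload gather:
>   exec dbms_stats.delete_system_stats;
>   exec dbms_stats.gather_system_stats('NOWORKLOAD');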
>
> On Wed, Nov 25, 2009 at 11:51 AM, Randolf Geist
> <info@xxxxxxxxxxxxxxxxxxxxx> wrote:
>
>> > Well look what I found while poking around
>> > sys.wri$_optstat_histhead_history
>> >
>> > http://bit.ly/75oNst (screenshot)
>> >
>> > Could this have been a function of sample size changing? Could
>> > parallel have anything to do with it?
>>
>> Since the density itself has so far not been used for the cardinality
>> estimate calculation, this is merely an indicator that "something" has
>> changed - the old density is calculated based on the generated histogram.
>> So a significant change in the density might indicate that the underlying
>> histogram has changed.
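>>
>> (If you want the raw history behind that screenshot, something along
>> these lines works - the table name is a placeholder, and the underlying
>> dictionary objects are version-dependent internals:)
>>
>>   select o.name table_name, c.name column_name,
>>          h.savtime, h.density, h.sample_size, h.distcnt
>>   from   sys.wri$_optstat_histhead_history h,
>>          sys.obj$ o,
>>          sys.col$ c
>>   where  h.obj#    = o.obj#
>>   and    c.obj#    = h.obj#
>>   and    c.intcol# = h.intcol#
>>   and    o.name    = 'YOUR_TABLE'
>>   order  by h.savtime;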
>>
>> That was one of the questions I hadn't raised yet - what could have
>> changed?  From your comments I already had the impression that there was
>> a time when the batch job performed better.
>>
>> You can try to restore the old statistics from the statistics history to
>> see if there were significant differences in the generated histogram, the
>> sample size used to gather the statistics, and the number of distinct
>> values gathered.  This way you could also check what NewDensity got
>> calculated from the old statistics and what the estimates looked like for
>> your example queries.
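>>
>> (A minimal sketch of that restore - owner, table and the 14-day offset
>> are placeholders; it is worth checking first how far back the history
>> goes:)
>>
>>   select dbms_stats.get_stats_history_availability from dual;
>>
>>   begin
>>     dbms_stats.restore_table_stats(
>>       ownname         => 'APP_SCHEMA',
>>       tabname         => 'YOUR_TABLE',
>>       as_of_timestamp => systimestamp - interval '14' day);
>>   end;
>>   /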
>>
>> The slowly increasing density suggests that the underlying data might
>> have slowly changed into something that altered the shape of the
>> height-balanced histogram enough to significantly affect the density
>> calculation.
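>>
>> (Comparing the endpoints of the current histogram against the restored
>> one would show such a change in shape directly - again, the names are
>> placeholders:)
>>
>>   select endpoint_number, endpoint_value
>>   from   dba_histograms
>>   where  owner       = 'APP_SCHEMA'
>>   and    table_name  = 'YOUR_TABLE'
>>   and    column_name = 'YOUR_COLUMN'
>>   order  by endpoint_number;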
>>
>> Regards,
>> Randolf
>>
>> Oracle related stuff blog:
>> http://oracle-randolf.blogspot.com/
>>
>> Co-author of the forthcoming "OakTable Expert Oracle Practices" book:
>> http://www.apress.com/book/view/1430226684
>>
>> http://www.amazon.com/Expert-Oracle-Practices-Database-Administration/dp/1430226684
>>
>>
>
>
