Re: Analyze gather statistics automated in 10g?

Hi,

J. Dex schrieb:
> (specifically 10.2.0.3) or is there still something that needs to be set
> up to get it to regularly run?   What needs to be set up?  How do you
> set it up?
>  
> In the past, we have had a cron job that runs weekly that does the
> following:
> exec dbms_stats.gather_database_stats(estimate_percent=>20,cascade=>true);
>  
> How is everybody else handling this?

Yes, Oracle gathers statistics on its own, with some default values
described by John in the parallel post.
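
If I remember correctly, in 10g this is the GATHER_STATS_JOB scheduler
job, running in the default maintenance windows. You can check whether
it is enabled and when it runs with something like this (a sketch, run
as a DBA user):

  SELECT job_name, enabled, schedule_name
    FROM dba_scheduler_jobs
   WHERE job_name = 'GATHER_STATS_JOB';

  SELECT window_name, repeat_interval, duration
    FROM dba_scheduler_windows
   WHERE window_name IN ('WEEKNIGHT_WINDOW', 'WEEKEND_WINDOW');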

But IMO the major disadvantage of these nightly operations is the
default time window they run in. For your particular case, you will
have to check carefully whether statistics/histograms collected in the
middle of the night actually fit your needs. Especially with OLTP
databases, the opposite may be true:

Imagine a table of orders. The company tries to finish all open orders
before quitting time, so only a small minority of orders will still be
in "order-is-still-open" status when the stats collector comes along
and gathers statistics. It might even be misled into creating
histograms based on that status column's contents. (For example: 1 row
is OPEN, 1000 rows are CLOSED; the few remaining OPEN rows will be
treated as massive data skew, as outliers.) If you then insert lots of
open orders (or reopen old ones) in the morning, all stats and
histograms for that column are crap, and your query plans will honor
that. :)
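
One way around such a misleading histogram would be a manual gather
with a method_opt that skips histograms altogether, roughly like this
(ORDERS is just a placeholder table name, adjust method_opt to your
own needs):

  begin
    dbms_stats.gather_table_stats(
      ownname    => user,
      tabname    => 'ORDERS',                 -- placeholder table name
      method_opt => 'FOR ALL COLUMNS SIZE 1', -- base column stats, no histograms
      cascade    => true);
  end;
  /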

A far better approach for this kind of usage profile might be to kick
out the auto gather job and create statistics/histograms once while the
data set is in a realistic state, maybe at average plant workload. Yes,
performance will degrade during that period, you may see lots of
reparsing at the time, and there may be several hybrid parses as well,
but at least the stats are right for once. Then lock the stats and
recreate them only on demand or at a wider interval (each Monday
morning, etc.).
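
A rough sketch of that, assuming the 10g GATHER_STATS_JOB and a
placeholder table ORDERS:

  -- switch off the 10g automatic stats job
  exec dbms_scheduler.disable('SYS.GATHER_STATS_JOB');

  -- gather once while the data is in a realistic state, then lock the stats
  exec dbms_stats.gather_table_stats(user, 'ORDERS', cascade => true);
  exec dbms_stats.lock_table_stats(user, 'ORDERS');

  -- later, on demand or on your own (wider) schedule:
  exec dbms_stats.unlock_table_stats(user, 'ORDERS');
  exec dbms_stats.gather_table_stats(user, 'ORDERS', cascade => true);
  exec dbms_stats.lock_table_stats(user, 'ORDERS');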

Hybrid parses (no clue if the term is useful or correct) are a special
side effect of bind variable peeking: imagine a join between two
tables. The statement gets parsed while your gather-stats job is
running, so one table already has its new stats while the other still
has the old ones. A new query plan is created from this mixed
information. Now the statement is used so frequently (with the peeked
bind values still in effect) that it never ages out of the library
cache. So you have proper stats now and all should be well, but you
still see a sad query plan all day long!

My quick and dirty solution prior to 11g: alter the comment of all
(affected) tables in a superfast loop after the gather-stats job has
run. :) This hack invalidates all execution plans that involve the
table in question, because the table "has changed".
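
Roughly like this (the table list is just an example, of course):

  begin
    for t in (select table_name
                from user_tables
               where table_name in ('ORDERS', 'ORDER_LINES'))  -- example list
    loop
      -- COMMENT ON TABLE is DDL and invalidates dependent cursors
      execute immediate 'comment on table ' || t.table_name ||
                        ' is ''stats refreshed ' ||
                        to_char(sysdate, 'YYYY-MM-DD HH24:MI') || '''';
    end loop;
  end;
  /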

HTH
Martin Klier

-- 
Usn's IT Blog for Linux, Oracle, Asterisk
http://www.usn-it.de
