My $0.02 worth, based on experience at a current site: I'd opt for explicit dbms_stats facilities on some/most tables, with monitoring and GATHER STALE as the fallback option for those without any specific requirement.

My main reasons for not relying solely on monitoring and/or auto sample size are:

a) I've found that auto histogramming gives a LOT of histograms, many of which, upon further inspection, were not actually required. COL_USAGE$ is nice to have, though, to see roughly how things are being accessed.

b) I've been very disappointed with auto sample size. It runs more expensively (because it's implemented as repeated iterations with increasing sample sizes). Also, I've found it to be poor with skewed column data - the resulting sample sizes are not large enough.

c) I've been bitten a few times by sampling on indexes - I can't remember the specifics, but it tends to get clustering factor and distinct keys a little wrong. So I generally run without cascade and always do 100% samples on the indexes if possible.

Our process automatically turns on monitoring for any tables not covered by special requirements, and runs a GATHER STALE at regular intervals (the fewer the better - we try to minimise the number of statistics runs we do, generally once per release). We've detected no overheads with monitoring (although admittedly we haven't really spent a great deal of time looking for them).

Other small exceptions: we've got dynamic sampling set to 2 (because the app uses temp tables heavily), and we've got stats on the dictionary (the app does lots of parsing, and having dynamic sampling set to 2 means dictionary queries also invoke dynamic sampling).

hth
connor

--- Jeremiah Wilton <jeremiah@xxxxxxxxxxx> wrote:

> What is the current state of the art WRT CBO best practices?
>
> I'm working on 9.2.0.4 and considering the 'automated statistics
> gathering' approach.
> This involves turning on monitoring for any and all tables that need
> to ever have stats updated, then periodically running dbms_stats in
> gather_stale mode.
>
> How is this working for people? Does monitoring impact DML operations,
> and if so, how much?
>
> Does this approach make any kind of intelligent decisions about sample
> sizes and block sampling?
>
> When histograms are present, does this approach always/never/sometimes
> regenerate the histogram with the correct number of buckets?
>
> Does it seem to reliably choose the correct tables to analyze?
>
> --
> Jeremiah Wilton
> ORA-600 Consulting
> http://www.ora-600.net
>
> --
> //www.freelists.org/webpage/oracle-l

=====
Connor McDonald
Co-author: "Mastering Oracle PL/SQL - Practical Solutions"
ISBN: 1590592174
web: http://www.oracledba.co.uk
web: http://www.oaktable.net
email: connor_mcdonald@xxxxxxxxx

Coming Soon! "Oracle Insight - Tales of the OakTable"

"GIVE a man a fish and he will eat for a day. But TEACH him how to fish, and...he will sit in a boat and drink beer all day"

--
//www.freelists.org/webpage/oracle-l
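The monitoring-plus-GATHER-STALE process described in the reply above could be sketched roughly like this against the 9i DBMS_STATS API. This is an illustrative sketch only: the schema name APP_OWNER is a placeholder, and the exact defaults you want will differ per site.

```sql
-- Turn on monitoring for every table in the application schema that
-- isn't covered by a special-case statistics regime.
-- (APP_OWNER is a placeholder schema name.)
BEGIN
  dbms_stats.alter_schema_tab_monitoring(
    ownname    => 'APP_OWNER',
    monitoring => TRUE);
END;
/

-- Then, at each release (or other regular interval), refresh stats
-- only on the tables whose monitored DML volume has marked them stale.
BEGIN
  dbms_stats.gather_schema_stats(
    ownname => 'APP_OWNER',
    options => 'GATHER STALE',
    cascade => FALSE);   -- indexes deliberately handled separately
END;
/
```

Setting cascade => FALSE here reflects the point in the reply about not trusting sampled index statistics; index stats are then gathered in a separate, fuller pass.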
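The "run without cascade and do 100% samples on the indexes" approach might look like the following. Again a sketch, not the poster's actual scripts: the table and index names are hypothetical, and in DBMS_STATS a NULL estimate_percent means a full compute.

```sql
-- Sampled stats on the table, but do NOT cascade to its indexes.
BEGIN
  dbms_stats.gather_table_stats(
    ownname          => 'APP_OWNER',
    tabname          => 'ORDERS',
    estimate_percent => 10,      -- sampled table stats
    cascade          => FALSE);  -- avoid sampled index stats

  -- Full compute (100%) on each index, so clustering factor and
  -- distinct keys come out exact rather than estimated.
  dbms_stats.gather_index_stats(
    ownname          => 'APP_OWNER',
    indname          => 'ORDERS_PK',
    estimate_percent => NULL);   -- NULL = compute, i.e. 100%
END;
/
```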
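The two smaller points in the reply - dynamic sampling at level 2, and inspecting COL_USAGE$ to see how columns are actually referenced - could be explored with something like this. The COL_USAGE$ query is an assumption about a convenient join against SYS-owned dictionary tables (run with suitable privileges); it is not from the original post.

```sql
-- Session-level equivalent of the init.ora setting; instance-wide it
-- would be optimizer_dynamic_sampling = 2 in the parameter file.
ALTER SESSION SET optimizer_dynamic_sampling = 2;

-- Rough view of predicate usage per column, via the dictionary table
-- mentioned in the reply (COL_USAGE$ is populated as queries are parsed).
SELECT o.name          AS table_name,
       c.name          AS column_name,
       u.equality_preds,
       u.range_preds,
       u.like_preds
FROM   sys.col_usage$  u,
       sys.obj$        o,
       sys.col$        c
WHERE  o.obj#   = u.obj#
AND    c.obj#   = u.obj#
AND    c.intcol# = u.intcol#;
```

On 9i, "stats on the dictionary" would typically mean gathering schema stats on SYS (the dedicated DBMS_STATS.GATHER_DICTIONARY_STATS call arrived later, in 10g).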