Re: Optimizer issue - cost of full table scans

  • From: Niall Litchfield <niall.litchfield@xxxxxxxxx>
  • To: greg@xxxxxxxxxxxxxxxxxx
  • Date: Mon, 13 Sep 2010 18:57:41 +0100

Hi Greg,
I meant to ask previously, but didn't: are there any changes to system stat
calculation for Exadata? You talked earlier about getting stats
representative; I wonder (in the absence of an Exadata play box) how
representative the system stat I/O and CPU costing is in that environment.

On 13 Sep 2010 17:49, "Greg Rahn" <greg@xxxxxxxxxxxxxxxxxx> wrote:


This is actually a poor recommendation.  Using "estimate_percent=>null"
will be very costly, and since the OP is on 11g (11.2 in fact, as he is
on Exadata V2), the default value for estimate_percent of
dbms_stats.auto_sample_size is much faster and usually within
>99% of the accuracy of a 100% sample.
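A minimal sketch of the two calls being contrasted (the owner and table name
here are hypothetical, not from the thread). In dbms_stats, passing
estimate_percent => NULL means "compute", i.e. a full 100% sample, whereas
the 11g default dbms_stats.auto_sample_size uses the approximate-NDV
algorithm:

```sql
BEGIN
  -- Costly: estimate_percent => NULL forces a 100% sample (compute)
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => USER,
    tabname          => 'T',        -- hypothetical table
    estimate_percent => NULL);

  -- 11g default: AUTO_SAMPLE_SIZE is much faster and usually within
  -- >99% of the accuracy of a full sample
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => USER,
    tabname          => 'T',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
END;
/
```

On 11g, simply omitting estimate_percent gives the same auto_sample_size
behavior as the second call.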

More details on why:

On Mon, Sep 13, 2010 at 6:14 AM, Pavel Ermakov <ocp.pauler@xxxxxxxxx> wrote:
> Hi
> Try to ga...

Greg Rahn
