Re: Optimizer issue - cost of full table scans

  • From: Kellyn Pedersen <kjped1313@xxxxxxxxx>
  • To: niall.litchfield@xxxxxxxxx, andrew.kerber@xxxxxxxxx
  • Date: Tue, 14 Sep 2010 07:34:34 -0700 (PDT)

Hey Andrew,
That's what trips me up - the lister actually mentioned that an "alter session" with this value for the parameter corrects the problem.
I'm more inclined to address problems at the statement level vs. the database
level, but that appears to be the goal he is trying to attain, so I'm with
Andrew on this one (albeit agreeing with Niall and Greg on the proper way of
handling the issue - I'm such a moderate! :))
I think the lister has answered his own question; he just may not be taking the
correct approach to figuring out what is causing the problem...
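Just to be explicit about what I mean by statement level - assuming the parameter
in question is optimizer_index_cost_adj as Andrew mentions below, and assuming a
release recent enough to have the OPT_PARAM hint (10.2 onwards, I believe), it can
be scoped to a single statement instead of the session or the database. The table
and columns here are made up, just to show the syntax, and the 200 is borrowed
from Andrew's example:

    -- hypothetical table/columns; scopes the parameter to this statement only
    SELECT /*+ OPT_PARAM('optimizer_index_cost_adj' 200) */ col1
      FROM some_table
     WHERE col2 = :b1;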


Kellyn Pedersen
Sr. Database Administrator
I-Behavior Inc.
http://www.linkedin.com/in/kellynpedersen
www.dbakevlar.blogspot.com
 
"Go away before I replace you with a very small and efficient shell script..."

--- On Tue, 9/14/10, Andrew Kerber <andrew.kerber@xxxxxxxxx> wrote:


From: Andrew Kerber <andrew.kerber@xxxxxxxxx>
Subject: Re: Optimizer issue - cost of full table scans
To: niall.litchfield@xxxxxxxxx
Cc: "Job Miller" <jobmiller@xxxxxxxxx>, "ORACLE-L" <oracle-l@xxxxxxxxxxxxx>
Date: Tuesday, September 14, 2010, 6:33 AM


Did someone already mention the optimizer_index_cost_adj setting?  It sets the
relative cost of index access versus full table scans; setting it to something
like 200 would probably force full table scans.  I have never tried that, though.


On Tue, Sep 14, 2010 at 4:05 AM, Niall Litchfield <niall.litchfield@xxxxxxxxx> 
wrote:


Well, system stats generally boil down to what happens to sreadtim and mreadtim
and the ratio between them. I'm not clear what would happen on a Database
Machine, but I can imagine it *might* break the assumptions of the system stats
costing model, not least the assumption that multiblock reads are slower than
single block reads. 
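Roughly, as I understand the workload stats model, the I/O cost of a full scan
comes out (in units of single block reads) as something like

    FTS cost ~= (blocks below HWM / MBRC) * (MREADTIM / SREADTIM)

so with, say, 100,000 blocks, MBRC = 16, sreadtim = 5ms and mreadtim = 10ms you
get about 6,250 * 2 = 12,500; if the storage makes mreadtim look like 1ms, the
same scan costs about 1,250 and full scans start to look cheap everywhere. Made-up
numbers, and the real calculation has CPU costing on top, but that's the
assumption I mean.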


On a non-Exadata machine, I would personally deal with the stability issue that
Greg mentions by grabbing system stats repeatedly over a period (a week or a
month, most likely) into a user-defined stattab, then sticking them in a
spreadsheet to determine what values are representative of the system. You can
then just set the stats once and for all. After all, I/O and CPU capability
rarely change in a normal server (VMs, of course, blow this assumption out of
the water). I may be overly optimistic, but I personally believe (and it is
mostly a matter of logic/faith - sadly I don't have enough actual data to say one
way or the other) that representative system statistics are likely to be a
sensible choice for *most* systems - certainly ones where the OICA and OIC
tricks are being used.
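Something along these lines is what I have in mind - an untested sketch, and the
schema name, stat table name, statid and the final values are all made up:

    -- one-off: a user stat table to keep the history in
    EXEC DBMS_STATS.CREATE_STAT_TABLE(ownname => 'NIALL', stattab => 'SYSTEM_STATS_HIST');

    -- repeated (e.g. from a daily job): capture a 60-minute workload window
    BEGIN
      DBMS_STATS.GATHER_SYSTEM_STATS(
        gathering_mode => 'INTERVAL',
        interval       => 60,
        stattab        => 'SYSTEM_STATS_HIST',
        statid         => 'S' || TO_CHAR(SYSDATE, 'YYYYMMDDHH24MI'),
        statown        => 'NIALL');
    END;
    /

    -- once the spreadsheet has given you representative numbers, set them
    -- once and for all (the values below are invented)
    BEGIN
      DBMS_STATS.SET_SYSTEM_STATS('SREADTIM', 5);
      DBMS_STATS.SET_SYSTEM_STATS('MREADTIM', 10);
      DBMS_STATS.SET_SYSTEM_STATS('MBRC', 16);
    END;
    /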

On 13 Sep 2010 21:43, "Job Miller" <jobmiller@xxxxxxxxx> wrote:

Niall,

I don't know how system stats are calculated, but if they are measured from the
perspective of the database server, wouldn't they reflect the benefits of
Exadata storage?


The server knows it needs to do a FTS, the high water mark says that requires
reading N blocks, Exadata storage skips, say, 50% of the blocks via the storage
indexes, and the multi-block read times from the server's perspective look
really fast because the storage optimized away some of the I/O.

If that's true, the system stats would measure the typical system benefit from
Exadata storage experienced during the system stats workload capture.

Does that make sense?  What am I missing? 



> How does one maintain how much of a table is on disk vs memory? A
> costing model that considers all those cases becomes increasingly more
> difficult to program and manage.
The costing model doesn't really attempt to consider all of that, though, does
it? I just thought that it measures the average system-wide performance based on
your workload during capture.  Hopefully what happens during that workload
represents your cache ratios for data, your data placement ratios, and your true
typical multi-block average read rates.

That average system-wide multi-block read time should lead to pretty good plans
for everything, instead of plans that are good only for a few special cases.
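And if it helps, what the capture actually recorded is easy enough to eyeball
afterwards with the usual dictionary query (needs access to SYS objects):

    SELECT pname, pval1
      FROM sys.aux_stats$
     WHERE sname = 'SYSSTATS_MAIN';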

--- On Mon, 9/13/10, Greg Rahn <greg@xxxxxxxxxxxxxxxxxx> wrote:

> From: Greg Rahn <greg@xxxxxxxxxxxxxxxxxx>
> Subject: Re: Optimizer issue - cost of full table scans
> To: "Niall Litchfield" <niall.litchfield@xxxxxxxxx>
> Cc: "ORACLE-L" <oracle-l@xxxxxxxxxxxxx>
> Date: Monday, September 13, 2010, 2:16 PM


> There are no special changes as of
> today.  When I made the reference
> to representative stats, ... 


-- 
Andrew W. Kerber

'If at first you don't succeed, don't take up skydiving.'


