Lyall,

To determine whether the full table scans are "wrong", you should check how many of the rows are actually used in the result set. For instance, if Oracle is doing a full table scan only to discard 95% of the rows in the next step, then the full table scan is probably not the correct access path. Generate an explain plan for this query, if possible (depending on your Oracle version) with the /*+ gather_plan_statistics */ hint, to see the difference between what Oracle expects (based upon the CBO statistics) and the real world. Also google for "tuning by cardinality feedback": Jonathan Lewis (http://jonathanlewis.wordpress.com) has blogged about this, and Wolfgang Breitling (http://www.centrexcc.com/) has done presentations on this topic.

If it is determined that the full table scan is not the correct access method, then you can check the predicates to see if adding an index can help the query (yes, even in canned applications you can add additional indexes if your data pattern demands it). If there is already an index in place, you can investigate why it is not used: maybe you have an implicit datatype conversion, or the CBO statistics need some tweaking.

If the full table scan is the optimal path (or is caused by a flawed design which can't be changed), you can try to enable caching for this table (normally only a small subset of the buffer cache is used to store the blocks coming from a full table scan) or, if the table is too big for memory and you have the money, get the Database Smart Flash Cache option (flash cards). Another thing you can do is check the throughput of your I/O subsystem to see if it is slowing you down.
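To illustrate the estimate-versus-actual comparison described above, here is a minimal sketch (the table and predicate are invented for the example; the hint and DBMS_XPLAN call are standard 10g+ features):

```sql
-- Run the statement with runtime statistics collection enabled:
SELECT /*+ gather_plan_statistics */ *
FROM   orders
WHERE  order_date > SYSDATE - 7;

-- Immediately afterwards, in the same session, pull the plan with
-- estimated vs. actual row counts:
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
-- Compare the E-Rows and A-Rows columns per plan step: a large gap
-- (orders of magnitude) means the CBO statistics are misleading the
-- optimizer, which is exactly what "tuning by cardinality feedback" targets.
```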
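The implicit datatype conversion mentioned above is a common reason an existing index is ignored. A hypothetical example (table and column names are made up):

```sql
-- CUST_ID is stored as VARCHAR2 but compared to a numeric literal.
-- Oracle silently rewrites the predicate as TO_NUMBER(cust_id) = 42,
-- which disables any regular index on cust_id and can force a full scan.
SELECT * FROM customers WHERE cust_id = 42;

-- Fix 1: pass the value with the matching datatype.
SELECT * FROM customers WHERE cust_id = '42';

-- Fix 2 (when the application SQL can't be changed, as in a canned app):
-- add a function-based index matching the converted predicate.
CREATE INDEX customers_custid_fbi ON customers (TO_NUMBER(cust_id));
```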
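If the full scan turns out to be optimal, the table-caching idea above can be sketched like this (hypothetical table name; the KEEP pool must be sized separately by the DBA):

```sql
-- Assign the table to the KEEP buffer pool so its blocks are retained
-- in memory, instead of the default treatment where full-scan blocks
-- are quickly aged out of the buffer cache.
ALTER TABLE big_table STORAGE (BUFFER_POOL KEEP);

-- The KEEP pool is only effective if it has been allocated memory,
-- e.g. (value is illustrative):
-- ALTER SYSTEM SET db_keep_cache_size = 512M;
```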
Kind regards,

Freek D'Hooge
Uptime
Oracle Database Administrator
email: freek.dhooge@xxxxxxxxx
tel +32(0)3 451 23 82
http://www.uptime.be
disclaimer: www.uptime.be/disclaimer

________________________________________
From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx] On Behalf Of lyallbarbour@xxxxxxxxxxxxxxx
Sent: Wednesday 1 December 2010 21:04
To: oracle-l@xxxxxxxxxxxxx
Subject: db file scattered reads

Hello everyone,

From what I understand in my reading of this wait event, it happens most often with fast full table scans. As we all know, FFTSs aren't necessarily a bad thing. Looking through ADDM and ASH reports from one of our databases, db file scattered reads on one table are the main culprit for the bottlenecks on this system. This is an off-the-shelf application that I can't touch, so the only SQL tuning I can do is with SQL Profiles. Is there anything I can do behind the scenes to relieve some of this bottleneck? Move the table into its own tablespace? Change the storage settings? That's the one thing I can't find through google: what to do if you get db file scattered reads. I'm still waiting for my tuning guide from Rampant; it's on backorder...

Thanks,
Lyall
--
//www.freelists.org/webpage/oracle-l