All, I'm almost sure that I read an article linked to here, or perhaps just a response, on the wisdom of setting dbfmbrc (db_file_multiblock_read_count) to an appropriately high value (so that Oracle reads large chunks of disk at once in the event that it does do a table scan) when system statistics are set (so the high dbfmbrc no longer figures in the cost calculations). However, I can't find the article. Is my memory going more than I thought, or does such an article in fact exist? If not, can anyone think of any nasty side effects of following a strategy like the one I outline above?

As a supplementary question, I'm intending that we spend some time getting system stats "right" - following a suggestion made here a while ago - and then not revisit them unless the hardware changes. Do people do this, or do you collect on a schedule?

--
Niall Litchfield
Oracle DBA
http://www.orawin.info
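For the avoidance of doubt, the strategy I mean would look something like the following sketch (the value 128 is purely illustrative, and the workload window is whatever is representative for your system):

```sql
-- Set the parameter high so that any multiblock (full-scan) reads
-- ask the OS for large chunks in one go. 128 is illustrative only.
ALTER SYSTEM SET db_file_multiblock_read_count = 128;

-- Then gather workload system statistics, so the optimizer costs
-- multiblock reads from the measured MBRC and read timings rather
-- than from the parameter value above.
EXEC DBMS_STATS.GATHER_SYSTEM_STATS('START');
-- ... let a representative workload run for a period ...
EXEC DBMS_STATS.GATHER_SYSTEM_STATS('STOP');

-- Inspect what was collected (MBRC, SREADTIM, MREADTIM, etc.).
SELECT sname, pname, pval1 FROM sys.aux_stats$;
```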