If you already know that your process generates a lot of redo, that means
you have run it before.
What if you used LogMiner to analyse the archived logs generated in that
period, filtering on your specific session or your specific statement to
separate your redo generation from normal database usage? Then you would
probably have an idea of how much redo was generated last time.
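A minimal sketch of that approach, using the documented DBMS_LOGMNR package
and V$LOGMNR_CONTENTS view; the log file path and the session number (123)
are placeholders you would substitute from your own run. Note this counts
change records per object rather than redo bytes, since V$LOGMNR_CONTENTS
does not expose a per-record byte size:

```sql
-- Register the archived log(s) that cover the earlier run of the process
BEGIN
  DBMS_LOGMNR.ADD_LOGFILE(
    logfilename => '/arch/redo_0001.arc',   -- placeholder path
    options     => DBMS_LOGMNR.NEW);
  DBMS_LOGMNR.START_LOGMNR(
    options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/

-- Count change records per object/operation for the session of interest
SELECT seg_name, operation, COUNT(*) AS change_records
FROM   v$logmnr_contents
WHERE  session# = 123                       -- placeholder session number
GROUP  BY seg_name, operation;

BEGIN
  DBMS_LOGMNR.END_LOGMNR;
END;
/
```

For actual byte counts you would instead compare the session's 'redo size'
statistic (v$sesstat) before and after the run, when a rerun is possible.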
On 11 October 2017 at 09:48, Niall Litchfield <niall.litchfield@xxxxxxxxx>
wrote:
I *do* like that. Having said that, as John also suggests, we can use the
Fermi method and make sensible assumptions as a first step (and then
measure; more on that later). We might say:
1. Redo will record the block changes for the data we change, the undo
blocks plus a bit for the transaction tables
2. If there is an index, then redo must also record change vectors for the
index blocks (and their undo)
3. If there are out of line LOB columns then redo will record changes for
the affected LOB chunks (this may have changed in more recent versions)
4. If the transaction fires triggers that affect other tables, then add
something for that.
So if you have an idea of how much data/how many blocks you will change,
how many indexes you have, and whether there are any LOBs, then you can
start to estimate a sensible value for redo generated ahead of time.
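The four assumptions above can be turned into a back-of-envelope
calculator. This is only a sketch: the coefficients (`index_entry_bytes`,
`undo_factor`, `per_change_overhead`) are invented placeholders you would
tune from your own measurements, not Oracle-documented values:

```python
def estimate_redo_bytes(
    rows_changed,            # rows the statement will modify (assumption 1)
    avg_row_bytes,           # average size of each changed row image
    num_indexes,             # indexes on the table (assumption 2)
    index_entry_bytes=120,   # rough redo per index entry, incl. its undo
    undo_factor=0.5,         # undo change vectors as a fraction of row redo
    lob_bytes=0,             # out-of-line LOB chunk bytes (assumption 3)
    per_change_overhead=100, # redo record headers, transaction table, etc.
):
    """Fermi-style estimate of redo generated by a DML statement."""
    table_redo = rows_changed * (avg_row_bytes + per_change_overhead)
    undo_redo = table_redo * undo_factor
    index_redo = rows_changed * num_indexes * index_entry_bytes
    return int(table_redo + undo_redo + index_redo + lob_bytes)

# e.g. a 1000-row update of ~100-byte rows on a table with two indexes
print(estimate_redo_bytes(1000, 100, 2))  # -> 540000 (~0.5 MB)
```

Triggers (assumption 4) would add a further estimate for each affected
table, computed the same way.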
This won't be *accurate*, but it will help you think about likely levels of
redo generation.
I've hardly ever met anyone who actually does this, mind; the usual
approach is to add more storage as it becomes necessary :)
On Wed, Oct 11, 2017 at 7:13 AM, Mladen Gogala <gogala.mladen@xxxxxxxxx>
wrote:
Here is the best method for estimating the redo space that the statement
will generate:
On 10/10/2017 05:27 PM, Michael Calisi wrote:
I don't have a test system for this particular SQL statement.