RE: Oracle Read Consistent Overhead

  • From: Matt McClernon <mccmx@xxxxxxxxxxx>
  • To: <jonathan@xxxxxxxxxxxxxxxxxx>, <oracle-l@xxxxxxxxxxxxx>
  • Date: Mon, 18 Apr 2011 00:59:47 +0000

> 
> How large was the "medium-sized" transaction.

I just repeated the test, and the medium-sized transaction was 100,000 rows 
(an update of one column).
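
Roughly, the first session's transaction looks like this (table and column 
names are illustrative rather than the exact ones from my script):

    -- Session 1: medium-sized transaction, left uncommitted while
    -- the second session queries the same table
    update t1
       set col1 = col1 + 1
     where rownum <= 100000;
    -- no commit
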
> Read-consistency costs are largely about the number of undo records applied, 
> not about the number of blocks in the underlying object, and the number of 
> undo records is (generally) related to the number of changes, which often 
> means number of rows.

Interesting.  I did suspect that the multi-versioning might be row-based, not 
block-based, but I ruled it out because it seemed too inefficient.
In my re-run of the simulation today I saw 200,000 consistent reads in the 
second session, which is 2 CR blocks per updated row.  That still seems a 
little high.
More interesting than that, though, is that when I repeated the test with an 
index on the MV log the CR count was 3.8 million..!  That is 38 CR blocks per 
updated row, which is highly suspicious.
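
For what it's worth, the consistent-read figures in both runs come from the 
querying session's statistics, checked before and after the query along these 
lines (a simplified sketch, not the exact script I used):

    -- Session 2: snapshot 'consistent gets' around the query
    select sn.name, st.value
      from v$mystat st, v$statname sn
     where st.statistic# = sn.statistic#
       and sn.name = 'consistent gets';

    select count(*) from t1;        -- query hitting the uncommitted changes

    select sn.name, st.value
      from v$mystat st, v$statname sn
     where st.statistic# = sn.statistic#
       and sn.name = 'consistent gets';

    -- the delta between the two 'consistent gets' values is the CR work
    -- attributable to the query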
