Mark,

A few remarks.

First: 10 MB redo logs seem rather small to me; try bigger ones. Frequent log switching causes a lot of checkpointing, etc. Does the alert.log give you any information on this?

Second: a script with 1,000,000 statements sidesteps the fact that on a logical standby every single SQL statement has to be generated on the fly from the redo. That alone makes the script faster.

Third: does your script perform the same update? In other words, does it contain the same set of unique statements, in the same order? If the original update processes the table in physical order and your script orders the statements differently, that can change the order in which blocks are read from disk, and therefore generate a different kind of load.

Fourth: did you consider running a Cary-style trace (event 10046) on the SQL Apply process? Maybe that gives you some insight.

I'm curious to see the results. Just my EUR 0.02.

Best regards,

Carel-Jan Engel
=== If you think education is expensive, try ignorance. (Derek Bok) ===
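For the log-switch and 10046 points, here is a minimal sketch of what I have in mind. The `LIKE` patterns for the apply sessions and the bind names are assumptions; look up the actual SID/serial# of the SQL Apply processes on your instance before tracing:

```sql
-- Rough check of log-switch frequency: many switches per hour suggests
-- the 10 MB logs are indeed too small.
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
       COUNT(*)                               AS switches
  FROM v$log_history
 GROUP BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
 ORDER BY 1;

-- Find the SQL Apply sessions (program name patterns are an assumption;
-- adjust them for your version).
SELECT sid, serial#, program
  FROM v$session
 WHERE program LIKE '%LSP%'
    OR program LIKE '%AS0%';

-- Attach a 10046-style trace with wait events to one of those sessions:
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => :sid, serial_num => :serial, waits => TRUE, binds => FALSE);

-- ...reproduce the load, then switch the trace off again:
EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => :sid, serial_num => :serial);
```

The resulting trace file in `user_dump_dest` can then be run through tkprof (or a Millsap-style profiler) to see where the apply time actually goes.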