I'm using SGA_TARGET, so my other SGA memory parameters aren't set. SGA_TARGET=1500M. That's small but it's a test database. I did test with a larger SGA (3500M) and with the logical standby parameter MAX_SGA set to 2000. No difference. I am neither CPU nor IO bound. I ran OSWatcher during the tests. I can't easily set up a Solaris 10 OS environment at the moment. The Apply process uses a single update SQL statement with bind variables that it executes 1,000,000 times. It does use the table's primary key index. When I tested 1,000,000 unique update statements (i.e. with literals, not bind variables) running directly in the standby from a SQL script, it took 50 minutes, even with all that hard parsing and not-large SGA. Here's the update statement that is executed by SQL Apply:
update /*+ streams restrict_all_ref_cons */ "MILA"."RATEMARGINADD" p
   set "ANEWCOLUMN"=decode(:1,'N',"ANEWCOLUMN",:2),
       "BACKUP_MARGIN"=decode(:3,'N',"BACKUP_MARGIN",:4),
       "DATELASTMODIFIED"=decode(:5,'N',"DATELASTMODIFIED",:6),
       "DATERECORDADDED"=decode(:7,'N',"DATERECORDADDED",:8),
       "MARGIN"=decode(:9,'N',"MARGIN",:10),
       "NAME"=decode(:11,'N',"NAME",:12),
       "OID"=decode(:13,'N',"OID",:14),
       "OIDMORTGAGE"=decode(:15,'N',"OIDMORTGAGE",:16),
       "ORDINAL"=decode(:17,'N',"ORDINAL",:18),
       "RATE"=decode(:19,'N',"RATE",:20)
 where (:21='N'
        or (decode(:22,'N','Y',decode(:23,"ANEWCOLUMN",'Y'))='Y'
            and decode(:24,'N','Y',decode(:25,"BACKUP_MARGIN",'Y'))='Y'
            and decode(:26,'N','Y',decode(:27,"DATELASTMODIFIED",'Y'))='Y'
            and decode(:28,'N','Y',decode(:29,"DATERECORDADDED",'Y'))='Y'
            and decode(:30,'N','Y',decode(:31,"MARGIN",'Y'))='Y'
            and decode(:32,'N','Y',decode(:33,"NAME",'Y'))='Y'
            and 1=1
            and decode(:34,'N','Y',decode(:35,"OIDMORTGAGE",'Y'))='Y'
            and decode(:36,'N','Y',decode(:37,"ORDINAL",'Y'))='Y'
            and decode(:38,'N','Y',decode(:39,"RATE",'Y'))='Y'))
   and (:40="OID")
   and rownum < 2;
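For anyone puzzling over what that generated statement does: each SET column uses decode(:flag,'N',OLD,:new) to keep the old value when the per-column "modified" flag is 'N', and the WHERE clause uses a similar guard so that only flagged columns must match their captured before-image (ignoring the :21 shortcut bind). Here's a minimal Python sketch of that semantics -- purely illustrative, not how SQL Apply is actually implemented:

```python
def where_matches(row, flags, old_values, oid):
    """Mimic the WHERE clause: the row's key must match, and for each
    column whose flag is 'Y' the current value must equal the captured
    before-image. Columns flagged 'N' are skipped entirely."""
    if row["OID"] != oid:
        return False
    return all(row[col] == old_values[col]
               for col, flag in flags.items() if flag == 'Y')

def apply_set(row, flags, new_values):
    """Mimic the SET clause: decode(flag,'N',OLD,new) keeps the old
    value when the flag is 'N', else assigns the new bind value."""
    for col, flag in flags.items():
        if flag == 'Y':
            row[col] = new_values[col]
    return row

# Hypothetical row: only MARGIN was modified on the primary.
row = {"OID": 7, "MARGIN": 2.5, "NAME": "old name"}
flags = {"MARGIN": "Y", "NAME": "N"}
if where_matches(row, flags, {"MARGIN": 2.5}, 7):
    apply_set(row, flags, {"MARGIN": 3.0, "NAME": "ignored"})
# row["MARGIN"] is now 3.0; row["NAME"] is untouched
```

The point being: the statement itself is simple single-row-by-primary-key work, which makes the progressive slowdown all the more odd.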
What is the size of your streams pool?
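One way to check (a sketch -- with SGA_TARGET set the streams pool is auto-tuned, so its current size can differ from any parameter setting):

```sql
-- Current streams pool allocation from the SGA statistics view:
SELECT pool, ROUND(SUM(bytes)/1024/1024) AS mb
FROM   v$sgastat
WHERE  pool = 'streams pool'
GROUP  BY pool;

-- Or via the dynamic SGA component view:
SELECT component, current_size/1024/1024 AS mb
FROM   v$sga_dynamic_components
WHERE  component = 'streams pool';
```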
In your snapshots, what are the values for CPU time, user I/O time, and system I/O time for the apply reader and the LogMiner reader? I.e., are you CPU or I/O bound?
Can you set up a Solaris 10 OS environment? If so, maybe DTrace on the apply process would help...
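For example, something along these lines (a sketch only -- needs root on Solaris 10, and <pid> is a placeholder for the applier's OS process id, which you'd have to look up yourself):

```sh
# Sample the applier's user stacks ~997 times/sec to see where its
# CPU time goes; aggregated stacks are printed on exit.
dtrace -n 'profile-997 /pid == $target/ { @[ustack()] = count(); }' \
       -p <pid> -o applier_stacks.txt
```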
Also, the apply process should just use straight SQL as noted by Carel-Jan Engel. A tkprof'd level 12 10046 trace of the apply process during the test might also prove useful.
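Since the applier is a background process, one way to switch the trace on (an assumption -- adjust for your environment; find the applier's OS pid via v$logstdby_process joined to v$process first) is oradebug from a SYSDBA session:

```sql
oradebug setospid 12345   -- hypothetical OS pid of the applier
oradebug event 10046 trace name context forever, level 12
-- ... let the test run for a while, then:
oradebug event 10046 trace name context off
oradebug tracefile_name   -- shows which trace file to feed to tkprof
-- Then at the OS prompt, e.g.:
--   tkprof <trace file> applier.prf sort=exeela,fchela
```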
-----Original Message-----
From: Mark Strickland [mailto:strickland.mark@xxxxxxxxx]
Sent: Thursday, August 10, 2006 1:55 PM
To: Rich Amick
Cc: oracle-l@xxxxxxxxxxxxx
Subject: Re: Logical Standby Issues (cont.)
Oh yeah, I've taken AWR snapshots and run the logical standby diagnostics script provided on Metalink. I've set logical standby parameters as recommended by Oracle Support. I've set various events and levels of tracing. Yes, it is the Applier process that is consuming the CPU.

In 10gR1, there is latch contention while LogMiner is scraping DMLs out of the archived logs (LogMiner Work Area latch), but the latch contention goes away once log mining finishes. Log mining takes about half of the total elapsed time of the test. However, SQL Apply does not suddenly speed up; it continues its linear trajectory toward more slowness. In my 10gR2 tests, the latch contention doesn't exist, but the test still takes close to the same amount of time as in 10gR1. Log mining takes about 34 minutes, which is much faster than in 10gR1.

I've graphed the amount of time between log switches in the standby. It gets longer and longer, and the graph is linear pointing Northeast. In the 10gR1 tests, each redo log takes about 28 seconds longer on average to fill than the last one, for roughly the same number of SCNs. In the 10gR2 tests, each redo log takes about 10 seconds longer to fill than the last one, but there are more log switches than in 10gR1. Don't know why and haven't looked into it. At any rate, 10gR2 is a little faster than 10gR1, but not by much. From the AWR reports, I can easily see from the number of executions of the update statement in each period covered by an AWR snapshot that SQL Apply is slowing down.
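(For anyone wanting to reproduce that log-switch graph, a sketch of the query I mean -- inter-switch time per redo sequence from v$log_history:)

```sql
-- Seconds between consecutive log switches on the standby; a steadily
-- growing value per sequence is the linear slowdown described above.
SELECT sequence#,
       first_time,
       ROUND((first_time - LAG(first_time) OVER (ORDER BY sequence#))
             * 86400) AS seconds_since_prior_switch
FROM   v$log_history
ORDER  BY sequence#;
```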
Thanks for responding.