Kart,

If that is the evidence that is being observed, then the Log Buffer has nothing to do with it.
Leaving aside considerations about Data Guard and Streams, think about the basic workflow for redo...
1. server processes ==> 2. Log Buffer ==> 3. LGWR process ==> 4. online redo log files ==> 5. ARCn processes ==> 6. archived redo log files

So, if you see a message like "log file waiting to be archived" (as that event doesn't exist, it is probably either "log file switch (archiving needed)" or "log file switch completion"), isn't log archival downstream from the Log Buffer? That is, looking at the diagram above, we're talking about redo data sitting in the online redo log files (step #4) and waiting for the ARCn processes (step #5) to write that redo data to the archived redo log files (step #6). The Log Buffer (back in step #2) has no involvement at that point in the workflow.
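One way to pin down which of those two events is actually occurring is to look at the system-wide wait statistics. A minimal sketch against V$SYSTEM_EVENT (those are the real archival-related switch-wait event names; TIME_WAITED is in centiseconds):

```sql
-- Rough check: which log-file-switch waits has this instance accumulated
-- since startup?
select event, total_waits, time_waited
from   v$system_event
where  event in ('log file switch (archiving needed)',
                 'log file switch completion');
```

If "log file switch (archiving needed)" dominates, sessions are stalling because an online redo log cannot be reused until ARCn finishes copying it.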
So let's forget about the Log Buffer and look more closely at archival... I'm willing to bet that it is the size of your online redo log files (i.e., 3GB) that is causing the wait event you're seeing in TOAD. Those are some pretty big online redo log files, and I imagine that it takes each of the ARCn processes a fair amount of time to copy them. It might be illuminating to look at your database alert.log file and figure out how long it is taking each one of those files to be archived, based on the archival start and completion messages logged there.
What are the settings for the ARCn processes? Is it possible that LOG_ARCHIVE_MAX_PROCESSES is set too low, perhaps at the default of 2? If so, please consider setting it higher, so that tying up one ARCn process while it is schlepping one of those massive 3GB online redo log files does not prevent a subsequent online redo log file from being archived.
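For what it's worth, a quick sketch of checking and raising that parameter (it is dynamic, so no restart is needed; the value 4 below is just an illustration, not a recommendation for your workload):

```sql
-- Current setting (the default has been 2 in older releases)
show parameter log_archive_max_processes

-- Raise it so several online redo logs can be archived concurrently
alter system set log_archive_max_processes = 4 scope=both;
```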
Please let me know what you think. Thanks!

-Tim

On 8/15/2011 5:13 PM, K R wrote:
Tim,

Once in a while we see a wait in TOAD ("log file waiting to be archived"), and as the log buffer is only 20MB and the online logs are 3GB, we were discussing whether we could resize per any recommended sizing.

Thanks,
Kart

On Mon, Aug 15, 2011 at 1:12 PM, Tim Gorman <tim@xxxxxxxxx> wrote:

Kart,

There is no magic bullet in that query; it was just for your information, so that you can provide something more useful than the size of your online redo log files. So, asking again: what are you seeing that made you say "the log buffer seems too small"? From this query, you can see when redo generation gets high. During those times, are you seeing anything that indicates any kind of "bottleneck" in redo processing?

Thanks!
-Tim

-----Original Message-----
*From:* K R [mailto:kp0773@xxxxxxxxx]
*Sent:* Monday, August 15, 2011 02:01 PM
*To:* tim@xxxxxxxxx
*Cc:* Oracle-L@xxxxxxxxxxxxx
*Subject:* Re: log_buffer sizing

Tim,

This is a typical workload when the batch is running on the database.

DAY         HR    NBR_SWITCHES          GB
----------- ----- ------------ -----------
13-AUG-2011 00:00           12        3.84
            01:00           12        8.46
            02:00           30       51.52
            03:00           38       60.86
            04:00           36       61.56
            05:00           43       72.49
            06:00           43       55.48
            07:00           12        0.45
            08:00           12        0.03
            09:00           12        0.26
            10:00           12        6.42
            11:00           12        3.89
            12:00           12        0.25
            13:00           12        2.35
            14:00           12        0.04
            15:00           13        1.90
            16:00           41       72.73
            17:00           32       55.27
            18:00           37       44.95
            19:00           12        0.10
            20:00           12        0.05
            21:00           12        0.05
            22:00           12        3.65
            23:00           14       16.64
*********** *****              -----------
sum                                 523.25

On Mon, Aug 15, 2011 at 12:52 PM, Tim Gorman <tim@xxxxxxxxx> wrote:

Kart,

What exactly are you seeing in the database that makes you say "it just seems too low"? Bear in mind that the size of the online redo log files has no causal relation to the size of the log buffer, and vice versa. What matters is the rate at which redo is written, and the frequency of COMMIT commands by the application.
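For a rough look at both of those figures, one possible query against V$SYSSTAT (the statistic names below are the standard ones; the values are cumulative since instance startup, so sample twice and difference them to get a rate):

```sql
-- Cumulative redo volume (bytes) and commit count since startup
select name, value
from   v$sysstat
where  name in ('redo size', 'user commits');
```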
For the first bit of information, try the following SQL*Plus script...

clear breaks
break on day skip 1 on hr on report
compute sum of gb on day
compute sum of gb on report
col sort0 noprint
col cnt format 999,990
col gb format 999,990.00
select trunc(completion_time,'HH24') sort0,
       to_char(trunc(completion_time,'HH24'),'DD-MON-YYYY') day,
       to_char(trunc(completion_time,'HH24'),'HH24')||':00' hr,
       inst_id,
       count(*) nbr_switches,
       sum(blocks*block_size)/(1048576*1024) gb
from   gv$archived_log
group by trunc(completion_time,'HH24'),
         to_char(trunc(completion_time,'HH24'),'DD-MON-YYYY'),
         to_char(trunc(completion_time,'HH24'),'HH24')||':00',
         inst_id
order by 1, 2, 3, 4;

That will give you some idea of the rate and volume. Of course, this query may end up double-counting redo volumes if you have additional archive destinations set up, so be aware of that and adjust accordingly.

Hope this helps...

Thanks!
-Tim

-----Original Message-----
*From:* K R [mailto:kp0773@xxxxxxxxx]
*Sent:* Monday, August 15, 2011 01:22 PM
*To:* Oracle-L@xxxxxxxxxxxxx
*Subject:* log_buffer sizing

All,

My online redo logs are 3GB. This is a batch-intensive data warehouse application. What should I be keeping my log_buffer at? Currently it is set at 29425664, and it just seems too low.

Thanks,
Kart