This is what I have...
In terms of log switches for the last 2 days.
SQL> l
SELECT trunc (first_time) "Date",
       to_char (trunc (first_time), 'Dy') "Day",
       sum (decode (to_number (to_char (first_time, 'HH24')),  0, 1)) "00",
       sum (decode (to_number (to_char (first_time, 'HH24')),  1, 1)) "01",
       sum (decode (to_number (to_char (first_time, 'HH24')),  2, 1)) "02",
       sum (decode (to_number (to_char (first_time, 'HH24')),  3, 1)) "03",
       sum (decode (to_number (to_char (first_time, 'HH24')),  4, 1)) "04",
       sum (decode (to_number (to_char (first_time, 'HH24')),  5, 1)) "05",
       sum (decode (to_number (to_char (first_time, 'HH24')),  6, 1)) "06",
       sum (decode (to_number (to_char (first_time, 'HH24')),  7, 1)) "07",
       sum (decode (to_number (to_char (first_time, 'HH24')),  8, 1)) "08",
       sum (decode (to_number (to_char (first_time, 'HH24')),  9, 1)) "09",
       sum (decode (to_number (to_char (first_time, 'HH24')), 10, 1)) "10",
       sum (decode (to_number (to_char (first_time, 'HH24')), 11, 1)) "11",
       sum (decode (to_number (to_char (first_time, 'HH24')), 12, 1)) "12",
       sum (decode (to_number (to_char (first_time, 'HH24')), 13, 1)) "13",
       sum (decode (to_number (to_char (first_time, 'HH24')), 14, 1)) "14",
       sum (decode (to_number (to_char (first_time, 'HH24')), 15, 1)) "15",
       sum (decode (to_number (to_char (first_time, 'HH24')), 16, 1)) "16",
       sum (decode (to_number (to_char (first_time, 'HH24')), 17, 1)) "17",
       sum (decode (to_number (to_char (first_time, 'HH24')), 18, 1)) "18",
       sum (decode (to_number (to_char (first_time, 'HH24')), 19, 1)) "19",
       sum (decode (to_number (to_char (first_time, 'HH24')), 20, 1)) "20",
       sum (decode (to_number (to_char (first_time, 'HH24')), 21, 1)) "21",
       sum (decode (to_number (to_char (first_time, 'HH24')), 22, 1)) "22",
       sum (decode (to_number (to_char (first_time, 'HH24')), 23, 1)) "23"
from v$log_history
where trunc (first_time) >= (trunc (sysdate) - 1)  -- last X days. 0 = today only. 1 = today and yesterday
group by trunc (first_time)
order by trunc (first_time) DESC
SQL> /
Date      Day  00 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20 21 22 23
--------- ---  -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
23-AUG-17 Wed  14 14 11 11 10 10 26 11 16 20 19  9
22-AUG-17 Tue  15 14 11  7 14  8 13 11 16 16 19 23 22 23 23 19 23 20 16 20 17 13 16 16
</gr-replace>
SQL>
In terms of size of archive redo logs...
select thread#, sequence#, name, round(blocks*block_size/1024/1024) MBytes
from v$archived_log
where standby_dest = 'NO'
and rownum < 20
order by thread#, sequence#
SQL> /
   THREAD#  SEQUENCE#     MBYTES
---------- ---------- ----------
         1      52023       1197
         1      52024       1364
         1      52025       1089
         1      52026       1357
         1      52027        166
         2     106444       1358
         2     106445       1361
         2     106446       1358
         2     106447       1362
         2     106448       1359
         2     106449       1357
         2     106450       1361
         2     106451       1357
         3      45449        901
         3      45450       1047
         3      45451       1061
         4      45847       1166
         4      45848       1358
         4      45849        982

19 rows selected.
SQL>
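If a per-thread summary is easier to eyeball than the raw listing, the average archived log size can be pulled from the same view. This is only a sketch against v$archived_log (the date window of 2 days is my assumption; adjust as needed):

```sql
-- Average archived redo log size (MB) per thread, recent logs only.
-- Sketch: the 2-day window is arbitrary, widen or narrow it as needed.
SELECT thread#,
       COUNT(*)                                       logs,
       ROUND(AVG(blocks * block_size) / 1024 / 1024)  avg_mb
FROM   v$archived_log
WHERE  standby_dest = 'NO'
AND    first_time > SYSDATE - 2
GROUP  BY thread#
ORDER  BY thread#;
```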
Also, I have noticed from the alert log that, although we have 16 groups per
thread, we have only ever used groups 3, 4, 22 and 26 on thread 1.
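On the group-usage question, the current state of every group can be checked directly rather than inferred from the alert log. A minimal sketch against gv$log (per-instance V$LOG would miss the other threads in RAC):

```sql
-- Which groups each thread is actually cycling through, and their state.
SELECT thread#, group#, sequence#, status, archived,
       ROUND(bytes / 1024 / 1024) mb
FROM   gv$log
ORDER  BY thread#, group#;
```

Groups that never show a recent SEQUENCE# are the ones the thread is skipping.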
Thanks
On Wednesday, 23 August 2017, 15:37, "Powell, Mark" <mark.powell2@xxxxxxx>
wrote:
How frequently are your online redo logs switching? Can you post the log
switch history for 24 hours? Just the time of each switch, or maybe a
summarized listing per hour.
Are your archived redo log files mostly near 1.5G in size?
Mark Powell
Database Administration
(313) 592-5148
From: Zabair Ahmed <roon987@xxxxxxxxxxx>
Sent: Wednesday, August 23, 2017 10:06:59 AM
To: gogala.mladen@xxxxxxxxx
Cc: Powell, Mark; oracle-l@xxxxxxxxxxxxx
Subject: Re: Redo log size on a 4 node RAC cluster. 16 groups per thread.
Thanks
On Wednesday, 23 August 2017, 15:03, Mladen Gogala <gogala.mladen@xxxxxxxxx>
wrote:
On Wed, 23 Aug 2017 13:49:48 +0000 (UTC)
"Zabair Ahmed" <dmarc-noreply@xxxxxxxxxxxxx> (Redacted sender "roon987" for
DMARC) wrote:
We have DG in place. With that in mind we have created 16 groups. Not sure if
this is too many?