Re: Data Mirroring on two data centers -- How to use ASM ?

  • From: Carel-Jan Engel <cjpengel.dbalert@xxxxxxxxx>
  • To: tanel.poder.003@xxxxxxx
  • Date: Sat, 20 May 2006 21:32:34 +0200

Well, Tanel,

I perfectly understand that you read Data Guard between the lines in my
answer ;-). However, it wasn't there. I stayed away from mentioning Data
Guard deliberately, because that is just another solution. At this
stage, when the requirements are not clear, it's too early to talk about
solutions.

Having said that, I totally agree with your point. I would add that
storage-based replication of the whole database also suffers from the
archiving of the redo: those block changes have to be replicated too.
The same happens to controlfile updates. It all gets worse when you have
multiple controlfiles and multiple online redo log group members.
Apart from saving bandwidth, Data Guard allows you to maintain a delay in
applying archives. This feature helps to survive hardware failures AND
'human' failures, like accidentally dropping/truncating a table, 'new
application features' and so on. Several CT sites have survived serious
human errors thanks to this feature. SAN replication would have
replicated the disaster immediately, leaving the DBA with no other
option than a time-consuming TSPITR.
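
(Purely as an illustration, not part of the original discussion: such an
apply delay is usually set with the DELAY attribute, in minutes, on the
archive destination that points at the standby. A minimal sketch, assuming
a hypothetical TNS alias standby_db and a 60-minute delay:)

    -- On the primary: keep shipping redo to the standby immediately,
    -- but have the standby wait 60 minutes before applying each
    -- archived log. Service name and delay value are examples only.
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 = 'SERVICE=standby_db DELAY=60';

The archived logs still arrive at the standby right away; only the apply
is postponed, so the delay costs you nothing extra when the primary fails.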

For the SAN-based archive replication I'd suggest setting
ARCHIVE_LAG_TARGET=n. This forces a log switch every n seconds, even
when the log file isn't full yet.
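
For example (my own illustration, with an arbitrary value of 900 seconds,
i.e. a switch at least every 15 minutes):

    -- Force a log switch at least every 900 seconds, even if the
    -- current online redo log is not yet full.
    ALTER SYSTEM SET ARCHIVE_LAG_TARGET = 900;

The smaller the value, the less redo can be stuck in an unarchived online
log when the primary is lost.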

Best regards,

Carel-Jan Engel

===
If you think education is expensive, try ignorance. (Derek Bok)
===

On Sat, 2006-05-20 at 22:03 +0800, Tanel Põder wrote:

> 
> One more reason to use Data Guard instead of storage/LVM level
> replication in high-activity OLTP environments is that redo log entry
> based shipping is much more fine-grained than storage block level
> replication.
>  
> I once asked an EMC admin, and was told that the minimum block size for
> SRDF is 32k. So if you update one row and commit, you'd only need to
> ship a few hundred bytes of redo to the standby, while with SRDF you'd
> need to transfer 32 kilobytes over the fibre when the block is written
> to disk, plus you'd need to continuously transfer the redo log writes
> before the datafile blocks are sent to the remote site.
>  
> If your management definitely requires SAN-based replication, then you
> could just keep your archive logs on a replicated volume/storage and do
> frequent log switches to keep the lag small in case of a primary
> failure.
>  
> Tanel.
>  
> ----- Original Message ----- 
>         From: Carel-Jan Engel 
>         To: db.mail.1@xxxxxxxxx 
>         Cc: oracle-l@xxxxxxxxxxxxx 
>         Sent: Saturday, May 20, 2006 3:06 AM
>         Subject: Re: Data Mirroring on two data centers -- How to use
>         ASM ?
>         
>         
>         
>         Hi Madhu, 
>         
>         I'm wondering about your primary 'requirement' of mirroring
>         data across TWO data centers.
>         
>         IMHO, mirroring between data centers is a solution or, if you
>         like, a tool. Whatever it is, it isn't a requirement. 
>         
>         Requirements could be something like:
>         - After a server failure, the database should be available
>         again within 30 minutes
>         - After a server failure, no more than 5 minutes worth of
>         transactions may be lost
>         - After a database corruption, the database should be
>         available again within 6 hours
>         - After a database corruption, no more than 30 minutes of
>         transactions may be lost
>         - Restoring the database to any point in time between now
>         and now minus 6 days must be possible

