Re: creative use of storage snapshots.

  • From: David Roberts <big.dave.roberts@xxxxxxxxxxxxxx>
  • To: Oracle L <oracle-l@xxxxxxxxxxxxx>
  • Date: Wed, 22 Dec 2010 19:27:31 +0000

While I accept your primary argument that losing data on a SAN is
difficult, I would also observe that the additional, not 'out of the box',
precautions you have taken further reduce the chances of data loss.

Nevertheless, there are those who will be using non-SAN-based replication
(in the past we used SNDR to replicate from a local server to a remote
server), and there are persistent tales of data loss from SANs, the
validity and numerical significance of which are difficult to judge.
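
For anyone who hasn't come across SNDR, the setup we used was along these
lines - a rough sketch from memory, with made-up host and device names, so
treat the syntax as indicative rather than definitive:

# Enable an asynchronous remote-mirror set from the local host to the DR
# host: primary volume and its bitmap, then the remote volume and bitmap.
sndradm -e localhost /dev/rdsk/c1t1d0s0 /dev/rdsk/c1t1d0s1 \
           drhost    /dev/rdsk/c2t1d0s0 /dev/rdsk/c2t1d0s1 \
           ip async

# Show the status of the configured remote-mirror sets.
sndradm -P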

By their nature, disasters tend to be unexpected and different from those
you have tackled before.

I do admit that, for some organisations, it might not be cost-effective or
economically justifiable to implement the highest levels of data
resilience. However, knowing that DG replicates my data between two
systems at a higher level than a SAN or operating system does would give
me a greater degree of confidence. In one case (with DG) I could be
replicating from one manufacturer's hardware and operating system to
another manufacturer's hardware and operating system. And I would tend to
trust replication operating at the highest level of the stack (apart from
bespoke replication coded by local developers and not implemented
elsewhere) more than that provided by hardware vendors.
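
To illustrate the sort of thing I mean, pointing redo shipping at a
physical standby is roughly the following - the 'stby' TNS alias and
DB_UNIQUE_NAME are hypothetical, and a full DG setup obviously needs more
than this (standby redo logs, FAL settings, and so on):

sqlplus -s "/ as sysdba" <<'EOF'
-- Ship redo asynchronously to a standby reachable via the TNS alias 'stby'.
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=stby ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stby'
  SCOPE=BOTH;
ALTER SYSTEM SET log_archive_dest_state_2 = ENABLE SCOPE=BOTH;
EOF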


Always remember: 'There's software in those BIOSes.'

Regards,

David Roberts

On Wed, Dec 22, 2010 at 10:59 AM, Nuno Souto <dbvision@xxxxxxxxxxxx> wrote:

> That is an argument often invoked to support DG, but it doesn't take into
> account how replication is done in most modern SAN devices.
>
> For example: in our EMC we replicate the FRA with the last RMAN backup,
> then every 2 hours we archive redo logs to the FRA and replicate them
> using asynchronous, command-line-initiated replication.  Due to the way
> EMC does replication, any potential disparity between blocks will get
> corrected on the next send.  And it has not happened once in the nearly
> 2 years we've been doing it!
>
> Oh, BTW: it's not the disk controller that does that; it's a completely
> different mechanism from the SAN disk I/O.  I suspect whoever came up
> with that "danger" really hasn't used a late-generation SAN.
>
> I'll wear the risk of two consecutive transmission errors on FC - recall
> that it is subject to parity and ECC as well - against what it'd cost us
> to get an IP-based connection resilient and performant enough to do DG at
> our volume and performance point.  In fact, I know exactly what it'd cost
> us, and it's simply not feasible or cost-effective.
>
> What Oracle should do is make DG independent of the transport layer.
> I.e., whether I want to use Oracle's IP-based transport, or ftp, or scp,
> or a script, or navicli, or dark fibre non-IP, or carrier pigeons/smoke
> signals, should be entirely up to me.  There is really no reason why DG
> has to be IP-only.
>
> --
> Cheers
> Nuno Souto
> dbvision@xxxxxxxxxxxx
>
>
> David Roberts wrote, on my timestamp of 22/12/2010 6:17 AM:
>
>> One point that I don't see mentioned (unless I missed it): if you are
>> using some form of block-level replication as a DR solution, what happens
>> when the disaster is the disk controller writing garbage to your disk?
>>
>> If you are using DG, then depending on the type you will get varying
>> early opportunities to spot the corruption, or to recover from it -
>> opportunities that are lacking when you blindly have hardware copying
>> data blocks.
>>
>> I agree that these are fine solutions for providing development and
>> testing environments, but I would suggest caution with regard to adopting
>> these technologies for DR purposes.
>>
>
>
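
For what it's worth, the shape of the flow Nuno describes reads to me as
something like the sketch below - purely my own reading of it, with a
hypothetical wrapper script name, and the array-side replication call left
as a placeholder since that part is entirely site-specific:

#!/bin/sh
# Run every 2 hours from cron: switch and archive the current redo log,
# back up any archived logs not yet backed up (the backup lands in the FRA
# when a recovery area is configured), then trigger array replication.

rman target / <<'EOF'
SQL 'ALTER SYSTEM ARCHIVE LOG CURRENT';
BACKUP ARCHIVELOG ALL NOT BACKED UP 1 TIMES;
EOF

# Site-specific, command-line-initiated async replication of the FRA LUNs
# (e.g. via navicli); the script name here is a placeholder.
/usr/local/bin/replicate_fra_luns.sh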
