Re: Does it matter where the binaries are?

  • From: jungwolf <spatenau@xxxxxxxxx>
  • To: stephen booth <>, "Oracle-L (E-mail)" <Oracle-l@xxxxxxxxxxxxx>
  • Date: Thu, 10 Mar 2005 16:52:56 -0600

On Thu, 10 Mar 2005 22:08:50 +0000, stephen booth
<> wrote:
> On Thu, 10 Mar 2005 14:29:10 -0600, jungwolf <spatenau@xxxxxxxxx> wrote:
> > Stephen,
> >
> > I'm not sure why you are running a standby if everything is pointing
> > at the same filer (using NFS, right?).
> It's basically belt and braces.  Some of these systems are safety
> critical; if a system goes down at the wrong time, or if we lose the
> wrong bit of data, then someone could end up dead before we can get
> the data out of paper records.
> More to the point, the project manager likes the idea.

Definitely understand point two.  Regarding point one, it seems to me
that having a standby on the same NAS doesn't gain you much while
costing roughly double the disk and memory (and various bandwidths).
If the NAS fails, you are out of luck either way.  If a server fails,
instead of failing over to the standby you just do instance recovery
on another server in the collective.  You could either do it manually
or have a monitoring daemon do it automatically.
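The monitoring-daemon idea could be as simple as the sketch below: poll the primary, and after a few consecutive failed probes, trigger recovery on a surviving node.  This is just an illustration of the decision loop; `probe` and `recover` are hypothetical placeholders (in practice they might wrap a `tnsping`/test query and a `STARTUP` on another server), not real Oracle APIs.

```python
import time

def watch(probe, recover, max_failures=3, poll_seconds=10):
    """Poll the primary instance via probe(); after max_failures
    consecutive failed probes, call recover() once and stop.

    probe:   callable returning True if the instance answers.
    recover: callable that starts recovery on a surviving node.
    """
    streak = 0
    while True:
        # Reset the failure streak on any successful probe so that a
        # transient blip doesn't accumulate toward a false failover.
        streak = 0 if probe() else streak + 1
        if streak >= max_failures:
            recover()
            return
        time.sleep(poll_seconds)
```

Requiring several consecutive failures is the usual guard against flapping; you'd tune `max_failures` and `poll_seconds` to how long your safety-critical users can tolerate an outage.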

You do get some coverage from the standby for datafile corruption, for
a NAS volume failure (if everything is separated), or for a logical
problem (delayed redo application to protect against, say, accidental
table drop).  Obviously it depends on what kind of cost/risk tradeoffs
the project manager is willing to accept.  I guess it just seems to me
that if the PM has already budgeted for the resources to house a
standby locally, why not go whole hog and develop a full disaster
recovery site?
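For the delayed-redo-application point, Oracle's mechanism is the DELAY attribute on the archive destination (minutes to wait before managed recovery applies redo on the standby).  A hedged sketch, assuming a standby service named `stby` (the service name and the four-hour window are made up for illustration):

```
-- On the primary: hold redo application on the standby for 240 minutes,
-- leaving a window to halt recovery before an accidental DROP TABLE
-- or similar logical error reaches the standby copy.
ALTER SYSTEM SET log_archive_dest_2 = 'SERVICE=stby DELAY=240';
```

The trade-off is that a real failover then has to first apply up to four hours of backlogged redo.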

"Why not?"  Well, cost and infrastructure limitations of course.
