Re: Shared APPL_TOP filesystem question

  • From: Chuck Edwards <chuck@xxxxxxxxxxxxx>
  • To: ora-apps-dba@xxxxxxxxxxxxx
  • Date: Fri, 6 Nov 2009 11:04:58 -0800

Agreed - with some serious scripting, use of VIPs, and general clustering experience you can go quite a ways with a commodity NFS setup; the more time and expertise you put into it, the higher the "brain cost."


To be sure, commercial NFS is going to add a lot of features (OpenFiler may indeed replicate some of these - excellent point), but unless there is a larger need for HA NFS in the enterprise, I might rather point my $$ at the skilled OS person than at the commercial filer; I can get a lot of good things out of that person beyond shared application file system / NFS automation. Situations and budgets will of course dictate that decision.

The thing I really like about this setup (regardless of the failover sophistication) is that it employs physically separate servers for failover. NetApp, EMC, etc. all have internal redundancy on many models - dual heads, multipathing, etc. - but nothing beats having two of something for redundancy. It might not always fall under the rubric of high availability, but it's sure nice to know that if one unit happens to catch fire or suffer some other unspeakable tragedy, the other is racked safely a couple of cabinets away.

Chuck

On Nov 6, 2009, at 10:43 AM, Luis Freitas wrote:

Chuck,

  I don't agree that this setup can't deliver automated NFS failover.

It should be possible to set these boxes up with a shared device - DRBD, OCFS2, whatever - and use Oracle CRS to manage the NFS server and virtual IP.
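As a sketch of what "use Oracle CRS to manage the NFS server and virtual IP" could look like: CRS drives third-party services through an action script that implements start/stop/check. Everything below (VIP, interface, export path) is an illustrative assumption, not a tested configuration.

```shell
# Hypothetical CRS "application" action script for an HA NFS service.
# CRS invokes it with one argument: start | stop | check.  The shared
# device (DRBD or OCFS2) is assumed to be mounted already.
VIP=192.168.1.50              # illustrative floating service IP
EXPORT_FS=/u01/shared_appl    # illustrative shared export path

case "${1:-}" in
  start)
    ip addr add ${VIP}/24 dev eth0          # float the service IP here
    exportfs -o rw,fsid=100 "*:${EXPORT_FS}"
    service nfs start
    ;;
  stop)
    exportfs -u "*:${EXPORT_FS}"
    service nfs stop
    ip addr del ${VIP}/24 dev eth0
    ;;
  check)
    # healthy only while nfsd is still registered with the portmapper
    rpcinfo -u localhost nfs >/dev/null 2>&1 || exit 1
    ;;
esac
```

CRS would then restart or relocate the resource (taking the VIP with it) whenever check returns non-zero on the active node.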

The Linux NFS server already has some provisions for a high-availability deployment. Some parts are still missing, like proper NFS lock replay, but it can be made to work for an application file system, where those bits are not critical. I'm not sure how this goes with other vendors, as they would expect you to purchase their clusterware.
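One of those provisions is the fsid= export option: pinning the filesystem ID makes both nodes hand out identical NFS file handles, so clients survive a failover without stale-handle errors. A sketch of /etc/exports (path and network are illustrative):

```
# /etc/exports -- identical on both NFS server nodes; the fixed fsid
# keeps file handles valid when clients reconnect to the standby
/u01/shared_appl  192.168.1.0/24(rw,sync,no_subtree_check,fsid=100)
```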

I am playing with a configuration like this here, with the added complication of mixing Solaris clients with the Linux NFS server, over an OCFS2 filesystem. But we will purchase a NAS device next year, so we may not need to go into production with this setup. To ensure high availability, we mounted the filesystem using one of the RAC VIPs already set up on the server (the NFS server is also a RAC database server) and added some configuration parameters on the NFS server to allow for transparent reconnection on the client side.
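The client-side half of that transparent reconnection is mostly mount options: a hard, TCP NFSv3 mount through the VIP blocks and retries until the address lands on the surviving node. An illustrative Solaris mount (VIP hostname and paths are assumptions):

```
# Solaris client: mount through the floating VIP, never a physical host
mount -F nfs -o vers=3,proto=tcp,hard,timeo=600 nfs-vip:/u01/shared_appl /appl
```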

But a NetApp filer provides a lot of functionality, like automated snapshots, replication to remote or DR sites, etc.

Also, we should not consider only the hardware cost, but the overall cost. Implementing a solution like this with Linux or Unix will need work from a specialized O/S consultant, plus post-implementation support, where the filer could have better management tools. Depending on the size of the deployment, the support costs offset the initial implementation cost.

Btw, you could install OpenFiler on those two boxes. I don't know if the high-availability bits are in the community edition.

Best Regards,
Luis Freitas

--- On Fri, 11/6/09, Chuck Edwards <chuck@xxxxxxxxxxxxx> wrote:

From: Chuck Edwards <chuck@xxxxxxxxxxxxx>
Subject: Re: Shared APPL_TOP filesystem question
To: ora-apps-dba@xxxxxxxxxxxxx
Date: Friday, November 6, 2009, 4:03 PM
From a performance-requirement perspective, commercial NFS is overkill for shared application file systems. Standard Linux/UNIX NFS will do just fine and is quite cheap. For failover, you can simply set up two servers and have them rsync changes from one to the other. If you experience a failure, simply mount the file system from the failover server on your application servers and off you go. It's not instantaneous, to be sure, but it provides a couple of advantages:

1.  Cost is very low.  A pair of ~$900 white boxes with mirrored disk and 2-4GB of memory will do nicely.
2.  Since you have two separate servers, you can place them in physically separate cabinets, an advantage that even expensive, internally-redundant NFS filers don't enjoy.

Again, if instant, auto-magical failover for shared application file systems is an absolute requirement, this solution cannot deliver; performance, however, would be just fine. (If you take minimal steps to pre-configure mount points and /etc/fstab, then script up the mounting and dismounting, failover can be quite fast, though.)
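Those minimal steps might look like the following sketch, where the server names, paths, and the helper's name are all hypothetical: pre-stage fstab entries for both servers, then let a tiny function do the swap.

```shell
# Hypothetical failover helper.  Assumes /etc/fstab on every apps tier
# already lists noauto entries for BOTH NFS servers, e.g.:
#   nfs1:/u01/appl  /appl  nfs  noauto,hard,tcp  0 0
#   nfs2:/u01/appl  /appl  nfs  noauto,hard,tcp  0 0
failover_appl_top() {
  umount -f /appl 2>/dev/null   # drop the dead primary, ignore errors
  mount nfs2:/u01/appl /appl    # remount from the standby and off you go
}
```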

I suspect that with some of the new ASM capabilities in the 11.2 database release, we're going to see certification for shared application file systems in ASM, which will change this discussion substantially.

Chuck Edwards
chuck@xxxxxxxxxxxxx


On Nov 6, 2009, at 9:48 AM, Luis Freitas wrote:

Hmmm,

   NFS third-party vendors are $$$, but operating system clusterware to manage NFS high availability can also be $$$.

   Sun's cluster suite, HP's clusterware, or Veritas is not exactly cheap. I don't know how the clusterware licensing goes for AIX. For Red Hat, RHCS used to be licensed separately, but on the latest release they included it in the operating system license.

   Of course you can use Oracle CRS, but if the NFS servers don't have any other Oracle products, you need to pay for a license too, and it won't be integrated with the O/S NFS server, so you can't expect any support from your OS vendor for the failover procedures that need to be implemented in CRS.

Best Regards,
Luis Freitas
