RE: future of ocfs2

  • From: "Mark W. Farnham" <mwf@xxxxxxxx>
  • To: <jeremy.schneider@xxxxxxxxxxxxxx>, <ocfs2-users@xxxxxxxxxxxxxx>, <oracle-l@xxxxxxxxxxxxx>
  • Date: Fri, 6 Feb 2009 09:06:28 -0500

I don't know the future of ocfs2.

But a very useful way to avoid the need for a clustered file system (though
it still requires shareable disks) is for each node to mount its own
/${ORACLE_BASE} and /${ARCH_TOP} read/write, with read-only mounts of every
other node's /${ORACLE_BASE}<node_nickname> and /${ARCH_TOP}<node_nickname>.

If $ORACLE_BASE is oracle, $ARCH_TOP is oarch, and the node_nicknames are
the thread numbers, a simple three-node cluster looks like this:

/oracle
/oracle2 (r/o)
/oracle3 (r/o)
/oarch
/oarch2 (r/o)
/oarch3 (r/o)

from the node hosting thread 1 and I expect you can fill in the rotation.
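
To make the layout concrete, here is a minimal /etc/fstab sketch for the node
hosting thread 1. The device names (/dev/mapper/oraclevol1 and friends) and
the filesystem type are hypothetical placeholders for whatever shared volumes
you actually use, and it assumes your platform permits the read-only mounts
while the volumes are mounted read/write elsewhere (see the next paragraph):

  # this node's own volumes, mounted read/write (hypothetical device names)
  /dev/mapper/oraclevol1  /oracle   ext3  defaults  0 0
  /dev/mapper/oarchvol1   /oarch    ext3  defaults  0 0
  # the other nodes' volumes, mounted read-only
  /dev/mapper/oraclevol2  /oracle2  ext3  ro        0 0
  /dev/mapper/oraclevol3  /oracle3  ext3  ro        0 0
  /dev/mapper/oarchvol2   /oarch2   ext3  ro        0 0
  /dev/mapper/oarchvol3   /oarch3   ext3  ro        0 0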

For a given UNIX, check whether it will allow you to mount a volume read-only
when it is already mounted read/write somewhere else, and whether you have to
umount/mount to see the latest changed directory entries immediately. If it
won't do the former, you only mount the volume when the node that owns it is
down. If it won't do the latter, it usually will show you the latest changes
after a umount/mount cycle. OEL4 seems well behaved, and though I can't
promise the inode updates are instantaneous, they seem quick enough.

This lets you see what is going on in the admin/diag directories and get at
the archived redo logs of every node from any node. In particular it keeps
working when some other node is down, including letting you check for loose
ends about why a node crashed.

mwf

-----Original Message-----
From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx]
On Behalf Of Jeremy Schneider
Sent: Thursday, February 05, 2009 7:29 PM
To: ocfs2-users@xxxxxxxxxxxxxx; oracle-l@xxxxxxxxxxxxx
Subject: future of ocfs2

At the company where I'm working right now, I'm part of an architecture 
effort to come up with our standard design for RAC on Linux across the 
firm. There will be dozens or possibly hundreds of deployments globally 
using the design we settle on.

We're internally debating whether or not we should include OCFS2 in this 
design right now, and I'm curious if anyone has arguments one way or the 
other to share. Our standard design on Solaris does utilize a cluster 
filesystem and we would welcome a similar design, but there are some 
concerns about the readiness, stability and future of OCFS2.

OCFS2 is being considered for these four use cases:
- database binaries (vs local files or NFS)
- diag top (11g) or admin tree (10g) (vs local files or NFS)
- archived logs
- backups

Other files will be stored in ASM.

I have seen mention in blogs such as 
http://bigdaveroberts.wordpress.com/ of something called ASMFS in 11gR2 
and I'm wondering - will this feature (if included) have any impact on 
Oracle's commitment to OCFS2 development? Could Oracle conceivably 
develop a whole new cluster filesystem and put their full weight behind 
it as they did for ASM storage, leaving OCFS2 as a lower priority for 
new features and improvements? Has Oracle demonstrated significant 
commitment to OCFS2 development and support in the past, and is this a 
mature enough technology for wide-scale deployment?

Just looking for opinions. :)

Thanks,
Jeremy

-- 
Jeremy Schneider
Chicago, IL
http://www.ardentperf.com
--
//www.freelists.org/webpage/oracle-l