Assuming you mean to create the quorum file larger, I already tried this. My cm.log file in fact didn't have any error messages pertaining to opening or writing to the quorum file, which I think are the symptoms that enlarging the file is meant to fix. Thanks anyway!

----- Original Message -----
From: Senthil Ramanujam <datum.data@xxxxxxxxx>
To: jclarke@xxxxxxxxxxxxxxx
Sent: Wed, 15 Sep 2004 16:39:46 -0400
Subject: Re: oracm dying, ocfs not mounting from multiple systems

> Hi John,
>
> I recently ran into the same issue. After running the dd command
> (suggested by Metalink), I don't see this issue again. Give it a shot
> and see if that helps.
>
> thanks.
>
> senthil
>
>
> On Wed, 15 Sep 2004 16:17:09 -0400, John Clarke <jclarke@xxxxxxxxxxxxxxx>
> wrote:
> > I'm trying to build a 9.2.0.4 RAC on an OCFS filesystem on RHAS 2.1,
> > both hosts running as VMware 4.5 virtual machines on top of a Windows
> > XP host, and oracm dies on the second node I start it on. The cm.log
> > file is quite inconclusive, as it only gives me one thing that looks
> > at all like an error:
> >
> > InitClusterDB(): getservbyname on CMSrvr failed: -22
> >
> > Lots of posts on Metalink say things like "create your quorum file
> > larger" and so forth, and none of these suggestions work. The last
> > suggestion I got was from Rich Jesse, which was to build my quorum
> > file (and other files) on raw devices instead of the OCFS filesystem.
> > I really would like to get OCFS working, though, and I've got a
> > feeling my oracm problem is related to an issue with OCFS, but would
> > like to confirm.
> >
> > My question is this ...
> >
> > If I format and mount an OCFS filesystem from two nodes and run
> > /sbin/mounted.ocfs against the device, should I see both nodes in the
> > "nodes" line? My guess is yes, but for some reason I only see the
> > local node, regardless of where I run it from.
> > After mounting the OCFS filesystem, I can create a quorum and srvm
> > file from one node, but am not able to "see it" from the other node
> > without unmounting and remounting.
> >
> > The darndest thing about this whole situation is that I had a 9.2.0.4
> > cluster working fine in this OCFS configuration for about a year,
> > until a month ago when oracm started dying. I've since scratched and
> > rebuilt both VMs, to no avail.
> >
> > Thoughts?

--
//www.freelists.org/webpage/oracle-l
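P.S. for anyone finding this thread in the archives: the dd approach Senthil refers to is, as far as I understand it, just pre-allocating the quorum file with zeros before oracm starts. A minimal sketch of that, with a stand-in path and a 20 MB size (neither the path nor the size comes from this thread, so check the Metalink note for your own values):

```shell
# Stand-in location for the quorum file; on a real cluster this would
# live on the shared OCFS mount (e.g. somewhere under your OCFS mount
# point), not in a temp directory.
QUORUM="$(mktemp -d)/quorum.dbf"

# Zero-fill the file; the final size is bs * count (here 20 * 1 MB).
# The idea from the Metalink suggestion is simply that the quorum file
# exists and is large enough before oracm is started on any node.
dd if=/dev/zero of="$QUORUM" bs=1048576 count=20 2>/dev/null

# Confirm the allocated size in bytes (20 * 1048576 = 20971520).
stat -c %s "$QUORUM"
```

Whether the file is created this way or via raw devices, both nodes need to agree on the same path, which is why the "other node can't see the file until remount" symptom above matters.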