Weird - that does it: changing the group on the oracle executable lets the DB
start..
The weird part is that the DBs on node 1 started regardless, even though the OS
group on the executable was the same as on node 2, where they wouldn't start..
I'm now even more perplexed than before...
on the broken node:
-rwsr-s--x 1 oracle oinstall 229M Apr 27 23:40 /u01/app/oracle/product/11.2.0.4/dbhome_1/bin/oracle*
-rwsr-s--x 1 grid oinstall 279M Mar 10 12:15 /u01/app/12.1.0.2/grid/bin/oracle*
on the GOOD node:
-rwsr-s--x 1 oracle oinstall 229M Apr 27 23:39 /u01/app/oracle/product/11.2.0.4/dbhome_1/bin/oracle*
-rwsr-s--x 1 grid oinstall 279M Mar 10 17:19 /u01/app/12.1.0.2/grid/bin/oracle*
== the same! Yet the databases ran on one node and not on the other...
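For anyone who wants to repeat the comparison, it boils down to something like this (nothing beyond stat and id; the paths are the ones from the listings above):

# owner, group and mode (including the setuid/setgid bits) of both oracle binaries
stat -c '%n  %U:%G  %a' \
    /u01/app/oracle/product/11.2.0.4/dbhome_1/bin/oracle \
    /u01/app/12.1.0.2/grid/bin/oracle
# group memberships that the setgid bit can hand to the database processes
id oracle
id grid

The setgid bit is what lets the database processes open the ASM disk devices with the binary's group, so a difference there - or in that group's membership - is the usual suspect when one node works and the other doesn't.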
Thanks,
Tony
On 28/04/17 00:18, Ronan Merrick wrote:
It might be a similar issue:
https://asanga-pradeep.blogspot.co.at/2012/06/ora-27303-additional-information.html
On Thu, Apr 27, 2017 at 4:14 PM, De DBA <dedba@xxxxxxxxxx> wrote:
I don't think so - all of the disks that throw errors are in ASM disk groups. I'm
not sure why the database is complaining about the disks.. shouldn't the ASM layer
hide them from the database?
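Although, thinking about it, ASM probably can't hide them: it only serves the extent maps, and the database processes open the /dev/mapper devices themselves for I/O, with the group they pick up from the setgid bit on the oracle binary - which would explain why the OS error lands in the database alert log. A rough way to see which devices a running instance holds open (plain /proc inspection; ora_dbw0 is just an example background process to match, and the devices may show up as /dev/dm-N rather than /dev/mapper names):

# open file descriptors of one database writer process on the working node
ls -l /proc/$(pgrep -f ora_dbw0 | head -1)/fd | grep '/dev/'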
On 28/04/17 00:09, Andrew Kerber wrote:
Is there a problem with the mount point itself? Has it gone read-only?
You might try the Microsoft solution and bounce node 2; sometimes that clears
out any issues.
On Thu, Apr 27, 2017 at 9:06 AM, De DBA <dedba@xxxxxxxxxx> wrote:
Hi,
I'm applying a PSU to a clustered home for the first time in my life and
of course hit a snag.. This is an 11.2.0.4 database on a 2-node ASM (12.1.0.2)
cluster on an ODA, and I am applying the 11.2.0.4 October 2016 PSU. All
databases are single-node; only ASM is clustered. I am patching only the
database homes, not ASM.
I ran opatch on node 1 and it propagated automatically to node 2.
Before the patch, both nodes had databases running without errors.
Opatch reported no errors on either node.
After the patch, I can restart the databases on node 1 without problems and run
the post-install actions, but on node 2 start attempts end in ORA-205 "Error
identifying control file". In the alert log are masses of errors like this:
…
Thu Apr 27 23:42:51 2017
ALTER DATABASE MOUNT
NOTE: Loaded library: System
ORA-15025: could not open disk "/dev/mapper/HDD_E0_S00_372178872p1"
ORA-27041: unable to open file
Linux-x86_64 Error: 13: Permission denied
Additional information: 9
…
I checked the permissions and ownership of the oracle executable, but those
are the same on both nodes. The file permissions on the disk devices are also
the same, and ASM has never gone down during or after patching. I'm stumped...
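For reference, the sort of checks I mean (plain ls only; the device name is the one from the alert log above, and the home path is the same one quoted elsewhere in the thread):

# the device node the instance is complaining about
ls -l /dev/mapper/HDD_E0_S00_372178872p1
# ownership, group and setuid/setgid bits on the oracle binary in the patched home
ls -l /u01/app/oracle/product/11.2.0.4/dbhome_1/bin/oracle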
Cheers,
Tony
-- Andrew W. Kerber
'If at first you don't succeed, don't take up skydiving.'