Weird - that does it: changing the group on the oracle executable lets the DB start.
The weird part is that the DBs on node 1 started regardless, even though the OS
group on the executable was the same as on node 2, where it wouldn't start...
I'm now even more perplexed than before...
on the broken node:
-rwsr-s--x 1 oracle oinstall 229M Apr 27 23:40
-rwsr-s--x 1 grid oinstall 279M Mar 10 12:15 /u01/app/188.8.131.52/grid/bin/oracle*
on the GOOD node:
-rwsr-s--x 1 oracle oinstall 229M Apr 27 23:39
-rwsr-s--x 1 grid oinstall 279M Mar 10 17:19 /u01/app/184.108.40.206/grid/bin/oracle*
== the same! Yet the databases ran on one node and not on the other...
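For anyone comparing their own nodes: the fix above boils down to getting the group and the setuid/setgid bits right on the DB home's oracle binary. A minimal sketch, with the caveat that the group name (asmadmin), the environment variables, and the setasmgidwrap wrapper location are assumptions about a typical Grid/ODA setup and need checking against your install; the live commands are left commented out, followed by a small self-contained demo of what mode 6751 looks like:

```shell
# Real-world fix (commented out -- group name and paths are assumptions,
# adjust to your environment before running):
#   chgrp asmadmin $ORACLE_HOME/bin/oracle
#   chmod 6751 $ORACLE_HOME/bin/oracle
# The Grid home also ships a wrapper intended for exactly this:
#   $GRID_HOME/bin/setasmgidwrap o=$ORACLE_HOME/bin/oracle

# Self-contained demo: what mode 6751 (setuid+setgid) looks like
# in a long listing, using a throwaway temp file:
f=$(mktemp)
chmod 6751 "$f"
stat -c '%a %A' "$f"   # prints: 6751 -rwsr-s--x
rm -f "$f"
```

The "s" in both the owner- and group-execute positions is the part worth eyeballing when comparing nodes.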
On 28/04/17 00:18, Ronan Merrick wrote:
It might be a similar issue:
On Thu, Apr 27, 2017 at 4:14 PM, De DBA <dedba@xxxxxxxxxx
I don't think so - all the disks that error out are in ASM disk groups. I'm not
sure why the database is complaining about the disks... shouldn't the ASM layer
hide them from the database?
On 28/04/17 00:09, Andrew Kerber wrote:
Is there a problem with the mount point itself? Has it gone read-only?
You might try the Microsoft solution and bounce node 2; sometimes that clears
out any issues.
On Thu, Apr 27, 2017 at 9:06 AM, De DBA <dedba@xxxxxxxxxx
I'm applying a PSU to a clustered home for the first time in my life and
of course I hit a snag... This is a 220.127.116.11 database on a 2-node ASM (18.104.22.168 )
cluster on an ODA and I am applying the 22.214.171.124 October 2016 PSU. All
databases are single-node, just ASM is clustered. I am patching only the
database homes, not ASM.
I ran opatch on node 1 and it propagated automatically to node 2.
Before the patch, both nodes had databases running without errors.
Opatch reported no errors on either node.
After the patch, I can restart the databases on node 1 without problems and run
the post-install actions, but on node 2 start attempts end in ORA-205 "Error
identifying control file". In the alert log are masses of errors like this:
Thu Apr 27 23:42:51 2017
ALTER DATABASE MOUNT
NOTE: Loaded library: System
ORA-15025: could not open disk "/dev/mapper/HDD_E0_S00_372178872p1"
ORA-27041: unable to open file
Linux-x86_64 Error: 13: Permission denied
Additional information: 9
I checked the permissions and ownership of the oracle executable, but they
are the same on both nodes. The file permissions on the disk devices are also
the same, and ASM never went down during or after patching. I'm stumped...
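(Follow-up for the archives: the node-to-node comparison reduces to checking that the oracle binary's mode string still carries both s bits. A tiny sketch; the function name and the sample mode strings are illustrative, not taken from the actual boxes:)

```shell
# Sketch: verify that an ls -l mode string carries both setuid and setgid
# ("s" in the owner- and group-execute positions), as the oracle binary
# must for direct access to ASM disks. has_sbits is our own helper.
has_sbits() {
  case "$1" in
    ???s??s*) echo yes ;;
    *)        echo no  ;;
  esac
}
has_sbits '-rwsr-s--x'   # yes  (healthy oracle binary)
has_sbits '-rwxr-x--x'   # no   (s bits stripped, e.g. after a bad relink)
```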
-- Andrew W. Kerber
'If at first you don't succeed, don't take up skydiving.'