I don't think so - all the disks that throw errors are in ASM disk groups. I'm not
sure why the database is complaining about the disks... shouldn't the ASM layer hide
them from the database?
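(Thinking out loud: as far as I understand it, ASM only serves the extent maps - the
database instance still opens the disk devices directly for its own I/O, so the oracle
binary in the patched home needs not just the right owner but also the right group and
setuid/setgid bits. I compared ownership earlier, but I'll look again at the bits
specifically, roughly like this on both nodes - path assumed for this ODA:

    ls -l $ORACLE_HOME/bin/oracle    # expect something like -rwsr-s--x 1 oracle asmadmin
)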
On 28/04/17 00:09, Andrew Kerber wrote:
Is there a problem with the mount point itself? Has it gone read-only? You
might try the Microsoft solution and bounce node 2 - sometimes that clears it up.
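If you go that route, something along these lines as root on node 2 should do it -
grid home path assumed, and check what is registered first with crsctl stat res -t:

    $GRID_HOME/bin/crsctl stop crs
    $GRID_HOME/bin/crsctl start crs

or just reboot the box.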
On Thu, Apr 27, 2017 at 9:06 AM, De DBA <dedba@xxxxxxxxxx> wrote:
I'm applying a PSU to a clustered home for the first time in my life and of
course hit a snag... This is a 22.214.171.124 database on a 2-node ASM (126.96.36.199)
cluster on an ODA, and I am applying the 188.8.131.52 October 2016 PSU. All
databases are single-node; only ASM is clustered. I am patching only the
database homes, not ASM.
I ran opatch on node 1 and it propagated automatically to node 2. Before
the patch, both nodes had databases running without errors.
Opatch reported no errors on either node.
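(Nothing exotic in the apply itself - from memory it was roughly this, run as the
oracle user on node 1 from inside the unzipped PSU directory:

    $ORACLE_HOME/OPatch/opatch apply

and OPatch handled the propagation to node 2.)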
After the patch, I can restart the databases on node 1 without problems and run the
post-install actions, but on node 2 every start attempt ends in ORA-00205 "error in
identifying control file". The alert log is full of errors like this:
Thu Apr 27 23:42:51 2017
ALTER DATABASE MOUNT
NOTE: Loaded library: System
ORA-15025: could not open disk "/dev/mapper/HDD_E0_S00_372178872p1"
ORA-27041: unable to open file
Linux-x86_64 Error: 13: Permission denied
Additional information: 9
I checked the permissions/ownership of the oracle executable, but they are the
same on both nodes. The file permissions on the disk devices are also the same,
and ASM never went down during or after patching. I'm stumped...
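For reference, this is roughly what I compared on both nodes - device name as in the
alert log above:

    ls -l $ORACLE_HOME/bin/oracle                # owner, group, setuid/setgid bits
    ls -lL /dev/mapper/HDD_E0_S00_372178872p1    # device ownership and mode
    id oracle                                    # group membership of the database owner

If memory serves there is also a setasmgidwrap script in the grid home that resets the
group on a database-home oracle binary, but I haven't touched that yet.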
Andrew W. Kerber
'If at first you don't succeed, don't take up skydiving.'