I got past this stage. We use EMC multipathing. One of the things I did that I
think helped was to set ORACLEASM_SCANORDER and ORACLEASM_SCANEXCLUDE (in
/etc/sysconfig/oracleasm), run /usr/sbin/oracleasm restart, and then run
root.sh on node 1 and node 2. (The only other difference was timing: even on my
first, failed install I waited for root.sh to complete on the first node before
running it on node 2, but this time I waited an extra 15-20 minutes before
kicking off root.sh on the second node.)

- Kumar

On Fri, Jan 28, 2011 at 2:15 PM, Kumar Madduri <ksmadduri@xxxxxxxxx> wrote:

> Hello All:
> Any idea on why this may be happening?
> root.sh ran successfully on node 1.
> When running it on node 2, it fails with this error:
>
> [root@asiadbg3dev2 grid]# /app/11.2.0/grid/root.sh
> Running Oracle 11g root script...
> The following environment variables are set as:
>     ORACLE_OWNER= oracle
>     ORACLE_HOME=  /app/11.2.0/grid
> Enter the full pathname of the local bin directory: [/usr/local/bin]:
> The contents of "dbhome" have not changed. No need to overwrite.
> The contents of "oraenv" have not changed. No need to overwrite.
> The contents of "coraenv" have not changed. No need to overwrite.
>
> Entries will be added to the /etc/oratab file as needed by
> Database Configuration Assistant when a database is created
> Finished running generic part of root script.
> Now product-specific root actions will be performed.
> Using configuration parameter file:
> /app/11.2.0/grid/crs/install/crsconfig_params
> CRS-2672: Attempting to start 'ora.cssdmonitor' on 'asiadbg3dev2'
> CRS-2676: Start of 'ora.cssdmonitor' on 'asiadbg3dev2' succeeded
> CRS-2672: Attempting to start 'ora.cssd' on 'asiadbg3dev2'
> CRS-2672: Attempting to start 'ora.diskmon' on 'asiadbg3dev2'
> CRS-2676: Start of 'ora.diskmon' on 'asiadbg3dev2' succeeded
> CRS-2676: Start of 'ora.cssd' on 'asiadbg3dev2' succeeded
>
> Mounting Disk Group DATA failed with the following message:
> *ORA-15032: not all alterations performed
> ORA-15017: diskgroup "DATA" cannot be mounted
> ORA-15003: diskgroup "DATA" already mounted in another lock name space*
>
> Configuration of ASM ... failed
> see asmca logs at /app/oracle_base/cfgtoollogs/asmca for details
> Did not succssfully configure and start ASM at
> /app/11.2.0/grid/crs/install/crsconfig_lib.pm line 6464.
> /app/11.2.0/grid/perl/bin/perl -I/app/11.2.0/grid/perl/lib
> -I/app/11.2.0/grid/crs/install
> /app/11.2.0/grid/crs/install/rootcrs.pl execution failed
>
> The DATA diskgroup is using a device that is mounted across both nodes.
> root.sh should not try to create the disk group on node 2.
>
> This gives the same output on both nodes:
> [root@asiadbg3dev2 asmca]# /usr/sbin/oracleasm listdisks
> *FSS_POC_ASM*
> [root@asiadbg3dev1 crsconfig]# /usr/sbin/oracleasm querydisk /dev/emcpowerb1
> *Device "/dev/emcpowerb1" is marked an ASM disk with the label
> "FSS_POC_ASM"*
>
> Thank you
> Kumar
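
For anyone hitting the same issue: the scan-order change described above is
typically made in /etc/sysconfig/oracleasm. The values below are illustrative
assumptions for an EMC PowerPath setup (the exact values used in this thread
were not posted), and the idea is to make ASMLib scan only the multipath
pseudo-devices, never the underlying single-path /dev/sd* devices:

```shell
# Append scan-order settings to /etc/sysconfig/oracleasm.
# "emcpower" / "sd" are illustrative values for an EMC PowerPath host;
# adjust them to match your device naming.
cat >> /etc/sysconfig/oracleasm <<'EOF'
# Scan the PowerPath pseudo-devices first ...
ORACLEASM_SCANORDER="emcpower"
# ... and skip the single-path SCSI devices entirely, so ASMLib
# never attaches to one leg of the multipath.
ORACLEASM_SCANEXCLUDE="sd"
EOF

# Restart ASMLib so it rescans disks with the new order.
/usr/sbin/oracleasm restart
```

After the restart, `/usr/sbin/oracleasm listdisks` should still show the disks,
now discovered via the emcpower devices rather than a single path.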