RE: RACnode communication problem?

Thanks. I had verified ping worked, but I must have done it as root.
Passwordless ssh was fine, per my example, but oracle didn't have perms to
ping. You'd think they could just put that message in there.
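A quick sketch of the check implied here: cluvfy runs its connectivity probes as the install owner (oracle), and on RHEL 5 /bin/ping needs the setuid-root bit for non-root users, so a missing bit makes PRVF-7616 fire even though root can ping fine. The messages below are illustrative, not cluvfy output:

```shell
# Check whether non-root users (e.g. oracle) can use ping.
# On RHEL 5, /bin/ping relies on being setuid root.
if [ -u /bin/ping ]; then
    echo "ping is setuid root - non-root users can ping"
else
    echo "ping is NOT setuid - restore with: chmod u+s /bin/ping (as root)"
fi
```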
From: Allan Nelson [mailto:anelson77388@xxxxxxxxx]
Sent: Wednesday, September 28, 2011 7:28 PM
To: Walker, Jed S
Cc: oracle-l@xxxxxxxxxxxxx
Subject: Re: RACnode communication problem?

When you set up ssh, did you set it up to be passwordless for both host1 and
host1.domain.com? If you don't set up both, I have seen cluvfy fail the way
you describe.
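The suggestion above can be verified with a quick loop, testing both the short and fully qualified names non-interactively (BatchMode fails instead of prompting). The hostnames here are the poster's; the .domain.com suffix is a placeholder:

```shell
# Verify passwordless ssh works for both the short and the fully
# qualified hostname; repeat from each node in both directions.
for host in flux-rac-node-wcdp-02 flux-rac-node-wcdp-02.domain.com; do
    ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" true \
        && echo "$host: passwordless ssh OK" \
        || echo "$host: ssh prompted or failed"
done
```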

Allan
On Wed, Sep 28, 2011 at 10:51 AM, Walker, Jed S 
<Jed_Walker@xxxxxxxxxxxxxxxxx> wrote:
Hi,
I'm trying to install RAC 11.2.0 on Linux x86-64, Red Hat 5.3, Oracle 11.2.0.3.0
(FYI - I also tried 11.2.0.2.0 and got the same result).

Has anyone seen this before, and have a solution? runcluvfy.sh and runInstaller
complain that the nodes can't talk to each other, but I can ssh (passwordless)
between them, and oracle has even copied files from node1 to the other nodes in
/tmp. I believe there is nothing wrong, but I need to figure out why Oracle
suddenly thinks something is wrong.

BTW, yesterday it wasn't complaining at all, but the servers were rebooted
last night. Everything works fine when I do things manually, but Oracle doesn't
seem to think it does. I'm hoping I'm either doing something "RAC newbie" or it
is just a weird Oracle thing.

Here is what runcluvfy.sh says (runInstaller reports the same):
(note: yes, I changed the addresses on the network output below so I don't get in trouble)

[oracle@flux-rac-node-wcdp-01 grid]$ ./runcluvfy.sh stage -pre crsinst -n 
flux-rac-node-wcdp-01,flux-rac-node-wcdp-02

Performing pre-checks for cluster services setup

Checking node reachability...
Node reachability check passed from node "flux-rac-node-wcdp-01"


Checking user equivalence...
User equivalence check passed for user "oracle"

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful


WARNING:
Make sure IP address "bond0 : 99.99.230.195 [99.99.230.128] " is up and is a 
valid IP address on node "flux-rac-node-wcdp-02"

WARNING:
Make sure IP address "bond0 : 99.99.230.194 [99.99.230.128] " is up and is a 
valid IP address on node "flux-rac-node-wcdp-01"

ERROR:
PRVF-7616 : Node connectivity failed for subnet "99.99.230.128" between 
"flux-rac-node-wcdp-02 - bond0 : 99.99.230.195" and "flux-rac-node-wcdp-01 - 
bond0 : 99.99.230.194"
Checking multicast communication...

Checking subnet "99.99.230.128" for multicast communication with multicast 
group "230.0.1.0"...

I've checked ifconfig and everything looks fine, and here you can see I can ssh
between the nodes (and yes, I've tried it both ways). Also, runcluvfy.sh
apparently is able to connect, because when I run it, node2 gets Oracle files
created in /tmp.

[oracle@flux-rac-node-wcdp-02 ~]$ ssh flux-rac-node-wcdp-02 ls -l /tmp
WARNING:

 This system is solely for  ...
total 4
drwxr-xr-x 3 oracle dba 4096 Sep 28 15:36 CVU_11.2.0.3.0_oracle


--
http://www.freelists.org/webpage/oracle-l