Re: OCR / VD external vs. normal redundancy using NFS.

  • From: David Robillard <david.robillard@xxxxxxxxx>
  • To: "D'Hooge Freek" <Freek.DHooge@xxxxxxxxx>
  • Date: Thu, 15 Jul 2010 10:18:34 -0400

Hello Freek,

> I think the ownership in the udev script did not work, because in your script 
> you are only creating a symlink and not the actual block device.

Indeed, that's what I first thought. But I used SYMLINK instead of
NAME because that's the way it was explained in the Red Hat knowledge
base article [1]. Also, the udev(7) man page says this about NAME:

<quote>
NAME   Only one rule can set the node name, all later rules with a
NAME key will be ignored.
</quote>

Since I have more than one iSCSI device, I figured I wasn't going to
use NAME and wrote a little script to take care of the permissions.
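For what it's worth, that little script boils down to a loop like the
sketch below. This is a simplified reconstruction, not the exact script;
the function name and variable layout are mine, and the owner/group/mode
values are just the ones my cluster uses:

```shell
#!/bin/sh
# Hypothetical reconstruction of the permission-fixing script (not the
# exact one): resolve each symlink under a directory such as /dev/iscsi
# and set owner, group and mode on the real block device it points to.
fix_asm_perms() {
    dir="$1" owner="$2" group="$3" mode="$4"
    for link in "$dir"/*; do
        [ -L "$link" ] || continue         # only touch symlinks
        dev=$(readlink -f "$link")         # resolve to the real /dev/sdX node
        chown "$owner:$group" "$dev"
        chmod "$mode" "$dev"
    done
}

# At boot, once the iSCSI devices are up, it would run something like:
# fix_asm_perms /dev/iscsi grid oinstall 0660
```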

But seeing that your own udev script seems to work, I gave it a try. I
changed my udev rules from using SYMLINK like this:

<SYMLINK_udev>
# /dev/iscsi/crs1.
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -gus %p",
RESULT=="3600144f0aa6313ac00004c3dd997000d", SYMLINK+="iscsi/crs1p%n"
</SYMLINK_udev>

To using NAME, OWNER, GROUP and MODE like so...

<NAME_udev>
# /dev/iscsi/crs1.
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -gus %p",
RESULT=="3600144f0aa6313ac00004c3dd997000d", NAME="iscsi/crs1p%n",
OWNER="grid", GROUP="oinstall", MODE="0660"
</NAME_udev>

...then disabled my iscsi.asm script and rebooted the node (with my
fingers crossed!).

But, unfortunately, that didn't work. When the cluster node I had
modified came back, udev had indeed created the block devices instead
of symbolic links. But the permissions were set to root:disk and 0640,
like this:

brw-r-----  1 root disk  8, 128 Jul 15 09:48 crs1p

This, of course, broke ASM :( So I reverted the 20-names.rules file
back to using SYMLINK and activated the iscsi.asm script. After a
reboot, I was back to normal.

Now, is it because the udev rule is broken? Do I have a syntax error?
I don't think so, because udevtest(8) didn't report any. Is it because
I'm not using multipathd? I don't think I need it, since I'm not using
two FC/iSCSI HBAs but the software iSCSI initiator with two bonded
NICs going to two different switches.
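One more thing I want to try, though I haven't tested it yet, so take
this as a guess on my part: as far as I can tell from udev(7), the
OWNER, GROUP and MODE keys apply to the device node itself even in a
rule that only adds a SYMLINK. If that's right, something like this
might give me the right permissions without either the NAME restriction
or my iscsi.asm script:

```
# /dev/iscsi/crs1 -- keep the symlink, but set permissions in the same rule.
# (Untested guess: OWNER/GROUP/MODE should apply to the underlying sd* node.)
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -gus %p",
RESULT=="3600144f0aa6313ac00004c3dd997000d", SYMLINK+="iscsi/crs1p%n",
OWNER="grid", GROUP="oinstall", MODE="0660"
```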

But what I do know is that we have yet another case of "There's more
than one way to do it" :)

Cheers,

David

[1] https://access.redhat.com/kb/docs/DOC-7319

> I have used the udev script below to manage multipath luns (named via 
> multipath.conf).
> What it does is check if the device is a multipath block device (dm-) and if 
> the name of the device starts with "asm_" (alias name giving by multipath). 
> If so a block device is created in /dev/oracle/ with the same name as given 
> by multipath. The ownership of that device is set to grid and the group to 
> dba with mode 660.
> The same is done for the partitions.
>
> I also specified that no other rules can be applied on the device I created.
>
> SUBSYSTEM!="block", GOTO="end_oracle"
> KERNEL!="dm-[0-9]*", GOTO="end_oracle"
> PROGRAM!="/sbin/mpath_wait %M %m", GOTO="end_oracle"
> ACTION=="add", RUN+="/sbin/dmsetup ls --target multipath --exec '/sbin/kpartx 
> -a -p p' -j %M -m %m"
> PROGRAM=="/sbin/dmsetup ls --target multipath --exec /bin/basename -j %M -m 
> %m", RESULT=="asm_*", NAME="oracle/%c", OWNER="grid", GROUP="dba", 
> MODE="0660", OPTIONS="last_rule"
> PROGRAM!="/bin/bash -c '/sbin/dmsetup info -c --noheadings -j %M -m %m | 
> /bin/grep -q .*:.*:.*:.*:.*:.*:.*:part[0-9]*-mpath-'", GOTO="end_oracle"
> PROGRAM=="/sbin/dmsetup ls --target linear --exec /bin/basename -j %M -m %m", 
> RESULT=="asm_*", NAME="oracle/%c", OWNER="grid", GROUP="dba", MODE="0660", 
> OPTIONS="last_rule"
> LABEL="end_oracle"
>
> Regards,
>
>
> Freek D'Hooge
> Uptime
> Oracle Database Administrator
> email: freek.dhooge@xxxxxxxxx
> tel +32(0)3 451 23 82
> http://www.uptime.be
> disclaimer: www.uptime.be/disclaimer
--
//www.freelists.org/webpage/oracle-l
