RE: ORION num_disks

  • From: "Crisler, Jon" <Jon.Crisler@xxxxxxx>
  • To: <exriscer@xxxxxxxxx>, "Mark W. Farnham" <mwf@xxxxxxxx>
  • Date: Fri, 12 Sep 2008 12:29:28 -0400

It's logical disks from an OS view. If Orion lets you point to the same
device multiple times, you could do that, but it would be hard to
predict whether this would be helpful or harmful for performance. I
just wish Orion supported filesystems rather than raw devices - not a
big deal, but I keep running into situations where the filesystems are
already built and cannot be removed. That said, I have not touched
Orion in quite a while.
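
If memory serves, Orion picks up its targets from a plain-text
<testname>.lun file, one raw device per line, so "pointing to the same
device multiple times" would just mean repeating a line (device paths
below are made up for illustration):

    $ cat mytest.lun
    /dev/rdsk/c2t0d1s6
    /dev/rdsk/c2t0d2s6
    /dev/rdsk/c2t0d1s6   <-- same device listed a second time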

 

________________________________

From: oracle-l-bounce@xxxxxxxxxxxxx
[mailto:oracle-l-bounce@xxxxxxxxxxxxx] On Behalf Of LS Cheng
Sent: Friday, September 12, 2008 11:42 AM
To: Mark W. Farnham
Cc: Oracle-L Freelists
Subject: Re: ORION num_disks

 

Hi

Four LUNs were created for ASM usage. HBA/SCSI queue depth settings
(such as lun-queue-depth in the HBA configuration or sd_max_throttle in
Solaris) can limit the number of concurrent requests to a single disk
from the OS view, so we created 4 LUNs to avoid that situation. The
other reason is to avoid huge LUNs for ASM, so that when more space is
required you don't have to add another huge LUN.
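
For example, on Solaris that throttle is set in /etc/system (the value
below is only illustrative; the right number depends on the array
vendor's recommendation):

    set sd:sd_max_throttle=20

With one huge LUN every outstanding request queues behind that single
per-LUN limit; spreading the same spindles across 4 LUNs gives you
roughly 4x the queue slots.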

I don't understand very well what the Orion user guide is suggesting;
it simply states:

num_disks ==> number of physical disks

Since we are in a virtualized environment I am not sure whether it
really means the number of physical disks from the SAN view or logical
disks from the OS view, hence my post :-)

The fact is I have tested with both 8 and 4 disks for this argument but
see no difference in the results, so I am not sure how this argument is
used internally by Orion and wonder if anyone knows whether it makes
any difference at all.
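
For reference, an invocation looks something like this, with mytest.lun
listing the 4 LUNs (names and paths illustrative):

    $ ./orion -run normal -testname mytest -num_disks 8

My understanding is that num_disks only scales the range of
outstanding-I/O load levels Orion steps through, while the actual
targets always come from the .lun file; if that is right, it would
explain why 8 vs 4 changes little at any given load point.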

TIA

--
LSC






On Fri, Sep 12, 2008 at 12:40 PM, Mark W. Farnham <mwf@xxxxxxxx> wrote:

I'm missing why you think the number is not 1.

 

I'm curious why 4 LUNs were created - was it merely to keep the total
size down for a single LUN?  (Some storage mechanisms parcel out
various maximum resource amounts per LUN, so there are reasons beyond
the size of a single LUN getting scary to carve up a stripe set into
multiple LUNs. I'm not aware of per-LUN limits like that on Clariions.
If there are some, I'm always ready to be enlightened.)

 

These disks have been linked together as a single interdependent unit of
i/o. Think about it this way: Can you predict whether any two
simultaneous i/o actions against this set of 4 LUNs will collide on a
physical disk?
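
(Rough arithmetic, assuming i/o lands uniformly across the stripe set:
with 8 spindles and reads balanced across both sides of each mirror,
two simultaneous reads hit the same spindle about 1 time in 8; two
simultaneous writes, each touching a whole mirrored pair out of the 4
pairs, collide about 1 time in 4.)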

 

You have a single stripe set. (That is, if by your description you mean
the disks are pairwise plexed, striped across the four resulting
resilient logical drives, and then carved into 4 LUNs. From your
description it would also be possible to consider a pair of
two-resilient-drive raid groups. Then you would have two independent
units of i/o within the Clariion box, and possibly beyond, depending on
how the i/o controllers are set up and whether sharing capacity on them
can be nearly ignored [even with 90% headroom on total throughput
you'll get some collisions on request initiation].)

 

Regards,

 

mwf

 

________________________________

From: oracle-l-bounce@xxxxxxxxxxxxx
[mailto:oracle-l-bounce@xxxxxxxxxxxxx] On Behalf Of LS Cheng
Sent: Friday, September 12, 2008 5:28 AM
To: Oracle-L Freelists
Subject: ORION num_disks

 

Hi

 

Does anyone know what value should be used for the num_disks argument
when running I/O stress tests with ORION?

 

For example, consider a RAID Group within an EMC Clariion CX-700 SAN,
formed by 8 physical disks and striped using RAID 1+0; from this RAID
Group, 4 LUNs are created and presented to the server.

 

What is the value for num_disks, 8 or 4? 

 

TIA

 

--

LSC

 

 
