Are they giving you physical or virtual disks? You need physical, and more is
better, though 40 seems high. We have a 1.5 TB db that gradually deteriorated in
performance as more and more clients were using the SAN. We ended up moving to
6 disks of 300 GB each on SSD.
Are they using NVMe SSDs?
If they won't budge it's easy enough to change later if you're using ASM. It's
also easy enough to test beforehand with SLOB.
Just make sure you keep lots of baseline data.
________________________________
From: oracle-l-bounce@xxxxxxxxxxxxx <oracle-l-bounce@xxxxxxxxxxxxx> on behalf
of Niall Litchfield <niall.litchfield@xxxxxxxxx>
Sent: October 21, 2017 5:43 AM
To: Ram Raman; ORACLE-L
Subject: Re: SSDs and LUNs
That's solution dependent, so I can't give a general answer; it will require
conversations with the storage, O/S and virtualization teams, assuming you are
organized that way. In addition, you've got external consultants to talk to as
well. When talking to them, it's important to understand that multiple paths to
storage offer two advantages. You want to ensure that the offered solution
covers both cases.
1. If one of the paths fails due to a hardware failure, then the system can
continue - this is why physical servers typically have 2 HBAs for example.
2. Each O/S device typically can handle X concurrent requests (not the same
thing as IOPS). X is solution dependent but can be as low as 32. Adding paths
adds concurrent I/O request capacity. The term for this is Queue Depth.
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html-single/DM_Multipath/index.html#MPIO_description
is a good, simple overview of how this works in a traditional world:
essentially, the more HBAs, switch ports and controllers you have, the greater
the number of available routes to the SAN. Your O/S and SAN people should be
familiar with this.
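As a back-of-the-envelope sketch of point 2, outstanding-request capacity scales with the number of paths; the numbers below are illustrative, not taken from any particular array or HBA:

```python
# Illustrative only: with one O/S queue per path (multipath spreading requests
# across them), the host's in-flight request ceiling multiplies with path count.
def outstanding_io_capacity(num_paths: int, queue_depth_per_device: int) -> int:
    """Total concurrent I/O requests the host can keep in flight."""
    return num_paths * queue_depth_per_device

# A single path with a queue depth of 32 caps you at 32 in-flight requests;
# four paths raise that ceiling to 128.
print(outstanding_io_capacity(1, 32))  # 32
print(outstanding_io_capacity(4, 32))  # 128
```

This says nothing about what the array behind the paths can sustain, only about how many requests the host can have outstanding at once.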
You said this was a VM solution, however. There are more storage options
there, and a number of ways that path redundancy and outstanding I/O
capacity can be provided. For example, disks can be attached directly to the VM
just as they would be to a physical host; storage can be provided by the
virtualization solution (in a number of forms); and storage might be provided
over the network via iSCSI. I've added the list back in, in case anyone has a
simple overview of good configuration principles for Oracle storage in a VM
world. Again, the actual details will be technology specific, but the two
principles above should allow you and your system administrators to ask good
questions about the proposed solution.
Separately, you asked how many paths we used. Off the top of my head, the answer
is 4, but it could be 8. My advice would be to start small, but with at least 2
to provide redundancy. You can monitor actual queue depth statistics on Linux
using iostat -x; avgqu-sz is the column of interest. You'd need to compare that
to the device queue size your setup is configured for. If the two figures are
close, and your I/O is slower than you expect while the storage array says all
is fine, you've probably got an issue. That said, in most cases, you'll
probably find that 2 paths provide enough capacity as well as enough redundancy.
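A minimal sketch of that comparison (this assumes the classic sysstat column layout with an avgqu-sz column; newer sysstat versions renamed it aqu-sz, and the sample output and threshold below are illustrative):

```python
# Sketch: compare observed queue size (iostat -x avgqu-sz) against the
# configured device queue size. Column names vary by sysstat version,
# so treat this as illustrative rather than production monitoring.
def parse_avgqu_sz(iostat_output: str) -> dict:
    """Map device name -> avgqu-sz (or aqu-sz) from `iostat -x` text output."""
    result = {}
    col = None
    for line in iostat_output.splitlines():
        fields = line.split()
        if not fields:
            col = None
            continue
        if fields[0] in ("Device:", "Device"):
            # Header row: locate the queue-size column by name.
            for name in ("avgqu-sz", "aqu-sz"):
                if name in fields:
                    col = fields.index(name)
        elif col is not None:
            result[fields[0]] = float(fields[col])
    return result

# Illustrative capture, not real output from any specific system.
sample = """Device:         rrqm/s   wrqm/s     r/s     w/s   avgqu-sz   %util
sda               0.00     1.20   10.00    5.00      30.50   92.00
"""
queues = parse_avgqu_sz(sample)
configured = 32  # e.g. int(open('/sys/block/sda/queue/nr_requests').read())
for dev, q in queues.items():
    if q > 0.8 * configured:
        print(f"{dev}: avgqu-sz {q} is close to nr_requests {configured}")
```

The idea is simply Niall's check automated: an observed queue size persistently near the configured limit, with slow I/O but a healthy array, suggests the host-side queues are the bottleneck.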
On Sat, Oct 21, 2017 at 8:27 AM, Ram Raman
<veeeraman@xxxxxxxxx> wrote:
Thanks Niall.
"What you would normally do is to increase the number of paths to the storage"
How would you do that?
On Thu, Oct 19, 2017 at 2:58 PM, Niall Litchfield
<niall.litchfield@xxxxxxxxx> wrote:
What you would normally do is to increase the number of paths to the storage,
and use the mpath device as your ASM disk. You'll get an O/S queue per path
that way. It is, of course, possible to generate queueing within the array by
being over-enthusiastic.
As a point of reference, we are happy with larger LUNs (4 TB) and more paths for
our all-flash-array-based databases. If you do use LUN sizes larger than 2 TB
*and* use ACFS, make sure you are on a current version (> 12.1.0.2.5); otherwise,
once your ACFS filesystem gets more than 2 TB of data added to it, you'll lose it
all :( https://support.oracle.com/epmos/faces/DocumentDisplay?id=2065748.1
On Thu, Oct 19, 2017 at 8:09 PM, Ram Raman
<veeeraman@xxxxxxxxx> wrote:
"If there is a layer at which you have a single queue to each LUN you will have
an I/O bottleneck" is exactly what I am thinking of. Just 2 queues for TBs of
data?
Let me read up on the link Connor emailed.
Rich J: I cannot understand your question, since those terms sound new to me. I
will have to research that.
Mladen: It is going to be in-house, not in the cloud. And 1.5 TB LUNs are common
(with SSDs)? How is the IO in those installations? I understand that SSDs
outperform HDDs, but I am wondering, just wondering, whether by providing 2 or 4
LUNs we are losing the advantage that SSDs give us.
On Thu, Oct 19, 2017 at 1:39 AM, Jonathan Lewis
<jonathan@xxxxxxxxxxxxxxxxxx> wrote:
At some layer between Oracle and the silicon the various software components
will have some queues. If there is a layer at which you have a single queue to
each LUN you will have an I/O bottleneck when you've got lots of Oracle
processes trying to read from just 2 (or 4) LUNs.
I'm not an expert with stuff that far away from the Oracle software but I would
be a little surprised if you got bad performance because you were configured as
40 LUNs, while I have seen bad performance from a system where the solid state
SAN had been configured as just 2 LUNs (one for data, one for redo).
Regards
Jonathan Lewis
________________________________________
From: oracle-l-bounce@xxxxxxxxxxxxx <oracle-l-bounce@xxxxxxxxxxxxx> on behalf
of Ram Raman <veeeraman@xxxxxxxxx>
Sent: 19 October 2017 06:49:23
To: ORACLE-L
Subject: SSDs and LUNs
We are moving one of the systems to a VM. The consultants who have been hired to
do the implementation are recommending that we create just 2 or 4 'LUNs' for
the data diskgroup for the db, which is 3 TB in size and exhibits hybrid IO.
They are promising this is best, since the new disks will all be SSDs, and are
claiming that it will perform better than having 30 or 40 'LUNs'. I still have
the 'old way of thinking' when it comes to IO. Can someone confirm one way or
the other, or point to any paper? Thanks.
Ram.
--
Niall Litchfield
Oracle DBA
http://www.orawin.info