RE: raid 5 disaster

  • From: "Mark W. Farnham" <mwf@xxxxxxxx>
  • To: <oracle-l@xxxxxxxxxxxxx>
  • Date: Mon, 16 Aug 2004 09:06:25 -0400

So it's your IOPS and total data cache requirements versus their cache size
without a test?

Three things I can think of have a *chance* of being convincing. (Please
note there is *not* a 100% correlation between technical accuracy and being
convincing to non-technical management in the face of professional
salesmen.)

1) Your peak rate of transaction commit writes to the redo logs exceeds the
IOPS capacity for getting data to the non-volatile component of your disk
farm. "Do you want critical transactions sitting in cache?"
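
A back-of-envelope sketch of that comparison (Python; every figure below is
an assumed placeholder -- substitute your own peak redo generation rate and
the vendor's numbers for the spindles, not the cache):

    # Can the non-volatile back end keep up with peak commit/redo writes
    # once the write cache is saturated?  All numbers are placeholders.
    peak_redo_bytes_per_sec = 16 * 1024 * 1024   # observed peak redo generation
    avg_redo_write_bytes    = 32 * 1024          # typical LGWR write size at peak
    backend_write_iops      = 300                # sustained writes to actual disk

    required_write_iops = peak_redo_bytes_per_sec / avg_redo_write_bytes
    print("LGWR needs roughly %.0f writes/sec; the back end sustains %d."
          % (required_write_iops, backend_write_iops))
    if required_write_iops > backend_write_iops:
        print("Committed redo queues in cache waiting to be de-staged.")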

2) 80/20 rule. They'll probably claim that a tiny percentage of total
database size is a reasonable cache. Demand 20%, or else you're sure you'll
flood too often. If they are actually proposing 20 gigabytes of cache
dedicated to a 100 gigabyte database, you might be okay. Probably a waste of
money, but unless the IOPS rate you need to push through to non-volatile
storage is higher than what their back end can sustain, 20% usually really is
enough. More likely, though, the proposal is a few tens of gigabytes against
a terabyte of data. Or less.
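
A quick ratio check for that demand, with purely illustrative sizes:

    # How much of the database does the proposed cache actually cover?
    cache_gb = 32      # proposed array cache
    db_gb    = 1000    # database size served by that cache

    pct = 100.0 * cache_gb / db_gb
    print("Cache covers %.1f%% of the database." % pct)
    print("Flooding is %s." % ("unlikely" if pct >= 20
                               else "a matter of when, not if"))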

3) Find your "trial balance/open GL Period" month end process. Note how much
data must be spooled through to produce reports and generate the new
monolith for the next period. Now yours might not be financials, but you
probably have some cyclical atypical load against a time-constrained window.
Pick the one that tends to cost your company money or lost opportunity.
"Well, when we do flood cache every (hour|day|week|month|quarter) while
we're processing (aged receivables|MRP|payroll|etc.), we'll miss our
processing window and have to delay access to the system by interactive
users, including management forecast reports you use to make decisions."
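
A rough feasibility check for that window, with assumed figures (use your own
spool volume, window, and the array's sustained disk throughput once the
cache is full):

    # Will the cyclical run fit its window after the cache floods?
    batch_data_gb        = 2000   # data spooled through during the run
    window_hours         = 6      # time the business actually gives you
    sustained_mb_per_sec = 60     # back-end throughput with a flooded cache

    hours_needed = batch_data_gb * 1024.0 / sustained_mb_per_sec / 3600
    print("The run needs %.1f hours against a %d-hour window."
          % (hours_needed, window_hours))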

Good luck with the politics.

If you need to have a scaling simulation run, I can arrange to spend a lot
more of your money to demonstrate whether or not the i/o complex is actually
adequate. Chances are the demonstration costs more than buying an adequate
disk farm with a back end i/o signature that can be parallelized and spread
out.

(For example, if you take the proposed cost of the study and use it to buy
enough disks to arrange them in stripe sets of multiplexed disk images to
handle your target load, then you'll probably have money left over to add
additional stripe sets if production loads exceed your estimates. The
relevant cost comparison for additional stripe sets is against buying
additional cache, since if the original solution was based on "But it has a
huge cache. You'll never flood it," then the next round will be "Wow! Your
I/O requirements are atypically high. You'll need to buy even more cache!")
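
A rough sizing sketch for that comparison (assumed figures only; per-spindle
IOPS depends on drive speed and the read/write mix, and mirrored images
double the physical writes):

    import math

    target_iops          = 2400   # what the application needs at peak
    iops_per_spindle     = 100    # conservative figure for one drive
    disks_per_stripe_set = 8      # spindles per stripe set in your layout

    spindles    = math.ceil(target_iops / iops_per_spindle)
    stripe_sets = math.ceil(spindles / disks_per_stripe_set)
    print("%d spindles (about %d stripe sets) to carry %d IOPS on the back end."
          % (spindles, stripe_sets, target_iops))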

Disk is cheap, but not free.

Insufficient IOPS usually turn out to be expensive.

mwf

-----Original Message-----
From: oracle-l-bounce@xxxxxxxxxxxxx
[mailto:oracle-l-bounce@xxxxxxxxxxxxx] On Behalf Of Hostetter, Jay M
Sent: Monday, August 16, 2004 7:25 AM
To: oracle-l@xxxxxxxxxxxxx
Subject: RE: raid 5 disaster


"But is has a huge cache.  You'll never flood it..."

We're setting up a Shark now, and this is the argument we heard.  How do I
counter that argument, other than providing this link:

http://www.baarf.com/

And statistics after the fact?

Jay

-----Original Message-----
From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx]
On Behalf Of Jared Still
Sent: Saturday, August 14, 2004 11:28 AM
To: Oracle-L Freelists
Subject: Re: raid 5 disaster

On Sat, 2004-08-14 at 07:05, Mogens Nørgaard wrote:

> Since this was the old Shark, we knew it was using cheap, slow, old
> disks (7500 RPM). An 8-pack contains 6 data disks, one dedicated
> parity disk (RAID-4) and one hot spare. So a total of 300 IO's per
> second per 8-pack was to be expected.
>
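
The arithmetic behind that expectation, as a quick sketch (the per-spindle
rate is just the one implied by the quoted figures, not a vendor spec):

    data_disks_per_8pack = 6
    iops_per_old_spindle = 50    # implied by 300 IO's/sec over 6 data disks
    print(data_disks_per_8pack * iops_per_old_spindle)   # ~300 IO's/sec per 8-pack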


We purchased this same system several years ago at a previous employer.

When I sat down with the IBM technical consultant, I was somewhat aghast to
learn of this configuration.  But, it was actually worse than you have
portrayed it.

There were four 8-packs in this system, and they could not all be configured
the same way.  Two of them were allowed to have one less disk dedicated to
overhead, which created some very odd stripe sizes.

As if it wasn't bad enough that the only configuration available was RAID-4/5.

I wasn't impressed.  The Sharks didn't go into production until after I left
the company, but I was told by a project mgr on the DW project that the
performance was indeed less than stellar.

BAARF!

Wish I had had my 'No RAID5' hat then.

Jared



----------------------------------------------------------------
Please see the official ORACLE-L FAQ: http://www.orafaq.com
----------------------------------------------------------------
To unsubscribe send email to:  oracle-l-request@xxxxxxxxxxxxx
put 'unsubscribe' in the subject line.
--
Archives are at //www.freelists.org/archives/oracle-l/
FAQ is at //www.freelists.org/help/fom-serve/cache/1.html
-----------------------------------------------------------------
