RE: Sun T4 Storage Array and BAARF

  • From: "Allen, Brandon" <Brandon.Allen@xxxxxxxxxxx>
  • To: <ian@xxxxxxxxxxxxxxxxx>, "ORACLE-L" <oracle-l@xxxxxxxxxxxxx>
  • Date: Thu, 4 Aug 2005 11:37:05 -0700

It takes at least 4 disks to implement RAID 1+0.  RAID 1+0 means you are 
mirroring (1) and striping (0), so you have two mirrored pairs (2 x 2) that 
your data is striped across.  If you are only using two disks, then you have 
RAID 1 or RAID 0 - not RAID 10.
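To make the layout concrete, here is a toy sketch of how RAID 1+0 places blocks with the minimum 4 disks - two mirrored pairs with data striped across the pairs. The disk names and the block-to-pair mapping are illustrative only, not any vendor's actual on-disk layout:

```python
# Toy RAID 1+0 layout: two mirrored pairs, blocks striped across the pairs.
# Each tuple is one mirror pair (RAID 1); striping picks a pair (RAID 0).
STRIPE_PAIRS = [("disk0", "disk1"), ("disk2", "disk3")]

def place_block(block_num):
    """Return the two disks (a mirror pair) holding a logical block."""
    # RAID 0: round-robin the block across the mirrored pairs.
    pair = STRIPE_PAIRS[block_num % len(STRIPE_PAIRS)]
    # RAID 1: both disks in the pair hold a full copy of the block.
    return pair

for b in range(4):
    print(b, place_block(b))
```

With only two disks there is just one pair and nothing to stripe across, which is why two disks can give you RAID 1 or RAID 0 but not RAID 10.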

My experience supports the idea that RAID 5 is okay as long as you are doing 90%+ 
reads - but beware of the times when your application's I/O pattern changes, or 
when you have to do occasional loads, batch jobs, etc. with heavy writes - RAID 5 
is horrible when it comes to writes.  If you have a lot of cache, you may be able 
to mask the problem, though - that seems to be the trend from the SAN sales 
folks: RAID 5 with massive cache.  We had a problem on one of our arrays where 
we found that when one disk failed, the cache was automatically disabled for 
the entire array - performance was horrible for 2 days until we figured out 
what had happened.

You should be able to easily calculate your read/write ratio - just look in 
v$sysstat for 'physical reads' and 'physical writes'.
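For example, something along these lines (the statistic names come straight from v$sysstat as mentioned above; note the counters are cumulative since instance startup, so the ratio reflects the whole uptime, not recent activity):

```sql
-- Rough read/write mix since instance startup.
SELECT name, value
  FROM v$sysstat
 WHERE name IN ('physical reads', 'physical writes');
```

Divide the two values to get your read/write ratio; if writes are more than a small fraction of the total, RAID 5 deserves a closer look.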


-----Original Message-----
From: oracle-l-bounce@xxxxxxxxxxxxx
[mailto:oracle-l-bounce@xxxxxxxxxxxxx]On Behalf Of MacGregor, Ian A.

. . .

Due to space requirements, I've always divided a Storedge array thus:  Two 
disks, total as a RAID 10

. . .

I don't know the distribution of reads and writes.

. . .

