Re: Storage array advice anyone?

  • From: Stephen Evans <evans036@xxxxxxxxxxx>
  • To: Stephen.Lee@xxxxxxxx
  • Date: Mon, 13 Dec 2004 15:22:13 -0500

stephen,
i haven't used this specific kind of storage array, but here are some 
thoughts anyway...

- one large storage pool (ie everything striped across all devices) can 
create problems when trying to increase capacity and/or when you lose a 
spindle. this is because the storage controller must 're-level' all 
storage to account for the new spindle count. this process can take many 
hours (days) and cause the array to get quite busy.
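the re-level cost above is easy to ballpark: roughly the whole pool has to be copied at whatever throttled rate the controller allows. a minimal sketch (both numbers are illustrative assumptions, not vendor figures):

```python
# back-of-envelope: hours to re-level a pool after adding a spindle,
# assuming roughly the whole pool gets copied at a throttled rate.
# inputs are illustrative assumptions, not figures from any vendor.

def relevel_hours(pool_tb, throttle_mb_s):
    pool_mb = pool_tb * 1024 * 1024        # TB -> MB
    seconds = pool_mb / throttle_mb_s      # time at the throttled copy rate
    return seconds / 3600.0

# e.g. a 20 TB pool throttled to 100 MB/s takes ~58 hours
print(round(relevel_hours(20, 100), 1))
```

even generous assumptions land you in the multi-day range, which matches what we've seen.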

- you may want to consider separating raid 5 from raid 10 (for redo logs, 
etc.)
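the reason redo logs like raid 10 is the small-write penalty: a raid 5 small random write costs roughly 4 back-end i/os (read data, read parity, write both) vs about 2 for raid 10. a rough sketch, with an assumed per-disk IOPS figure:

```python
# rough effective small-random-write IOPS for a disk group.
# standard write penalties: raid 5 ~= 4 back-end i/os per host write,
# raid 10 ~= 2.  the per-disk IOPS below is an assumed illustrative figure.

def write_iops(n_disks, per_disk_iops, penalty):
    return n_disks * per_disk_iops // penalty

print(write_iops(8, 150, 4))   # raid 5 group of 8 disks
print(write_iops(8, 150, 2))   # raid 10 group of 8 disks
```

same spindles, roughly double the write throughput on raid 10, which is why sequential, write-heavy things like redo logs are the usual candidates for separation.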

- if you are like us, you don't have a good understanding of the 
applications that will be placed on the array, so it's safer 
(performance wise) just to put everything everywhere. we also lack the 
staffing & performance tools and infrastructure to identify performance 
bottlenecks before they happen.

- as long as the storage array is set up right (ie spare disks assignable 
for a group of disks that have no single point of failure), fault 
tolerance should not be an issue.

- we have experienced multi-spindle outages at the same time (after power 
loss of a few hours), so don't assume you're safe just because you can 
recover from single spindle loss.

- yes, multiple mirrored (or duplexed or whatever) controllers in 
different locations is the way to go.

good luck

steve

"Stephen Lee" <Stephen.Lee@xxxxxxxx>
Sent by: oracle-l-bounce@xxxxxxxxxxxxx
12/13/2004 01:29 PM
Please respond to Stephen.Lee
 
        To:     <oracle-l@xxxxxxxxxxxxx>
        cc: 
        Subject:        Storage array advice anyone?



There is a little debate going on here about how best to setup a new
system which will consist of IBM pSeries and a Hitachi TagmaStore 9990
array of 144 146-gig drives (approx. 20 terabytes).  One way is to go
with what I am interpreting is the "normal" way to operate where the
drives are all aggregated as a big storage farm -- all reads/writes go
to all drives.  The other way is to manually allocate drives for
specific file systems.

Some around here are inclined to believe the performance specs and
real-world experience of others that say the best way is keep your hands
off and let the storage hardware do its thing.

Others want to manually allocate drives for specific file systems.
Although they might be backing off (albeit reluctantly) on their claims
that it is required for performance reasons, they still insist that
segregation is required for fault tolerance.  Those opposed to that
claim insist that the only way (practically speaking) to lose a file
system is to lose the array hardware itself in which case all is lost
anyway no matter how the drives were segregated, and if they really
wanted fault tolerance they would have bought more than one array.  And
around and around the arguments go.

Is there anyone on the list who would like to weigh in with some real
world experience and knowledge on the subject of using what I suppose is
a rather beefy, high-performance array?

--
//www.freelists.org/webpage/oracle-l



