Storage array advice anyone?

  • From: "Stephen Lee" <Stephen.Lee@xxxxxxxx>
  • To: <oracle-l@xxxxxxxxxxxxx>
  • Date: Mon, 13 Dec 2004 12:29:48 -0600

There is a little debate going on here about how best to set up a new
system which will consist of IBM pSeries and a Hitachi TagmaStore 9990
array of 144 146-gig drives (approx. 20 terabytes).  One way is to go
with what I am interpreting as the "normal" way to operate, where the
drives are all aggregated as a big storage farm -- all reads/writes go
to all drives.  The other way is to manually allocate drives for
specific file systems.

Some around here are inclined to believe the performance specs and
real-world experience of others who say the best way is to keep your hands
off and let the storage hardware do its thing.

Others want to manually allocate drives for specific file systems.
Although they might be backing off (albeit reluctantly) on their claims
that it is required for performance reasons, they still insist that
segregation is required for fault tolerance.  Those opposed to that
claim insist that, practically speaking, the only way to lose a file
system is to lose the array hardware itself, in which case all is lost
anyway no matter how the drives were segregated; and if they really
wanted fault tolerance, they would have bought more than one array.
And around and around the arguments go.

Is there anyone on the list who would like to weigh in with some
real-world experience and knowledge on the subject of using what I
suppose is a rather beefy, high-performance array?
