Re: Solid State Drives

  • From: Kerry Osborne <kerry.osborne@xxxxxxxxxxx>
  • To: tanel@xxxxxxxxxx
  • Date: Sat, 2 May 2009 15:58:56 -0500

I have to say that Tanel's comments so far make the most sense (as seems to often be the case). Memory is, in my opinion, the more cost-effective solution at this point, for all the reasons Tanel has already mentioned. I had a chance to work with a system last year that had not one but two large SSD systems in production. Even so, they were spending 40-50% of their time waiting on "db file sequential read" during their nightly batch job. The I/O was about an order of magnitude faster than it would have been without the SSD (on the order of 0.1-0.2 ms), but the batch job still wasn't fast enough. Increasing the buffer cache from 3G to 6G eliminated the vast majority of the I/O, and most of that wait time went away. And it didn't cost a penny in this case. But even if they had needed to buy additional memory, it would still have been much more economical than the SSD solution.
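
For anyone who wants to check the same thing on their own system before spending money, Oracle keeps its own estimate of what a larger (or smaller) cache would do to physical reads. A minimal sketch, assuming the DEFAULT buffer pool, an 8K block size, and the db_cache_advice parameter at its default of ON:

  -- Estimated physical reads at different buffer cache sizes
  SELECT size_for_estimate           AS cache_mb,
         estd_physical_read_factor,
         estd_physical_reads
  FROM   v$db_cache_advice
  WHERE  name = 'DEFAULT'
  AND    block_size = 8192
  ORDER  BY size_for_estimate;

A read factor well below 1 at roughly double the current size tells the same story as above: the extra memory would absorb most of the "db file sequential read" time.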


Kerry Osborne
Enkitec
blog: kerryosborne.oracle-guy.com

On May 2, 2009, at 9:51 AM, Tanel Poder wrote:

Whatever the external device is based on (flash or plain RAM), it is
probably cheaper and faster to have more RAM in the server itself than to
put that RAM in some external device.

And talking about Oracle databases:

1) As far as undo is concerned, it's definitely cheaper to keep it in the
buffer cache than to ping it back and forth to an external device (you have
logical IOs + physical IOs + external bus traffic versus just logical IOs
in the case of the buffer cache). A way to sanity-check this is sketched
below.

2) As far as temp tablespaces are concerned, it's definitely cheaper to keep
the sort+hash data in PGAs and temp tables in the buffer cache, for the
above-mentioned reasons (see the PGA advisory sketch below).

3) As far as redo writes are concerned, the storage array write cache should
take care of them and consolidate the small writes into larger ones. I
haven't seen a corporation in years that runs its database on an array
*without* write caching ability. You just need to configure your cache so
that it isn't polluted by unnecessary stuff from other systems and from
reads (basically, set write cache as a large portion of the storage array
cache and leave only a little for reads, as the read cache should reside in
server RAM: the buffer cache). A quick redo latency check is sketched below.
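
To put a rough number on point 1, one assumption-laden sketch (it assumes the undo tablespace name starts with UNDO, the common default) is to see how much undo I/O actually reaches disk:

  -- Physical reads/writes against the undo tablespace files; low
  -- phyrds means undo is already served out of the buffer cache
  SELECT f.tablespace_name,
         SUM(s.phyrds)  AS phys_reads,
         SUM(s.phywrts) AS phys_writes
  FROM   v$filestat s, dba_data_files f
  WHERE  s.file# = f.file_id
  AND    f.tablespace_name LIKE 'UNDO%'  -- assumption: default naming
  GROUP  BY f.tablespace_name;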
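
For point 2, v$pga_target_advice gives the equivalent what-if numbers for sorts and hash joins; a minimal sketch:

  -- Estimated hit % at different pga_aggregate_target sizes; a value
  -- near 100 means sorts/hashes stay in memory instead of spilling
  -- to temp
  SELECT ROUND(pga_target_for_estimate / 1024 / 1024) AS target_mb,
         estd_pga_cache_hit_percentage                AS estd_hit_pct,
         estd_overalloc_count
  FROM   v$pga_target_advice
  ORDER  BY pga_target_for_estimate;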
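
And for point 3, the average redo write latency shows whether the array write cache is already absorbing the writes. A quick check against the wait interface (time_waited_micro needs 10g or later):

  -- Sub-millisecond average 'log file parallel write' means the
  -- write cache is doing its job and SSD for redo would buy little
  SELECT event,
         total_waits,
         ROUND(time_waited_micro / NULLIF(total_waits, 0)) AS avg_wait_us
  FROM   v$system_event
  WHERE  event IN ('log file parallel write', 'log file sync');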

My point here is: yes, adding SSDs to a storage array *can* help, but adding
*RAM* to servers can help more; it's cheaper and faster, and it involves
less complexity. Of course, someone might say, "hey, my little dual-CPU
pizza box server doesn't support more than 8GB of memory and that's why I
need SSD"... um... in that case it's time to buy a real server first. A
single DL785 supports up to 512GB of RAM, for example.

--
Regards,
Tanel Poder
http://blog.tanelpoder.com


-----Original Message-----
From: oracle-l-bounce@xxxxxxxxxxxxx
[mailto:oracle-l-bounce@xxxxxxxxxxxxx] On Behalf Of Mark W. Farnham
Sent: 02 May 2009 15:46
To: tanel@xxxxxxxxxx; jeremy.schneider@xxxxxxxxxxxxxx;
mzito@xxxxxxxxxxx
Cc: andrew.kerber@xxxxxxxxx; dofreeman@xxxxxxxxxxx;
oracle-l@xxxxxxxxxxxxx
Subject: RE: Solid State Drives

We seem to have adopted an SSD==Flash assumption on this
thread. Given the faster cost drop in flash than in other
types of solid state memory, that may be appropriate. Still,
there are other choices, and whether they are economic or not
going forward, it has long been the case that if you really
needed isolated throughput to persistent storage for modest


--
//www.freelists.org/webpage/oracle-l