I have to say that Tanel's comments so far make the most sense (as seems to often be the case). Memory is, in my opinion, the more cost-effective solution at this point, for all the reasons Tanel has already mentioned. I had a chance to work with a system last year that had not one but two large SSD systems in production. Nevertheless, it was spending 40-50% of its time waiting on "db file sequential read" during the nightly batch job. The I/O was about an order of magnitude faster than it would have been without the SSD (on the order of 0.1-0.2 ms per read), but the batch job still wasn't fast enough. Increasing the buffer cache from 3G to 6G eliminated the vast majority of the I/O, and most of that wait time went away. It didn't cost a penny in this case, but even if they had needed to buy additional memory, it would still have been much more economical than the SSD solution.
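For what it's worth, the arithmetic behind that is easy to sanity-check. A rough back-of-envelope sketch (the batch window length and the fraction of physical reads eliminated are assumptions for illustration, not measurements from that system):

```python
# Back-of-envelope model of the batch job described above.
# batch_hours and the 90% elimination figure are illustrative assumptions;
# the wait fraction and per-read latency come from the thread.

batch_hours = 8.0     # assumed total batch window
io_wait_frac = 0.45   # 40-50% of time in "db file sequential read"
avg_io_ms = 0.15      # 0.1-0.2 ms per single-block read on the SSD

io_wait_hours = batch_hours * io_wait_frac

# Number of single-block reads implied by that much wait time
reads = io_wait_hours * 3600 * 1000 / avg_io_ms

# If doubling the buffer cache eliminates, say, 90% of those
# physical reads, the remaining I/O wait shrinks accordingly.
remaining_wait_hours = io_wait_hours * (1 - 0.90)

print(f"implied reads: {reads:,.0f}")
print(f"I/O wait: {io_wait_hours:.1f}h -> {remaining_wait_hours:.2f}h")
```

Even at 0.15 ms per read, tens of millions of reads add up to hours of wait; serving them from the cache removes that time entirely, which matches what we saw.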
Kerry Osborne
Enkitec
blog: kerryosborne.oracle-guy.com

On May 2, 2009, at 9:51 AM, Tanel Poder wrote:
Whatever the external device is based on (flash or plain RAM) - it is probably cheaper and faster to have more RAM in the server itself than that RAM in some external device. And talking about Oracle databases:

1) As far as undo is concerned, it's definitely cheaper to keep it in the buffer cache than to ping it back and forth to an external device (you have logical IOs + physical IOs + external bus traffic versus just logical IOs in the case of the buffer cache).

2) As far as temp tablespaces are concerned, it's definitely cheaper to keep the sort+hash data in PGAs and temp tables in the buffer cache, for the above-mentioned reasons.

3) As far as redo writes are concerned, the storage array write cache should take care of them and consolidate the small writes into larger ones. I haven't seen a corporation for years that ran its database on an array *without* write-caching ability. You just need to configure your cache so that it isn't polluted by unnecessary traffic from other systems and from reads (basically, set the write cache as a large portion of the storage array cache and leave only a little for reads - read cache should reside in server RAM, in the buffer cache).

My point here is: yes, adding SSDs to a storage array *can* help, but adding *RAM* to the servers can help more - it's cheaper, faster, and involves less complexity. Of course someone might say, "hey, my little dual-CPU pizza box server doesn't support more than 8GB of memory and that's why I need SSD"... um... it's time to buy a real server first in that case. A single DL785 supports up to 512GB of RAM, for example.

--
Regards,
Tanel Poder
http://blog.tanelpoder.com

-----Original Message-----
From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx] On Behalf Of Mark W. Farnham
Sent: 02 May 2009 15:46
To: tanel@xxxxxxxxxx; jeremy.schneider@xxxxxxxxxxxxxx; mzito@xxxxxxxxxxx
Cc: andrew.kerber@xxxxxxxxx; dofreeman@xxxxxxxxxxx; oracle-l@xxxxxxxxxxxxx
Subject: RE: Solid State Drives

We seem to have adopted an SSD == flash assumption on this thread. Given the faster cost drop in flash than in other types of solid-state memory, that may be appropriate. Still, there are other choices, and whether or not they are economic going forward, it has long been the case that if you really needed isolated throughput to persistent storage for modest
--
//www.freelists.org/webpage/oracle-l
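Tanel's point (1) above - logical IOs only, versus logical IOs + physical IOs + external bus traffic - can be put in rough per-block numbers. A minimal sketch with assumed latencies (the SSD read time is roughly what was quoted in the thread; the logical IO and bus costs are illustrative assumptions):

```python
# Rough per-touch cost comparison for the undo-in-buffer-cache argument.
# All latencies are assumptions for illustration, not benchmarks.

LIO_US = 5.0         # assumed cost of a logical IO (buffer cache hit), in microseconds
SSD_READ_US = 150.0  # ~0.15 ms SSD single-block read, per the thread
BUS_US = 20.0        # assumed extra cost of the external bus round trip

# Undo block kept in the buffer cache: one logical IO per touch.
in_cache = LIO_US

# Undo block pinged out to an external device and back:
# logical IO + physical IO + bus traffic.
external = LIO_US + SSD_READ_US + BUS_US

print(f"buffer cache:    {in_cache:.0f} us/block")
print(f"external device: {external:.0f} us/block "
      f"(~{external / in_cache:.0f}x more expensive)")
```

However you pick the exact numbers, the external path is dominated by the physical read, so even a very fast SSD loses to simply having the block stay in the cache.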