RE: Flash technology based HDD will it make significant difference for OLTP applications?

  • From: <krish.hariharan@xxxxxxxxxxxx>
  • To: <j.velikanovs@xxxxxxxxx>
  • Date: Sun, 6 Jan 2008 21:17:03 -0700

I tend to agree, and would like to add that the applications I have seen over
the last 12 years are not IO bound in spite of well-tuned SQL; where they are
IO bound, it is primarily because of poorly written SQL statements, statements
that read or process more data than is needed to glean the information they
are after. At the risk of distorting and generalizing at the same time, I
would like to think that most access paths for OLTP systems are likely to be
nested loops with index and rowid access paths. To the extent that is true, I
would expect that data (the indexes particularly) to be in the SGA and
consequently in "real" memory, with the response time component of such an
operation bound to disk IO only for the duration of a singleton fetch of the
data row, served within 15-60 ms.
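
For illustration only (the table, column, and index names here are made up),
the kind of statement and plan shape I have in mind is:

    -- Illustrative schema: orders / order_items are hypothetical names
    SELECT o.order_date, i.product_id, i.quantity
      FROM orders o
      JOIN order_items i ON i.order_id = o.order_id
     WHERE o.order_id = :ord_id;

    -- Typical OLTP plan shape for such a lookup:
    --   NESTED LOOPS
    --     TABLE ACCESS BY INDEX ROWID  ORDERS
    --       INDEX UNIQUE SCAN          ORDERS_PK
    --     TABLE ACCESS BY INDEX ROWID  ORDER_ITEMS
    --       INDEX RANGE SCAN           ORDER_ITEMS_IX

With the index blocks already sitting in the buffer cache, the only physical
I/O left is the single-block read for each row fetched by rowid.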

 

I am sure there are a number of applications that need sub-second response
times. I wonder whether these applications are better served with a bigger
SGA, since I would like to think that for singleton reads the vast majority
of the time to get the data is in the IO call setup and not in retrieving the
2-8 KB block. I must also wonder whether an in-memory database such as
TimesTen, or a system such as GemStone, is better suited for these
applications.
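
One way to test the "bigger SGA" hunch rather than guess (this is only a
sketch, and it assumes the buffer cache advisory is enabled) is to look at
v$db_cache_advice and see how the estimated physical reads fall off as the
cache grows:

    SELECT size_for_estimate, size_factor,
           estd_physical_read_factor, estd_physical_reads
      FROM v$db_cache_advice
     WHERE name = 'DEFAULT'
       AND advice_status = 'ON'
     ORDER BY size_for_estimate;

If the read factor barely moves at the larger sizes, more memory will not buy
much, and the question shifts back to IO latency or to the SQL itself.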

 

Long story short, all that rant boils down to two questions in my mind:

1. Would that $5k be a worthwhile spend, given today's advancements in disk
caches and disk technologies vis-à-vis OLTP systems using a conventional
RDBMS?

2. And, as Dan indicated below, would it solve the fundamental problem that
is plaguing the application?

 

Regards,

-Krish

 

 

 

  _____  

From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx]
On Behalf Of Dan Norris
Sent: Sunday, January 06, 2008 3:11 PM
To: j.velikanovs@xxxxxxxxx; oracle-l
Subject: Re: Flash technology based HDD will it make significant difference
for OLTP applications?

 

Hi Jurijs,

I don't have any direct experiences to share, but I can say that I have
talked to a lot of customers about memory-based storage in the past. The
biggest problem I've seen is that the "solution" is chosen before the problem
has been determined. That is, if you have a CPU bottleneck, then making the
storage faster is not likely to make a positive impact (in fact, it may make
a negative one).
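
A rough first check (just a sketch; the view reports values in microseconds)
is to compare DB CPU against total DB time before assuming storage is the
problem:

    SELECT stat_name, ROUND(value / 1000000, 1) AS seconds
      FROM v$sys_time_model
     WHERE stat_name IN ('DB time', 'DB CPU');

If most of the DB time is CPU, faster storage isn't going to move the needle.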

If storage latency is your primary issue, then making the disk service time
shorter will likely make a big difference. I'm also interested in hearing
real-world experiences as I've only talked about this--not implemented it.
In most of the cases that I've encountered, I ended up talking the customer
out of making a memory-based storage purchase because it wouldn't have
addressed the issue that they were trying to solve at the time. I always add
the caveat that tuning is an iterative process, so it is likely that
eventually storage will become the bottleneck and it may be worth
considering such a purchase at that time.
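
As a quick sanity check on that point (rough, system-wide numbers only), the
average single-block read time is easy enough to pull:

    SELECT event, total_waits,
           ROUND(time_waited_micro / GREATEST(total_waits, 1) / 1000, 2)
             AS avg_wait_ms
      FROM v$system_event
     WHERE event IN ('db file sequential read', 'db file scattered read');

If that average is already down in the low single-digit milliseconds, there
is much less room for memory-based storage to help.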




 
