Re: I/O performance

  • From: Purav Chovatia <puravc@xxxxxxxxx>
  • To: niall.litchfield@xxxxxxxxx
  • Date: Fri, 8 Jun 2012 13:48:24 +0530

Hi Niall,
We benchmark all of our products in our labs on hardware similar to what we
deploy in production. During benchmarking we always measure and analyze I/O
stats at a 1-minute interval, which we will soon reduce to 10 seconds.

We also run a home-grown forensic script in production that captures I/O
stats, again at a 1-minute granularity, which will soon be reduced as well.

However, when I am personally diagnosing an issue, especially if I/O seems
to be the bottleneck (and, as we know, that is one of the most common causes
of performance problems), I gather I/O stats at a 2-second granularity. I
know it adds overhead, but that is only for a period of 10 minutes or so.

Very rarely do we have a SAN (say 1 or 2 out of 500 deployments); we always
have DAS (or shared storage in the case of RAC). Let me know if more details
on this would help. One thing that is always true: for redo (which in our
case is always on a dedicated volume) the service time is 0.2-0.3 ms, and
for data it is under 5-7 ms.
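For anyone reproducing these numbers: the service time I quote is the usual iostat-style calculation, i.e. the delta in milliseconds the device spent servicing I/O divided by the delta in completed I/Os between two snapshots of the cumulative counters. A minimal sketch of that arithmetic, using made-up counter values (not real measurements from our systems):

```python
# Average service time from two snapshots of cumulative device counters,
# as iostat derives it. All snapshot values below are hypothetical.

def service_time_ms(ios_then, ticks_then, ios_now, ticks_now):
    """Average ms per I/O over the interval between two snapshots.

    ios_*   -- cumulative completed I/Os (reads + writes)
    ticks_* -- cumulative milliseconds spent servicing I/O
    """
    delta_ios = ios_now - ios_then
    if delta_ios == 0:
        return 0.0  # device was idle over the interval
    return (ticks_now - ticks_then) / delta_ios

# Hypothetical 2-second interval on a dedicated redo volume:
# 4000 writes completed, 1000 ms of device time accumulated.
print(service_time_ms(120_000, 30_000, 124_000, 31_000))  # 0.25
```

On Linux the raw counters come from /proc/diskstats (or sar/iostat on top of it); the same delta calculation applies whatever tool captures the snapshots.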

HTH

On Fri, Jun 8, 2012 at 11:33 AM, Niall Litchfield <
niall.litchfield@xxxxxxxxx> wrote:

> I'm curious how many of you measure I/O performance ( IOPS, service times
> and MB/s ) regularly on your databases? And for those in SAN environments
> if you have access to ballpark figures for what you should be getting.
>
> --
> //www.freelists.org/webpage/oracle-l


--
//www.freelists.org/webpage/oracle-l
