I did not graduate from a Computer Science major, so I can only speak from my own experience. I have seen both cases:

1. CPU runs at 100% and application performance does not degrade. (We run Tuxedo middleware and I measured response time via txrpt.) This application uses the database heavily and the SQL was not good. The CPU count was 4 and the load (from uptime) was 8-10. Response time was the same at 60% CPU usage and at 100%. The bottleneck in the database was purely logical I/O (no disk I/O contention, lock contention, etc.).

2. CPU usage rose to 70% and we saw a lot of application errors (timeouts). This was because the business grew and our server reached its I/O capacity: a lot of disk I/O wait events and also a lot of enqueue wait contention.

On Fri, 22 Oct 2004 19:41:22 -0500, Nelson, Allan <anelson@xxxxxxxxxxx> wrote:
> Response time for interactive users is non-linear with respect to CPU
> utilization. The curve looks like a hockey stick with the puck-striking
> surface pointing up. If you are past the knee of the curve, your
> interactive users are receiving unpredictable response times. If you
> are at 90% utilization and you have 4 CPUs or fewer, you are probably
> driving your interactive users mad. Query response time for them will
> swing wildly between "normal" and 10 to 100 times normal. Your batch
> jobs and your total throughput, on the other hand, might be doing pretty
> well. You can get Cary Millsap and Jeff Holt's Optimizing Oracle
> Performance from O'Reilly. You can go to hotsos.com and register for a
> free account to get some white papers and an Excel spreadsheet that will
> show you where the knee of the curve is relative to the number of CPUs.

--
Regards
Zhu Chao
www.cnoug.org
--
//www.freelists.org/webpage/oracle-l
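P.S. The hockey-stick curve Allan describes can be sketched numerically. The snippet below is a minimal illustration, not the Hotsos spreadsheet: it uses the rough M/M/m queueing approximation R ≈ S / (1 − ρ^m), where S is service time, ρ is per-CPU utilization, and m is the number of CPUs (the exact M/M/m result needs the Erlang C formula). The function name and numbers are my own for illustration.

```python
def response_time(service_time, utilization, cpus):
    """Approximate average response time for an m-CPU system.

    Uses the rough M/M/m approximation R = S / (1 - rho**m).
    Response time stays near S at low utilization, then explodes
    as utilization approaches 1 -- the "hockey stick".
    """
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return service_time / (1 - utilization ** cpus)

# With 4 CPUs, the knee sits fairly high; with 1 CPU it comes much sooner.
for u in (0.50, 0.70, 0.90, 0.95, 0.99):
    r4 = response_time(1.0, u, 4)
    r1 = response_time(1.0, u, 1)
    print(f"util={u:.2f}  4 CPUs: {r4:6.2f}x   1 CPU: {r1:6.2f}x")
```

Note how the 4-CPU curve is still nearly flat at 70% utilization while the 1-CPU curve has already tripled, which matches Allan's point that the knee shifts right as CPU count grows.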