A question asked in such generality really doesn't make much sense and can only have one answer: it depends. Mostly, it depends on what the CPU is doing. Well-optimized queries will typically have a short burst or two of intense CPU activity and then finish. Using 100% of CPU power is, unfortunately, also characteristic of "well cached" queries, which can perform a gazillion logical block gets with no physical disk reads. An example of such a query is the following:

    SELECT COUNT(*)
      FROM emp e1, emp e2, emp e3, emp e4, emp e5,
           emp e6, emp e7, emp e8, emp e9, emp e10;

Table EMP normally has 14 rows, so the number of rows to count is POWER(14, 10), roughly 2.9 * 10^11, and the RDBMS process will spin at 100% CPU for approximately 30 minutes. What will you get? A worthless number which could have been computed in a much, much cheaper fashion (see the sketches after the quoted message below). So, the answer to your question is: optimize your most expensive queries, and only then try predicting the scalability and growth of your database.

--
Mladen Gogala
A & E TV Network
Ext. 1216

> -----Original Message-----
> From: ryan_gaffuri@xxxxxxxxxxx [mailto:ryan_gaffuri@xxxxxxxxxxx]
> Sent: Friday, October 22, 2004 3:56 PM
> To: oracle-l@xxxxxxxxxxxxx
> Subject: question about cpu usage
>
> I'm not a hardware guy or a sysadmin, so forgive me if this is a
> stupid question. Leaving out all other variables (such as I/O),
> should I expect performance in a database to be the same whether the
> server it runs on is at 90% CPU usage or at 10%, since there would
> still be spare cycles? Or are there diminishing returns as you get
> closer to the maximum available CPU?
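For the record, the same worthless number can be had cheaply. A minimal sketch, assuming the standard 14-row SCOTT.EMP demo table:

    SELECT POWER(COUNT(*), 10) AS cartesian_count
      FROM emp;  -- one scan of 14 rows, then a single exponentiation

This returns 14^10 (about 2.9 * 10^11) in milliseconds instead of pinning a CPU for half an hour.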
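As for finding your most expensive queries in the first place, here is one quick-and-dirty sketch against V$SQL; it assumes you can SELECT from the V$ views and that your release exposes the CPU_TIME column (microseconds), which 9i and later do:

    SELECT *
      FROM (SELECT sql_text, cpu_time, buffer_gets, executions
              FROM v$sql
             ORDER BY cpu_time DESC)
     WHERE ROWNUM <= 10;  -- top 10 statements by total CPU time

Sorting inside the inline view before applying ROWNUM is what makes the "top N" correct; ROWNUM applied directly to v$sql would just grab ten arbitrary rows.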
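And on the 90% vs. 10% part of the question: yes, there are diminishing returns. As a back-of-the-envelope illustration only (a single-server M/M/1 queueing approximation, not a statement about any particular box), the average response time R of a request with service time S at CPU utilization U is roughly

    R = S / (1 - U)

so at U = 0.10 you get R of about 1.1 * S, while at U = 0.90 you get R of about 10 * S. The spare cycles are still there, but every request spends most of its time queueing for them.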