Ryan,

Perhaps you can tolerate a mini-OS lesson. A process can be in one of several states: running (only one process per CPU), waiting (for I/O, a user response, etc.), or ready-to-run (on the run queue). The OS lets each process have the CPU (running) for a short interval of time, or until it must wait for something like I/O. Once a process has its I/O satisfied, it is put back on the ready-to-run queue until the OS decides to give it the CPU again.

Basically, there is a queue of processes waiting for the CPU. If the system is nearly idle, that queue will be very short, so when a process comes back from I/O it is quickly put back on the CPU. If the system is heavily loaded, the queue will grow very long. If you are interested and on Unix, check out the uptime command, which will show you the length of the run queue on your server. Many people find the run queue a better indicator of how heavily loaded an OLTP server is.

Dennis Williams
DBA
Lifetouch, Inc.

-----Original Message-----
From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx] On Behalf Of ryan_gaffuri@xxxxxxxxxxx
Sent: Friday, October 22, 2004 2:56 PM
To: oracle-l@xxxxxxxxxxxxx
Subject: question about cpu usage

I'm not a hardware guy or sysadmin person, so forgive me if this is a stupid question. Leaving out all other variables (such as I/O), should I expect performance to be the same in a database if the server it is running on is at 90% CPU usage as opposed to 10%, since there would still be spare cycles? Or are there diminishing returns as you get closer to the maximum available CPU usage?

--
//www.freelists.org/webpage/oracle-l
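To illustrate the run-queue check via uptime mentioned above, here is a minimal shell sketch. It assumes a Linux or similar Unix system; the exact wording and field positions of uptime output vary slightly between platforms, which is why the second command counts fields from the end of the line rather than the beginning:

```shell
# Show uptime, logged-in users, and the load averages. The three
# load-average figures are the average number of processes running
# or waiting to run (on Linux, also those in uninterruptible I/O
# wait), sampled over the last 1, 5, and 15 minutes.
uptime

# Extract just the 1-minute load average. The three averages are
# always the last three fields of the line, so take the third field
# from the end and strip the trailing comma Linux prints.
uptime | awk '{print $(NF-2)}' | tr -d ','
```

As a rough rule of thumb, a sustained 1-minute load average well above the number of CPUs means processes are queueing for the CPU rather than running immediately, which is the diminishing-returns region the question asks about.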