My point is that it isn't a per-app-server connection problem. I know one customer who thought that just 2 connections per app server was OK, but they had 5000 app servers. Needless to say, their db system was very unstable.

From experience I've seen a 10x or greater reduction in db connections result in 3x or more throughput gains with reduced transaction times. The impact of such a large number of processes on the kernel and scheduler is very significant.

For whatever reason, apps people think they need way more connections than are really necessary. Sometimes it's a result of "connection squatting" (holding connections for longer than the required db requests take), so their solution was simply to add more. Similar to how some fix cursor leaks - they just set open_cursors to some crazy large number so the error messages stop.

I'm hesitant to believe that one needs a 130:1 (75000/576) db-connection-to-db-CPU ratio to achieve the desired throughput if the db connections are actually doing work. In reality a 2:1 or maybe 4:1 ratio should be more than enough for connection-friendly applications.

On Wed, Oct 13, 2010 at 11:06 PM, Tanel Poder <tanel@xxxxxxxxxxxxxx> wrote:
> Well it depends of course; if you have 1000 application servers in a (HPC)
> farm all directly connecting to that database, then 75 connections per app
> server doesn't sound that much.
>
> MTS is a memory-saver, but not a CPU saver, especially when fetching/loading
> lots of data. If you have the memory to afford all these dedicated
> connection processes, it'll be a faster (and less CPU hungry) way to use
> dedicated servers...
>
> On Thu, Oct 14, 2010 at 6:13 AM, Greg Rahn <greg@xxxxxxxxxxxxxxxxxx> wrote:
>>
>> There may be issues with MTS, but nowhere near the issues of having an
>> application that needs 75k db connections. This must be one poorly
>> written application.
>>
>> I can't imagine how much CPU the dispatchers would burn for that many
>> connections.
>> --
>> Regards,
>> Greg Rahn
>> http://structureddata.org
--
//www.freelists.org/webpage/oracle-l
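[Editorial addendum] The ratio math in the reply above can be sketched in a few lines. This is a minimal back-of-the-envelope calculation, not a sizing tool: the 75000 connections and 576 CPUs are figures quoted in the thread, the 1000-app-server count comes from Tanel's hypothetical, and the helper function names are made up for illustration.

```python
# Rough pool-sizing arithmetic from the thread:
# 75000 connections against 576 db CPUs vs. the suggested 2:1 / 4:1 targets.

def total_connections(cpu_count: int, ratio: float) -> int:
    """Total db connections implied by a connections-per-CPU ratio."""
    return int(cpu_count * ratio)

def per_server_pool(total_conns: int, app_servers: int) -> float:
    """Connections each app server's pool would get, split evenly."""
    return total_conns / app_servers

db_cpus = 576
observed = 75000
print(f"observed ratio: {observed / db_cpus:.0f}:1")  # ~130:1

# Hypothetical farm of 1000 app servers (from Tanel's example):
for ratio in (2, 4):
    total = total_connections(db_cpus, ratio)
    print(f"{ratio}:1 target -> {total} total connections, "
          f"{per_server_pool(total, 1000):.1f} per app server")
```

Even at the generous 4:1 target, 1000 app servers would each get a pool of only 2-3 connections, which is why the per-app-server view misleads: the number that matters is the total against the database's CPUs.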