Re: full-scan vs index for "small" tables

  • From: "Stig Hornuff" <hornuff@xxxxxxxxxxx>
  • To: oracle-l@xxxxxxxxxxxxx
  • Date: Wed, 28 Jun 2006 07:07:59 +0000

Cary, I tend to agree with you when analyzing a given problem, but the first indication is often an excessive amount of I/O (logical, physical, or both). And when we hit the jackpot it's usually one of the two causes that cover 80% of cases (shitty SQL and no-brain 3GL code).

Mladen, you're also right. It really IS so much more fun to guess. And the more experience we have in analyzing, the better we are at guessing - and we have even more fun as we go (well... maybe not always) ;) - unfortunately it doesn't solve all the problems.

In my humble experience I've started to see problems more frequently on the O/S side - in particular in the choice of file systems and/or their configuration.

One example is the Veritas Cluster File System (VCFS) used under RAC. RAC is not the problem (ehh... almost not), but VCFS is, despite Veritas's own description of "near raw-device speed!" - not quite true in my humble experience.
The filesystems are owned by a single coordinating process on one of the participating nodes, which makes the total I/O rate dependent on how freaking fast that single process can handle UDP sync packets across the interconnect. So when the system tops out, the average node (having 6 CPUs) is only ~60% utilized while I/O requests are rather small on average (~64K across all nodes, with the majority at 16K) - giving a total throughput of ~650 MB/s for all nodes combined (not per node - ALL together).
During backup and batch runs, where the I/O request size tops out at 512KB, the throughput increases to ~800-900 MB/s.
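To put rough numbers on that (a back-of-envelope sketch in Python, using only the figures quoted above): if each I/O request costs the coordinator a roughly fixed amount of work, then larger requests should buy more MB/s from far fewer requests - which is exactly what these figures show.

    # Back-of-envelope check on the figures above (illustrative only).
    KB = 1024

    def requests_per_sec(throughput_mb_s, avg_request_kb):
        """Requests/s implied by a throughput at a given average request size."""
        return throughput_mb_s * KB / avg_request_kb

    normal = requests_per_sec(650, 64)   # ~10,400 requests/s at ~64K average
    batch = requests_per_sec(850, 512)   # ~1,700 requests/s at 512K

    print(f"normal load: ~{normal:,.0f} requests/s for ~650 MB/s")
    print(f"batch load:  ~{batch:,.0f} requests/s for ~800-900 MB/s")
    # ~30% more throughput from ~6x fewer requests: consistent with the
    # per-request cost in the single coordinator process, not the storage,
    # being the limit.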


Monitoring the system shows that I/O tops out when the VCFS coordinator takes 90%+ of one CPU.
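If you want to watch for that yourself, here's a minimal monitoring sketch (psutil-based Python - not the tooling used here, and the coordinator process name below is a placeholder you'd have to substitute):

    # Watch a named daemon's CPU usage; prints its share of one CPU.
    import psutil

    COORDINATOR = "vcfs_coordinator"  # hypothetical name - substitute the real daemon

    def watch(interval=5):
        procs = [p for p in psutil.process_iter(["name"])
                 if p.info["name"] == COORDINATOR]
        if not procs:
            print(f"no process named {COORDINATOR!r} found")
            return
        while True:
            for p in procs:
                # cpu_percent() is scaled to a single CPU, so a reading
                # near 100 means the process is saturating one core.
                print(f"pid {p.pid}: {p.cpu_percent(interval=interval):.0f}% of one CPU")

    watch()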

The theoretical I/O limit for this RAC system is the capacity of 12 x 2Gbit Fibre Channel HBAs, i.e. ~2GB/s.
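The arithmetic behind that figure (again just a sketch; a 2Gbit Fibre Channel link delivers roughly 200 MB/s after 8b/10b encoding overhead):

    # Compare observed throughput with the rough capacity of the I/O path.
    HBAS = 12
    MB_PER_HBA = 200  # ~2 Gbit/s FC link after 8b/10b encoding

    capacity = HBAS * MB_PER_HBA  # ~2,400 MB/s, i.e. roughly 2 GB/s

    for observed in (650, 900):
        print(f"{observed} MB/s is {observed / capacity:.0%} of ~{capacity} MB/s")
    # The system tops out at a third or less of what the HBAs could carry -
    # more evidence that the bottleneck isn't the storage hardware.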

In this case I totally agree with Cary that I/O is not the real issue, and it took a lot of analysis to find the true cause. Guessing gave a good head start - but without experience, guessing is random instead of "qualified guessing".


Just my 2c...

/Stig

----Original Message Follows----
From: Steve Perry <sperry@xxxxxxxxxxx>
Reply-To: sperry@xxxxxxxxxxx
To: cary.millsap@xxxxxxxxxx
CC: "Mladen Gogala" <gogala@xxxxxxxxxxxxx>,  <oracle-l@xxxxxxxxxxxxx>
Subject: Re: full-scan vs index for "small" tables
Date: Tue, 27 Jun 2006 21:46:52 -0500

On Jun 27, 2006, at 09:52 AM, Cary Millsap wrote:

"Most people guess ?It?s got to be I/O.? But ?it? is I/O in fewer than 5% of cases I?ve witnessed since about 1995"


What have been the majority of problems you've run into?

For me, I/O used to be the problem (after 2GB drives went away and before SANs showed up), mainly because the sysadmins treated database file systems like "file servers": they'd put everything on a few spindles and fill them up. Then management would say the servers were underutilized if average utilization was below 60%, so they would load up the servers with applications to max out the memory.
After I changed companies and started using SANs, most problems seem to be CPU-bound on the server. I've seen a few really good nested loops queries on small tables that would take hours to complete.


I'm curious what others have run into.


-- //www.freelists.org/webpage/oracle-l

