Okay, I'm a little OLTP-slow, so please be patient with this issue. I've got a table that's used heavily in an OLTP manner, residing in a DW (really an ODS). It averages 21,000,000 logical reads per day, 2,300 buffer busy waits per day, and 7,800 block changes per day. It's a status table used by an internally developed job scheduler, serving job schedules across 8 servers and 11 databases. I came across this table and have raised it as an issue because of performance problems we're having in production, with this table understandably showing up at the top of the worst offenders in various categories of our STATSPACK reports.
Possibly the worst offender in STATSPACK, but really, how bad is that?
Is it causing a problem?
Is the job scheduler falling behind due to the BBW?
With 21M logical reads, 2300 BBW doesn't sound so bad, but then again, it depends on how long the BBW are and who had to wait for them.
the time (job status), with a composite key for the PK (never updated). The PK is in design only, with no physical index or constraint. I'm not sure why no index was created, but I'm thinking that adding one may help a bit. My thought is to create a tablespace with a small block size, say 2k or 4k (vs. the 32k one this table resides in now), and move it, along with various other tables used in the same OLTP fashion, into that small-blocksize tablespace. I'd then create a physical PK index within the same tablespace. My line of thinking: access is by the PK columns nearly 95% of the time, so with an index and a small blocksize, the number of blocks brought into cache and residing there would be very small, ideally cutting down on the buffer busy waits. The logical reads should drop as well, since access would be indexed instead of full scans in the cache.
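A sketch of that plan in Oracle DDL. All object names here are hypothetical, and note that a 2k buffer cache must be configured before a 2k-blocksize tablespace can be created:

```sql
-- A 2k buffer cache must exist before the tablespace can (size is illustrative).
ALTER SYSTEM SET db_2k_cache_size = 16M;

-- Hypothetical tablespace and datafile names.
CREATE TABLESPACE oltp_2k
  DATAFILE '/u01/oradata/ods/oltp_2k_01.dbf' SIZE 200M
  BLOCKSIZE 2K;

-- Move the status table and give the design-only PK a physical index.
-- job_status and its key columns are placeholders for the real names.
ALTER TABLE job_status MOVE TABLESPACE oltp_2k;

ALTER TABLE job_status
  ADD CONSTRAINT job_status_pk PRIMARY KEY (job_id, run_date)
  USING INDEX TABLESPACE oltp_2k;
```

An ALTER TABLE ... MOVE invalidates any existing indexes on the table, which would need rebuilding afterwards; here the point is that the table currently has none.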
I am not following the thought process here. If 100k rows are spread across roughly 300 32k blocks (assuming 100 bytes per row), those same 100k rows are going to require more than 5,000 blocks at 2k.
If 10% of the blocks are accessed, this will require putting 500 blocks in cache with 2k blocks, whereas it only requires 30 blocks in the 32k tablespace.
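The arithmetic above, as a query you can run anywhere (using the assumed figures of 100,000 rows at ~100 bytes each, and ignoring block overhead and PCTFREE for simplicity):

```sql
-- Rows per block = floor(block size / row size); blocks = ceil(rows / rows per block).
SELECT CEIL(100000 / FLOOR(32768 / 100)) AS blocks_32k,  -- ~306 blocks
       CEIL(100000 / FLOOR( 2048 / 100)) AS blocks_2k    -- 5000 blocks
  FROM dual;
```

So touching 10% of the blocks means caching roughly 30 blocks in the 32k case versus 500 in the 2k case, even though the total bytes cached are similar.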
Or maybe it's just because it's Friday. I'm particularly dense on Fridays.
I do agree that more smaller blocks could reduce BBW, but it would require testing in your environment.
In general, variable-blocksize tablespaces are not for tuning; they exist to support transportable tablespaces. That said, sometimes a feature meant for one purpose can be subverted for another.
If you want to reduce contention for blocks, you could also just rebuild the table with a higher PCTFREE to reduce the rows per block, or use MINIMIZE RECORDS_PER_BLOCK.
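Either alternative keeps the table in its current tablespace; the table name below is again a placeholder:

```sql
-- Leave most of each block empty so fewer rows share a block.
-- PCTFREE 80 reserves 80% of each block for updates, spreading rows out.
ALTER TABLE job_status MOVE PCTFREE 80 PCTUSED 10;

-- Or cap rows per block at the highest count currently found in any block
-- (so this is only a limiter if the table is already sparsely packed).
ALTER TABLE job_status MINIMIZE RECORDS_PER_BLOCK;
```

Both trade space for reduced block-level contention, which is exactly the trade the 2k-blocksize idea makes, but without a second buffer cache to size and manage.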
design for this table, but am out of ideas. Anything I'm missing? I mean, besides the fact that this is Oracle 126.96.36.199 on Tru64 5.1b, a GS1280 with 8 CPUs and 32GB of memory.
It comes down to this:
1. Is this an actual problem in production?
2. Or are you just trying to reduce the #1 contention seen in a statspack report?
If #2 the solution is simple: Quit reading statspack reports.
-- Jared Still Certifiable Oracle DBA and Part Time Perl Evangelist